If you do not want a broken GPU, you’d better do this

Most of your post was good info. But the fact that the game is running OG D2 under the hood does not matter whatsoever. A modern PC could run hundreds or thousands of copies of OG D2 without breaking a sweat. Running a single copy of it underneath D2R is no more taxing than having YouTube or something running in the background.

Nvidia’s optimal range is between 70 and 85 degrees Celsius; AMD’s is around 60 to 70 Celsius

so this info isn’t the most helpful for Nvidia users; they will lose a ton of performance trying to chase temps below the “normal” range

1 Like

It does, actually. Because the underlying engine is not being displayed in the viewport, it is rendered offscreen until you switch to the engine you weren’t using. Offscreen renders are not limited by vSync. That’s where the FPS limit slider comes into play. That tells the GPU to only render a maximum number of frames per second. And yes, an unchecked offscreen renderer can chew up GPU cycles and heat it up something fierce, even in a game like this. On a really high refresh rate display, this adds up to a ton of wasted energy and thus, heat.
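
To make the point concrete, here’s a minimal sketch of what a frame cap does (not D2R’s actual code; `render_frame` and `max_fps` are made-up stand-ins): the limiter just inserts idle time between frames so the GPU isn’t drawing flat out, whether the frames go to the screen or to an offscreen buffer.

```python
import time

def run_capped(render_frame, max_fps=120):
    """Call render_frame() in a loop, but never more than max_fps times per second."""
    frame_budget = 1.0 / max_fps  # seconds each frame is allowed to take
    while True:
        start = time.perf_counter()
        render_frame()  # stand-in for drawing one frame (onscreen or offscreen)
        elapsed = time.perf_counter() - start
        # Sleep off the leftover budget instead of starting the next frame
        # immediately; this idle time is what keeps the GPU cooler.
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)
```

Without the cap, that loop just spins as fast as the GPU allows, which is exactly what an unchecked offscreen renderer ends up doing.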

2 Likes

Isn’t OG D2 still locked at 25 fps regardless of what the remaster’s settings are set to? It was an engine limit, IIRC.

So the driver was the issue, not the game per se. While Nvidia, for example, released a driver for D2R, they didn’t disclose the details of the improvements they made to it; some of those improvements could negatively impact the user’s machine during D2R gameplay if the game changed things after that driver was released.

So, if the driver itself can’t properly handle the calls the game makes, it’s a driver issue, not a game issue. While I would agree that D2R isn’t optimized right now, your advice should recommend installing earlier drivers, not updated ones, rather than drawing conclusions; the data I provided related to the myths you quoted in your post.

I stopped using “game ready” drivers a long time ago because the benefits were negligible, and Studio drivers were often more stable than game ready ones.

Optimization on the vendor side takes time and often doesn’t account for impacts down the road; it’s better to stick with prior stable drivers until they release a driver that actually improves performance without many caveats.

Currently using:
471.11, released on 06/23/21, the same driver I used during the beta; in both cases I had zero hardware-related issues. Currently using a GTX 1080 AMP Edition (Zotac).

So it would be better, and easier for folks who don’t know much about this stuff to follow, to suggest installing Nvidia Studio drivers through the Nvidia GeForce Experience software or from the Nvidia website. Because, as you stated, they didn’t use DLSS or RTX, so even the most “updated” GPUs wouldn’t benefit from those features in those drivers; unless the player plays the other games on the optimized list, they should stick with earlier versions or Studio drivers for the time being.

Because if a driver targets a piece of software without much disclosure, it’s a bad driver for that application to begin with. They released a driver for D2R before the game was even out, and the game had a day-one patch plus other patches through the weekend.

That’s why folks would be better off avoiding that driver; the game devs are tweaking the game as we speak, and a driver from 09/20/2021 isn’t that great if the game itself has changed.

They billed it as Day-0 optimizations and enhancements for D2R. But if the game has received several patches and the driver itself doesn’t provide any actual data about those improvements, they could mean nothing.

That’s why “game ready” drivers are often really bad drivers, at least when released before the game itself; Cyberpunk 2077 had about six “game ready” drivers. Those drivers are immature and don’t reflect reality. It’s better to stick with Studio or earlier versions if the current version is causing problems.

You should recommend DDU and installing earlier driver versions, not some “workaround” for the messy drivers Nvidia often provides.

1 Like

A few things I’d like to mention:

  1. A simple fix for the GPU temperature issue is just to turn down the power limit on the GPU. I have an RTX 3080. Set the power to 80% and you’ll still get most of the same framerate, but you’ll shave off 5-7 degrees. (There’s a small sensor-reading sketch after this list if you want to verify the effect on your own card.)

  2. Set the frame limiter in the settings to match your monitor. I have a 120 Hz monitor, so I set it to not render more than 120 fps.

  3. You won’t kill a GPU running it at 80 degrees. As others have mentioned… GPUs have throttling in place, and most carry pretty comprehensive warranties that last several years, so even if it does die (extremely unlikely), you can RMA it for a new one.
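
If you want to check what turning the power limit down (or capping the frames) actually buys you, you can read the card’s own sensors instead of eyeballing it. A rough sketch using the pynvml bindings to NVIDIA’s NVML (NVIDIA cards only; you’ll need the pynvml package installed, and the 30-sample loop and GPU index 0 are just example choices):

```python
import time
import pynvml  # Python bindings for NVIDIA's NVML

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):  # sample once a second for ~30s while the game runs
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        draw = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0             # mW -> W
        limit = pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000.0  # mW -> W
        print(f"{temp} C, drawing {draw:.0f} W of a {limit:.0f} W limit")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Run it once at stock settings and once with the 80% power limit / frame cap applied, and compare the numbers.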

1 Like

I limited it as well, but the game seemed to ignore the cap until I enabled Radeon Chill. I’m on the 5700 XT and I’m cruising around 80+. Maybe I’ll pull it apart and clean it sometime this week and hopefully that’s the cause.

Maybe I’ll pull it apart and clean it sometime this week and hopefully that’s the cause.

I would do that too, but in the past year I have tried to RMA this card twice (the first time within 1 month), and support refuses to respond to me with an RMA number. So I don’t want to void the warranty.

Sometimes a higher framerate cap can keep the CPU locked to certain tasks, preventing it from going into a power-saving state or swapping to and from turbo modes, which makes some tasks a bit faster. But overall you’re right.

In D3, for example, you could get better load times if you capped at 80-120 fps; beyond that the benefits were negligible, because those caps could already make loading times nearly instant. At least in the past those thresholds helped a lot with loading screens, but I think they patched that perk out.

More fps can mean faster loading screens, but that doesn’t mean it would work the same way in D2R as it did in D3 in the past.

Just wanted to mention it as a curiosity.

I think there is a feature that works similarly to Chill but disables it, where you set the minimum and maximum fps. If you set both to the same value, you get better results than with Chill.

1 Like

The displayed framerate is limited to that, yes (at least in offline mode; Battle.net mode allowed display-refresh framerates). When offscreen, however, the GPU will render everything it can without a limit.

You should be able to do this by using the Performance power setting in Windows 10’s Power Options control panel under Advanced Options. You can set the base CPU rate, which will prevent going into lower P-States. This does eat up more power as the CPU will always run at the rate you specify, but it also prevents games from having issues such as the ones you mention.
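
If anyone wants to script that instead of clicking through the control panel, here’s a rough sketch that shells out to powercfg (as far as I can tell, “minimum processor state” is the setting behind what’s described above; the SCHEME_CURRENT / SUB_PROCESSOR / PROCTHROTTLEMIN names come from what powercfg /aliases reports on my machine, so double-check them on yours, and run this from an elevated prompt):

```python
import subprocess

def pin_min_cpu_state(percent: int = 100) -> None:
    """Set Windows' minimum processor state (on AC power) so the CPU
    stops dropping into lower P-states. Needs an elevated prompt."""
    subprocess.run(
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PROCESSOR", "PROCTHROTTLEMIN", str(percent)],
        check=True,
    )
    # Re-apply the active scheme so the change takes effect right away.
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

pin_min_cpu_state(100)
```

Same caveat as the control panel route: the CPU will sit at that rate all the time, so expect the extra power draw.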

1 Like

a GPU can go to 108c before literally fusing to the PCB

Dude, just stop. PCBs don’t “fuse” into GPUs at 108C. They have >150C heat limits usually.
Also, tin melts at 232C. No fusing there either.

You’re just making a fool out of yourself.

1 Like

Fans are barely on for the 3070…75 degrees ambient temperature. Most settings maxed.

In Russia computer is to heat whole house :+1:

On a side note, some people’s PC cases / room temperatures also have a decent effect on the temps your PC reaches

2 Likes

Well, as it should be, I’d say. Better safe than sorry. And fans are really not expensive nowadays, even the good ones. Personally, I have the stock fan on my i7-9700k, which is more than adequate given the rest of my setup with 5 case fans (2 front intakes, 2 rear exhausts and a top exhaust) in a Meshify case with steel sides; the rest is mesh. CPU temp usually hovers around 30-40 under load, and the max temp I’d noticed for my GPU before capping the frames was around 70C. After capping the frames I’m down to around 50-65, sometimes dipping below 50 to the 45 mark (old Asus RoG Strix GTX 970 OC ed.)

Oh yes they are. I have 4x Noctua NF-F12 fans (two side intake, two on CPU cooler), Noctua NF-A20 front intake, Noctua NF-A15 rear exhaust, and a Cooler Master 200mm MegaFlow fan for top exhaust (because the Noctua 200mm fans don’t fit in the top of the Cosmos II case without modification). That setup cost me north of $200. For fans.

I hit 65-67 with all settings maxed and I don’t have any issues. 2070 Super. Don’t even have the newer gen yet.

Friend, Reddit isn’t a source of truth — it’s actually where to go to find the worst information ever.

Cards can comfortably run 13-15 degrees hotter as well. They’re made for heat.

Please remove tinfoil hat

1 Like

I’d call that twitter, but reddit is probably a close second.

1 Like

Yeah well, everything is relative :slight_smile:

I don’t consider $200 to be expensive, tbh, compared to what inadequate cooling would cost me in the long run! Besides, I’m in Northern Europe, and we have a weird computer market. Some parts are mega expensive, while others are super cheap. Fans here really don’t cost much (compared to our general buying power)

Commercial-grade ICs are typically rated for at least 85C operating temps. Higher-end parts are commonly rated up to 125C. Some parts go beyond that. I think the people who designed the GPU, wrote the firmware and software, and built it have a better idea of its safe operating temperature than randoms on Reddit.

My Ryzen claims 95C max in the Ryzen Master software.

I’ll have to look at my temps again and pay attention to the CPU, but I never saw over the mid-60s for my 5600 XT. My case has lots of cooling though, and my CPU has a Noctua NH-U12A, so my setup runs pretty cool. Unfortunately ambient room temp is up to 78F, but winter is coming so that’ll come down nicely soon.