RTX 4090 connectors burning up

For those of you who have heard about the 4090 issues with power connectors burning up:

Solution:

I got on the email list to reserve one as I just bought a system that’s arriving next week with a 4090.

Igor’s Lab has done a deep dive into what’s actually happening. The skinny is that the connection between the wires and the foil busbar in the NVidia-provided adapter is made so poorly that one or more conductors can break off the bus - and in some examples it even looks like the busbar itself fractured, with a piece breaking off still attached to the conductor. Given the nature of the failure, even the right-angle adapter may not be enough to save some cables.

A failure is more likely if the wires are bent under tension (especially sideways), but any flexing of the cable near the connector can cause it, depending on just how badly the connection was made in your specific cable. While the remaining 2-3 wires should be able to carry the current, the now-loose conductor will brush against where it’s supposed to be connected and arc, generating significant heat until the plastic melts enough for it to separate completely.
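As a rough sanity check on the “remaining wires can carry the current” claim - illustrative figures only, not numbers from Igor’s article - assume a 450W draw on the 12V rail split across four 12V wires in the adapter, with one broken off the busbar:

```python
# Illustrative numbers only: 450 W draw on the 12 V rail, split across
# four 12 V wires in the adapter, with one broken off the busbar.
DRAW_W = 450.0
RAIL_V = 12.0
WIRES = 4

total_a = DRAW_W / RAIL_V              # 37.5 A total
per_wire_ok = total_a / WIRES          # ~9.4 A per wire, all intact
per_wire_deg = total_a / (WIRES - 1)   # 12.5 A per wire, one broken

print(f"{total_a:.1f} A total; {per_wire_ok:.1f} A/wire intact; "
      f"{per_wire_deg:.1f} A/wire with one broken")
```

Even in the degraded case the per-wire current stays within what heavy-gauge adapter wire can handle - the danger is the arcing at the broken joint, not overload of the surviving wires.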

The solution is to take the NVidia adapter and toss it straight into the nearest fire/bin. Then source a replacement from a reputable supplier - either an 8-pin-to-16-pin adapter or a dedicated 16-pin cable from the PSU. I would recommend doing this even if your cable hasn’t failed yet, as the poor QC makes it far too likely that a failure will eventually occur.

3 Likes

Another option is to buy, or wait for, a power supply with an actual 16-pin connector.

Some brands are already offering that cable too. Just contact them for it.

1 Like

Isn’t AMD skipping the new connector this gen? I know some people have bad experiences with team red but for those who haven’t already bought a 4090 it might be worth waiting to see what the Radeon 7000 series looks like.

Wow - PCI-SIG, Corsair, and Nvidia all knew about this.

Good solution:

https://www.amazon.com/gp/product/B0B4DFRX1G/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&th=1

Just don’t buy Nvidia this generation.

2 Likes

That’s the word at the moment. Though by extension, other rumours suggest they may not even need it - last I heard the 7900 XTX will be a 355W card (20W higher than the 6950 XT), which means it only needs 2x 8-pin connectors (375W max draw by spec).
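For reference, the arithmetic behind that 375W ceiling (spec values: up to 75W from the PCIe slot, 150W per 8-pin connector):

```python
# Spec power budget for a card with two 8-pin PCIe connectors.
PCIE_SLOT_W = 75       # the slot itself supplies up to 75 W
EIGHT_PIN_W = 150      # each 8-pin PCIe connector is rated for 150 W

budget_w = PCIE_SLOT_W + 2 * EIGHT_PIN_W   # 375 W by spec
card_w = 355                               # rumoured 7900 XTX board power

print(f"{budget_w} W budget vs {card_w} W draw -> {budget_w - card_w} W headroom")
```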

Too late for some. Looking for a solution.

So I installed this after my new PC arrived with the 4090. Removed the crappy Nvidia adapter.

https://www.amazon.com/dp/B0B4DFRX1G?ref=ppx_yo2ov_dt_b_product_details&th=1

System runs like a champ. No issues.

Uh oh. Even a direct 16-pin connector melted.
https://new.reddit.com/r/nvidia/comments/yltzbt/maybe_the_first_burnt_connector_with_native_atx30/

Yeah, gonna wait and see till they solve it. Not touching a new PSU either.

The 7900s just got announced. They’re cheaper, smaller, almost as powerful, and will only need two 8-pin rails.

Knowing this, I wouldn’t waste my time on a more expensive fire hazard that takes up 4 slots.

The cable is the problem, not the card. I swapped it out and my 4090 is a beast. No burning, no nothing.

Might be good to keep an eye on it - apparently some are seeing even the native 16-pin cables bundled with 16-pin-capable PSUs melt.

That’s not good. That would mean the cards need to be recalled. Nvidia’s worst nightmare.

Nvidia getting sued.
https://www.tomshardware.com/news/rtx-4090-owner-hits-nvidia-with-lawsuit-over-melting-16-pin-connector

1 Like

Good! Nvidia needs to own up to this. I haven’t had any issues (fingers crossed), but I don’t use the supplied Nvidia adapter - I use a third-party connector, and I’ve only been playing WoW, which isn’t exactly stressful on the 4090. I’ve been reluctant to play more graphically intense games until this all gets figured out. Since prepatch is out and the expansion is hitting soon, I’ll just be playing WoW for a while anyway lol, so I’m not too worried for the time being.

The official response to this, at least following GamersNexus’ recent video, is that it’s user error. And it seems highly plausible that it is - though it may not necessarily be entirely the user’s fault.

Essentially, the only way the connectors have been made to fail is by not plugging them in properly. And not just slightly unseated, but with a good few millimetres of the connector still poking out - slightly unseated even tests fine under overclocked conditions. Whether the connector provides enough feedback to let you know it is properly seated, or whether its tolerances are poor enough that it feels seated before it actually is, is still a big question.

Of course, NVidia sidesteps the issue of the poorly made contacts which GN covered. They don’t necessarily deform (and even deliberately deforming them doesn’t reproduce the problem), but the coatings do come off extremely easily, potentially introducing small conductive debris into the connector and further exacerbating the above. I’m also not convinced that six tiny dimples (three on each side) are sufficient to carry the 150W (up to 12.5A) load each contact is meant to allow for, but that doesn’t seem to be the cause of any problems.

NVidia official response: https://nvidia.custhelp.com/app/answers/detail/a_id/5413?s=31
GN video: https://www.youtube.com/watch?v=ig2px7ofKhQ

It’s surprising that they screwed up this badly on connector design, considering the innovations we’ve seen in other connectors over the years - some of which would make this sort of error impossible.

They could’ve done a magnetic connector (like those used on MacBooks and Microsoft Surface devices) with strong magnets, which would not only make it difficult to plug in partially but could also let the card “know” when the connector is seated properly, simply by measuring the magnetic force on both ends of the connector. If it detects that the connector is ajar, the card can throw an error at boot and everybody is happy and free of house fires.

An idea like that might seem like overkill at first glance but with the amount of power now being carried I think it’s warranted.

Last I heard on the matter, PCI-SIG was considering reducing the length of the sense pins. In theory this means the GPU would become aware that the cable is not seated properly without any additional hardware - or, at the very least, would restrict power draw from the connector to 150W, because the pins dictating that it can go higher are missing. This of course requires the card to actually pay attention to the sense wires (it’s established that the 4090 at least distinguishes the 450W and 600W configurations, but I’ve not seen anyone test the 300W or 150W options), while also assuming the 12.5A minimum config is low enough to not cause problems.
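To make that concrete, here’s a minimal sketch - not Nvidia’s actual firmware - of how a card could map the two sense pins to a power limit. The 600W (both grounded) and 150W (both open) rows are well documented; treat the assignment of the two middle rows as an assumption here:

```python
# Sketch only: mapping 12VHPWR sideband (SENSE0, SENSE1) states to a
# sustained power limit. A partially seated cable with shortened sense
# pins would read as ("open", "open") and land in the 150 W bucket.
SENSE_TABLE = {
    ("gnd", "gnd"):   600,  # fully rated cable, fully seated
    ("open", "gnd"):  450,  # middle rows: assumed assignment
    ("gnd", "open"):  300,
    ("open", "open"): 150,  # minimum config -- also the unseated case
}

def power_limit_w(sense0: str, sense1: str) -> int:
    """Max sustained draw the card should allow, failing safe to 150 W."""
    return SENSE_TABLE.get((sense0, sense1), 150)

print(power_limit_w("gnd", "gnd"))    # 600 -> normal operation
print(power_limit_w("open", "open"))  # 150 -> unseated or minimal config
```

The point of shortening the sense pins is ordering: the sense contacts break before the 12V contacts do, so a backing-out cable drops the card to the 150W bucket before the power pins can start arcing.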

Realistically I’d like to see some kind of latching socket for this sort of power consumption, perhaps even with a ZIF hook-up requiring the latch to be in place before contact is even made. This way you either plug it in all the way and secure it, or it simply doesn’t work at all - no middle ground.

I think PCs in general could use a better plug/connection system, especially if parts are going to keep drawing more power.

Most connectors haven’t changed at all, probably because of cost factors - and no one wants to make a better standard.

1 Like