There is something on the horizon that could change that though, and it's something Nvidia terms RTX IO. Essentially this is a technology that shifts the loading of game assets away from the CPU, to be handled directly by your graphics card. It's based on Microsoft's DirectStorage API, the same thing making the next-gen Xbox Series X so damned quick.
There's some cleverness here as well, because your graphics card can take the data in its compressed form and decompress it itself, rather than waiting on the CPU to unpack it first. So not only does it free up your CPU to do more important things (loading and decompressing data can take up a surprising amount of your CPU's time), it's more efficient at the same time.
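To make that division of labour concrete, here's a minimal sketch of what a direct-to-GPU load request looks like under Microsoft's DirectStorage API, which RTX IO builds on. Treat it as an illustration rather than gospel: the RTX IO SDK isn't public yet, the file name and sizes are invented for the example, the D3D12 device, destination buffer, and fence are assumed to be pre-created, and error handling is omitted.

```cpp
// DirectStorage sketch: stream a compressed asset straight into a GPU
// buffer, with decompression handled on the GPU-side path instead of
// burning CPU cores. Assumes an existing D3D12 device, destination
// buffer, and fence; file name and sizes are illustrative.
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetDirectToGpu(ID3D12Device* device,
                          ID3D12Resource* gpuBuffer, // pre-created destination
                          ID3D12Fence* fence,
                          UINT64 fenceValue)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // Requests on this queue bypass the usual CPU read-decompress-upload
    // round trip entirely.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"marbles_assets.bin", IID_PPV_ARGS(&file)); // assumed path

    // The CPU only fills in a descriptor: which compressed bytes on disk,
    // which GPU buffer they land in. Decompression happens on the way in.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = 8 * 1024 * 1024;  // compressed size (assumed)
    request.Destination.Buffer.Resource = gpuBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = 32 * 1024 * 1024; // uncompressed size (assumed)
    request.UncompressedSize            = 32 * 1024 * 1024;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue); // fence signals when data is resident
    queue->Submit();
}
```

The point of the design is visible in that request struct: the CPU's entire job is to build a few descriptors and submit them, while the reads and decompression never touch a CPU core.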
Nvidia showed the technology loading Marbles (that real-time ray-traced sliver of loveliness that I can't wait to play for myself) from a PCIe 4.0 NVMe drive using RTX IO, as opposed to getting the CPU to do all the hard work. You're looking at 1.61 seconds to load the entire thing using RTX IO, as opposed to the 5.02 seconds it took a 24-core Threadripper to do the same thing, with every core maxed out in the process. That's roughly a threefold speedup, while leaving the CPU free for other work. And as an aside, Nvidia showed the whole thing loading uncompressed from a hard drive too, and that took 62 seconds.
So PCIe 4.0 is potentially going to be important for games. Eventually. There is a bit of a downer here though: games aren't going to start using this for a while yet. Nvidia isn't releasing the SDK until next year, so we probably won't see games that utilize it until 2022, and more likely later than that. By which time Intel will definitely have made the switch to PCIe 4.0. Its recently revealed Tiger Lake laptop CPUs already support PCIe 4.0, and its future desktop platform will surely make the shift next year.
So with PCIe 4.0 probably not relevant for a little while yet, and the Zen 3 question almost impossible to answer, your choice of processor, as far as we currently know, comes down to…
Nvidia seems to be holding onto the SDK for a while?
Still think it was a mistake for Intel not to include PCIE4 in their 10th gen. Maybe they just needed to release something, which I get, but it seems like they’re always a day late and a dollar short.
Very true, but I can't recall them coming out of the gate this quickly with a new tech before. Think someone wants some attention to take away from the soon-to-be-launched consoles?
Eh, I see people saying this all the time… and my personal belief is that console gamers and PC gamers aren't, by and large, the same demographic. People who game on PC usually do it for the games specific to PC, like WoW, CS:GO, etc. They may get a console for exclusives, but the hardware specs on the consoles don't even matter at that point; it's the games. Since all Xbox games end up on PC anyway, I think the PS5 is the only real thing this demographic is likely to pursue, although I don't think they're in direct competition with one another. Kind of like PC parts vs TVs or phones: they may or may not get both, and it doesn't necessarily affect PC part sales.
People who play consoles shouldn’t really give two craps about how fast PC graphics cards are, especially ones that cost $500 and $700, since they don’t play the games associated with PC anyway and tend to choose their console based on brand loyalty and by extension, exclusives.
The one thing that I can see is that they're competing against consoles not in actual sales demographics, but in technological domination. Even though nobody really cares in reality, shareholders might see the PS5/Xbox Series X having "the same power" as a 2080 Ti in a complete package for less money as a threat, and for Nvidia, being better at everything is important to their brand image. So in that respect, I think the consoles could be a battleground for Nvidia.
Obviously this is all pulled out of my butt and I don’t know the analytics, and I’m sure bean counters at Nvidia have determined the consoles are a threat somehow, but that’s my gut feeling.
I see it more as "people are looking at AMD APUs and they're doing 4K 60 fps with ray tracing in consoles" (just don't look too closely at the details), and ray tracing being Nvidia's trump card vs AMD. They now have a new tech to bang their drum with.
Exactly, it’s them just wanting to be on top of the tech, not the demographic of consumers.
It's why I truly believe they're holding back 3070/3080 Ti cards with more VRAM: once Big Navi launches with similar performance to the 3070 for less money, they'll drop the Fat Man on them with the Tis.
Isn't this what DMA controllers have been doing for decades? I mean, they're primarily for RAM-to-RAM transfers, but there's nothing at all which would stop them transferring from, say, the buffer of a hard drive controller to VRAM. And graphics cards have been able to handle compressed data for a while too: fixed formats back with S3TC and DXTC, but with shaders and now compute it's probably up to the devs' whims (the sketch after this comment shows the fixed-format case).
I'm not convinced of the real-world applications here, though. I mean, it's great that it takes under 2 seconds to load instead of a hair over 5 for their demo. But when was the last time you saw a game take 5 seconds to load? That uncompressed 62 seconds is more real-world, but those are load times I've seen from SSDs for some things anyway (mostly due to the amount of time spent not actually loading).
And that's not due to decompression, either; decompression's been basically real-time since the days of P2s. Not that the 7th-gen consoles' return to uncompressed assets would lead you to believe that, mind…
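The commenter's point about fixed compressed formats is easy to illustrate: GPUs have accepted S3TC/DXT-compressed textures as-is since the late '90s, with the CPU never touching the texel data. Here's a minimal OpenGL sketch, assuming a current GL context and a DXT5 payload read straight from disk; the function name and dimensions are illustrative.

```cpp
// Upload a DXT5 (S3TC) compressed texture directly to the GPU. The
// driver passes the compressed blocks through untouched; the GPU
// samples them natively. Assumes a current OpenGL context (1.3+).
#include <GL/gl.h>
#include <GL/glext.h>
#include <vector>

GLuint UploadDxt5Texture(const std::vector<unsigned char>& blocks,
                         int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // DXT5 packs each 4x4 texel block into 16 bytes.
    const GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 16;

    // Hand the compressed blocks to the driver exactly as read from disk;
    // no CPU-side decompression happens anywhere.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           width, height, 0,
                           imageSize, blocks.data());

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```

What DirectStorage and RTX IO layer on top of this old trick is the storage-to-GPU transfer path itself, plus general-purpose decompression of arbitrary asset data rather than just the fixed texture formats the sampler hardware understands.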