
RTX I/O and DirectStorage coming... next year, possibly in 2022

Let me try this again...
I meant: how could they have been developing the solution for YEARS in advance if it isn't ready yet? Everything points to a hardware-level "patch", hoping that the drivers will catch up. Hence my analogy with the trains. How you can still assume this wasn't a response to the PS5 solution, but rather something "in development for years", is amazing to me. :messenger_grinning_sweat:

There are no games yet because there wasn't a solution available to work around this limitation (again, you are welcome, guys). This will change drastically in the future, as developers are now (not 2 years from now) free to realize their visions for their games.

It's not about "maxing out"; the I/O is always maxed out. It's about what devs will be able to do with it.

I'm not talking rocket science here, and yet it looks like everything I said "went over your head like an airplane". :messenger_neutral:
Nvidia has been the only company with ray tracing for over 2 years now, and it has had a tremendous lead over AMD for many years. They have been above and beyond just about every company in not only graphics, but AI. What makes you think Nvidia is going to listen to a guy who hasn't been able to keep his consoles cool or quiet over the last few console cycles? Nvidia has held the crown the longest, and the gap only increases.

Look at the dates when Jensen and Microsoft first talked about this vs. when Cerny did. I'll give you a hint: one came before the other. You do realize my 2-year-old 2080 Ti will be able to implement DirectStorage and DX12U? Of course I'll be upgrading to the 3090 or 3080 Ti, so I'll get even better performance. But if the 2080 Ti can implement this, and it was designed a few years before it released, how do you know it wasn't Cerny taking a hint from the graphics king? Nvidia leads, never follows.

Furthermore, the entire industry would be moving to faster I/O anyway, as that's part of the natural progression of technology. Hopefully this pill won't be hard to swallow. You should see all the videos released the past few days on these beasts of GPUs. You'll see the clear winner when those Sony PS5 games are compared with the PC versions of the same games. I'll keep this bookmarked just to remind you.
 

Faithless83

Banned
What's keeping them from implementing it now, then?
If it was the direction all along, why is there NOTHING besides the PS5 that's able to do it?
Do you think the PS5 concept is recent, or was it being thought out since the PS4 Pro?
Not enough time to develop DRIVERS to make use of it?
Why were devs so excited about it as "something groundbreaking" if everyone and their mothers "saw it coming"?

"We knew the answer was SSD all along, we just didn't implemented yet because reasons. But in the future, we'll have the tecnology edge. We never follow."

jlaw-whtvr.gif


Looks like they are not leading now, huh? :messenger_tears_of_joy:

Thanks for the discussion, man, it was a blast, but I believe we can end it.

You think they are the masters of the universe or something; I disagree, and both of us will keep thinking that, it seems.

You are a good sport for not calling me names so far. :messenger_beaming:
 
It's being implemented on the XSX already. But since PC is an open platform, there are several different generations of hardware, in a multitude of configurations. It's much harder to implement than on a single hardware config like the XSX or PS5, for instance.

Faster I/O would have been in discussion for several years now, from all the hardware manufacturers to software developers. But I can assure you, it would be crazy to think Cerny convinced AMD and Nvidia, and not the other way around.

Think about it like this. The UE5 demo is coming out soon for PC. Even without DirectStorage, the demo could run better in all aspects on older hardware than on a PS5. I'm not trying to be funny here, but when you have more than capable hardware, you'll get good results. Now when DirectStorage comes out, things will get even better for PC. But for now, there are no games that require it, and you haven't provided any examples either.

Nvidia is obviously leading. How else do you think you can run 8K60+ fps w/RT? And that's WITHOUT using DirectStorage. Look at Control running on the 3090 maxed out, for instance. That's more than enough proof of hardware proficiency. Instead, the PS5 will play at 1440p~30 fps w/RT WITH Cerny's I/O improvements.



 

Faithless83

Banned





Yet you failed to present a reason why it wasn't implemented in the past, since we already have the compatible hardware.
Or why it was announced by MS after the Road to PS5 presentation.
Or why it wasn't part of the XSX teardown.

You are still grasping at the "different hardware" argument, but please correct me if I'm wrong: RT was implemented day one, right? It was a feature that Nvidia hyped to high heaven when they released the hardware. Yet this RTX I/O will come 2 years down the road (even though it could be implemented on existing hardware).

You are an Olympic-level mental gymnast, man. Congrats. :messenger_tears_of_joy:
 
You failed to read my first paragraph, apparently. Nvidia announced it a good bit before Road to PS5, though; not sure why you keep avoiding those facts?

So what is the reason PC games will still look better without DirectStorage, even on 2-year-old hardware? Please explain that. What about my previous comment about 8K@60+ fps vs. 1440p@~30 fps? Why doesn't Cerny's hardware I/O implementation yield better results than that? Same with the UE5 demo: how will UE5 run better without DirectStorage? Where is all that power going? For someone trying to call another person out for "mental gymnastics", you sure are jumping and flipping all over the place with this.
 

Faithless83

Banned

Serious question, do you happen to have a source on this?
I mean, that they have RTX I/O in their pipeline.
 

Faithless83

Banned
It's on Nvidia's website. Straight from the source themselves. It's over a year old. Someone linked it in the OP of one of the threads debuting the 30xx GPUs.

Debuting the GPUs, sure, but any mention of the I/O?
Is there anything mentioning RTX I/O prior to this week's presentation at all?
 
Yes, over a year ago. They were talking about Nvidia Magnum IO, and now the matured RTX IO. They specifically mentioned GPUDirect Storage as well, over a year ago. So yes, a good bit before Cerny's presentation. Look it up if you don't believe me.
 

Faithless83

Banned
Sorry, you misunderstood Nvidia Magnum and GPUDirect Storage. Both are meant for data-processing, GPGPU-based servers (mining- and A.I.-focused tech mostly, but with other uses as well).
GPGPU "is the use of a GPU, which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the CPU".

What we are talking about is gaming; it's a different beast altogether.

So much so that when they released the UE5 demo, Sweeney said that PCs will need to catch up. The guys at Epic would have mentioned the Nvidia tech by the time the UE5 demo was released, or any of the PCMR warriors would have, if it had been in Nvidia's pipeline so long ago. And besides, it's in Epic's interest as developers to have this available in their engine. It simply doesn't add up.

It's not a matter of just pushing data to the GPU (as Magnum or GPUDirect do, for processing application data faster than a CPU would), but of pushing data in native GPU format, in order to display graphics without drivers having to translate the data into GPU format in the first place.

That's what's so groundbreaking, and it allows something like this: (timestamped)
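For anyone who wants the distinction in concrete terms, here's a minimal sketch of what a DirectStorage-style read looks like, based on the API Microsoft later published in its Windows SDK; the file name, size, and D3D12 objects are placeholder assumptions, not anything confirmed in this thread:

```cpp
// Minimal sketch of a DirectStorage-style read: the request goes from the
// NVMe drive into a GPU buffer, with no game-managed bounce copy through
// system RAM. Based on the later-published dstorage.h API; "asset.bin",
// the size, and the D3D12 objects passed in are placeholder assumptions.
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetDirect(ID3D12Device* device, ID3D12Resource* gpuBuffer,
                     ID3D12Fence* fence, UINT64 fenceValue, UINT32 sizeBytes)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // The queue is created against the D3D12 device so requests can target
    // GPU resources directly.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"asset.bin", IID_PPV_ARGS(&file));

    // One request: a file range in, a GPU buffer out.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = sizeBytes;
    request.UncompressedSize        = sizeBytes;
    request.Destination.Buffer.Resource = gpuBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = sizeBytes;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue); // signals when the data has landed
    queue->Submit();
}
```

The notable part is the destination: the request targets a GPU buffer directly instead of reading into system RAM first and copying over.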
 

Shmunter

Member
Thought it relevant here....



“Upcoming consoles due to efficiency should perform around the 3070 line.”

“10 gig on the 3080 is likely insufficient for a 4K card.”

“Sweet spot is the 3070 for gaming setups.”

“DirectStorage needs to be implemented on a case-by-case basis when available, unlike consoles where it’s an integral part of the system. Consoles will get games that are not possible on PC for a while yet.”
 

Stooky

Member
DirectStorage is part of DirectX 12U.
You think developers are not going to use DirectX 12U when next-gen is in full swing?

It's the API. Nvidia is just using DX12 DirectStorage on their GPUs, and AMD is likely doing the same, as their new GPUs are DX12U compliant.
It doesn't matter what subset of PC gamers is going to use it; the fact that it's an option for the millions and millions who have NVMes is all that counts.
Are there crappy PCs out there... sure. Are there a lot of great PCs out there... absolutely.
The XSX is literally also using DirectStorage, so I don't understand why it would work better on the XSX and not on PC?
Devs don't use all the specs in DX12; they use what's needed for their game. Not everyone is getting the RTX 30 series day one. For some it requires motherboard, RAM, and CPU upgrades, and that's expensive. However, every XSX and PS5 sold has a super fast SSD; devs will use it. What are the minimum system specs needed to take advantage of DirectStorage or Nvidia's GPU? All of this new tech/hardware will take time to be adopted. I would like to know how many PC users have NVMe drives vs. how many have drives slower than the XSX and PS5.

You then say it will take years and years for this I/O to be adopted?
The technology being adopted is already there right now... how many people do you think have DX12U-compliant GPUs and NVMes?
I'll tell you right now that number is much, much higher than the number of XSXs and PS5s in the wild.
It's not there, not yet; we haven't seen it implemented. The only things shown have been on consoles. Consoles will adopt it faster simply because it's cheaper and the specs are the same. PC gamers are an elite bunch; proof can be found even on this forum. Paradigm shifts on PC take a while to become the standard.

And saying games will be held back by the weaker systems is again a sign of your low understanding of how any of this shit works.
If your PC can't use DirectStorage, it just won't... if it can, it will.
It's the same as with a DXR GPU: if you have one, the game will be able to use DXR; if you don't, it won't.
When Crysis needed DX10 for certain features, if your PC didn't have a DX10 GPU, those features just didn't work.
Computers NOT having the technology don't slow down computers that do have the tech... that's not how any of this works.
PC games will be held back compared to consoles. Devs need to set minimum specs for their games, and they design around that. This new I/O is different from RT or running at a higher resolution; I/O is fundamental to how your game is designed and functions. Having thousands of different hardware configurations affects development until you have a minimum spec that meets your needs. That is why consoles will push this tech furthest first this gen. Maybe because of that, multiplat devs will be able to bring this tech to PC sooner, but they're already saying 2022 until we'll see something, compared to this fall on consoles.

Hell, how long were games using two separate exes for different DX versions?... Oh yeah, they still do.
So devs will have DirectStorage in their pipeline; once the game is on a computer, it will know whether it can be used or not... if it can't, they fall back to "classic" methods... if it can, then DirectStorage it is.
That takes a lot of work from devs; it's not as simple as you think. I think you would develop a game with a DirectStorage-capable PC as your minimum spec, especially if it's an exotic design; it won't run without it.
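To make that "probe and fall back" flow concrete, here's a minimal sketch of how a loader could branch; it assumes the DStorageGetFactory entry point from the later-published Windows SDK, and the two Load* helpers are hypothetical stand-ins, not any real engine's API:

```cpp
// Hedged sketch of the "classic fallback" pattern described above: probe for
// DirectStorage once at startup, then route asset reads down one of two paths.
// LoadViaDirectStorage / LoadViaBufferedRead are hypothetical helper names.
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadViaDirectStorage(IDStorageFactory* factory, const wchar_t* path); // hypothetical
void LoadViaBufferedRead(const wchar_t* path);                             // hypothetical

static ComPtr<IDStorageFactory> g_dsFactory;

bool DirectStorageAvailable()
{
    // If the factory can't be created (older OS, runtime missing), treat the
    // feature as absent and take the classic buffered path instead.
    return SUCCEEDED(DStorageGetFactory(IID_PPV_ARGS(&g_dsFactory)));
}

void LoadAsset(const wchar_t* path)
{
    if (DirectStorageAvailable())
        LoadViaDirectStorage(g_dsFactory.Get(), path); // NVMe -> GPU fast path
    else
        LoadViaBufferedRead(path); // classic read into system RAM, then upload
}
```

Which is exactly why it's extra work: both paths have to exist, and both have to be tested.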
 
Sorry, you misunderstood Nvidia Magnum and GPUDirect Storage. Both are meant for data-processing, GPGPU-based servers (mining- and A.I.-focused tech mostly, but with other uses as well).
"is the use of a (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the (CPU)"

What we are talking about is gaming, it's a different beast altogether.

So much so that when they released UE5 demo, Sweeny said that PCs will need to catch up. The guys behind Epic would have mentioned the nvidia tech by the time UE5 demo was released, or any of the PCMR warriors if it was on Nvidia pipeline so long ago. And even so, it's on Epic interests as developers to have this available in their engine. It simply doesn't add up.

It's not a matter of just pushing data to the gpu (as Magnum or GPUdirect do, for processing application data faster than a CPU would), but pushing data in native gpu format, in order to display graphics without drivers to translate the data into gpu format in the first place.

That's what so ground breaking and allow something like this: (timestamped)

Lmao I couldn't even read the rest of your post as you obviously are not comprehending or fully reading things (and starting now, I'm guilty of it myself).

Read the write-up from last year, and see where it says:

Just as GPUDirect RDMA (Remote Direct Memory Address) improved bandwidth and latency when moving data directly between a network interface card (NIC) and GPU memory, a new technology called GPUDirect Storage enables a direct data path between local or remote storage, like NVMe or NVMe over Fabric (NVMe-oF), and GPU memory. Both GPUDirect RDMA and GPUDirect Storage avoid extra copies through a bounce buffer in the CPU’s memory and enable a direct memory access (DMA) engine near the NIC or storage to move data on a direct path into or out of GPU memory – all without burdening the CPU or GPU. This is illustrated in Figure 1. For GPUDirect Storage, storage location doesn’t matter; it could be inside an enclosure, within the rack, or connected over the network. Whereas the bandwidth from CPU system memory (SysMem) to GPUs in an NVIDIA DGX-2 is limited to 50 GB/s, the bandwidth from SysMem, from many local drives and from many NICs can be combined to achieve an upper bandwidth limit of nearly 200 GB/s in a DGX-2.

"GPUDirect Storage: A Direct Path Between Storage and GPU Memory | NVIDIA Developer Blog" https://developer.nvidia.com/blog/gpudirect-storage/

Not sure how you missed that? Also look up the dates on this article and the other ones, compare them to Cerny's Road to PS5, and tell me which one was publicly available first.
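For reference, the GPUDirect Storage path that write-up describes is exposed through Nvidia's cuFile API on Linux servers; a minimal read into GPU memory looks roughly like this (error handling omitted; the path and size are placeholder assumptions):

```cpp
// Rough sketch of a GPUDirect Storage read via Nvidia's cuFile API: the DMA
// engine moves bytes from NVMe straight into GPU memory, skipping the CPU
// bounce buffer the blog post describes. Path and size are placeholders.
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>

void gds_read(const char* path, size_t size)
{
    int fd = open(path, O_RDONLY | O_DIRECT); // O_DIRECT: bypass the page cache

    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    cuFileDriverOpen();                      // bring up the GDS driver
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);   // register the file for DMA

    void* devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);      // pin the GPU buffer for direct I/O

    // The direct path: storage -> GPU memory, no CPU-side staging copy.
    cuFileRead(handle, devPtr, size, /*file_offset=*/0, /*dev_offset=*/0);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    cudaFree(devPtr);
}
```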
 

Papacheeks

Banned


So this validates my whole view of DLSS.
 

Shmunter

Member

DLSS is pushed by DF based on sponsored content. And people eat it up.

It has its pluses, but also minuses. It needs per-engine implementation by devs, so uptake is small; there are rendering errors, etc. I’ve said it before: on balance, as of now, a traditional reconstruction technique is better. It may not be as sharp, but it’s predictable and doesn’t introduce undesirable rendering errors.

DLSS will get better, but it’s not all it seems. A cursory google brings up many instances of people not satisfied and switching it off. People can take it or leave it.
 

Papacheeks

Banned

Yeah, I think as we go into chiplet designs for video cards in the next year or so, we will see better 4K rendering as it becomes more standard, especially with the increase in asset quality alone.
 

mckmas8808

Banned

Are you saying you believe current Checkerboard Rendering is better than DLSS 2.0 for the average dev team to use? That's surprising to read anybody say. Maybe I'm drinking the DF juice, because DLSS 2.0 seems revolutionary; whereas CB seems just decent.
 

Shmunter

Member
Results are excellent for both, and DLSS is superior. But the rendering bugs and bespoke implementation let it down for now. It will get better; it's just not perfect yet. On balance, CB has fewer issues, but it isn't as pristine in its results. Opinion.
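For rough scale on what each technique actually shades per frame at a 4K target, here's an illustrative back-of-envelope; the assumptions (CB shading about half the output pixels, DLSS 2.0 rendering internally at 1440p for quality mode and 1080p for performance mode) are the commonly cited ones, not measurements:

```cpp
// Back-of-envelope pixel counts behind the CB vs. DLSS comparison.
// Assumptions: CB shades ~half the target pixels per frame; DLSS 2.0
// "quality" renders internally at 1440p, "performance" at 1080p.
#include <cstdio>

int main()
{
    const long long native4k = 3840LL * 2160; // 8,294,400 px per frame
    const long long cb4k     = native4k / 2;  // ~4.15M px shaded per frame
    const long long dlssQ    = 2560LL * 1440; // ~3.69M px internal render
    const long long dlssP    = 1920LL * 1080; // ~2.07M px internal render

    std::printf("native 4K: %lld px\nCB to 4K:  %lld px\nDLSS Q:    %lld px\nDLSS P:    %lld px\n",
                native4k, cb4k, dlssQ, dlssP);
}
```

Similar pixel budgets either way, which is why the argument comes down to reconstruction quality and implementation effort rather than raw rendering cost.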
 

Papacheeks

Banned

No, my point was that checkerboarding gives, in some cases, similar results to DLSS, depending on the game/engine.

Everyone, including Digital Foundry, acts like DLSS is the second coming of image reconstruction. But whether reconstruction sometimes produces a better-looking image all depends on the game/engine.

And that's not the case for all engines, like Unreal 4 and Unreal 5.

I mean, if engines don't change how ray tracing is optimized and rendered on the hardware, then DLSS will probably get used even more. But the information that's out there currently shows engines are straight up changing how pixel rendering even works now, making it less about fill rate and the number of ROPs/shaders.
 

Rubim

Member
A direct API will be needed for UE5's Nanite tech; maybe they could create a very large RAM streaming pool, or just cut down on asset quality, or maybe you insert elevators and doors?


According to who?

Digital Foundry made a video about this, with compiled versions of UE5.
The reason why there was never a solution before is: consoles.

PCs will never hold back nor advance the industry.
Consoles are the only ones capable of driving the industry forward, and also the only ones that can hinder it.

It's odd that the PS5 version of Resident Evil Village uses the super fast loading but the Xbox version doesn't. I'm gonna assume it has to do with: not wanting to deal with variables from PC/Xbox.

The worst I can think of is: DirectStorage is not really ready.
 

winjer

Gold Member

Just recently we saw the PC forging the gaming future with DLSS, Ray-tracing, Variable Refresh Rate, Mesh Shaders, etc.
And you think the PC can't advance the industry?
 

M1chl

Currently Gif and Meme Champion
Absolutely false; case in point: Nvidia Turing, NVMe SSDs (as in, HW solutions), whatever else. Consoles are based on PC HW, more often than not already in stores, or at least with features already in stores. If you meant "consoles drive adoption", then yes, that's true. However, one of the biggest breakthroughs in gaming, DLSS, is now adopted at a high rate, and until the next generation of consoles at least, we are not going to see it there. That much is clear. Although the Unreal TSR is a sort of nice in-between.

Yes, you can see some innovation with the PS5 in the form of the ASIC for decompressing data, which I don't think you can meaningfully deploy to an open platform without making it multi-platform in terms of SDKs. So an ASIC in this form isn't really viable, and PC has to carve out a general approach to this type of problem. It's definitely nice, but since Nvidia has had this tech in their professional architectures for quite some time, it's not something groundbreaking; it's just now available to everyday people.
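For scale on that decompression point, taking the figures Cerny quoted in the Road to PS5 talk (5.5 GB/s raw, 8-9 GB/s typical after Kraken decompression; the midpoint below is my own assumption):

```cpp
// Quick arithmetic on the publicly quoted Road to PS5 throughput figures.
// 5.5 GB/s raw NVMe bandwidth; 8-9 GB/s typical effective after the hardware
// Kraken decompressor. The 8.5 midpoint is an assumption for illustration.
#include <cstdio>

int main()
{
    const double rawGBs     = 5.5;                 // raw SSD bandwidth
    const double typicalGBs = 8.5;                 // midpoint of the quoted 8-9
    const double ratio      = typicalGBs / rawGBs; // ~1.55x effective gain

    std::printf("effective compression gain: %.2fx\n", ratio);
    // Doing that decompression in software instead of a dedicated ASIC is the
    // CPU cost the post alludes to; the talk likened it to several Zen 2 cores.
}
```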
 

LiquidMetal14

hide your water-based mammals
I thought this was new news for a second.

Unfortunately I haven't really seen the benefits of this yet; maybe all the games going forward will start taking advantage of it.

I'm sure if there were games that supported it, they would surely add it to the bullet points.
 
I don't get it?

Microsoft are using AMD cards but are helping Nvidia? Wut? AMD can't be over the moon with this move from MS?

Is it a return to MS+Nvidia? Will we see MS+Nvidia vs Sony+AMD?!

So many questions
It's all internal politics. The guys at AMD are just the console division, which prolly doesn't talk much with the PC CPU teams.

Regardless, it would be interesting to know if I'll need a new NVMe drive to utilize this direct I/O tech with my 3090.
 