
Godfall Dev Expects PS5 SSD To Be ‘Biggest Overall Game-Changer’ In Next Few Years

Gamerguy84

Member
PC hasn't fully exploited the tech either. No one has yet. But it's coming.

Solid State Drives were revolutionary for the PC market, providing immense improvements to overall system responsiveness. Games benefited mostly in the form of faster installation and level load times, but fast storage also helped reduce stalls and stuttering when a game needs to load data on the fly. In recent years, NVMe SSDs have provided speeds that are on paper several times faster than what is possible with SATA SSDs, but for gamers the benefits have been muted at best. Conventional wisdom holds that there are two main causes to suspect for this disappointment: First, almost all games and game engines are still designed to be playable off hard drives because current consoles and many low-end PCs lack SSDs. Game programmers cannot take full advantage of NVMe SSD performance without making their games unplayably slow on hard drives. Second, SATA SSDs are already fast enough to shift the bottleneck elsewhere in the system, often in the form of data decompression. Something aside from the SSD needs to be sped up before games can properly benefit from NVMe performance.
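The decompression bottleneck mentioned above is easy to illustrate: a single CPU thread often cannot decompress data as fast as an NVMe drive can deliver it. A minimal sketch, using Python's zlib purely as a stand-in for a real game codec (the new consoles offload this work to dedicated hardware):

```python
import time
import zlib

# Purely illustrative: measure single-threaded zlib decompression throughput.
payload = bytes(range(256)) * 4096            # ~1 MiB of compressible data
compressed = zlib.compress(payload, level=6)

rounds = 200
start = time.perf_counter()
for _ in range(rounds):
    zlib.decompress(compressed)
elapsed = time.perf_counter() - start

mb_out = len(payload) * rounds / 1e6
print(f"decompressed {mb_out:.0f} MB in {elapsed:.2f} s "
      f"({mb_out / elapsed:.0f} MB/s on one thread)")
```

On a typical desktop CPU this lands well below what a PCIe 4.0 NVMe drive can read sequentially, which is why the drive alone is not the whole story.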

Microsoft and Sony are addressing both of those issues with their upcoming consoles. Game developers will soon be free to assume that users will have fast storage, both on consoles and on PCs. In addition, the new generation of consoles will add extra hardware features to address bottlenecks that would be present if they were merely mid-range gaming PCs equipped with cutting-edge SSDs.

The full article from AnandTech is interesting.
 
Quick Resume is so handy. But I wish Skyrim would stay at 60fps boost mode when I resume after a few days... That shit needs fixing. But when it works, damn, it's like magic. Meanwhile those ever-helpful cards on PS5 don't help when you're playing It Takes Two and your mate presses Square to resume and it restarts that entire chapter... They definitely didn't think that one through... Best thing about PS5 is Share Screen! All of us in a party watching someone fight a Souls boss is pretty special.
Yep, Quick Resume on Xbox is for PR only. It doesn't work as intended: more often than not it resumes the game with crashes or performance problems. What's on PS5 (activity cards) may be a less interesting feature on paper, but it actually works reliably.

I remember reading one of the DF guys saying they never used the suspend feature on XB1 because it caused too many problems, while they had no problems with the PS5's suspend/resume feature.
 

Elog

Member
a) You should look DirectStorage up
b) PC already has games with 4k and 8k textures that run just fine even without magical SSD or DirectStorage
you are not describing anything incredibly new or revolutionary. A faster SSD is still dozens upon dozens of times slower than RAM. It certainly helps the system, but let's stop with the "game changer" or "new era" BS. Sure, there will be shorter hidden loading times, much shorter if you want, but they will always be there, and the demonstration is exactly in Ratchet & Clank: the "tunnel" serves to hide the loading time, which unfortunately still exists. And since it takes loading time to empty and refill the RAM completely and then allow the GPU to render, as in Ratchet & Clank, we go back to the old concept of developers knowing how to hide loading, which can be done on any machine from PC to Xbox. If the PS5 concept had been to switch completely from one full-load situation to another so fast that there would be a distinct difference from the competition (and it seems that in full-load situations it cannot), that would be one thing; but having seen the Ratchet tunnel suggests that on another machine it would be enough to stay in the tunnel for one or two seconds more to get practically the same result, leaving the gaming experience intact.
You are both still missing the point of how limiting the VRAM pool is to what visuals you can display on a screen.

Firstly, we need to ask ourselves how many textures one actually needs for great graphics. Let us use high-quality CGI scenes in the movie industry as an example. In Avatar, even a single model (e.g. an animal) uses around 100 different textures. In other words, while I do not know the exact amount, it is fair to assume that one of those scenes in the tropical forest includes 1000+ textures in a single frame. This is of course impossible to use and display in any current PC/console environment. The point is that we are very far from being able to use an optimal amount of textures in gaming due to hardware limitations.

Secondly, does texture quality matter? The amount of awe people feel when they watch UE4 with 4K to 8K textures is stunning. Here, however, one can only do that with a GPU with a very large VRAM pool and in a limited environment, since the system otherwise runs out of VRAM (it is not random that the landscapes are very homogeneous, such as a craggy rocky landscape, when demoing these things). So yes - the visual impact is crazy good. And people are still barking up that resolution tree thinking that 1800p to 4K matters much...

Thirdly, increasing texture resolution puts very little strain on your GPU. The challenge is that the textures need to be in VRAM.

Fourthly, 4K textures are 50-100 MB; 8K textures, 100-200 MB. Let's, for the sake of this argument, say that truly high-resolution textures average 100 MB. Going back to that Avatar CGI frame, we are talking about a VRAM requirement of more than 100 GB (1000 textures at 100 MB each). Now everyone can see the challenge.
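The arithmetic in that fourth point can be checked directly (all figures are the rough assumptions stated above, not measurements):

```python
# Assumed figures from the argument above - rough estimates, not measurements.
texture_size_mb = 100      # assumed average "truly high-resolution" texture
textures_per_frame = 1000  # assumed texture count for an Avatar-like CGI frame

vram_needed_gb = texture_size_mb * textures_per_frame / 1000
print(f"naive VRAM requirement: {vram_needed_gb:.0f} GB")  # → 100 GB
```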

The solution is both to increase the amount of VRAM and to allow very quick swapping of the textures that currently reside in the VRAM pool. The issue in the PC world is latency. Texture files are small, and multiple files need to be moved every second for the practical VRAM pool to be really expanded. The Microsoft platform plainly sucks in this department. It is not as if MS is unaware of it, but the Windows platform needs to maintain backward compatibility, and the file system is one of the most basic functions of any OS. If anyone is unsure how badly the Windows file system sucks (it is immensely slow), just remind yourself how slow a file search is on your PC or, if you are an advanced user, compare copying a large number of files from A to B in Windows with doing the same in Linux on the same hardware - you will see a 20-50x difference in speed.

So while MS is trying hard to create a faster system, they are light-years behind due to system BC. DirectStorage is simply a patchwork fix. Sony changed its file management system on the PS5 to allow even faster access and took away all driver overhead when moving files from the storage device to (V)RAM along a 100% hardware path - the I/O complex. The Xbox still fundamentally has the Windows file system, with significant driver overhead when moving files. And latency rather than throughput will be the key driver of how much you expand the (V)RAM pool through texture streaming from an SSD.

I think this is super exciting for gamers and of course the PC world will move in this direction as well. It is not random that Nvidia spent a lot of time on texture I/O when releasing their latest cards.

What does this mean for Xbox and PS5? Well, I expect third party titles to more or less look the same on both since increasing the amount of textures used in environments will require a lot of extra work and a game needs to work on 4GB VRAM cards. So the consoles should - if utilised - be able to use higher resolution textures than the PC counterparts but I do not expect a big difference between the consoles themselves. However, I expect Sony first party titles to utilise a larger amount of textures at a higher resolution than what is currently possible on any other platform. I believe we saw a taste of that both in DS and in the UE5 demo.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Anyone interested in AAA gaming on PC, or in high FPS, will be there or above. Yes, most PC gamers are not, but how many of them play indies, Fortnite, etc.? They don't care about playing AAA games on ultra. Build the machine for what you want to do.
… games tend not to be designed for this PCMR group because of how small it is, despite how they preach PC as if it were a universal box that had the same always-high specs for everyone.
 

yamaci17

Member
You are both still missing the point of how limiting the VRAM pool is to what visuals you can display on a screen.

Firstly, we need to ask ourselves how many textures one actually needs for great graphics. Let us use high-quality CGI scenes in the movie industry as an example. In Avatar, even a single model (e.g. an animal) uses around 100 different textures. In other words, while I do not know the exact amount, it is fair to assume that one of those scenes in the tropical forest includes 1000+ textures in a single frame. This is of course impossible to use and display in any current PC/console environment. The point is that we are very far from being able to use an optimal amount of textures in gaming due to hardware limitations.

Secondly, does texture quality matter? The amount of awe people feel when they watch UE4 with 4K to 8K textures is stunning. Here, however, one can only do that with a GPU with a very large VRAM pool and in a limited environment, since the system otherwise runs out of VRAM (it is not random that the landscapes are very homogeneous, such as a craggy rocky landscape, when demoing these things). So yes - the visual impact is crazy good. And people are still barking up that resolution tree thinking that 1800p to 4K matters much...

Thirdly, increasing texture resolution puts very little strain on your GPU. The challenge is that the textures need to be in VRAM.

Fourthly, 4K textures are 50-100 MB; 8K textures, 100-200 MB. Let's, for the sake of this argument, say that truly high-resolution textures average 100 MB. Going back to that Avatar CGI frame, we are talking about a VRAM requirement of more than 100 GB (1000 textures at 100 MB each). Now everyone can see the challenge.

The solution is both to increase the amount of VRAM and to allow very quick swapping of the textures that currently reside in the VRAM pool. The issue in the PC world is latency. Texture files are small, and multiple files need to be moved every second for the practical VRAM pool to be really expanded. The Microsoft platform plainly sucks in this department. It is not as if MS is unaware of it, but the Windows platform needs to maintain backward compatibility, and the file system is one of the most basic functions of any OS. If anyone is unsure how badly the Windows file system sucks (it is immensely slow), just remind yourself how slow a file search is on your PC or, if you are an advanced user, compare copying a large number of files from A to B in Windows with doing the same in Linux on the same hardware - you will see a 20-50x difference in speed.

So while MS is trying hard to create a faster system, they are light-years behind due to system BC. DirectStorage is simply a patchwork fix. Sony changed its file management system on the PS5 to allow even faster access and took away all driver overhead when moving files from the storage device to (V)RAM along a 100% hardware path - the I/O complex. The Xbox still fundamentally has the Windows file system, with significant driver overhead when moving files. And latency rather than throughput will be the key driver of how much you expand the (V)RAM pool through texture streaming from an SSD.

I think this is super exciting for gamers and of course the PC world will move in this direction as well. It is not random that Nvidia spent a lot of time on texture I/O when releasing their latest cards.

What does this mean for Xbox and PS5? Well, I expect third party titles to more or less look the same on both since increasing the amount of textures used in environments will require a lot of extra work and a game needs to work on 4GB VRAM cards. So the consoles should - if utilised - be able to use higher resolution textures than the PC counterparts but I do not expect a big difference between the consoles themselves. However, I expect Sony first party titles to utilise a larger amount of textures at a higher resolution than what is currently possible on any other platform. I believe we saw a taste of that both in DS and in the UE5 demo.
except it doesn't have to work on 4 gb vram cards. 2 gb vram cards quickly became obsolete once developers started to feel comfortable with the 8 gb vram of the xbone/ps4

even 4 gb cards have had frametime issues in the last 2-3 years

and now the new-gen consoles are coming to bury the trusty old 8 gb vram cards
 

Elog

Member
except it doesn't have to work on 4 gb vram cards. 2 gb vram cards quickly became obsolete once developers started to feel comfortable with the 8 gb vram of the xbone/ps4

even 4 gb cards have had frametime issues in the last 2-3 years

and now the new-gen consoles are coming to bury the trusty old 8 gb vram cards
The 'average' card on Steam is a 4-6GB card as of March 2021. My point is that if a title is to be released on PC, it is still a bit rough to design it so that it cannot run on 4GB cards - that is a lot of market out the window, and hence most developers still want their code to run on a card such as the 1050 Ti.
 

yamaci17

Member
The 'average' card on Steam is a 4-6GB card as of March 2021. My point is that if a title is to be released on PC, it is still a bit rough to design it so that it cannot run on 4GB cards - that is a lot of market out the window, and hence most developers still want their code to run on a card such as the 1050 Ti.
are we talking about eye candy aaa games or optimized multiplayer games?
 

sinnergy

Member
Yep Quick resume on Xbox is for PR only. It doesn't work as intended (as it should resume the game without more often than not crashes or performance problems). What's on PS5 (activity cards) may be a less interesting feature on paper, but it actually works in a reliable way.

I remember reading one of the DF guy saying they never used the suspend feature on XB1 because it caused too much problems while they had no problems with PS5 suspend / resume feature.
I use it all the time without errors... Do you have a Series console? What is your experience?
 

yamaci17

Member
proof that the ps5/xbox cpu will also decimate a 3700x in the future, and perform a tad bit better than a 5800x:

it's simple, special console "sauce" makes the same cpu perform TWO times better than on PC.

same sauce that will be used for the 3.6 ghz zen 2 in the ps5/sx. so 3-4 years from now, you will need a cpu that is at minimum 2 times faster to achieve what the ps5 achieves

the zen 3 5800x is already approximately 45-50% faster than the consoles, but it's not enough. zen 5 will probably be 100% faster than the ps5 and will perform equally, due to the special console CPU and GPU sauce, low-level api and yadda yadda.




tbh i hate the performance disparities between consoles and pc myself. but they are there.

a ps5 cannot be simplified as "3.5 ghz 3700x and 2070s-equivalent gpu". it's more than that, simple as that. that special whatever sauce they have on consoles makes games run. take the xbox one, jesus. that's a device with 1.3 tflops and ddr3 ram accompanied by weird fast esram, and it can run RDR2 at 900p 30 fps with a playable experience, on low-powered 1.6 ghz jaguar cores.

you simply can't replicate the same performance on pc. you need overpowered, brute-force cpus and gpus to match these specs. whether you like it, accept it, or deny it, the truth won't change

a ryzen 3600 and an rtx 2080 ti is literally what it takes to double what the xbox one x can provide in red dead redemption 2: 4k 30 fps vs 4k 60 fps. and that's with optimized xbox one x settings.
 

longdi

Banned
ssd is such a huge leap from hdd, so no surprises once game engines are designed around ssd. nothing to do with the ps5 exclusively.

besides, this godfall guy is known for hyperbole, sailing wherever the wind blows. as posted, he's helping amd hype the 16gb rdna2 gpu
 
Last edited:

Md Ray

Member
lol, a gtx 770 was 2 times faster than a ps4

guess what happened with rdr2? the ps4 ran it MILES better.

what is 2 times faster than ps5? maybe rtx 3090.

4 years from now, you may see a ps5 perform equal to a 3080/3090 in an aaa game. if you don't believe me, check out how horribly the gtx 780 runs horizon zero dawn, despite supposedly being 3-4 times faster than a ps4



at ps4 settings (medium) and resolution, it can't even lock to 30 frames. crazy, huh? so much for the high end.

the ps5 gpu will outpace any gpu released in 2021, simple as that. you will need at least 1.5-2 times the performance of a ps5 to keep up with it in terms of performance, in the FUTURE.

the ps4 was practically a competitor to the 750ti when it WAS released.



things change. the ps5 seems equivalent to an rtx 2070/2070s today, but that will change. it always changes. consoles always have superior optimization, and this time, unlike with the ps4, the ps5 actually has a respectable gpu - unlike the 750ti-equivalent ps4, which still managed to outpace a 770 easily

I think this is in part due to the 770 and 780's Kepler architecture being inferior to AMD's GCN. DX12/Vulkan low-level APIs further exposed hardware features on the AMD side that made them pull further away from NVIDIA once devs started to leave DX11 behind. The biggest thing, IMO, that hurt the Kepler and, to some degree, even the Maxwell architecture over the course of the PS4/XB1 gen was the lack of hardware async compute. Mind you, the Radeon 7870, on which the PS4 GPU was based, had 2 async compute engines, whereas Cerny felt the need to customize the PS4 GPU to have 8 async compute engines; he was a big believer in the ACEs' capability, and it paid dividends. AMD later incorporated 8 async compute engines into the R9 290/290X.

I think the situation is a bit different now with Ampere and RDNA 2, and I feel Ampere will likely age better than Kepler/Maxwell did. Sure, there's a driver-overhead problem rearing its ugly head on the NVIDIA side right now, but it's something that matters only when the CPU it's paired with is slower, and it can be mitigated to a certain degree by being GPU-bound.
 
Last edited:

yamaci17

Member
And is the 3080 struggling in your own source? Is there 12GB on this card?
it's not, but i bet it's at its limits

it also has a whopping 760 gb/s of bandwidth and that probably helps

the game actually runs fine in actual gameplay, but after 10-15 mins it starts to drop frames. sometimes it happens quicker, sometimes slower. i provided two videos as proof: my video proves that it can happen quickly, the other shows that it can happen after 10 mins of gameplay

in this video it can be observed that the 3080 gets very weird frame drops

 
Last edited:
The first game leveraging this is Demon's Souls. It's the best-looking next-gen game, and I predict no game on Xbox Series X will look like this during the whole generation.

I am talking specifically about density of world: polygons and textures.

So you think no game that Xbox puts out for the next 5-7 years will look better than Demon's Souls?

Interesting prediction ..... 😆
 
Last edited:

Flutta

Banned
Games currently load faster on XSX, so yeah, he's seeing it. And with Quick Resume, they load faster than any future game that doesn't employ QR will load.
You still don't get it, do you? Not surprised. Go compare games made by Sony's first party with third-party cross-gen games and then come back and say that again. We are talking about in-game loading; Quick Resume has nothing to do with the subject at hand.
 

Concern

Member
Yep Quick resume on Xbox is for PR only. It doesn't work as intended (as it should resume the game without more often than not crashes or performance problems). What's on PS5 (activity cards) may be a less interesting feature on paper, but it actually works in a reliable way.

I remember reading one of the DF guy saying they never used the suspend feature on XB1 because it caused too much problems while they had no problems with PS5 suspend / resume feature.


Must be one of "those" guys who claimed to have both consoles in that poll where the Xbox still won as the preferred console among actual users who own both 🤣🤣
 
So you think no game that Xbox puts out for the next 5-7 years will look better than Demon's Souls?

Interesting take .....
Well "look better" is very subjective. What I claim is that no game on Xbox will have the same world density in a semi-open world as Demon's souls. Because All Xbox exclusives have to run on PC using HDDs so they'll never be able to 100% exploit pcie 4.0 + M.2 speeds (and low latency) to dynamically stream stupidely high res textures and assets "just before the corner" the way they do it in Demon's souls.
 

MonarchJT

Banned
You are both still missing the point of how limiting the VRAM pool is to what visuals you can display on a screen.

Firstly, we need to ask ourselves how many textures one actually needs for great graphics. Let us use high-quality CGI scenes in the movie industry as an example. In Avatar, even a single model (e.g. an animal) uses around 100 different textures. In other words, while I do not know the exact amount, it is fair to assume that one of those scenes in the tropical forest includes 1000+ textures in a single frame. This is of course impossible to use and display in any current PC/console environment. The point is that we are very far from being able to use an optimal amount of textures in gaming due to hardware limitations.

Secondly, does texture quality matter? The amount of awe people feel when they watch UE4 with 4K to 8K textures is stunning. Here, however, one can only do that with a GPU with a very large VRAM pool and in a limited environment, since the system otherwise runs out of VRAM (it is not random that the landscapes are very homogeneous, such as a craggy rocky landscape, when demoing these things). So yes - the visual impact is crazy good. And people are still barking up that resolution tree thinking that 1800p to 4K matters much...

Thirdly, increasing texture resolution puts very little strain on your GPU. The challenge is that the textures need to be in VRAM.

Fourthly, 4K textures are 50-100 MB; 8K textures, 100-200 MB. Let's, for the sake of this argument, say that truly high-resolution textures average 100 MB. Going back to that Avatar CGI frame, we are talking about a VRAM requirement of more than 100 GB (1000 textures at 100 MB each). Now everyone can see the challenge.

The solution is both to increase the amount of VRAM and to allow very quick swapping of the textures that currently reside in the VRAM pool. The issue in the PC world is latency. Texture files are small, and multiple files need to be moved every second for the practical VRAM pool to be really expanded. The Microsoft platform plainly sucks in this department. It is not as if MS is unaware of it, but the Windows platform needs to maintain backward compatibility, and the file system is one of the most basic functions of any OS. If anyone is unsure how badly the Windows file system sucks (it is immensely slow), just remind yourself how slow a file search is on your PC or, if you are an advanced user, compare copying a large number of files from A to B in Windows with doing the same in Linux on the same hardware - you will see a 20-50x difference in speed.

So while MS is trying hard to create a faster system, they are light-years behind due to system BC. DirectStorage is simply a patchwork fix. Sony changed its file management system on the PS5 to allow even faster access and took away all driver overhead when moving files from the storage device to (V)RAM along a 100% hardware path - the I/O complex. The Xbox still fundamentally has the Windows file system, with significant driver overhead when moving files. And latency rather than throughput will be the key driver of how much you expand the (V)RAM pool through texture streaming from an SSD.

I think this is super exciting for gamers and of course the PC world will move in this direction as well. It is not random that Nvidia spent a lot of time on texture I/O when releasing their latest cards.

What does this mean for Xbox and PS5? Well, I expect third party titles to more or less look the same on both since increasing the amount of textures used in environments will require a lot of extra work and a game needs to work on 4GB VRAM cards. So the consoles should - if utilised - be able to use higher resolution textures than the PC counterparts but I do not expect a big difference between the consoles themselves. However, I expect Sony first party titles to utilise a larger amount of textures at a higher resolution than what is currently possible on any other platform. I believe we saw a taste of that both in DS and in the UE5 demo.
You say a lot of right things, and there is no doubt that both manufacturers have been trying to get the same results by taking different paths. Sony invested more in I/O while trying to get more "bang for the buck" out of the lower-specced GPU. MS did the same with the parts reversed: they went with a higher-spec GPU while smartly trying to get more from the I/O. But let me say one thing: if we are to believe and trust what Cerny explained, then for a level playing field we must do the same with what Microsoft's engineers say. Microsoft worked a lot on optimizing texture streaming in particular, precisely to get more bang for its buck
Microsoft simply found a way to do the same thing without brute-forcing,
and the Xbox too can stream directly from the SSD
 
Last edited:

longdi

Banned
You say a lot of right things, and there is no doubt that both manufacturers have been trying to get the same results by taking different paths. Sony invested more in I/O while trying to get more "bang for the buck" out of the lower-specced GPU. MS did the same with the parts reversed: they went with a higher-spec GPU while smartly trying to get more from the I/O. But let me say one thing: if we are to believe and trust what Cerny explained, then for a level playing field we must do the same with what Microsoft's engineers say. Microsoft worked a lot on optimizing texture streaming in particular, precisely to get more bang for its buck

Microsoft simply found a way to do the same thing without brute-forcing,
and the Xbox too can stream directly from the SSD


wow, reminds me of ps2 vs gc.
ps2 had big raw fillrates, but those get eaten up once you apply multitexturing in your game, whereas the gc had some hardware multitexturing acceleration.

that's why ps2 games that use its huge fillrate all look dull and single-textured
 

Elog

Member
You say a lot of right things, and there is no doubt that both manufacturers have been trying to get the same results by taking different paths. Sony invested more in I/O while trying to get more "bang for the buck" out of the lower-specced GPU. MS did the same with the parts reversed: they went with a higher-spec GPU while smartly trying to get more from the I/O. But let me say one thing: if we are to believe and trust what Cerny explained, then for a level playing field we must do the same with what Microsoft's engineers say. Microsoft worked a lot on optimizing texture streaming in particular, precisely to get more bang for its buck

Microsoft simply found a way to do the same thing without brute-forcing,
and the Xbox too can stream directly from the SSD
Just to make it clear - MS has worked very hard to optimize this process, and hence the XSX will be vastly superior to a PC in this regard. However, it is still a software-based solution with driver overhead. This increases latency, which will be dominant when multiple small files are involved per unit of time. However, their end goal has always been to create a solution under the DirectX umbrella within the framework of the MS file system, and without a doubt they have created a great solution in light of those two boundary conditions. This system latency is probably the largest percentage difference between the two consoles (XSX and PS5). Maybe my information is wrong (I do not sit in any development environment myself), but I expect the info I have received to be accurate.

The latency links into James Stanard's comment, "This fundamentally amplifies our memory size..." - that amplification is a function of throughput and latency - and latency is actually completely dominant once you are over a data throughput threshold (which both consoles are).
 

phil_t98

#SonyToo
The first game leveraging this is Demon's souls. It's the best looking next gen game and I predict no game on Xbox Series X will look like this during the whole generation.

I am talking specifically about density of world: polygons and textures.
What makes you think that especially? Such a weird post
 

yamaci17

Member
SSD in any console is a game changer
I agree; another overlooked aspect of SSDs is their read latencies and seek times

an ssd can find files very, very quickly.

i did a simple benchmark with an ssd, and the random 4kb read latency was 500 microseconds. with a harddisk, it was 0.2 seconds. this is why some games hitch and stutter with hdds: hdds are very bad at finding files when the game requests them, but an ssd is so quick that it will find the file before the cpu/gpu processes the frame - no hitches or stutters induced by the ssd
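For anyone who wants to reproduce a benchmark along those lines, a rough sketch (the file name, file size, and sample count are arbitrary choices; note the OS page cache will flatter the numbers unless you drop caches or use O_DIRECT):

```python
import os
import random
import statistics
import time

PATH = "latency_test.bin"          # arbitrary scratch file name
SIZE = 64 * 1024 * 1024            # 64 MiB scratch file
BLOCK = 4096                       # 4 KiB reads, as in the benchmark above

# Create the scratch file.
with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

# Time random 4 KiB reads at block-aligned offsets.
samples = []
with open(PATH, "rb", buffering=0) as f:
    for _ in range(100):
        offset = random.randrange(SIZE // BLOCK) * BLOCK
        t0 = time.perf_counter()
        f.seek(offset)
        data = f.read(BLOCK)
        samples.append(time.perf_counter() - t0)

os.remove(PATH)
print(f"median 4 KiB read: {statistics.median(samples) * 1e6:.0f} µs")
```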

the hdd's raw speed is not the only problem; its high latency and bad seek times are another.

i also read that developers place lots of duplicate files so that harddisks can cope better when finding files. with ssds, this seems to be changing; for example, rainbow six used to take up 100 gb+ and now only takes up 47 gb, but it loads much, much slower on HDDs (it used to be somewhat bearable on an hdd in the past). cyberpunk is another example at 65 gb with a huge open world and tons of assets.
 
Sony had better hope this is true. I think by the end of the gen the PS5 will be viewed as a failure unless Cerny really did have an ace up his sleeve with this thing that really will set PS5 exclusives apart.

As a PS5 owner, there are some benefits, but Game Pass really is starting to bring on buyer's remorse, considering the main argument against choosing the XSX is the PS5 exclusives, of which there currently is... one.

2 by the end of April, 3 by the end of May and 4 by the end of June. That's a steady clip.
 

Dampf

Member
...but we still play old-gen games, with no sign of true next-gen games in sight.
Yeah, it annoys the heck out of me. At least give us true next-gen gameplay and not some pre-rendered trailer.

I want to see what next gen is capable of. The only thing coming close to that is the UE5 demo but that's 1 year old now..
 

Md Ray

Member
You are both still missing the point and how limiting the VRAM pool is to what visuals you can display on a screen.

Firstly, we need to ask ourselves how many textures one actually need for great graphics. Let us use high quality CGI scenes in the movie industry as an example. In Avatar even a single model (e.g. an animal) uses around 100 different textures. In other words, while I do not know the exact amount it is fair to assume that one of those scenes in the tropical forest includes 1000+ textures in a single frame. This is of course impossible to use and display in any current PC/console environment. The point is that we are very far from being able to use an optimal amount of textures in gaming due to hardware limitations.

Secondly, does texture quality matter? The amount of awe people feel when then watch UE4 with 4K to 8K textures is stunning. Here however, one can only do that with a GPU with very high VRAM pool and in a limited environment since the system otherwise runs out of VRAM (it is not random that the landscapes are very homogenous such as a craggy rocky landscape when demoing these things). So yes - the visual impact is crazy good. And people are still barking up that resolution tree thinking that 1800p to 4K matters much...

Thirdly, increasing texture resolution puts very little strain on your GPU. The challenge is that the textures need to be in VRAM.

Fourthly, 4K textures are 50-100MB and 8K textures 100-200MB. Let's for the sake of this argument say that truly high-resolution textures average 100 MB. Going back to that Avatar CGI frame, we are talking about a VRAM requirement of more than 100 GB (1000 textures at 100 MB each). Now everyone can see the challenge.
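The back-of-the-envelope arithmetic above can be checked in a few lines. Both figures are the post's own rough assumptions, not measured data:

```python
# VRAM estimate for a film-quality frame, using the post's assumptions:
# ~1000 textures visible per frame, ~100 MB per high-resolution texture.
AVG_TEXTURE_MB = 100       # assumed average size of a "truly high-res" texture
TEXTURES_PER_FRAME = 1000  # assumed texture count for a dense CGI frame

total_gb = AVG_TEXTURE_MB * TEXTURES_PER_FRAME / 1024  # MB -> GB (binary)
print(f"Estimated VRAM needed: {total_gb:.1f} GB")  # ~97.7 GB, far beyond any consumer GPU
```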

The solution is both to increase the amount of VRAM and to allow for very quick swapping of the textures that currently reside in the VRAM pool. The issue in the PC world is latency. Texture files are small, and multiple files need to be moved every second for the practical VRAM pool to be really expanded. The Microsoft platform plainly sucks in this department. It is not as if MS is not aware of it, but the Windows platform needs to maintain backward compatibility, and the file system is one of the most basic functions of any OS. If anyone is unsure how much the Windows file system sucks (it is immensely slow), just remind yourself how "fast" a file search is on your PC, or, if you are an advanced user, compare copying a large number of files from A to B in Windows with doing the same in Linux on the same hardware - you will see a 20-50x difference in speed.

So while MS is trying hard to create a faster system, they are light years behind due to system BC. DirectStorage is simply a patchwork fix. Sony changed its file management system on the PS5 to allow even faster access and took away all driver overhead when moving files from the storage device to the (V)RAM along a 100% hardware path - the I/O complex. The Xbox still fundamentally has the Windows file system, with significant driver overhead when moving files. And latency, rather than throughput, will be the key driver of how much you can expand the (V)RAM pool through texture streaming from an SSD.
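The latency-versus-throughput point can be illustrated with a rough sketch: moving the same amount of data as thousands of small reads versus one large read. The file and chunk sizes here are arbitrary, and absolute numbers will vary wildly with OS and cache state; the per-request overhead dominating the small-read case is the effect being illustrated.

```python
# Contrast throughput with latency: read the same 16 MiB as many small
# requests vs. one large request. Per-request overhead, not raw bandwidth,
# tends to dominate the small-read case - the post's point about streaming
# many small texture tiles.
import os
import tempfile
import time

def time_reads(path, chunk_size):
    """Read the whole file in chunk_size pieces, returning elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MiB test file
    path = f.name

try:
    small = time_reads(path, 4 * 1024)          # thousands of 4 KiB requests
    large = time_reads(path, 16 * 1024 * 1024)  # one large request
    print(f"4 KiB chunks: {small:.4f}s, single read: {large:.4f}s")
finally:
    os.remove(path)
```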

I think this is super exciting for gamers and of course the PC world will move in this direction as well. It is not random that Nvidia spent a lot of time on texture I/O when releasing their latest cards.

What does this mean for Xbox and PS5? Well, I expect third party titles to more or less look the same on both since increasing the amount of textures used in environments will require a lot of extra work and a game needs to work on 4GB VRAM cards. So the consoles should - if utilised - be able to use higher resolution textures than the PC counterparts but I do not expect a big difference between the consoles themselves. However, I expect Sony first party titles to utilise a larger amount of textures at a higher resolution than what is currently possible on any other platform. I believe we saw a taste of that both in DS and in the UE5 demo.
Quality post right here.
 

Riky

$MSFT
You say a lot of things right, and there is no doubt that both manufacturers have been trying to get the same results by taking different paths. Sony has invested more in I/O while trying to get more "bang for the buck" out of the lower-specced GPU. MS did the same with the parts reversed: it went with a higher-spec GPU and is trying to smartly get more from the I/O. But let me say one thing: if we are to believe and trust what Cerny explained and said, then for a level playing field we must do the same with what Microsoft's engineers say. Microsoft worked a lot to optimize the streaming of textures in particular, precisely to get more bang for its buck.

Microsoft simply found a way to do the same thing without brute-forcing it, and the Xbox too can stream directly from SSD.


Exactly, Xbox is betting on hardware support for SFS and Mesh Shaders.
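For readers unfamiliar with the concept, the core idea behind sampler feedback streaming can be shown as a toy model (the names and data shapes here are hypothetical illustrations, not the DirectX 12 Ultimate API): the GPU records which mip level each texture region was actually sampled at, and the streamer loads only tiles that are missing or too coarse, instead of whole mip chains.

```python
# Toy model of feedback-driven texture streaming (hypothetical names).
# feedback: mip level the GPU actually wanted per tile this frame.
# resident: mip level currently loaded per tile (absent = nothing loaded).
def tiles_to_stream(feedback, resident):
    """Return the tiles whose requested mip is finer (lower number) than
    what is resident - the only data worth pulling off the SSD."""
    return {tile: mip for tile, mip in feedback.items()
            if mip < resident.get(tile, float("inf"))}

# Frame N: tile 7 now needs mip 0, tile 9 needs mip 2, tile 3 is already fine.
feedback = {7: 0, 9: 2, 3: 4}
resident = {7: 3, 3: 4}  # tile 9 is not loaded at all yet
print(tiles_to_stream(feedback, resident))  # {7: 0, 9: 2}
```

The same idea works regardless of platform; the hardware support being debated is about how cheaply the feedback map is produced, not the streaming logic itself.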
 

yamaci17

Member
Exactly, Xbox is betting on hardware support for SFS and Mesh Shaders.
btw nvidia secretly gimped SFS for their Ampere cards, so those cards won't be able to take full advantage of SFS when they run out of VRAM



just like the fake, pseudo async compute Pascal had, Ampere and Turing probably don't have proper sampler feedback. I look forward to when this tech actually starts to get implemented in games. Ampere will probably fall behind Xbox/RDNA2 in terms of this tech

nvidia, the way it's meant to be gimped
 

Panajev2001a

GAF's Pleasant Genius
btw nvidia secretly gimped SFS for their Ampere cards, so those cards won't be able to take full advantage of SFS when they run out of VRAM



just like the fake, pseudo async compute Pascal had, Ampere and Turing probably don't have proper sampler feedback. I look forward to when this tech actually starts to get implemented in games. Ampere will probably fall behind Xbox/RDNA2 in terms of this tech

nvidia, the way it's meant to be gimped

I hope so. nVIDIA marketing dollars are strong on PC, so PC devs will likely not push for features that make nVIDIA cards look bad :/…
 
The parallel between them is that both were bullshit.

SSDs are useful for games, but the way Cerny claimed they were superior to modern SSDs was bullshit.

Of course SSDs are useful - faster loading times - just as Kinect was, at minimum, a camera.

But Mattrick saying Kinect was the core of the Xbox and that it could not be sold without it is like Cerny saying he invented the super duper SSD.
I can almost feel this post gave me a healthy dose of Coronavirus.
God damn.
My man, just what in the ever living fuck are you talking about.
 

FrankWza

Member
Must be one of "those" guys who claimed to have both consoles in that poll where the Xbox still won as the preferred console among actual users with both 🤣🤣
You’re so used to crying about x threads getting shitposted that you’re now crying about a PS thread getting shitposted. Poor lil bro
 

Guilty_AI

Member
TL;DR: the goalposts got moved from "Only the PS5 can do Y" to "The PS5 and XSX can do Y better than an equivalent PC".
No shit, Sherlock.
 

yamaci17

Member
I hope so. nVIDIA marketing dollars are strong on PC, so PC devs will likely not push for features that make nVIDIA cards look bad :/…
don't worry, Lovelace in 2022 will have proper sampler feedback, special "hardware" baked in for sampler feedback streaming, 16 GB of VRAM on the 70 and 80 series, 100% more ray tracing performance, and a better refined DLSS

oh, they will probably fix their "scheduler overhead" issue as well

all those features will be marketed and most people will be forced to upgrade, one way or another
 
Wait, wait just wait a minute lol. Just reading through this thread there are people calling the PS5 SSD inferior or vaporware because:

- First generation games
- Backwards compatible enhanced multiplatform games

Never in the history of game development has any console platform been firing on all cylinders (hardware-wise) in the first year, let alone two years, of the start of a new generation.

Need I remind you asswipes, there is such a thing as Covid affecting everything and everyone, so development of new showcase titles will be even slower and take much more time.

The developers for both consoles have not had a chance to even scratch the surface of either architecture, so nobody should be making sweeping statements about these consoles' particular hardware feature sets. One of you guys even called the PS5 SSD this generation's Kinect 🤣. Well fuck ME 😂🤣😂🤣, I'll just let this carnival of stupid continue if y'all are going to be posting like that 🤣😂🤣.

 
I am so sick and tired of hearing about SFS and how it just magically fixes everything. It has been what, 4 months? What game does Xbox have - on Game Pass, BC, or retail - that can compare graphically to a "simple" remake of a PS3 game that has been available since launch? None. And that's despite it supposedly being so easy to extract power with some elegant software solution and their 12 TF FULL RDNA 2. :eye roll:
 

tillbot8

Banned
Hope the 1.65TB version is coming soon. And yeah, a fast SSD can open up a lot of possibilities beyond just fast loading. The Series X is no slouch either. Delighted it has become the industry standard. Now, as for prices.....
 

VFXVeteran

Banned
Well, "look better" is very subjective. What I claim is that no game on Xbox will have the same world density in a semi-open world as Demon's Souls, because all Xbox exclusives have to run on PCs using HDDs, so they'll never be able to 100% exploit PCIe 4.0 + M.2 speeds (and low latency) to dynamically stream stupidly high-res textures and assets "just before the corner" the way they do it in Demon's Souls.
Despite Demon's Souls having incredible texture detail, it's still way, way behind FS2020 in actual texture usage. If you think about it, FS2020 should be the de facto standard benchmark for texture streaming and tessellation. No other game will have the same scope. There are 8K textures seen all over the sim that cover a wide area of world space, i.e. just one piece of a mountain can have several 8K texture maps, all unique, unlike Demon's Souls, which regurgitates the same texture over and over again.

As an aside, people also need to keep in mind that this extreme SSD transfer of large textures and triangles is limited to static geometry only. You won't be seeing any deformable meshes with the same detail as the static, noninteractive props and terrain.
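The "just before the corner" streaming both posts describe can be sketched as distance-based tile prefetch. Everything here (function name, parameters, the simple look-ahead heuristic) is a hypothetical illustration, not any engine's actual code:

```python
# Hypothetical distance-based texture-tile prefetch: keep resident only
# the tiles within a radius of where the camera is predicted to be.
import math

def tiles_needed(camera_xy, velocity_xy, tile_size, radius, lookahead=0.5):
    """Predict the camera position `lookahead` seconds ahead and return
    the grid coordinates of tiles within `radius` of that point."""
    px = camera_xy[0] + velocity_xy[0] * lookahead
    py = camera_xy[1] + velocity_xy[1] * lookahead
    r_tiles = math.ceil(radius / tile_size)
    cx, cy = int(px // tile_size), int(py // tile_size)
    return {(cx + dx, cy + dy)
            for dx in range(-r_tiles, r_tiles + 1)
            for dy in range(-r_tiles, r_tiles + 1)
            if math.hypot(dx * tile_size, dy * tile_size) <= radius}

# Moving right at 10 units/s: the resident set is centred slightly ahead
# of the camera, so tiles arrive "just before the corner".
print(sorted(tiles_needed((100, 100), (10, 0), tile_size=32, radius=48)))
```

The faster and lower-latency the storage, the smaller the radius (and hence the resident set) can be while still hiding the load, which is the whole argument for fast SSDs expanding the practical VRAM pool.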
 

rodrigolfp

Haptic Gamepads 4 Life


is the RDR2 port bad?



how about Odyssey?

this is a GPU supposed to be 2 times faster than a PS4

between 2013-2016, you may observe the 770 performing above the PS4, usually 1.5x-2x the performance, as it should. I also observed tons of "PC master race" comments where the 770 was hailed like a champion against the PS4, claiming how superior it was and how inferior the PS4 was to it.

in the long game, the PS4 played next-gen games such as AC Odyssey, RDR2, Metro Exodus, and Doom Eternal MUCH better.



the PS4 practically runs this game with a locked cap of 60 fps, while the 770 has a hard time, bouncing between 35-50 fps.

Both RDR2 and Odyssey are well known to be bad ports, plus both use some lower-than-low PC settings on consoles. And the 770 2GB is limited by its low 2 GB of VRAM, not the power of the GPU chip. Plus, there are some games in which it performs better than the PS4, and others worse.
 