
Next-Gen PS5 & XSX |OT| Console tEch threaD


Lysandros

Member
MS has only a decompression block that is not even half as fast as the one in the PS5; meanwhile, Sony put about six custom chips in the APU to remove all possible bottlenecks.
Thanks for the post. This seems to be a very hard and painful fact to accept for some mysterious reason. The I/O situation isn't remotely analogous to the GPU one, where both have slight advantages over the other.
 
Matt is a reputable source

Like other sources who said "PS5 has more TF than XSX"....

Please tell us more

 

ethomaz

Banned
I don’t know why this discussion is even happening.

But Microsoft's own docs say it is the same thing.

“Terminology
Use of sampler feedback with streaming is sometimes abbreviated as SFS. It is also sometimes called sparse feedback textures, or SFT, or PRT+, which stands for “partially resident textures”.”

SFS = PRT+

Sampler Feedback can be used outside of streaming, but when it is used for streaming it is the same thing as PRT+.
 
Last edited:

Bo_Hazem

Banned
onesvenus About the data streaming on that meme thread of Goliath moving to UE5, what did you find "trolling" in regard to going down from 20 million polygons per frame to 4-5M per frame for Xbox? That has nothing to do with GPU power, it's just how much the I/O can keep up with feeding the GPU/RAM/caches. The UE5 demo was so light on the GPU according to Epic Games, comparable to playing Fortnite on consoles!


It's not a GPU power comparison, it's a very realistic comparison in which, in practice, I was actually overestimating the XSX by assuming it has the same direct feed to GPU cache and RAM with no CPU engagement, which isn't the case. One is capped at 22GB/s, the other at around 4.8-6GB/s. It should also face regular stalls because it has no GPU cache scrubbers.




And yes, that RAID setup on PC is vastly superior to the Xbox Series X, and yet it's still incomparable to the PS5. I was really taking you seriously as someone with real-world experience, but you always tend to call common sense trolling and have strange analyses. It's safe not to take you seriously going forward.
 
Last edited:

M1chl

Currently Gif and Meme Champion
I don’t know why this discussion is even happening.

But Microsoft's own docs say it is the same thing.

“Terminology
Use of sampler feedback with streaming is sometimes abbreviated as SFS. It is also sometimes called sparse feedback textures, or SFT, or PRT+, which stands for “partially resident textures”.”

SFS = PRT+

Sampler Feedback can be used outside of streaming, but when it is used for streaming it is the same thing as PRT+.
Best thing is that the speculation is wilder than official PR.
 

arvfab

Banned
This should be the end of the discussion. We definitely haven't seen any games demonstrate anything to the level of Ratchet & Clank on Xbox either, which is still an early-gen example, but here we have Microsoft trying to rally the troops to overcome the biggest deficit between either console.

It's not even just the sheer speed of the raw bandwidth. It has a lot more channels and significantly higher clock speeds to process data faster; the coherency engine informs the GPU of overwritten address ranges while the cache scrubbers evict data without stalling the GPU; it uses better decompression software in Kraken & RAD; and it has more decompression-specific hardware, such as the data decompression blocks (equivalent to 9 Zen 2 cores), a dedicated DMA controller block that moves the data to where it needs to be, and another two co-processors that handle I/O & memory mapping (equivalent to 2 Zen 2 cores), while the coherency engine housekeeps all of this together. Also, lots of devs talk about how the PS5 is designed around reducing latency, which I think the coherency engine helps with (someone with more knowledge can chime in here), but this all leads to an I/O subsystem capable of over 22GB/s of bandwidth at its peak (which obviously only occurs when certain conditions are met). It's still way ahead of the Xbox Series I/O architecture, and we will see Sony first party showcase this consistently throughout the generation. Ratchet is already doing it this early, and God of War Ragnarok will probably further showcase it at PlayStation's next event.
In defense of Microsoft: you can't really demonstrate anything if you don't release stuff on your console.
 

SlimySnake

Flashless at the Golden Globes
Matt also said this:

As I have said before, I expect the difference in third party titles to be modest, as they can’t be designed around a faster solution. Maybe the PS5’s IO advantages will be as noticeable as the SX’s TF advantages in those titles.
Really hope Sony first party studios go out of their way to take advantage of the SSD and I/O to bring in fresh experiences. The industry needs innovation badly, and while I'm not a fan of developers chasing gimmicks, if this thing forces them to think outside the box when it comes to game design then I am all for it.
 
You know that during the Eurogamer interview, Xbox architect Andrew Goossen talked about SFS, BCPack and whatnot regarding the SSD. If Andrew Goossen said over 6, surely he was careful with his words. Over 6 can mean 6.1, 6.2, 6.3... if it were 6.8 or 6.9, surely he would say closer to 7. If it were 7, he would say 7, and so on. And that's why Digital Foundry, immediately after the interview in their video about the XSX, mentioned the highest number for the XSX SSD as 6GB/s. No need to spin otherwise.
6GB/s is referring to the decompression hardware's raw capability, not the SSD. They're saying the decompression hardware can handle data coming into it at a rate of 6GB/s. Basically, on the Series X with its SSD, decompression speed will never be a bottleneck for the system, because the decompression hardware is well above the raw performance of the SSD itself.

And then with compression and better effective RAM usage through SFS, your effective streaming bandwidth can be enhanced to well beyond that 6GB/s of the decompression hardware. Notice I'm saying "effective" streaming bandwidth. Just as compression boosts effective bandwidth, so too does Sampler Feedback Streaming's 2.5x-3x effective boost to RAM utilization. Compression tackles the issue from one side, Sampler Feedback Streaming tackles it from another.

With the PS5, Cerny said the decompression hardware can handle just over 5GB/s of raw Kraken input data. Compression can then make the effective SSD speed 8-9GB/s. Then with even better compression the numbers can go up to a higher effective range of beyond 20GB/s? I forgot the number he used.
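For anyone who wants to see how those multipliers are being stacked in this argument, here is a rough back-of-the-envelope sketch in Python. The raw rates and multipliers are just the figures quoted in this thread, and simply multiplying them together is an assumption of this particular argument, not a measurement.

```python
# Rough sketch of the "effective bandwidth" arithmetic used in this thread.
# All rates/multipliers are the thread's quoted figures; treating them as a
# simple product is an assumption, not a measurement.

def effective_gbps(raw_gbps, compression_ratio, sfs_multiplier=1.0):
    """Effective streamed GB/s if compression and SFS gains simply stack."""
    return raw_gbps * compression_ratio * sfs_multiplier

# Series X: 2.4 GB/s raw, ~2x typical compression, 2.5x claimed SFS multiplier
print(f"XSX: {effective_gbps(2.4, 2.0, 2.5):.1f} GB/s effective")

# PS5: 5.5 GB/s raw, 8-9 GB/s typical compressed (~1.6x), no SFS term here
print(f"PS5: {effective_gbps(5.5, 1.6):.1f} GB/s effective")
```

Whether the 2.5x SFS figure really applies across the board is disputed a few posts down, so treat the product as the upper bound of this argument rather than a measured number.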
 
Last edited:
This should be the end of the discussion. We definitely haven't seen any games demonstrate anything to the level of Ratchet & Clank on Xbox either, which is still an early-gen example, but here we have Microsoft trying to rally the troops to overcome the biggest deficit between either console.

It's not even just the sheer speed of the raw bandwidth. It has a lot more channels and significantly higher clock speeds to process data faster; the coherency engine informs the GPU of overwritten address ranges while the cache scrubbers evict data without stalling the GPU; it uses better decompression software in Kraken & RAD; and it has more decompression-specific hardware, such as the data decompression blocks (equivalent to 9 Zen 2 cores), a dedicated DMA controller block that moves the data to where it needs to be, and another two co-processors that handle I/O & memory mapping (equivalent to 2 Zen 2 cores), while the coherency engine housekeeps all of this together. Also, lots of devs talk about how the PS5 is designed around reducing latency, which I think the coherency engine helps with (someone with more knowledge can chime in here), but this all leads to an I/O subsystem capable of over 22GB/s of bandwidth at its peak (which obviously only occurs when certain conditions are met). It's still way ahead of the Xbox Series I/O architecture, and we will see Sony first party showcase this consistently throughout the generation. Ratchet is already doing it this early, and God of War Ragnarok will probably further showcase it at PlayStation's next event.
Quick Resume swaps between multiple different games in mere seconds, emptying and then refilling system RAM with a whole different game, even after a system shutdown. Games may not all be properly taking advantage of the Velocity Architecture yet, but the Xbox OS is.

If it can load entirely different games off the SSD and into memory that fast, allowing you to swap between them in mere seconds, it means the same capability can be used for a game. Developers just need to design around the full Xbox Velocity Architecture.
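As a rough sanity check on the Quick Resume timings quoted later in the thread, here is a sketch of the arithmetic. The resident-memory size is an assumed example, and assuming the SSD writes at roughly its 2.4GB/s read speed is also an assumption; nothing here comes from the Xbox OS itself.

```python
# Rough sketch: if Quick Resume just writes one game's resident memory to the
# SSD and reads the next game's image back at roughly raw drive speed, how long
# would the swap take? Purely illustrative arithmetic, not OS internals.

RAW_SSD_GBPS = 2.4        # Series X raw sequential read speed (Microsoft's figure)
RESIDENT_IMAGE_GB = 10.0  # assumed size of a suspended game's memory image

suspend_s = RESIDENT_IMAGE_GB / RAW_SSD_GBPS  # write the current game out (assumes ~read-speed writes)
resume_s = RESIDENT_IMAGE_GB / RAW_SSD_GBPS   # read the next game's image back in

print(f"~{suspend_s + resume_s:.1f} s round trip")  # ~8.3 s, in the ballpark of the 8-12 s quoted below
```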
 
onesvenus About the data streaming on that meme thread of Goliath moving to UE5, what did you find "trolling" in regard to going down from 20 million polygons per frame to 4-5M per frame for Xbox? That has nothing to do with GPU power, it's just how much the I/O can keep up with feeding the GPU/RAM/caches. The UE5 demo was so light on the GPU according to Epic Games, comparable to playing Fortnite on consoles!


It's not a GPU power comparison, it's a very realistic comparison in which, in practice, I was actually overestimating the XSX by assuming it has the same direct feed to GPU cache and RAM with no CPU engagement, which isn't the case. One is capped at 22GB/s, the other at around 4.8-6GB/s. It should also face regular stalls because it has no GPU cache scrubbers.




And yes, that RAID setup on PC is vastly superior to the Xbox Series X, and yet it's still incomparable to the PS5. I was really taking you seriously as someone with real-world experience, but you always tend to call common sense trolling and have strange analyses. It's safe not to take you seriously going forward.



Series X does the exact same thing that Tim Sweeney is suggesting the PS5 does. Notice he says "without CPU decompression" and without the driver extraction overhead. Series X does the exact same thing to the letter. The key difference is its SSD raw speed, and that to get the full benefit developers would need to design their games around Sampler Feedback Streaming. But they already get exactly the same thing as what's mentioned for PS5 BEFORE Sampler Feedback Streaming.

Sampler Feedback Streaming being in the mix would only make things that much faster on the Series X side, because the burden of transferring unneeded data into video memory is removed from the SSD entirely.
 

jroc74

Phone reception is more important to me than human rights
Many "reputable sources" lie with number of TF in PS5. Why should I believe Matt?
Because Matt was one of the few basically saying the opposite in general terms?

Matt is one of the few that I would trust with their info, even if I didnt agree with it.

He never gave a number, just a percentage, and he always said XSX.
 
Last edited:

Mr Moose

Member
With the PS5, Cerny said the decompression hardware can handle just over 5GB/s of raw Kraken input data. Compression can then make the effective SSD speed 8-9GB/s. Then with even better compression the numbers can go up to a higher effective range of beyond 20GB/s? I forgot the number he used.
After decompression that typically becomes 8 or 9 GB but the unit itself is capable of outputting as much as 22 GB a second if the data happened to compress particularly well.
 
Guys, does anyone here believe what SenjutsuSage is saying regarding SFS?

It's what's supported by the known and released facts. Do you only want one side? The side that reaffirms what you want to believe, or do you want to hear the other side as well? Because Microsoft has run the demos; they've put out the information same as Sony and its partners have. You only seem to be selective in what you choose to believe. It's marketing on the Xbox side, but it's all facts with no marketing on the Sony side? Even though we can publicly see Sony handing Epic Games millions of dollars, there's definitely no marketing component to any of that at all, right? Even recently revealed court documents showcase Epic massaging Sony's ego as it pertains to things as small as crossplay. But I guess that's made up too, right?

Same tech used for the Unreal PS5 demo will be used by multiple Xbox studios. Nanite and Lumen will both work just as well on Xbox.
 

Dodkrake

Banned
6GB/s is referring to the decompression hardware's raw capability, not the SSD. They're saying the decompression hardware can handle data coming into it at a rate of 6GB/s. Basically, on the Series X with its SSD, decompression speed will never be a bottleneck for the system, because the decompression hardware is well above the raw performance of the SSD itself.

And then with compression and better effective RAM usage through SFS, your effective streaming bandwidth can be enhanced to well beyond that 6GB/s of the decompression hardware. Notice I'm saying "effective" streaming bandwidth. Just as compression boosts effective bandwidth, so too does Sampler Feedback Streaming's 2.5x-3x effective boost to RAM utilization. Compression tackles the issue from one side, Sampler Feedback Streaming tackles it from another.

With the PS5, Cerny said the decompression hardware can handle just over 5GB/s of raw Kraken input data. Compression can then make the effective SSD speed 8-9GB/s. Then with even better compression the numbers can go up to a higher effective range of beyond 20GB/s? I forgot the number he used.

For the SFS info, sorry, but that's not entirely correct. SFS does not apply to all data (audio is excluded, for example), and the 2.5x figure is likely an unrealistic scenario (more below in the PS5 section). Even so, this means that with certain data pools (let's say 1GB), you can request 500MB instead of the total amount (or less), which can then be further compressed. The thing is, the decompression block only allows 4.8GB/s per MS's own data. This means you can only decompress 4.8GB of data per second. Other anecdotal data points to "over 6GB/s", but that was not in the technical docs.

As for the PS5, it allows for a data throughput of 5.5GB/s without any compression. The decompression block allows for 22GB/s, which means you could throw 22GB of compressed data at it and it could decompress it in 1s (probably never gonna happen). With Kraken + Oodle Texture, they are seeing real-world results of over 17GB/s of compressed data. Even in best-case scenarios, the PS5 is close to twice as fast, and the real world has shown it to be (when games are optimized for both platforms) 2 to 4x faster.
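For anyone who wants to sanity-check the "fill your RAM in 1s vs 2 to 4s" style of claim that comes up below, here is a rough sketch using only the rates quoted in this thread. The 13.5GB usable-RAM figure also comes up later in the thread; no real game refills all of RAM in a single burst, so this is purely illustrative.

```python
# Rough sketch: time to stream a full 13.5 GB working set at the rates quoted
# in the thread. Illustrative only; no game refills all of RAM in one burst.

USABLE_RAM_GB = 13.5

rates_gbps = {
    "XSX raw (2.4 GB/s)": 2.4,
    "XSX typical compressed (4.8 GB/s)": 4.8,
    "PS5 raw (5.5 GB/s)": 5.5,
    "PS5 typical compressed (8-9 GB/s)": 8.5,
    "PS5 best-case Kraken+Oodle (17-22 GB/s)": 17.0,
}

for label, rate in rates_gbps.items():
    print(f"{label}: {USABLE_RAM_GB / rate:.1f} s to fill 13.5 GB")
```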
 

jroc74

Phone reception is more important to me than human rights
Unreal 5 déjà vu. From the Ratchet & Clank: Rift Apart previews:

[gif from the Rift Apart previews]


I think with all the talk about SFS it might be a good time to read the Verge article about Rift Apart:


Whatever the PS5 is doing, it's doing it damn well.

Btw, that gif is fucking amazing. And it's time for me to go on media blackout, lol. But this is proof of what some of us were saying: first party studios that don't use UE5 will just update their engines to do something similar. I don't think Rift Apart is using UE, is it? Not sure.

With Epic and Sony working so close on UE5, this should be no surprise.
 
Last edited:

Loope

Member
It's what's supported by the known and released facts. Do you only want one side? The side that reaffirms what you want to believe, or do you want to hear the other side as well? Because Microsoft has run the demos; they've put out the information same as Sony and its partners have. You only seem to be selective in what you choose to believe. It's marketing on the Xbox side, but it's all facts with no marketing on the Sony side? Even though we can publicly see Sony handing Epic Games millions of dollars, there's definitely no marketing component to any of that at all, right? Even recently revealed court documents showcase Epic massaging Sony's ego as it pertains to things as small as crossplay. But I guess that's made up too, right?

Same tech used for the Unreal PS5 demo will be used by multiple Xbox studios. Nanite and Lumen will both work just as well on Xbox.
You're wasting your time. Just watch the beautiful gif he posted with the UE5 deja vu or some shit like that.
 

IntentionalPun

Ask me about my wife's perfect butthole


For some reason I thought Rage was a PS4/Xbox One game (running off a hard drive). I'm aware it's basically the first use of the tech, with its megatextures.

But again, was it really pulling textures as you turned your head? I always assumed games like Rage were way over-streaming... streaming things that aren't close enough to the player to need to be pulled in when they turn their head.

That's largely irrelevant to this discussion, the parameters that matter are display resolution and target fidelity

My point being they only had so much RAM available, which is hardly irrelevant because it drove the fidelity decisions. I meant what you are saying here, basically, just put it differently.
 
Last edited:
It's what's supported by the known and released facts. Do you only want one side? The side that reaffirms what you want to believe, or do you want to hear the other side as well? Because Microsoft has run the demos; they've put out the information same as Sony and its partners have. You only seem to be selective in what you choose to believe. It's marketing on the Xbox side, but it's all facts with no marketing on the Sony side? Even though we can publicly see Sony handing Epic Games millions of dollars, there's definitely no marketing component to any of that at all, right? Even recently revealed court documents showcase Epic massaging Sony's ego as it pertains to things as small as crossplay. But I guess that's made up too, right?

Same tech used for the Unreal PS5 demo will be used by multiple Xbox studios. Nanite and Lumen will both work just as well on Xbox.

Because you desperately want to put the XSX SSD on par with the PS5 SSD. 6 or so is the max, nothing more, nothing less. Cheers!
 
Last edited:
But it is clearly stated in that demo that "traditional texture streaming loads entire texture detail levels at once". That is not partially resident textures. Please address this specific point, because it's the part I am hung up on. The voice over, the captions, and the graphs of the demo all indicate that SFS is being compared to "traditional texture streaming", which they themselves defined. Can you see why I'm unclear? I swear I'm not trying to be dense.

All other statements you have shown are saying that SFS is an evolution and is superior to PRT, which I have no doubt is true, but those statements aren't quantifying the benefit.

I'll address this the same way Microsoft did.


"We observed that typically, only a small percentage of memory loaded by games was ever accessed," reveals Goossen. "This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

We have to imagine Xbox One X titles were indeed using PRT. Their multipliers for SFS are almost certainly in comparison to PRT-enhanced titles. Why would they ever monitor just titles that don't use PRT? Think about that. PRT wastes a lot of RAM because it's not intelligent enough to know which virtual memory pages or textures need not be inside video memory at all. That's where Sampler Feedback comes in to make it significantly more efficient. PRT, though it added efficiencies to the process of saving RAM, has never been as efficient as Sampler Feedback Streaming.
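To put some rough numbers on the "only a fraction of each mip is actually sampled" point, here is a small sketch of the tile arithmetic. The 8MB top-mip figure is Goossen's from the quote above, the 64KB tile size is the standard tiled-resources granularity, and the one-third visible fraction is just taken from the "one-half to one-third" range he mentions; none of this is a measurement from either console.

```python
# Rough sketch of why streaming only the sampled tiles of a mip saves so much,
# which is the whole idea behind PRT/SFS-style streaming. Numbers are illustrative.

TILE_BYTES = 64 * 1024           # 64 KB tiles, the usual tiled-resource granularity
TOP_MIP_BYTES = 8 * 1024 * 1024  # ~8 MB largest mip of a 4K texture (Goossen's figure)

visible_fraction = 1 / 3         # assume only a third of the mip is actually sampled

tiles_total = TOP_MIP_BYTES // TILE_BYTES
tiles_needed = int(tiles_total * visible_fraction)

whole_mip_mb = TOP_MIP_BYTES / 2**20
sampled_tiles_mb = tiles_needed * TILE_BYTES / 2**20

print(f"Whole mip: {whole_mip_mb:.1f} MB, sampled tiles only: {sampled_tiles_mb:.1f} MB "
      f"(~{whole_mip_mb / sampled_tiles_mb:.1f}x less to stream and keep resident)")
```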
 

onesvenus

Member
onesvenus About the data streaming on that meme thread of Goliath moving to UE5, what did you find "trolling" in regard to going down from 20 million polygons per frame to 4-5M per frame for Xbox? That has nothing to do with GPU power, it's just how much the I/O can keep up with feeding the GPU/RAM/caches. The UE5 demo was so light on the GPU according to Epic Games, comparable to playing Fortnite on consoles!


It's not a GPU power comparison, it's a very realistic comparison in which, in practice, I was actually overestimating the XSX by assuming it has the same direct feed to GPU cache and RAM with no CPU engagement, which isn't the case. One is capped at 22GB/s, the other at around 4.8-6GB/s. It should also face regular stalls because it has no GPU cache scrubbers.




And yes, that RAID setup on PC is vastly superior to the Xbox Series X, and yet it's still incomparable to the PS5. I was really taking you seriously as someone with real-world experience, but you always tend to call common sense trolling and have strange analyses. It's safe not to take you seriously going forward.

Well, you are making up numbers out of thin air.

Saying that the budget will go from 20M polygons on PS5 to 4-5M polygons on Xbox IS trolling. Why, you ask? Let me write it down for you:

1) You are basing this on the assumption that the polygon budget will be defined mainly by I/O, which has never been the case, and I'm not sure it will be now. At least we don't have enough information to make that claim.
2) You could have a single mesh with, let's say, 1000 polygons in VRAM and draw it a million times, for a total of 1,000M polygons. See? The I/O complex, cache scrubbers and all that don't affect this, only the power of the GPU.

Basically, what the high bandwidth of PS5's I/O allows, and where I see it having an edge over Xbox, is having a higher number of different meshes/textures rendered at once, BUT that doesn't necessarily relate to the polygon count.

Do you see now why what you say is not common sense and why I say you are trolling? You keep talking in absolute terms about technical aspects but completely missing the point.

Having said that, feel free to ignore me if you want to keep living where all you say makes sense.
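As a rough illustration of onesvenus's point 2 above (the mesh size and instance count are his example numbers; the point is only that instanced polygons scale with GPU work, not with what comes off the SSD):

```python
# Rough sketch of point 2 above: a single small mesh instanced many times
# multiplies the polygons the GPU rasterizes without streaming anything new.

mesh_polys = 1_000       # one mesh already resident in VRAM
instances = 1_000_000    # drawn a million times

polys_rendered = mesh_polys * instances  # 1,000,000,000 polygons of GPU work
bytes_streamed_this_frame = 0            # the mesh is resident; nothing new comes off the SSD

print(f"{polys_rendered:,} polygons rendered, {bytes_streamed_this_frame} bytes streamed this frame")
```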
 

Bo_Hazem

Banned
Series X does the exact same thing that Tim Sweeney is suggesting the PS5 does. Notice he says "without CPU decompression" and without the driver extraction overhead. Series X does the exact same thing to the letter. The key difference is its SSD raw speed, and that to get the full benefit developers would need to design their games around Sampler Feedback Streaming. But they already get exactly the same thing as what's mentioned for PS5 BEFORE Sampler Feedback Streaming.

Sampler Feedback Streaming being in the mix would only make things that much faster on the Series X side, because the burden of transferring unneeded data into video memory is removed from the SSD entirely.

No, the CPU in the Series X MUST help the decompressor, it's not fully independent. The XSX ZLIB decompressor is the equivalent of 5 Zen 2 cores, while the Kraken decompressor in the PS5 is the equivalent of 9 Zen 2 cores, with a total of ~11-12 Zen 2 cores when combined with the I/O. Also, the PS5 I/O has so many perks missing on Xbox and other traditional PCs:

[image: slide of the PS5 I/O complex and its features]


And that Sampler Feedback Streaming is only as efficient as it is on PC, hence the unified GDK. The difference is obvious. The SSD in the Xbox being DRAM-less adds more burden to the CPU as well.
 

thewire

Member
Quick Resume swaps between multiple different games in mere seconds, emptying and then refilling system RAM with a whole different game, even after a system shutdown. Games may not all be properly taking advantage of the Velocity Architecture yet, but the Xbox OS is.

If it can load entirely different games off the SSD and into memory that fast, allowing you to swap between them in mere seconds, it means the same capability can be used for a game. Developers just need to design around the full Xbox Velocity Architecture.
You're comparing an operating system feature, which requires over 3GB worth of memory and takes 8-12 seconds between games, to loading new levels in 2 seconds through portals and much more in an actual game. Where is the Ratchet & Clank equivalent on Xbox? Not an OS feature that Sony will probably add some variant of in the future, but actual game design feats that weren't possible beforehand and that showcase the I/O. We've seen instant loading on PS5 already, and now we're seeing real next-generation game design possibilities in Ratchet.
Where is the Xbox showcase, when you've been claiming beforehand that it is significantly superior, more powerful and faster than the PS5?
 

Bo_Hazem

Banned
Well, you are making up numbers out of thin air.

Saying that the budget will go from 20M polygons on PS5 to 4-5M polygons on Xbox IS trolling. Why, you ask? Let me write it down for you:

1) You are basing this on the assumption that the polygon budget will be defined mainly by I/O, which has never been the case, and I'm not sure it will be now. At least we don't have enough information to make that claim.
2) You could have a single mesh with, let's say, 1000 polygons in VRAM and draw it a million times, for a total of 1,000M polygons. See? The I/O complex, cache scrubbers and all that don't affect this, only the power of the GPU.

Basically, what the high bandwidth of PS5's I/O allows, and where I see it having an edge over Xbox, is having a higher number of different meshes/textures rendered at once, BUT that doesn't necessarily relate to the polygon count.

Do you see now why what you say is not common sense and why I say you are trolling? You keep talking in absolute terms about technical aspects but completely missing the point.

Having said that, feel free to ignore me if you want to keep living where all you say makes sense.

It's all about feeding the RAM/GPU caches, and how fast you push data out and in PER FRAME! That demo was very light on the GPU, but was choking the whole 16GB of RAM in the PS5. Stop trolling and being obtuse; it's not about the GPU, as the PS5 isn't more powerful than most high-end PCs!




This is SOLELY SSD>GPU feed.
 

IntentionalPun

Ask me about my wife's perfect butthole
We have to imagine Xbox One X titles were indeed using PRT. Their multipliers for SFS are almost certainly in comparison to PRT-enhanced titles.

They aren't, because in general they weren't using PRT... which they explained on the slide right before the SFS demo.

SFS is an implementation of PRT that Xbox is hoping actually gets used, whereas tiled resources and PRT didn't get used much... Rage / Rage 2 famously used it, but id actually even abandoned the method after that.

[image: the slide referenced above]


You missed my tech talk where we solved this yesterday.



The real power behind SFS is the I/O speed... it then adds the enhancements to do the calculations / automatic caching / un-caching, which is awesome... but it was a bit of a flawed comparison.

Comparing SFS vs "non-PRT on an SSD" is flawed, because why would you not use something like PRT on an SSD? PlayStation almost certainly supports some implementation of the tech; whether it has to rely on CPU power more or not would be the question.

It sounds like MS has a simple/easy to use solution though, and we probably haven't been seeing much of it as we are seeing cross-gen games.
 

elliot5

Member
The SSD I/O wars are so obnoxious. Even if the PS5 can theoretically load in 22 GB/s worth of decompressed data, you still have only ~13.5 GB of usable RAM at any point in time. The benefits come from developers not having to deal with the shitty seeking and chunking of the HDD days, and, as The Verge's new R&C article points out, it allows a lot of density in the world within the camera frustum, unlike before. You are still limited by how much the GPU can draw and render on-screen at a stable framerate. I/O speeds allow for a good user and developer experience, and will surely unlock some unique presentations like R&C, but the warring is obnoxious.

I can imagine even if the XSS didn't exist, the warriors would claim the XSX would be holding 3rd party games back because it doesn't have the same I/O speed as the PS5 :rolleyes: . Even though that's not true, because PCs with SATA SSDs and even HDDs exist. jfc
 

Bo_Hazem

Banned
The SSD I/O wars are so obnoxious. Even if the PS5 can theoretically load in 22 GB/s worth of decompressed data, you still have only ~13.5 GB of usable RAM at any point in time. The benefits come from developers not having to deal with the shitty seeking and chunking of the HDD days, and, as The Verge's new R&C article points out, it allows a lot of density in the world within the camera frustum, unlike before. You are still limited by how much the GPU can draw and render on-screen at a stable framerate. I/O speeds allow for a good user and developer experience, and will surely unlock some unique presentations like R&C, but the warring is obnoxious.

I can imagine even if the XSS didn't exist, the warriors would claim the XSX would be holding 3rd party games back because it doesn't have the same I/O speed as the PS5 :rolleyes: . Even though that's not true, because PCs with SATA SSDs and even HDDs exist. jfc

It's not about 22GB/s, it's about 22MB/ms. Some people really don't understand how streaming works. Also, the PS5 had to set aside 0.7GB of RAM as a streaming pool. The faster the I/O, the smaller that pool is, and vice versa.
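To put the 22MB/ms framing and the streaming-pool point side by side, here is a rough sketch. The per-frame consumption and the number of frames it takes a requested asset to arrive are assumed examples; only the 22MB/ms peak and the ~0.7GB pool are figures from the thread.

```python
# Rough sketch of the per-frame framing and the streaming-pool argument.
# Consumption and fetch-latency figures are assumed examples, not measurements.

MS_PER_FRAME_30FPS = 1000 / 30

# 22 GB/s is the same thing as 22 MB/ms: what can arrive inside a single frame.
peak_mb_per_ms = 22.0
print(f"Peak delivery inside one 30fps frame: {peak_mb_per_ms * MS_PER_FRAME_30FPS:.0f} MB")

# The pool is whatever must stay resident to cover the frames it takes a freshly
# requested asset to actually arrive. Faster I/O -> fewer frames -> smaller pool.
def pool_size_mb(consumed_mb_per_frame, frames_until_data_arrives):
    return consumed_mb_per_frame * frames_until_data_arrives

print(f"Pool with fast I/O:   ~{pool_size_mb(180, 4):.0f} MB")   # ~0.7 GB, the figure quoted above
print(f"Pool with slower I/O: ~{pool_size_mb(180, 16):.0f} MB")
```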
 
Last edited:

thewire

Member
No, the CPU in the Series X MUST help the decompressor, it's not fully independent. The XSX ZLIB decompressor is the equivalent of 5 Zen 2 cores, while the Kraken decompressor in the PS5 is the equivalent of 9 Zen 2 cores, with a total of ~11-12 Zen 2 cores when combined with the I/O. Also, the PS5 I/O has so many perks missing on Xbox and other traditional PCs:

[image: slide of the PS5 I/O complex and its features]


And that Sampler Feedback Streaming is only as efficient as it is on PC, hence the unified GDK. The difference is obvious. The SSD in the Xbox being DRAM-less adds more burden to the CPU as well.
I completely forgot about the extra DRAM onboard the I/O complex, thanks for adding it in.
 

Dodkrake

Banned
The SSD I/O wars are so obnoxious. Even if the PS5 can theoretically load in 22 GB/s worth of decompressed data, you still have only ~13.5 GB of usable RAM at any point in time. The benefits come from developers not having to deal with the shitty seeking and chunking of the HDD days, and, as The Verge's new R&C article points out, it allows a lot of density in the world within the camera frustum, unlike before. You are still limited by how much the GPU can draw and render on-screen at a stable framerate. I/O speeds allow for a good user and developer experience, and will surely unlock some unique presentations like R&C, but the warring is obnoxious.

I can imagine even if the XSS didn't exist, the warriors would claim the XSX would be holding 3rd party games back because it doesn't have the same I/O speed as the PS5 :rolleyes: . Even though that's not true, because PCs with SATA SSDs and even HDDs exist. jfc

The more data you can throw at the GPU and the more you can cull, the more detail you can have and the faster things happen. There's absolutely a difference between filling your RAM in 1s and in 2 to 4s, especially around game design. The Xbox Series X will not hold the gen back, but Xbox games being on PC will hold Xbox games back (and PC-compatible multiplats back on the PS5).
 
Last edited:
Series X does the exact same thing that Tim Sweeney is suggesting the PS5 does.
Nope, because in order to start decompression in the XSX case (and on PC too), the I/O cycle (the read loop from the SSD) must be completed. Otherwise, we get unnecessary overhead and delay for GPU rendering. The I/O of the XSX system, although unquestionably very fast, is far from the PS5's level; it's time to understand this and stop arguing.
 

reksveks

Member
You're comparing an operating system feature, which requires over 3GB worth of memory.
How do you think Quick Resume works?

---------

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell.

There isn't any RAM being used for QR.
 
Last edited: