
Xbox Series X’s BCPack Texture Compression Technique 'might be' better than the PS5’s Kraken

1st, your understanding of compression ratio is wrong. Let's say BCPack has 50% compression efficiency; that means the ratio is 1.5 to 1. So after decompressing a BCPack-format texture, 2.6 × 1.5 = 3.9GB, not 5.6GB.
2nd, when SFS requests data from the SSD, the data on the SSD is always compressed. So you cannot use the 4.8GB/s speed as if it were raw; that figure is after applying the combined compression ratio of Zlib + BCPack, it is not for BCPack alone.


Yea, realized I was wrong on that detail. :p


Hmm, I might finally see what you're saying now. I was mostly stuck on why you guys insisted on only using the 5.6GB figure rather than the 2.8GB figure; I wasn't viewing the divide by two at the end as extra compression. I literally just now caught it and realized where I personally got confused. Despite pointing out these figures to highlight why I think Sampler Feedback Streaming is such a big deal, I got so distracted by this 5.6GB vs 2.8GB stuff that I lost track of the point I was trying to make in the first place: Sampler Feedback Streaming makes possible a scenario in which the effective equivalent of 14GB of textures is loaded into main memory in just 1.16 seconds, while only needing to fill main memory with 5.6GB of data to get the job done, a massive memory savings.

That's what I meant to get at the entire time: Sampler Feedback Streaming's memory efficiency. I got lost on the 2.8GB vs 5.6GB stuff. I acknowledge, though, that I was indeed wrong with my calculations. Turns out I really was double compressing, and in doing so I totally forgot my main point about SFS.

I used the huge 14GB figure (exceeding Series X's usable game memory) as an example to make a larger point about SFS, but lost that point in my mistaken belief that the 5.6GB of texture data was moved into main memory in just 0.58 seconds. I was wrong the whole time; it really is 1.16, just like you guys said. The point I was getting at, but lost track of, was this: if a title now requested 5.6GB worth of textures (not 14GB this time), SFS's 2.5x efficiency would turn that into 2.24GB of data, which Series X would load into main memory in just 0.46 seconds. I began thinking of the 5.6GB as an actual 5.6GB, not as part of a larger effective figure, which was the whole point for me at least.
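If it helps anyone else following along, here's the arithmetic from above in a few lines of Python. The 4.8GB/s compressed throughput and the 2.5x SFS cut are just the figures being thrown around in this thread, nothing more official than that:

```python
# Sanity-checking the numbers above. Assumptions: 4.8 GB/s effective
# compressed throughput on Series X, and the claimed 2.5x SFS cut.
COMPRESSED_SPEED = 4.8   # GB/s
SFS_FACTOR = 2.5

def load_time(requested_gb, sfs=True):
    """Seconds to get `requested_gb` worth of textures resident."""
    moved = requested_gb / SFS_FACTOR if sfs else requested_gb
    return moved / COMPRESSED_SPEED

print(round(load_time(14), 2))    # 1.17 -> the 14GB effective case
print(round(load_time(5.6), 2))   # 0.47 -> the 5.6GB requested case
```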

Thank you guys for being patient with my ass lol. It wasn't easy I'm sure.

Richard Pryor Reaction GIF
 

Rea

Member
Every once in a while it seems old talking points come back to the surface. I thought I somewhat had a general idea about this, but I'll just leave this here so some smarter minds can digest it:







Code:
Zip    1.64 to 1
Kraken    1.82 to 1
Zip + Oodle Texture    2.69 to 1
Kraken + Oodle Texture    3.16 to 1

This is why I asked whether 12GB/s would be considered the average or the best-case scenario for Series consoles... because there is a best case for PS5: Cerny mentioned 22GB/s.

In my mind it's gonna be similar to this (GB/s; raw / average compressed / best-case scenario):

Series consoles:
2.4, 4.8, over 6 or up to 12

PS5:
5.5, 8-9 or 17, 22. I have seen the 17 number before (8-9 is official PS5 specs; I assume 17 is 5.5 x Kraken + Oodle and based on the Kraken dev article) ) ..but Cerny mentioned "capable of outputting as much as 22GB/s if the data happened to compress particularly well". So....anyone care to try to break this down in relation to this topic?

When we talk about a compression ratio of 3.16 or 2 or 1.6, we're talking about an average; different files have different compression efficiency even if you apply the same algorithm. So there are files which will saturate the max bandwidth of the PS5 decompressor, which is 22GB/s, and there will also be files that go as low as 5.5GB/s, which is the max raw speed of the SSD. So the average speed of Kraken + Oodle Texture will be around 17GB/s.
Same goes for Xbox, but there is no confirmation of the max speed of the Xbox decompressor.
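To make those averages concrete, here's the PS5 side worked out in a few lines, using the ratios from the Oodle table quoted above, the 5.5GB/s raw speed, and the 22GB/s decompressor cap. All of these are figures from this thread, so treat it as a sketch, not official math:

```python
# Effective PS5 read speed = raw SSD speed x compression ratio,
# capped by the decompression block's quoted 22 GB/s ceiling.
RAW_SPEED = 5.5      # GB/s, PS5 raw read
DECOMP_CAP = 22.0    # GB/s, quoted decompressor maximum

ratios = {
    "Zip": 1.64,
    "Kraken": 1.82,
    "Zip + Oodle Texture": 2.69,
    "Kraken + Oodle Texture": 3.16,
}

for name, ratio in ratios.items():
    effective = min(RAW_SPEED * ratio, DECOMP_CAP)
    print(f"{name}: {effective:.1f} GB/s")
# Kraken + Oodle Texture lands around 17.4 GB/s, which is where
# the "around 17" figure comes from.
```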
 

Corndog

Banned
I think the biggest mess in unraveling how the effectiveness of SFS/BCPack + VA + SSD raw speed stacks up against PS5's solution is that both SFS's and BCPack's benefits have alternatives on PS5, in PRT and Oodle Texture.

So the actual gain of SFS/BCPack isn't really noteworthy IMO, unless comparing to PC, because the PS5 may even have the advantage over SFS thanks to lower-latency asset check-in/cache scrubbers. And Oodle Texture is at least on par with BCPack for rate-adaption compression, if not better, and both use the same underlying block compression anyway.

Which then just brings the comparison back to the decompression units and SSD raw speeds, which gives the PS5 much lower latency (5x, from Road to PS5 info and VA's info, indirectly via the RTX I/O reveal) and at least double the decompression bandwidth, but probably closer to 4 times in real software, going by John at DF's recent tweets comparing 1 second to 4 seconds of loading, or 2 seconds to 8 seconds.
Come on. I certainly believe Sony has the faster I/O speeds. That said, we don't know how Microsoft's decompression method compares to Sony's. Maybe they are equivalent, maybe they are not. We can't just assume they are the same.

Also different assets are going to have different levels of compression. I don’t think it wise to just assume a certain compression rate for either console.
 
When we talk about a compression ratio of 3.16 or 2 or 1.6, we're talking about an average; different files have different compression efficiency even if you apply the same algorithm. So there are files which will saturate the max bandwidth of the PS5 decompressor, which is 22GB/s, and there will also be files that go as low as 5.5GB/s, which is the max raw speed of the SSD. So the average speed of Kraken + Oodle Texture will be around 17GB/s.
Same goes for Xbox, but there is no confirmation of the max speed of the Xbox decompressor.

I don't think Cerny necessarily told us the max speed of the decompression unit, but more what the current theoretical max compression ratio is with current algorithms. In other words, it could improve further with new algorithms and improvements. With that same principle applied, there may not be a max speed of the Series X decompressor worth actually knowing, since they appear to have a different I/O strategy. We just know it's a block that can deliver over 6GB/s, which easily outstrips the max read speed of the SSD.

On the Xbox side of things they appeared to focus on making their own compression format specifically designed around GPU textures, whatever the benefits of that are, and then basically pairing it up with the efficiency gains of Sampler Feedback Streaming. It's a smart approach. What you lack in raw speed, try to make up for by significantly cutting down on the streaming requirements by between 2.5x and 3x, thus giving developers more of what they always want more of, which is more available usable RAM.

The below is the worst case for any texture streaming system: really fast, successive camera cuts. There are times when as much as 4GB to 5GB of data is occupying memory, yet only a tiny chunk of it is actually being actively utilized. Notice how each bar below has darker areas and brighter areas? The bright areas are the parts of RAM actually being used; the darker areas represent texture data that is in RAM but not currently being used. The efficiency of SFS allows Series X to get away with using far less RAM because the system will be fast enough to keep up. When you don't have the same speed or efficiency, there's no choice but to stream in, or keep active, more data in RAM.

ZM3QDoY.jpg





It's a very cool approach. If you wanted to push the scene equivalent of 18GB worth of texture data into RAM (an amount which exceeds the Series X's 13.5GB of usable RAM), Sampler Feedback Streaming, by preventing Series X from wasting bandwidth on unneeded data, cuts that 18GB down to 7.2GB, which Series X would then be able to get into memory in 1.5 seconds. It's a technique that lets the SSD deliver an effective bandwidth well beyond its spec, something they've been stressing from the get-go. That "more than 100GB of game data stored on the SSD just in time for when the game needs it" stuff was basically all about Sampler Feedback Streaming.
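Put in numbers (again assuming the 2.5x figure and the 4.8GB/s compressed throughput, both just thread figures), that 18GB example works out like this, and it's also where an "effective 12GB/s" style number comes from:

```python
# SFS trims the stream before the SSD reads anything, so the scene
# "sees" 18 GB of texture data while only 7.2 GB actually moves.
COMPRESSED_SPEED = 4.8   # GB/s
SFS_FACTOR = 2.5

scene_gb = 18.0
moved_gb = scene_gb / SFS_FACTOR           # GB actually transferred
seconds = moved_gb / COMPRESSED_SPEED      # wall-clock load time
effective_bw = scene_gb / seconds          # what it "feels like"

print(round(moved_gb, 1), round(seconds, 1), round(effective_bw, 1))
# 7.2 1.5 12.0
```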


Through the massive increase in I/O throughput, hardware accelerated decompression, DirectStorage, and the significant increases in efficiency provided by Sampler Feedback Streaming, the Xbox Velocity Architecture enables the Xbox Series X to deliver effective performance well beyond the raw hardware specs, providing direct, instant, low level access to more than 100GB of game data stored on the SSD just in time for when the game requires it.

Forget how it compares, but that's just plain exciting due to all the potential ways this kind of memory savings can be used to make games better in other ways. Or if they want a purely visual benefit, the RAM savings can go right back into making texture quality even higher. Things that shouldn't be possible with the RAM pool of the system suddenly seem possible, and we know developers have tricks for days.

Dirt 5 dev talked about how he used PS2's 4MB video RAM back in the day. He said because the bus was so insanely fast he could do way more than what the 4MB of video RAM would have suggested was possible.
 

Heisenberg007

Gold Journalism
1. Why are we arguing about the decompression speed when Microsoft has explicitly told us that? 2.4 GB/s raw speed with a 2x compression multiplier to make it 4.8 GB/s. That is the best-case scenario using Velocity Architecture with everything at its maximum.

2. The title of this thread has been debunked already. If it were indeed the case, we would have seen better compression on XSX than PS5 in games. There hasn't been a single piece of evidence for that. Not even one. On the other hand, PS5 has multiple games with significantly smaller game sizes, the latest being Subnautica, with over a 100% difference in file size.

3. You can compress the data as much as you want, but it all still depends on the decompression speed of the console. Otherwise, you will see extreme pop-in in-game and the same old 1-minute loading screens. Resident Evil Village is an excellent example of that. The game is more compressed on the PS5 than it is on XSX. It is roughly 30% bigger in size on Xbox. Despite that, it loads 400% faster on the PS5 (1.5 seconds vs. 8 seconds). Theoretically, the developer could make the file size bigger (compress it less) on Xbox and decrease the loading time by a couple of seconds. The hardware (decompression units) is the limitation here, as compared to the PS5, regardless of the compression multiplier and speed it may have on Xbox.
 
1. Why are we arguing about the decompression speed when Microsoft has explicitly told us that? 2.4 GB/s raw speed with a 2x compression multiplier to make it 4.8 GB/s. That is the best-case scenario using Velocity Architecture with everything at its maximum.

2. The title of this thread has been debunked already. If it were indeed the case, we would have seen better compression on XSX than PS5 in games. There hasn't been a single piece of evidence for that. Not even one. On the other hand, PS5 has multiple games with significantly smaller game sizes, the latest being Subnautica, with over a 100% difference in file size.

3. You can compress the data as much as you want, but it all still depends on the decompression speed of the console. Otherwise, you will see extreme pop-in in-game and the same old 1-minute loading screens. Resident Evil Village is an excellent example of that. The game is more compressed on the PS5 than it is on XSX. It is roughly 30% bigger in size on Xbox. Despite that, it loads 400% faster on the PS5 (1.5 seconds vs. 8 seconds). Theoretically, the developer could make the file size bigger (compress it less) on Xbox and decrease the loading time by a couple of seconds. The hardware (decompression units) is the limitation here, as compared to the PS5, regardless of the compression multiplier and speed it may have on Xbox.

There are multiple ways of improving texture streaming I/O performance besides just making the file size smaller or having a faster SSD. Sampler Feedback Streaming happens to be a method for significantly cutting down a game's streaming requirements. That fact alone changes what can be done with 2.4GB/s raw / 4.8GB/s compressed. We have seen some evidence; it's called Quick Resume. It's the only thing thus far actually taking proper advantage of the Xbox Velocity Architecture that we know of.

Much of what you're saying seems premature, given the generation has barely started and we have yet to truly see what XVA can do when fully utilized in a game built around it. But it'll come, I'm certain. Long story short: for what Microsoft designed around, 2.4GB/s raw and 4.8GB/s compressed is all they will ever need.
 

Rea

Member
I don't think Cerny necessarily told us the max speed of the decompression unit, but more what the current theoretical max compression ratio is with current algorithms. In other words, it could improve further with new algorithms and improvements. With that same principle applied, there may not be a max speed of the Series X decompressor worth actually knowing, since they appear to have a different I/O strategy. We just know it's a block that can deliver over 6GB/s, which easily outstrips the max read speed of the SSD.

On the Xbox side of things they appeared to focus on making their own compression format specifically designed around GPU textures, whatever the benefits of that are, and then basically pairing it up with the efficiency gains of Sampler Feedback Streaming. It's a smart approach. What you lack in raw speed, try to make up for by significantly cutting down on the streaming requirements by between 2.5x and 3x, thus giving developers more of what they always want more of, which is more available usable RAM.

The below is the worst case for any texture streaming system: really fast, successive camera cuts. There are times when as much as 4GB to 5GB of data is occupying memory, yet only a tiny chunk of it is actually being actively utilized. Notice how each bar below has darker areas and brighter areas? The bright areas are the parts of RAM actually being used; the darker areas represent texture data that is in RAM but not currently being used. The efficiency of SFS allows Series X to get away with using far less RAM because the system will be fast enough to keep up. When you don't have the same speed or efficiency, there's no choice but to stream in, or keep active, more data in RAM.

ZM3QDoY.jpg





It's a very cool approach. If you wanted to push the scene equivalent of 18GB worth of texture data into RAM (an amount which exceeds the Series X's 13.5GB of usable RAM), Sampler Feedback Streaming, by preventing Series X from wasting bandwidth on unneeded data, cuts that 18GB down to 7.2GB, which Series X would then be able to get into memory in 1.5 seconds. It's a technique that lets the SSD deliver an effective bandwidth well beyond its spec, something they've been stressing from the get-go. That "more than 100GB of game data stored on the SSD just in time for when the game needs it" stuff was basically all about Sampler Feedback Streaming.




Forget how it compares, but that's just plain exciting due to all the potential ways this kind of memory savings can be used to make games better in other ways. Or if they want a purely visual benefit, the RAM savings can go right back into making texture quality even higher. Things that shouldn't be possible with the RAM pool of the system suddenly seem possible, and we know developers have tricks for days.

Dirt 5 dev talked about how he used PS2's 4MB video RAM back in the day. He said because the bus was so insanely fast he could do way more than what the 4MB of video RAM would have suggested was possible.

You are so excited about SFS and still confused. SFS only requests the data that is really needed, right?
Let me ask you one question.
What if I need a texture that is 5GB of raw data? I need that texture for my in-game character within a second, to show the gamer that my character has so much detail. How will SFS help?
 

sinnergy

Member
You are so excited about SFS and still confused. SFS only requests the data that is really needed, right?
Let me ask you one question.
What if I need a texture that is 5GB of raw data? I need that texture for my in-game character within a second, to show the gamer that my character has so much detail. How will SFS help?
Why would you need that amount? Is the game character always on screen? Seems like a bad choice... you'd have less left for the environment.
 

Rea

Member
Why would you need that amount? Is the game character always on screen? Seems like a bad choice... you'd have less left for the environment.
Why wouldn't you? Why would you limit yourself? If a dev wants to design a game where they need a 5GB raw texture within a second at any given moment, how would you solve that without the hardware?
 

Godfavor

Member
You are so excited about SFS and still confused. SFS only requests the data that is really needed, right?
Let me ask you one question.
What if I need a texture that is 5GB of raw data? I need that texture for my in-game character within a second, to show the gamer that my character has so much detail. How will SFS help?
Let's assume the whole texture data of that character would have been higher than 5GB (say, 10GB); because SFS would have to load only the required mips, it cuts it down to 5GB.

As XSX has 4.8GB/sec (with decompression), the remaining 0.2GB that is missing would temporarily be replaced with lower-quality mips (via SFS) until the last 0.2GB loads in about 0.1sec. It has been clarified that for mips that arrive late to the party, XSX uses specialized hardware (go easy on me, that's what the XSX technical engineer said on Twitter 🙂).

This is an ideal hypothetical scenario while using the SSD as RAM; of course you need to load the whole environment as well.
 

Rea

Member
Just to clarify the above discussion: SFS and PRT+ are the same thing. PRT (without the +) does not include sampler feedback.
I just want to know how that PRT+, or SFS as Microsoft calls it, will solve my problem. That 5GB of raw data I need is after applying SFS, so without SFS I would need 5 × 2.5 = 12.5GB raw.
 

Panajev2001a

GAF's Pleasant Genius
Just to clarify the above discussion: SFS and PRT+ are the same thing. PRT (without the +) does not include sampler feedback.
… and the work to get by without it is not the end of the world for devs (although not 100% free either), and there is no 2-2.5x improvement in data streaming from SFS over plain PRT. Which is still thread derailing, as the thread is about BCPACK and Kraken (with or without Oodle Texture preprocessing).

So, once you bring SFS and PRT in for both consoles, the difference in I/O throughput and latency still remains for devs who can adapt their engines to this problem.
 

Rea

Member
As XSX has 4.8GB/sec (with decompression), the remaining 0.2GB that is missing would temporarily be replaced with lower-quality mips (via Sampler Feedback) until the last 0.2GB loads in about 0.1sec.
You can't use that; I specifically said RAW data.
 

sinnergy

Member
Why wouldn't you? Why would you limit yourself? If a dev wants to design a game where they need a 5GB raw texture within a second at any given moment, how would you solve that without the hardware?
You are talking about a character with 5 GB; why would you want that? I just think that's a weird point to make... it would mean you need that amount at all times, memory that's needed for other parts of the game.
 

Godfavor

Member
… and the work to get by without it is not the end of the world for devs (although not 100% free either), and there is no 2-2.5x improvement in data streaming from SFS over plain PRT. Which is still thread derailing, as the thread is about BCPACK and Kraken (with or without Oodle Texture preprocessing).

So, once you bring SFS and PRT in for both consoles, the difference in I/O throughput and latency still remains for devs who can adapt their engines to this problem.
Not sure what the difference is, but having it at the API level (or hardware level) would maybe make it easier and faster to work with than other solutions.
 

Rea

Member
The game is more compressed on the PS5 than it is on XSX. It is roughly 30% bigger in size on Xbox.

That's because Xbox uses Zlib for compression and PS5 uses Kraken; Kraken is better at compression, so the size is smaller. Like Cerny said, Kraken is 10% more efficient than Zlib, so the difference is 2.7GB vs 3GB.
Theoretically, the developer could make the file size bigger (compress it less) on Xbox and decrease the loading time by a couple of seconds.
No, you can't make the game bigger and get a shorter loading time; Xbox can have the same size as PS5, or a bigger size, and the loading time will still be the same. The reason the game size is different is that PS5 has a custom Kraken decompressor and Xbox has a custom Zlib decompressor; Xbox could use the Kraken format and have the same size as PS5, but devs would have to use the CPU for decompression, which would cripple game performance. Decompression speed does not make the game smaller; the game is smaller because Kraken + Oodle compresses data very efficiently.

The loading time depends on the raw speed of the SSD and also on the compression ratio.
 

Panajev2001a

GAF's Pleasant Genius
Not sure what the difference is, but having it at the API level (or hardware level) would maybe make it easier and faster to work with than other solutions.
I think it is ease of use, plus the fact that the HW helps you compute which data should be prefetched and when (the feedback part of sampler feedback; the non-blocking instructions to fetch texture data, check availability, and cause page faults if needed, etc.), as well as avoiding corner cases when filtering textures where lower-detail pages are mixed with the higher-detail ones you were able to stream in.

It is an improvement, and it is better to have it than not, but it is not the 2.5-3x bandwidth and memory usage improvement that some people have taken MS's engineers working on this to be claiming.
 

Rea

Member
Why would you need that amount? Is the game character always on screen? Seems like a bad choice... you'd have less left for the environment.
I can use lower LODs for the environment. My scenario is when the player zooms in on the character within a sec, and I need the full texture of my character, which is 5GB raw.
 

Panajev2001a

GAF's Pleasant Genius
That's because Xbox uses Zlib for compression and PS5 uses Kraken; Kraken is better at compression, so the size is smaller. Like Cerny said, Kraken is 10% more efficient than Zlib, so the difference is 2.7GB vs 3GB.
BCPACK has a better compression rate than Kraken which is more general purpose, but the game changer seems to be Oodle Texture that can boost Kraken’s compression rate.

Also, with such fast SSDs and the high cost per GB compared to old HDDs, there is more incentive to trim disk usage, as well as more fat to trim (we do not need to duplicate all the data we used to in order to cover mechanical disk latencies, and the better you can access smaller and smaller files, the less duplication and padding you need).
 

Godfavor

Member
I think it is ease of use, plus the fact that the HW helps you compute which data should be prefetched and when (the feedback part of sampler feedback; the non-blocking instructions to fetch texture data, check availability, and cause page faults if needed, etc.), as well as avoiding corner cases when filtering textures where lower-detail pages are mixed with the higher-detail ones you were able to stream in.

It is an improvement, and it is better to have it than not, but it is not the 2.5-3x bandwidth and memory usage improvement that some people have taken MS's engineers working on this to be claiming.
This is a little confusing to me, as the 2.5x improvement is indeed measured against raw texture data without partial textures at all.

The confusing part is that they compared it with the Xbox One X, which has PRT. They analyzed a lot of games for how they handle RAM data and came to the conclusion that only about 1/3 of it is actually used on average. Clarification needed.
 

Rea

Member
The 4.8gb would be raw, the remaining ones would be guesswork from SF, unless the textures have already been loaded into ram previously
That 4.8 is after applying a compression ratio of 2. So if I follow your logic, that means my uncompressed data will be 10GB (5 × 2), so now I need 10GB in a sec. Xbox still needs more than 2 secs even after applying SFS. The only way to solve this problem without the hardware is to reduce my texture size.
 

Rea

Member
BCPACK has a better compression rate than Kraken which is more general purpose, but the game changer seems to be Oodle Texture that can boost Kraken’s compression rate.
Yes, to be precise, Xbox is using Zlib + BCPack and PS5 is using Kraken + Oodle. It seems the Kraken + Oodle combo is superior to the Zlib + BCPack combo when compressing data.
 

dxdt

Member
IIRC DirectStorage/VA on Xbox is essentially a solution shared with Nvidia's RTX I/O add-on for the RTX 3xxx series GPUs, and the latency reduction isn't 100x versus old interfaces, like PS5's is versus PS4, but just a 20x improvement at best.

The I/O complex in the PS5 is a lot of hardware just for I/O, and even Tim schooled that YouTube influencer who tried to imply PC could get close to PS5 SSD bandwidth and check-in latency. Carmack even briefly made a comment regarding bypassing kernel-mapped memory (IIRC) on PC to reduce latency, before probably being given a PS5 devkit or a phone call from Tim, and quietly said no more, suggesting that PS5's I/O is the paradigm shift that PS5 owners mostly think it is/will be.
Where did you get the 100X and 20X numbers?
 

Godfavor

Member
That 4.8 is after applying compression ratio of 2. So if i follow your logic that means my uncompressed data will be 10gb (5*2), so now i need 10gb in a sec. Xbox still needs more than 2secs even after applying SFS. The only way to solve this problem without hardware is to reduce my texture size.
SFS would cut the 10GB of textures down to 5GB of mips before the I/O starts transferring data at 4.8GB/sec.
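A quick sketch of the point of contention (the 2x cut here is just the 10GB-to-5GB example above, and 4.8GB/s is the compressed figure; none of this is official SFS behavior):

```python
# The disagreement: SFS trims the request BEFORE the SSD reads
# anything, so the 4.8 GB/s pipe only sees the trimmed amount.
COMPRESSED_SPEED = 4.8   # GB/s

def transfer_time(needed_gb, sfs_factor=1.0):
    moved = needed_gb / sfs_factor   # what actually crosses the bus
    return moved / COMPRESSED_SPEED

print(round(transfer_time(10.0), 2))                # 2.08 s, no SFS
print(round(transfer_time(10.0, sfs_factor=2), 2))  # 1.04 s with it
```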
 

dxdt

Member
That's because Xbox uses Zlib for compression and PS5 uses Kraken; Kraken is better at compression, so the size is smaller. Like Cerny said, Kraken is 10% more efficient than Zlib, so the difference is 2.7GB vs 3GB.

No, you can't make the game bigger and get a shorter loading time; Xbox can have the same size as PS5, or a bigger size, and the loading time will still be the same. The reason the game size is different is that PS5 has a custom Kraken decompressor and Xbox has a custom Zlib decompressor; Xbox could use the Kraken format and have the same size as PS5, but devs would have to use the CPU for decompression, which would cripple game performance. Decompression speed does not make the game smaller; the game is smaller because Kraken + Oodle compresses data very efficiently.

The loading time depends on the raw speed of the SSD and also on the compression ratio.
Wouldn't the XSX use BCPack and not zlib? Not a lot is known about BCPack or which games are using it.
 

Panajev2001a

GAF's Pleasant Genius
This is a little confusing to me as indeed the 2.5x improvement is from raw texture data without partial textures at all.

The confusing part is that they have compared it with Xbox one X which has PRT.
As they have analyzed a lot of games of how they handle ram data and came into conclusion that only 1/3 of it is usable on average.
Clarification needed.
Xbox One X has PRT, and so do PS4 and Xbox One S, but they also have low-bandwidth and very high-latency storage solutions. Devs moving from Xbox 360 to Xbox One and then Xbox One X for the 4K patches went from 512 MB of RAM to 8 GB and then to 12 GB. XOX adoption (now discontinued) was also hyped but low in actual numbers, so few games were really maxing it.
Most Xbox One games were targeting a much lower resolution, and the available RAM made advanced PRT usage less of a killer feature (it could have helped with the lower external RAM bandwidth, but you also had the local ESRAM to hold some of your texture data).

Some developers were using advanced virtual texturing from disk to main RAM to GPU; many were not. MS was doing an XVA vs Xbox One X demo and leaned on all the pillars that make up XVA: SFS + DirectStorage + BCPACK.

It is interesting to speculate; maybe we will learn that PRT was bugged or broken on XOX for some reason, or there are other explanations. The point is not XSX vs XOX, but how this was sold as SFS vs PRT to make an XSX vs PS5 argument which MS was very aligned with its influencers about ;).
 

Panajev2001a

GAF's Pleasant Genius
SFS would cut the 10GB of textures down to 5GB of mips before the I/O starts transferring data at 4.8GB/sec.
SFS and PRT are mechanisms to avoid transferring data that is not needed, rather than multiplying how much data is transferred (so the multiplier is about the equivalent bandwidth you would need to transfer all of the data in the same time it takes you to stream only what you think is needed to render the actual scene).
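That framing can be written as one line: if only a fraction of the texture data is actually touched by the scene, skipping the rest makes the pipe look proportionally wider. The 0.4 fraction below is just the reciprocal of the oft-quoted 2.5x, an assumption, not a spec:

```python
# Equivalent-bandwidth framing of SFS/PRT: skipping untouched data
# is indistinguishable from having a proportionally faster pipe.
def equivalent_bandwidth(raw_gbps, needed_fraction):
    return raw_gbps / needed_fraction

print(round(equivalent_bandwidth(4.8, 0.4), 1))   # 12.0
```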
 

dxdt

Member
BCPack is for textures only; same goes for Oodle. Zlib is the general compression format; same goes for Kraken. You can combine them for more compression: Zlib + BCPack, Kraken + Oodle, or Zlib + Oodle.
So BCPACK is more comparable to Kraken and not zlib?
 

Godfavor

Member
No! That 10GB uncompressed is after using *SFS*.
Dear lord!!!
Without SFS my data would be 25GB uncompressed.
Hey, I am trying to understand here 😔
Then by your example it would take 2.3 sec, then. Peace
Xbox One X has PRT, and so do PS4 and Xbox One S, but they also have low-bandwidth and very high-latency storage solutions. Devs moving from Xbox 360 to Xbox One and then Xbox One X for the 4K patches went from 512 MB of RAM to 8 GB and then to 12 GB. XOX adoption (now discontinued) was also hyped but low in actual numbers, so few games were really maxing it.
Most Xbox One games were targeting a much lower resolution, and the available RAM made advanced PRT usage less of a killer feature (it could have helped with the lower external RAM bandwidth, but you also had the local ESRAM to hold some of your texture data).

Some developers were using advanced virtual texturing from disk to main RAM to GPU; many were not. MS was doing an XVA vs Xbox One X demo and leaned on all the pillars that make up XVA: SFS + DirectStorage + BCPACK.

It is interesting to speculate; maybe we will learn that PRT was bugged or broken on XOX for some reason, or there are other explanations. The point is not XSX vs XOX, but how this was sold as SFS vs PRT to make an XSX vs PS5 argument which MS was very aligned with its influencers about ;).
That makes sense... I guess they called it that because it is implemented at the API level (or as a hardware solution), which was not done before.
Just to clarify, SFS/PRT+ includes both PRT and SF.
 
Let's assume the whole texture data of that character would have been higher than 5GB (say, 10GB); because SFS would have to load only the required mips, it cuts it down to 5GB.

As XSX has 4.8GB/sec (with decompression), the remaining 0.2GB that is missing would temporarily be replaced with lower-quality mips (via SFS) until the last 0.2GB loads in about 0.1sec. It has been clarified that for mips that arrive late to the party, XSX uses specialized hardware (go easy on me, that's what the XSX technical engineer said on Twitter 🙂).

This is an ideal hypothetical scenario while using the SSD as RAM; of course you need to load the whole environment as well.

But PS5 is also going to load only the required mip map tiles; PRT already does that. What SFS adds is ensuring there is a low-resolution mip already in memory to use while it waits to load the correct one. It's good, but it won't help as much as people think.
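The fallback behaviour described here can be sketched in a few lines. This is a toy model of the idea only, not the real D3D12 Sampler Feedback API; the mip numbering and residency set are made up for illustration:

```python
# Toy model of SFS-style mip fallback (not the actual D3D12 API).
def sample_with_fallback(requested_mip, resident, pending, coarsest_mip=12):
    """Return the mip level actually sampled this frame.

    If the requested (fine) mip tile is not resident yet, fall back to the
    next coarser mip that is, and queue the fine one for streaming.
    """
    mip = requested_mip
    while mip not in resident and mip < coarsest_mip:
        mip += 1  # higher mip index = lower resolution
    if requested_mip not in resident:
        pending.append(requested_mip)  # stream the exact mip for later frames
    return mip

resident = {4, 8, 12}  # coarse mips kept permanently resident
pending = []
print(sample_with_fallback(2, resident, pending))  # -> 4 (coarser stand-in)
print(pending)                                     # -> [2] (queued to stream)
```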


This is déjà vu. Last generation there was a lot of discussion about PRT on Xbox One, with MS examples featuring a red planet (I think it was Mars) where only the required tiles from the mip maps were loaded, using only 16 MB of memory (this caused confusion, as some people thought they were storing that in ESRAM because of the size). Back then, for some reason, there was this idea that this kind of procedure wouldn't be available or was impossible on PS4 because <reasons>, and that it would compensate for PS4's advantage vs Xbox One. Guess what happened?




 

Allandor

Member
No! That 10gb uncompressed is after using *SFS*.
Dear lord!!!
Without SFS my data would be 25gb uncompressed.
It would not work on either system. PS5 could transfer the data (when the compression works well) in 2s, but the GPU can't process that much data ;)
Neither console is IO bandwidth limited. Getting the data into memory and actually using that data are two different things. Also, if you used that much data in just one scene, the SSD itself would be limited by its size.
Don't make up extreme cases that are unrealistic and that the hardware can't even handle.
 

Rea

Member
It would not work on either system. PS5 could transfer the data (when the compression works well) in 2s, but the GPU can't process that much data ;)
Neither console is IO bandwidth limited. Getting the data into memory and actually using that data are two different things. Also, if you used that much data in just one scene, the SSD itself would be limited by its size.
Don't make up extreme cases that are unrealistic and that the hardware can't even handle.
Who told you the GPU can't process that much data? Are you a developer?
Also, PS5 can transfer 5gb of raw (compressed) data in a second; that is 10gb uncompressed if the compression ratio is 2.
Don't be disingenuous.
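For what it's worth, the ratio arithmetic being argued here is simple. A sketch with this post's round numbers (5 GB/s raw and a 2:1 ratio are illustrative, not measured figures):

```python
def effective_gbps(raw_gbps, ratio):
    """Uncompressed data per second = raw SSD rate x compression ratio."""
    return raw_gbps * ratio

# 5 GB/s of compressed data at a 2:1 ratio:
print(effective_gbps(5.0, 2.0))  # 10.0 GB/s of uncompressed data
```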
 

Allandor

Member
Who told you the gpu can't process that much data? Are you developer?
Also ps5 can transfer 5gb raw data in a sec, that is 10gb uncompressed if the compression ratio is 2.
Don't be disingenuous.
Well, yes, I'm a software developer. And no, the GPU can't handle that much data in a meaningful way.
Most of the memory in the new consoles shouldn't be used as a texture/asset buffer. Instead it is being used for render targets, computation stuff...
If you transferred that much data every second, you would also steal a lot of bandwidth from the GPU (don't forget the bandwidth lost to memory contention). So it is really not a real-world case you want to construct.
Just try to make slightly more realistic assumptions.
 

Riky

$MSFT
Pretty much. My plan is to use a huge external drive for cold storage and playing PS4 games, and then I'll just expand the internal once the price is low enough.

I understand that Microsoft's solution is very convenient, but I'm not so dumb that I'm incapable of installing an NVMe drive into the PS5. It's a great solution for those who are not capable of doing those types of things.
Microsoft's solution is also better for those with multiple Series consoles. I keep the games I'm playing on the Storage Card and can remove it in a second and reinstall it in another machine in a second; even going from X to S and vice versa, the game just plays.
Very convenient.
 

Darius87

Member
It would not work on either system. PS5 could transfer the data (when the compression works well) in 2s, but the GPU can't process that much data ;)
Neither console is IO bandwidth limited. Getting the data into memory and actually using that data are two different things. Also, if you used that much data in just one scene, the SSD itself would be limited by its size.
Don't make up extreme cases that are unrealistic and that the hardware can't even handle.
What BS! Any modern GPU can process that much data; the limiting factor is not the GPU but the SSD speed, because it has the lower bandwidth.
As a dev you should know that, but clearly you're one of those pseudo-devs pretending to know shit :messenger_grinning_smiling:
 
Last edited:

Rea

Member
Well, yes, I'm a software developer. And no, the GPU can't handle that much data in a meaningful way.
Most of the memory in the new consoles shouldn't be used as a texture/asset buffer. Instead it is being used for render targets, computation stuff...
If you transferred that much data every second, you would also steal a lot of bandwidth from the GPU (don't forget the bandwidth lost to memory contention). So it is really not a real-world case you want to construct.
Just try to make slightly more realistic assumptions.
Cerny literally said during Road to PS5 that the GPU can render that data while the player is turning around.
You're being disingenuous.
 

Shmunter

Member
Well, yes, I'm a software developer. And no, the GPU can't handle that much data in a meaningful way.
Most of the memory in the new consoles shouldn't be used as a texture/asset buffer. Instead it is being used for render targets, computation stuff...
If you transferred that much data every second, you would also steal a lot of bandwidth from the GPU (don't forget the bandwidth lost to memory contention). So it is really not a real-world case you want to construct.
Just try to make slightly more realistic assumptions.
Maximum bandwidth from SSD to RAM is 22gig/s, and typically 8-9gig, out of 448gig/s.

When the GPU composes a frame in 16ms, I'm not convinced that ratio is onerous with some basic budgeting and buffering by the engine.
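The budgeting point above can be put in numbers (a sketch; 22 GB/s peak, 8-9 GB/s typical, and 448 GB/s RAM bandwidth are the figures from the post, and memory contention overhead is deliberately ignored):

```python
FRAME_S = 0.016  # ~16 ms frame at 60 fps

def ssd_share_of_ram_bw(ssd_gbps, ram_gbps=448.0):
    """Fraction of total memory bandwidth consumed by SSD-to-RAM writes
    (ignores extra losses from memory contention)."""
    return ssd_gbps / ram_gbps

def per_frame_budget_mb(ssd_gbps):
    """How much data the SSD can deliver within a single frame, in MB."""
    return ssd_gbps * FRAME_S * 1024

print(ssd_share_of_ram_bw(22.0))  # ~0.049, i.e. under 5% even at peak
print(per_frame_budget_mb(9.0))   # ~147 MB per 16 ms frame at typical rates
```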
 
Last edited:

assurdum

Banned
None yet. It might be this year or next as Xbox hasn't released anything that uses all of the Series features, while PS5 has.
What the hell are you talking about? How exactly are all of the PS5 hardware features already being used? I doubt the cache scrubbers or the GE have been touched at all. Series X can definitely push its "special" hardware features a lot more. The enormous advantage of hardware designed around a multiplatform development kit, as Series X is, is having immediate access to such features.
 
Last edited:

SpokkX

Member
This fighting over specs is a bit pathetic

both will be fast enough and produce similar results - better/worse depending on game/dev/engine... just like the GPU and CPU differences

the main difference this gen will be exclusive games and services

- gamepass, quick resume, enhanced bc with fps boost for xbox..
- game help feature and game stream sharing on ps5
 
Last edited: