
Real Talk: Gaming Hardware Is About Maximizing Efficiency Not TFLOPs

Xplainin

Banned
But how does it transfer to the games/engines themselves, though? What's the end impact for us gamers? Of the dozens of games shown so far, only R&C stands out, and even that isn't anything new, because Star Citizen has already been doing it on ordinary ~500 MB/s SATA SSDs with all their supposed bottlenecks.
Other than loading times, it's going to free devs from doing as much work to get around slow data delivery from the drives. So rather than spending time and resources designing maps a certain way to hide load times or LOD pop-in, they can spend it on making the game better.
 

Darius87

Member
This is a bunch of crap that you wrote; I'll try to break it down:
Well, we can actually break down some numbers and see. An 8K texture is 7,680 x 4,320 pixels, or roughly 33 million pixels. At 3 bytes per pixel, each 8K texture is 99,532,800 bytes, about 99.5 MB. This is all uncompressed, by the way.
8K textures don't exactly match the 8K screen resolution; it varies, and the total size of unique texture assets in a scene per frame could easily exceed 100 MB.

So let's just say they were streaming one uncompressed 8K texture per frame in the UE5 demo. The demo was capped at 30 FPS, so over 30 frames they're streaming about 2.99 GB of data. Now, that's clearly more than XSX's raw bandwidth cap of 2.4 GB/s uncompressed, but I'm talking about a very extreme case here: the demo streaming a new raw 8K texture into RAM every single frame, which I'm almost certain isn't happening.
It's totally possible on PS5 even without compression. Here's the maths:
------------------------------
1000 ms - 5500 MB
33 ms - X MB
------------------------------
X = 33 * 5500 / 1000 = 181.5 MB
Obviously 181.5 MB > 99.5 MB, so PS5 could stream even bigger textures or other assets per frame, and that's without compression. For Series X it would be:
------------------------------
1000 ms - 2400 MB
33 ms - X MB
------------------------------
X = 33 * 2400 / 1000 = 79.2 MB
Obviously Series X would need to compress this asset, because 79.2 MB < 99.5 MB.
Does a game need to stream a 100 MB texture every frame? Next-gen, of course; the UE5 demo most likely exceeds that, especially when the flying section kicks in. So you're clearly wrong here.
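Here's the same arithmetic as a quick Python sketch (using the raw, uncompressed figures quoted in this thread; real streaming rates depend on compression and access patterns):

```python
# Rough per-frame streaming budget vs. one uncompressed 8K texture.
# Uses the raw (uncompressed) SSD figures quoted in this thread.

FRAME_TIME_MS = 33  # ~1/30 s, rounded the same way as in the maths above

def per_frame_budget_mb(raw_mb_per_s: float) -> float:
    """MB readable from the SSD within a single 33 ms frame."""
    return raw_mb_per_s * FRAME_TIME_MS / 1000

texture_mb = 7680 * 4320 * 3 / 1_000_000  # one uncompressed 8K RGB texture, ~99.5 MB

for name, raw_mb_per_s in [("PS5", 5500), ("XSX", 2400)]:
    budget = per_frame_budget_mb(raw_mb_per_s)
    verdict = "fits raw" if budget >= texture_mb else "needs compression"
    print(f"{name}: {budget:.1f} MB/frame vs {texture_mb:.1f} MB texture -> {verdict}")
# PS5: 181.5 MB/frame (fits raw); XSX: 79.2 MB/frame (needs compression)
```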

Not only because that would be excessive for a real game scenario, but because we can also assume on PS5 that when the dedicated processor in the I/O block is streaming data from storage to RAM, the other system components are waiting on bus access, since they're all part of a hUMA architecture. So the GPU isn't going to be able to read the new texture data in RAM until the I/O block returns access of the bus to the other system components. The thing is, then, if the I/O block is writing those 8K textures consecutively for 30 frames' worth, those are 30 frames where the GPU isn't reading any of that texture data, since it doesn't have access back to the memory bus.
Jeez... I think you're just making this crap up to look like you know stuff, but you don't...
I don't think the memory bus is the problem here like you described. Also, the GPU reads textures for the next frame, which is 33 ms away, not for the next 30 frames, which would be 30 * 33 = 990 ms (about 1 second).

This same issue also pops up on XSX since it's also hUMA, but there's a fraction of a CPU core still handling movement of data between storage and RAM in that situation, so while the GPU has to wait, CPU-bound tasks could (in some limited capacity) still access data in RAM while new data from storage is being copied into it. There are other things that might prevent this, though, or at least limit it a lot, because you probably don't want a game's CPU-bound logic trying to access data in RAM that is actively being replaced or will very soon be replaced, as that could cause errors. (This same scenario would happen on PS5 if it in fact handled transfer of data from storage to RAM the same way, which it doesn't.)
Another paragraph of gibberish nonsense...
This could be the case for current-gen consoles, where GPUs wait for data to be read into RAM, but next-gen consoles have SSDs, which removes this bottleneck.

So back to the UE5 demo scenario: yes, if it were drawing raw 8K textures at a rate of one new texture per frame at 30 frames per second, that would exceed XSX's raw SSD bandwidth. However, literally the only realistic scenario where you would be doing that is in a tech demo, which is what the UE5 demo was. An actual real game on PS5 won't be able to stream in data at that rate because other game logic has to actually be performed. On both PS5 and XSX the issue can be addressed somewhat by compressing those 8K textures ahead of time and then decompressing them through the decompressors, and while both systems can decompress data MUCH faster than a PC thanks to dedicated decompression hardware, it still adds a bit of a time penalty to decompress.
How do you know what a game will need in the future? UE5 provides the technology for creating games like the UE5 demo on PS5, so clearly future games will look like that.
So because the CPU has to do logic tasks, PS5 couldn't stream game assets at a high rate? Am I reading this right? I mean...
One thing is the CPU and another is the I/O streaming assets from SSD to RAM, so they're independent of one another. Wrong again.

Both PS5 and XSX can decompress assets in real time without any penalty; that's why we see dedicated hardware quoted as the equivalent of 9 Zen 2 cores (PS5) and 5 Zen 2 cores (XSX) for decompressing data on the fly. Compressed assets can also already be stored that way on the Blu-ray disc.
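To put rough numbers on that, here's a minimal sketch using the compressed-throughput figures both vendors have quoted publicly (around 8 GB/s typical for PS5 with Kraken, 4.8 GB/s for XSX with BCPack; these are assumptions for illustration, and real ratios vary per asset):

```python
# Effective per-frame budget once the hardware decompressors are in the path.
# Compressed-throughput figures are the publicly quoted "typical" numbers
# (assumptions for illustration, not guarantees; real ratios vary per asset).

FRAME_TIME_MS = 33
TEXTURE_MB = 99.5  # one uncompressed 8K RGB texture, from the maths above

quoted_compressed_mb_per_s = {"PS5 (Kraken)": 8000, "XSX (BCPack)": 4800}

for name, rate in quoted_compressed_mb_per_s.items():
    budget_mb = rate * FRAME_TIME_MS / 1000
    print(f"{name}: ~{budget_mb:.0f} MB of decompressed data per frame, "
          f"about {budget_mb / TEXTURE_MB:.1f}x one 8K texture")
# PS5: ~264 MB/frame (~2.7x); XSX: ~158 MB/frame (~1.6x)
```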


If you start talking in the realm of compressed 8K textures, then you get into a scenario where that same UE5 demo could easily stream them in on XSX as well as on PS5. But, again, you're talking about streaming a new texture every consecutive frame, which is simply not realistic for an actual gameplay scenario where other game logic is running.

I think folks have to look at Sweeney's comments in that context because it's the only one that makes sense.
Most games compress textures for streaming because they compress well and are big, so compressing makes sense. Also, game logic and streaming have nothing to do with one another, like I said before, and there's nothing wrong with streaming 8K textures per frame when they're compressed; PS5 could even do it without compression, but it's not practical.
 

thelastword

Banned
(quoting Darius87's breakdown above)
Thank you, goodness gracious....
 
For those of you who have chosen to completely dismiss and disregard the point the opening post is trying to make, I'll leave this here for you.

Dirt 5 has an option for 120 frames per second on the PS5

 

Tqaulity

Member
For those of you who have chosen to completely dismiss and disregard the point the opening post is trying to make, I'll leave this here for you.
🙂 The first of many times we'll see parity with multiplats! Again, even in a "brute force" sense, the difference is just not big enough to produce a noticeable difference in quality or performance.
 

Genx3

Member
I'm not sure if people are being sarcastic or just clueless about what the SSDs will do.

Holy crap at the level of nonsense in here.
 

The Alien

Banned
Sorry, but this reads as damage control.

You say: "nobody really knows which system is more powerful between the Xbox Series X and PS5." I kinda think we can safely say we know which is more powerful.

Also, I hate the efficiency vs. raw power debate. The Series X is an efficient machine. It's not just scrap that's going to power 4K/60fps.
 

Genx3

Member
Not really, they are actually useful for both systems and a massive upgrade from HDDs. That's why developers are excited that SSDs are standard in both systems.

Just like SSDs, move engines and PRT hardware are also useful. Hardware customizations bring benefits. They are not magic.
 

Genx3

Member
The number one bottleneck for just about every console GPU has been memory bandwidth.
XSX has 25% more bandwidth available to the GPU.
That means the greatest bottleneck in gaming hardware sees a 25% improvement on XSX over PS5.
There is nothing the SSD can do to combat this.
Just like PS5, XSX's SSD also has hardware decompression and other customizations that will help with the limited RAM and help performance by unburdening the CPU from these tasks.
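For reference, the 25% comes from the peak spec-sheet numbers (560 GB/s for XSX's 10 GB GPU-optimal pool vs. PS5's uniform 448 GB/s); a minimal check:

```python
# Where the "25% more bandwidth" claim comes from (public spec-sheet peaks).
# XSX splits its 16 GB: 10 GB at 560 GB/s (GPU-optimal) plus 6 GB at 336 GB/s;
# PS5's 16 GB is a uniform 448 GB/s.

xsx_gpu_optimal_gb_per_s = 560
ps5_gb_per_s = 448

advantage_pct = (xsx_gpu_optimal_gb_per_s / ps5_gb_per_s - 1) * 100
print(f"XSX GPU-optimal pool vs PS5: +{advantage_pct:.0f}% peak bandwidth")  # +25%
```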
 
Just like SSDs, move engines and PRT hardware are also useful. Hardware customizations bring benefits. They are not magic.

SSDs seem magical when compared to the incredibly slow HDDs of previous consoles. There will be changes in game design due to that.

We are not talking about a 10% improvement here.
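As a rough illustration of the scale of that jump (the ~100 MB/s HDD figure is a ballpark assumption for last-gen console drives, not an official spec):

```python
# Rough time to pull 10 GB of assets off storage, ignoring seeks and decompression.
# The 100 MB/s HDD figure is a ballpark for last-gen console drives, not a spec.

assets_mb = 10_000
for name, mb_per_s in [("last-gen HDD (ballpark)", 100),
                       ("XSX SSD (raw)", 2400),
                       ("PS5 SSD (raw)", 5500)]:
    print(f"{name}: {assets_mb / mb_per_s:.1f} s")
# roughly 100 s on the HDD vs. ~4.2 s (XSX) and ~1.8 s (PS5)
```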
 

BootsLoader

Banned
I don't know about PS5 and Series X, but as far as I remember, PS3 exclusives were generally much better and more complex than 360 games.
I think the Xbox 360 could never have handled games like The Last of Us and God of War 3. The problem was that multiplatform developers could not afford the time to harness the power of that system. It was hard to develop for, and that is a fact.
So I think that if Xbox Series X is more powerful, then that's it. It is just more powerful.
On the other hand, it also depends on the developers and how much they take advantage of that power. Xbox One X is more powerful than PS4 Pro, but the exclusive games on PS4 Pro look much better.

Who knows, time will show us the truth.
 

Tqaulity

Member
Wow, I almost forgot about this thread, but rereading the OP, it's still awesome! My point was clear and remains true: actual real-world performance is what matters, and efficiency is the goal. We've seen a number of titles where the PS5 (despite inferior specs) outperforms the XSX, largely due to software deltas and potential hardware bottlenecks. The whole discussion about Nvidia's driver overhead is also relevant to this. To have a 5600 XT outperform an RTX 3090 in ANY scenario just speaks to the importance of the entire system and software stack with regard to performance.

At the end of the day, the differences between the consoles in actual game performance are so slight they're not worth talking about, and I'm done engaging in the petty arguments over every face-off. But we will continue to see the importance of the software (SDK and application) in utilizing the hardware better, as next-gen games evolve well beyond what we see today.

 

JackMcGunns

Banned
For those that want the TLDR:

"Look, I'm not saying that either system is more powerful but the PS5 is more powerful because Sony devs praised it and also TFLOPS are just like, theoretical and stuff, it is all about efficiency and removing bottlenecks not TFLOPS. Again, not claiming that one is more powerful but the PS5 will be more powerful."


The greatest summary ever! :messenger_grinning_sweat:
 