
The PlayStation 5 GPU Will Be Supported By Better Hardware Solutions, In-Depth Analysis Suggests

Shmunter

Member
Actually, it hasn't, because what's been kicked around GAF a lot is the idea that the overall bandwidth of the XSX would be higher, thus more or less negating some of the PS5's advantage. Not true, but definitely a sentiment. Nonetheless, I find this to be on-topic.

The more I learn about the PS5, the more elegant the hardware design seems to be.

Looking at where both MS and Sony decided to cut corners is pretty interesting to me.
Should have said: obvious to me and a few others that had discussions on the XSX RAM setup. You're right, it's not a blanket statement.
 

LordOfChaos

Member
uhauhaahuha

Massive Edge Over PS5 Says Devs

Dealer - Gaming

Xbox centric YouTuber

"xbox centric youtuber" know more than Jason schreier...

and look who appeared:

[screenshot]

LOL

[screenshot]




Interesting:

all "developers" speaking well of the xbox SX interact with xbox fanboys.

All developers speaking well of the PS5 interact with Jason Schreier.

:messenger_face_screaming::messenger_tears_of_joy:


Whoever she responded to, the question is whether the interleaving theory in the OP checks out, and it doesn't: as would be the smart choice, the OS sits on the slow portion and that data isn't interleaved.


I'm the one who posted this, and I'm not driving at any angle other than the truth. This conversation is becoming such a spew because people can't just talk about the interesting technical aspects of both without others thinking they're picking a side.

 

Jon Neu

Banned
“more efficient” and “clever engineering magic” are the new teraflops

Don't forget “no bottlenecks”.

The performance is all about the bottlenecks. Fewer bottlenecks mean you can achieve more with the same hardware.

IF it’s true what Marks saying then he removed all the bottlenecks via custom hardware parts.

It’s all about efficiency but how efficient it will be against the XSX remains to be seen.

Based designer Cerny.

Only he is capable of removing bottlenecks when designing a console.

I bet those dumbass MS engineers didn't even think about them bottlenecks. They just put some PC parts together and that's it. Hahahaha, poor ignorant dumbasses.
 
What the fuck is this garbage?

In comparison, the PS5 has a static 448 GB/s of bandwidth for the entire 16 GB of GDDR6 (also operating at 14 Gbps, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for similar functionality, but it doesn't matter - the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged, mixed memory bandwidth that is always below that of the baseline PS5.

Either option puts the SX at a disadvantage to the PS5 for more memory-intensive games, and the latter puts it at a disadvantage all of the time.

Neither of these options is true. Jesus. Christ.

XSX uses different memory channels to access the different chip sizes. The access of every channel is scheduled by the memory controller. These different channels can be accessed concurrently.

There are ten 32-bit chips, leading to a 320-bit bus.

10 GB of this is accessible at once across the 320-bit bus at 560 GB/s.

6 GB can only be accessed across a 192-bit bus at 336 GB/s because only 6 physical chips are connected there.

When you are accessing the 6 GB at 336 GB/s, you can still access the remaining 4 x 1 GB chips across the unused 128-bit bus for your other 224 GB/s.

It all depends on where the accesses are. You can use your CPU in the "slow" section of memory and still use the rest of the memory for the GPU and still hit the (theoretical) 560 GB/s, patterns permitting.
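To make that arithmetic concrete, here's a minimal sketch (my own illustration; it just multiplies bus width by the 14 Gbps per-pin rate that both consoles' GDDR6 runs at):

```python
# Peak GDDR6 bandwidth = (bus width in bits / 8) * per-pin data rate.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float = 14.0) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(320))  # 560.0 GB/s -> all ten chips at once
print(bandwidth_gb_s(192))  # 336.0 GB/s -> the six 2 GB chips
print(bandwidth_gb_s(128))  # 224.0 GB/s -> the other four 1 GB chips
```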

"In-depth analysis" my fucking arse. My god, how has this bullshit got a thread?
 
Last edited:

LordOfChaos

Member
What the fuck is this garbage?



Neither of these options is true. Jesus. Christ.

XSX uses different memory channels to access the different chip sizes. The access of every channel is scheduled by the memory controller. These different channels can be accessed concurrently.

There are ten 32-bit chips, leading to a 320-bit bus.

10 GB of this is accessible at once across the 320-bit bus at 560 GB/s.

6 GB can only be accessed across a 192-bit bus at 336 GB/s because only 6 physical chips are connected there.

When you are accessing the 6 GB at 336 GB/s, you can still access the remaining 4 x 1 GB chips across the unused 128-bit bus for your other 224 GB/s.

It all depends on where the accesses are. You can use your CPU in the "slow" section of memory and still use the rest of the memory for the GPU and still hit the (theoretical) 560 GB/s, patterns permitting.

"In-depth analysis" my fucking arse. My god, how has this bullshit got a thread?



Pretty much what's said here by a game dev.


 

oldergamer

Member
I'm shaking my head. WHAT?!?

Do you think the only measure of PERFORMANCE is a theoretical TFLOP figure, despite TFLOPs being defined as only the computational capability of the vector ALUs, which are just one part of the GPU? Please answer that question and we'll know whether you are serious or not. How long have you been following processors?

I didn't mention TFLOPs, did I? I didn't need to. Give me a few examples from Nvidia or AMD (not comparing against each other, but within each company's own FAMILY of processors) where one chip with lower specs outperformed a chip in the same architecture with higher specs, without a major revision to the hardware (use GCN as an example if you like).

Do you know about fixed-function accelerators? Has it crossed your mind that the PS5 may have more of these inside its GPU?
Sure, but that also means less flexible or capable. I don't think that can even come into this equation, unless you think the PS5 has a less capable GPU?
 
Game devs being Sony fanboys confirmed?



Gotta make that spin for the hype train.

0/10 thread. +1 though for the polish on that shining armor.

Well, it's not that bad. :messenger_beaming: Anyway, it's a pathetic attempt by the OP to somehow mitigate the XSX's compressed SSD speed compared to the PS5's SSD with some imaginary bonus. The funny thing is that the MS dev in the tweets in that GamingBolt article never even mentioned or implied a 20% bonus; the OP added that bonus on his own for some reason.
 

Connxtion

Member
All I see is we can’t use TFlops but yet I keep seeing these 18-20% numbers being flung about, that are based on TFlops 🙄 so what is it can we use TFlops as a number of power or can’t we? Because if we can’t we also can’t use those percentages as they don't show the actual difference in what can be achieved.

Also, we need to know the base (minimum) clocks the PS5 will hit. These variable clocks are controlled by voltage (power). We don't even know how much the PSU in the PS5 will allow. For all we know it will be a 200 W power supply, and that would mean the CPU power would need to drop a lot to allow the GPU to take what's freed up, and vice versa (to reach 2.23 GHz and 3.5 GHz).

Look at it this way: variable frequency is basically how a U-tube manometer works.
[diagram of a U-tube manometer]


Positive pressure is the CPU (P) being downclocked to allow the GPU (atmosphere) to take the remainder of the power budget. Negative pressure is the CPU (P) getting most of the budget. By default they will rest in an idle state, so what will the idle state be? 🤷‍♂️ Who knows. Only Sony knows.

We do know (as it’s completed useless running full clock always) ain’t going to be running at 3.5GHz or 2.23GHz on the dashboard or sitting in a menu.

I suspect the XSEX has the same and will downclock when sitting idle or on the dashboard, to conserve our power bills.

Maybe someone will make a nice animation of how the U-tube manometer works (changing its values to represent voltage).
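In code terms, the balancing act looks something like this toy sketch (the wattages are entirely made up for illustration; Sony hasn't published a power budget):

```python
# Toy illustration of a shared, fixed power budget: whatever the CPU
# gives up, the GPU can take, like the two columns of the manometer.
TOTAL_BUDGET_W = 200.0  # hypothetical SoC power budget

def split_budget(cpu_demand_w: float) -> tuple[float, float]:
    cpu_w = min(cpu_demand_w, TOTAL_BUDGET_W)
    gpu_w = TOTAL_BUDGET_W - cpu_w  # the GPU absorbs whatever is left
    return cpu_w, gpu_w

print(split_budget(60.0))  # light CPU load -> (60.0, 140.0)
print(split_budget(90.0))  # heavy CPU load -> (90.0, 110.0)
```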
 
Last edited:

-thirdeye-

Neo Member


The PlayStation 5 will feature a weaker GPU compared to the Xbox Series X, but developers continue to praise the console. The raw specs definitely don't paint the whole picture of the new consoles, and a recent in-depth analysis suggests that the Sony next-gen console's GPU will have a better system supporting it, resulting in better overall performance.

On his blog, James Prendergast has been providing a very interesting on-going analysis of the next-generation consoles, based on what has been revealed so far. In his latest post, he took a look at how RAM, I/O and SSD speed and function will make a difference, considering the CPU difference between the two consoles is minimal.


Taking a look at the RAM configuration of both consoles, the analysis highlights how the Xbox Series X configuration is sub-optimal, as the asymmetric configuration used for the console can lead to reduced bandwidth once the symmetrical portion is full. The PlayStation 5 configuration, on the other hand, allows a static bandwidth for the entire 16 GB of GDDR6.


Taking a look at the I/O and SSD access, the analysis highlights how the Xbox Series X simply has a slower interface than the PlayStation 5's. The solution used by the Sony console also allows for better data management within the RAM, allowing for less frequent reloading of data into the RAM and improving system efficiency.

Putting everything together, including how the PlayStation 5 audio hardware will take less CPU power compared to the Xbox Series X, the analysis reveals that the PlayStation 5 has the "bandwidth and I/O silicon in place to optimise the data transfer between all the various computing and storage elements". The Xbox Series X, on the other hand, has some sub-optimal implementations that are going to perform below the specs of the PlayStation 5, despite the inclusion of smart prediction engines.


On a related note, a video released recently by Coreteks, who claims to have insider knowledge on both consoles, reaches the same conclusions. It is a very long video, but it's a very interesting watch nonetheless.




-------------------------------------------------------------------------

We've been hearing for months that there's not much between the two devices from Microsoft and SONY, with "sources" on both sides of the argument claiming that each console has an advantage in specific scenarios. Incontrovertibly, Microsoft has the more performant graphics capabilities, with 1.4x the physical makeup of the PlayStation 5's GPU core. That's only part of the story, though, with the PS5 running a minimum of 1.2x faster than the Series X across the majority of workload scenarios. That narrows the advantage of the SX (in terms of pure, averaged GPU compute power) to around 1.18x-1.2x that of the PS5.

But what about the CPU? Performing the same simple ratio calculation, you can work out that the SX's CPU is 1.02x-1.10x more powerful than the PS5's, depending on the scenario. Not that big a difference, really... and the CPU/GPU should sport pretty much the same feature set on both consoles.

However, everyone and their dog is talking about the GPUs and has been for a long time. They're not all that interesting to me at this point in time, until more of their underlying architectures are revealed. The three to four* things that are more interesting to me are:

  • RAM
  • I/O
  • SSD speed and function
  • Audio hardware

Unfortunately, we don't have the full information on the SX's audio hardware implementation, meaning we can't yet do a proper comparison between the two consoles for that. So let's begin with the RAM configuration.

RAM

Let me put this bluntly - the memory configuration on the Series X is sub-optimal.

I understand there are rumours that the SX had 24 GB or 20 GB at some point early in its design process, but the credible leaks have always pointed to 16 GB, which means that, if this was the case, it was very early on in the development of the console. So what are we (and developers) stuck with? 16 GB of GDDR6 at 14 Gbps connected to a 320-bit bus (that's 5 x 64-bit memory controllers).

Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus, but it sort of isn't. We've had this specific situation at least once before, in the form of the Nvidia GTX 650 Ti, and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, which affected memory performance once the "symmetrical" portion of the interface was "full".

[image]

Interleaved memory configurations for the SX's asymmetric memory setup, an averaged value and the PS5's symmetric memory configuration... You can see that, overall, the PS5 has the edge in pure, consistent throughput...

Now, you may be asking what I mean by "full". Well, it comes down to two things. First, unlike what some commentators might believe, the maximum bandwidth of the interface is limited by the 320-bit controllers and the matching interface of the GDDR6 memory (10 chips x 32 pins x 14 Gbps).

That means that the maximum theoretical bandwidth is 560 GB/s, not 896 GB/s (560 + 336). Secondly, memory has to be interleaved in order to function on a given clock timing and to improve the parallelism of the configuration. Interleaving is why you don't get a single 16 GB RAM chip; instead we get multiple 1 GB or 2 GB chips, because it's vastly more efficient. HBM is a different story because the dies are parallel, with multiple channels per pin, and multiple frequencies can be run across each chip in a stack, unlike DDR/GDDR, which has to have all chips run at the same frequency.

However, what this means is that you need address-space symmetry in order to have interleaving of the RAM, i.e. you need all your chips presenting the same "capacity" of memory for it to work. Looking at the diagram above, you can see the SX's configuration: the first 1 GB of each RAM chip is interleaved across the entire 320-bit memory interface, giving rise to 10 GB operating with a bandwidth of 560 GB/s. But what about the other 6 GB of RAM?

Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB because the memory interface is saturated. What happens instead is that the memory controller must "switch" to the interleaved addressable space covered by those 6 x 1 GB portions. This means that, for the 6 GB of "slower" memory (in reality it's not slower, just less wide), the memory interface must address it on a separate clock cycle if it is to be accessed at the full width of the available bus.

The fallout of this can be quite complicated, depending on how Microsoft have worked out their memory bus architecture. It could be a complete "switch", whereby on one clock cycle the memory interface uses the interleaved 10 GB portion and on the following clock cycle it accesses the 6 GB portion. This implementation would have the effect of averaging the effective bandwidth of all the memory: you get 392 GB/s for the 10 GB portion and 168 GB/s for the 6 GB portion over a given time frame, though individual cycles would still run at their full bandwidth.
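One way to reproduce those 392/168 GB/s figures is the sketch below; note the assumption (implicit in the numbers, not spelled out above) that on the "6 GB" cycle the leftover 128 bits of the bus keep serving the 10 GB portion:

```python
# Alternating-cycle average: one cycle the full 320-bit bus serves the
# 10 GB pool; the next, 192 bits serve the 6 GB pool while the idle
# 128 bits keep serving the 10 GB pool.
fast_full, fast_partial, slow = 560.0, 224.0, 336.0  # GB/s

avg_fast = (fast_full + fast_partial) / 2  # 392.0 GB/s for the 10 GB pool
avg_slow = (0.0 + slow) / 2                # 168.0 GB/s for the 6 GB pool
print(avg_fast, avg_slow)
```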

However, there is another scenario, with memory being assigned to each portion based on availability. In this configuration, the memory bandwidth (and access) depends on how much RAM is in use. Below 10 GB of utilisation, the RAM will always operate at 560 GB/s. Above 10 GB, the memory interface must start switching or splitting access between the memory portions. I don't know if it's technically possible to access two different interleaved portions of memory simultaneously by using the two 16-bit channels of each GDDR6 chip, but if it were (and the standard appears to allow for it), you'd end up with the same memory bandwidths as the "averaged" scenario mentioned above.

If Microsoft were able to simultaneously access and decouple individual chips from the interleaved portions of memory through their memory controller, then you could theoretically push the access to an asymmetric balance, switching between a pure 560 GB/s for the 10 GB of RAM and a mixed 224 GB/s from 4 GB of that portion plus the full 336 GB/s of the 6 GB portion (also pictured above). This seems unlikely given my understanding of how things work, and undesirable from a technical standpoint in terms of both game memory access and architecture design.

In comparison, the PS5 has a static 448 GB/s of bandwidth for the entire 16 GB of GDDR6 (also operating at 14 Gbps, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for similar functionality, but it doesn't matter - the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged, mixed memory bandwidth that is always below that of the baseline PS5. Either option puts the SX at a disadvantage to the PS5 for more memory-intensive games, and the latter puts it at a disadvantage all of the time.

[image]

The Xbox's custom SSD hasn't been entirely clarified yet, but the majority of PCIe 4.0 devices on the market operate on an 8-channel interface...



I/O and Storage

Moving on to the I/O and SSD access, we're faced with a similar scenario - though Microsoft have done nothing sub-optimal here; they just have a slower interface.

GDDR6 at 14 Gbps operates at around 1.75 GB/s per pin; 32 data pins per chip x 10 chips gives a total potential bandwidth of 560 GB/s, matching the 320-bit interface. Originally, I was concerned that would be too close to the total bandwidth of the SSD, but Microsoft have upgraded to a 2.4/4.8 GB/s (raw/compressed) read interface for their SSD, which is, in theory, enough to utilise the equivalent of 1.7% of the bandwidth of 5 GDDR6 chips uploading the decompressed data in parallel each second (4.8 GB/s / 5 (1 GB) chips / (1.75 x 32) GB/s), leaving a lot of overhead for further operations on those chips and the remaining chips free for completely separate operations.

In comparison, SONY can utilise the equivalent of 3.2% of the bandwidth of 5 GDDR6 chips, in parallel, per second (9 GB/s / 5 (2 GB) chips / (1.75 x 32) GB/s), due to the combination of a unified interleaved address space and a unified, larger RAM capacity (i.e. all the chips are 2 GB in capacity so, unlike on the SX, the interface does not need to use more chips [or a portion of their total bandwidth] to store the same amount of data).

Turning this around to the unified pool of memory, the SX can utilise 0.86% of the total pin bandwidth, whereas the PS5 can use 2.01% of the total pin bandwidth. All of this puts the SX at just under half the theoretical performance (a ratio of 0.42) of the PS5 for moving things in from system storage.

Unfortunately, we don't know the random read IOPS for either console, and that number would more accurately reflect the real-world performance of the drives. But going on the above figures, the SX can fill its RAM with raw data (at 2.4 GB/s) in 6.67 seconds, whereas the PS5 can fill its RAM (at 5.5 GB/s) in 2.9 seconds - again, 2.3x the rate of the SX (this is just the inverse ratio of the above comparison with the decompressed data).
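For anyone wanting to check the figures in the last few paragraphs, here's a quick sketch reproducing them (the 0.42 above comes out closer to 0.43 here, depending on rounding):

```python
# Storage-to-RAM figures as stated above (all values in GB/s or GB).
SX_RAW, SX_DECOMP, SX_PIN_BW = 2.4, 4.8, 560.0
PS5_RAW, PS5_DECOMP, PS5_PIN_BW = 5.5, 9.0, 448.0
RAM_GB = 16.0

print(SX_DECOMP / SX_PIN_BW * 100)    # ~0.86 % of total pin bandwidth
print(PS5_DECOMP / PS5_PIN_BW * 100)  # ~2.01 %
print((SX_DECOMP / SX_PIN_BW) / (PS5_DECOMP / PS5_PIN_BW))  # ~0.43 ratio

print(RAM_GB / SX_RAW)   # ~6.67 s to fill 16 GB with raw data
print(RAM_GB / PS5_RAW)  # ~2.91 s, i.e. ~2.3x faster
```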

However, that's not the entire story. We also have to look at the custom I/O solutions and other technology that both console makers have placed on-die in order to overcome many potential bottlenecks and limitations:

The decompression capabilities and I/O management of both consoles are very impressive but, again, SONY edges out Microsoft, with the equivalent of 10-11 Zen 2 CPU cores of pure decompression power to Microsoft's 5. This optimisation on SONY's part really lifts the pressure off the CPU, allowing it to be almost entirely focussed on the game programme and OS functions. It means the PS5 can move up to 5.5 GB/s of compressed data from the SSD, and the decompression chip can decompress up to 22 GB/s from that 5.5 GB/s of compressed data, depending on the compressibility of the underlying raw data (with 9 GB/s as a lower-bound figure).

[image]
Data fill rates for the entire memory configuration of each console; the PS5 unsurprisingly outperforms the SX... *I used the "bonus" 20% figure for the SX's BCPack compression algorithm.

Meanwhile, the SX can move up to 4.8 GB/s of compressed data from the SSD, and its decompression chip can decompress up to 6 GB/s of compressed data. However, Microsoft also have a specific compression algorithm for texture data* called BCPack (an evolution of the BCn formats), which can potentially add another 20% compression on top of that achieved by the PS5's Kraken algorithm (which this engineer estimates at a 20-30% compression factor). However, that's not an apples-to-apples comparison, because this is on uncompressed data, whereas the PS5 should be using a form of RDO, which the same specialist reckons will bridge the gap in the compression of texture data when combined with Kraken. So, in the name of fairness and for lack of information, I'm going to use only the confirmed stats from the hardware manufacturers and not speculate about further potential compression advantages.
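The throughput claims reduce to a simple rate-times-ratio relationship; a sketch with the compression ratios back-calculated from the figures above (implied values, not official specs):

```python
# Effective decompressed output = compressed input rate * compression ratio.
PS5_IN, SX_IN = 5.5, 4.8  # GB/s of compressed data off each SSD

print(PS5_IN * 1.64)  # ~9 GB/s:  Kraken's typical ratio (the lower bound above)
print(PS5_IN * 4.0)   # 22 GB/s:  the best-case ceiling quoted above
print(SX_IN * 1.25)   # 6 GB/s:   the SX decompressor's stated output cap
```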

While SFS won't help with data loading from the SSD, it will help with data management within the RAM, potentially allowing for less frequent re-loading of data into RAM, which will improve the efficiency of the system overall. That benefit is impossible to even measure at this point in time, especially because the PS5 will also have systems in place to manage data more intelligently.

This capability, combined with consistent access to the entirety of the system memory, enables the PS5 to have more detailed level design in the form of geometry, models and meshes. Alexander Battaglia has said that this increased speed won't lead to more detailed open worlds, because most open worlds are based on variation achieved through procedural methods. In my opinion, however, this isn't entirely true or accurate.

The majority of open-world games utilise procedural content on top of static geometry and meshes. Think of Assassin's Creed Odyssey/Origins, Batman: Arkham City/Origins/Knight, Red Dead Redemption 2, GTA 5 or Subnautica. All of them are open worlds, and all of their "variations" are small aspects drawn from standard pre-made pieces of art, whether that's just a palette swap or model compositing. The only heavily procedurally generated open-world game I can think of is No Man's Sky. Even games such as Factorio or Satisfactory do not go the route of No Man's Sky...

In the majority of games, procedural generation still accounts for a vast minority of the content, while texture and geometry draws make up the vast majority of the data required from the disk. Even in games such as No Man's Sky, there are meshes that are composited or even drawn entirely from disk.

[image]
The Series X's SSD actually looks like it can be replaced... although you'd have to disassemble the entire console to be able to do so...

Looking at the performance of the two consoles on last-gen games, Spiderman takes 830 milliseconds to load on PS5 compared to 8,100 milliseconds on PS4 Pro, whereas State of Decay 2 takes an average of 9,775 milliseconds to load on the SX compared to 45,250 milliseconds on the One X (videos here). That's an improvement of 9.76x on the PS5 and 4.62x on the SX... and that's for last-gen games, which don't even fill up as much RAM as I would expect next-generation titles to.
[image]
Here I attempted to estimate the RAM usage of each game based on the time it took to swap out the RAM contents, and thus the game session. We can see that State of Decay 2 has some overhead issues - perhaps it's not entirely optimised for this scenario... This is a simple model, not accurate to the actual system RAM contents (I'm just dividing by 2), but it gives us a look at potential bottlenecks in the I/O system of the SX.


Now, this really isn't a fair test and isn't necessarily a "true" indication of either console's performance, but these are the examples both companies are putting out there for us to consume and understand. Why is it perhaps not a true indication of their performance? Well, combining the numbers above with the SSD performance, you get either (2.4 GB/s) x 9.78 secs = 23.5 GB of raw data or (4.8 GB/s) x 9.78 secs = 46.9 GB of compressed data... both of which are impossible. State of Decay 2 does not (and cannot) ship that much data into memory for the game to load. Not to mention that swapping games on the SX takes approximately the same amount of time... It's therefore only logical to assume there are some inherent load buffers in the game that delay or prolong the loading times and which do not port over well to the next generation.

In comparison, the Spiderman demo is either (5.5 GB/s) x 0.83 secs = 4.6 GB or (9 GB/s) x 0.83 secs = 7.47 GB, both of which are plausible. However, since I don't know the real memory footprint of Spiderman, I don't know which number is accurate.
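Both sanity checks are just rate times time; a minimal sketch reproducing the four numbers:

```python
# Implied data volumes from the quoted load times (rate * time).
def implied_gb(rate_gb_s: float, seconds: float) -> float:
    return rate_gb_s * seconds

print(implied_gb(2.4, 9.78))  # ~23.5 GB raw         -> implausible for SoD 2
print(implied_gb(4.8, 9.78))  # ~46.9 GB compressed  -> impossible
print(implied_gb(5.5, 0.83))  # ~4.6 GB raw          -> plausible for Spiderman
print(implied_gb(9.0, 0.83))  # ~7.5 GB decompressed -> also plausible
```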
[image]


This is a really interesting implementation, using a power envelope to determine the activity across the die...




Audio Hardware

In my opinion, the "pixel" is well and truly dead. The majority of PC players in the world play at the 1080p resolution. The majority of TVs in peoples' houses are 720-1080p. 4K is a vast minority - yes, of course it's gaining ground as people replace their screens but the point is that most people are happy with their current setup and don't see the added bonus of upgrading the resolution or size of the screen setup unnecessarily.

Unfortunately, Microsoft have pushed their audio features much less than SONY have - I presume because audio was not a huge focus of the console; instead they decided to focus on raytracing, graphics throughput, variable refresh rate, auto low latency mode and HDR. If you're not going to use the added rasterisation power by targeting a higher resolution, instead opting for optimisations that allow you to render at lower resolutions and scale up, why bother modelling the console around that processing power in the first place?

In comparison, SONY haven't even name-checked HDR output in the way Microsoft have with 3D audio.

What we do know about the SX's audio solution is that it is a custom audio hardware block that will output compatible signals in the Dolby Atmos, DTS:X and Windows Sonic codecs. This hardware will handle wave propagation and diffraction, but Microsoft have not officially (as far as I can find) linked it with the ray tracing engine on the GPU.

SONY, on the other hand, have gone all-in on their audio implementation. I had speculated previously that the audio solution might be based on AMD's TrueAudio Next and its use of GPU CUs. Thinking that, I had presumed the console designers would set aside a subset of their total CU count on the GPU for this function. Instead, SONY have actually modified the CUs from AMD's design to make them more like the SPUs in the PS3's Cell architecture (no SRAM cache; direct data access from the DMA controller through the CPU/GPU and back out again to the system memory). We don't know how many altered CUs are present in this Tempest engine, but SONY have said that its SIMD computational power is equivalent to the entire 8-core Jaguar CPU that was in the PS4.

Essentially, SONY decided to reduce the number of fully fledged CUs available to the GPU in order to provide this audio solution. It also means that the PS5's sound processing will take less CPU power from the system compared to the SX - which, again, counts against the SX in terms of the resources available to run games.

I guess that I'll have more on this as the features are fully revealed.
[image]

SONY's implementation of RT is able to be spread across many different systems...

Conclusion

The numbers are clear - the PS5 has the bandwidth and I/O silicon in place to optimise the data transfer between all the various computing and storage elements, whereas the SX has some sub-optimal implementations combined with really smart prediction engines, which, according to what Microsoft has announced, perform below the specs of the PS5. Sure, the GPU might be much larger in the SX, but the system itself can't supply as much data for that computation to take place.

Yes, the PS5 has a narrower GPU, but the system supporting that GPU is much stronger and more in line with what the GPU expects to be handed to it.

Added to this, the audio solution in the PS5 also alleviates processing overhead from the CPU, allowing it to focus on the game executable. I'm sure the SX has ways of offloading audio processing to its own custom hardware but I seriously doubt that it has a) the same latency as this solution, b) equal capabilities or c) the ability to be altered through code updates afterwards.

In contrast, the SX has the bigger and wider GPU but, given all the technical solutions that are being implemented to render games at lower than the final output resolution and have them look as good, does pushing more pixels really matter?

Excellent and detailed post. Thanks!
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Let's take a quick look at this deep analysis: assuming Microsoft is using the slower RAM for system functionality - which is much more reasonable than the other way around, if the XSX is selective in its RAM usage for the system - then the average RAM speed would be (10 x 560 + 3.5 x 336) / 13.5 ≈ 501 GB/s, so significantly faster than the PS5's. This is not to say the average is a particularly useful metric, but since the analyst in the OP is talking about mixed performance, let's do the numbers. Going by that (and also the baseless assumption that Microsoft would reserve a 2.5 GB portion of the faster RAM for system functionality), we can derive all we need to know about this "in-depth analysis".
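The quoted average is easy to check (a sketch, using the quote's assumption that the 2.5 GB system reserve sits entirely in the slow pool, leaving 3.5 GB of it for games):

```python
# Capacity-weighted average over the game-visible RAM.
fast_gb, fast_bw = 10.0, 560.0
slow_gb, slow_bw = 6.0 - 2.5, 336.0

avg = (fast_gb * fast_bw + slow_gb * slow_bw) / (fast_gb + slow_gb)
print(avg)  # ~501.9 GB/s vs the PS5's flat 448 GB/s
```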

How is an 11.8% difference in bandwidth "significantly faster"?
 

Connxtion

Member
^^
I love it, we're using the laws of hydraulics to explain the PS5 now.
Yeah, maybe not the best diagram to use, but it's a visual representation of how variable clocks work.

There is only so much power allocated, so it's a balancing game; the diagram above shows how it would be shifted. (Someone can make a better diagram, with proper labels, etc...)
 

Tarin02543

Member
Yeah, maybe not the best diagram to use, but it's a visual representation of how variable clocks work.

There is only so much power allocated, so it's a balancing game; the diagram above shows how it would be shifted. (Someone can make a better diagram, with proper labels, etc...)


No man, I love your way of thinking. Interesting equation.
 

Ascend

Member
The PS5's memory bandwidth IS overall greater than the XSX's. That's just a fact.
What does "overall" mean? Even if you account for the slower RAM, and calculate a weighted average, you end up at 476 GB/s. If you calculate it in a dumb way by simply adding them and dividing them by two, you end up with 448GB/s which is exactly the same as the PS5. How do you arrive at the conclusion that the PS5 bandwidth is overall greater than the XSX?

And even then... things don't work that way in practice. In practice, the majority of games running at 4K today do not exceed 6 GB of VRAM. By that standard, 10 GB of fast RAM is plenty for the next couple of years. In the beginning, developers can simply put everything in the fast RAM and be done with it. And developers have room to optimise for the slower RAM as more RAM space is required.
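For reference, the two averages compute as follows (a quick sketch of the arithmetic above):

```python
# The two averages from the post above.
weighted = (10 * 560 + 6 * 336) / 16  # capacity-weighted over all 16 GB
naive = (560 + 336) / 2               # simple mean of the two pool speeds
print(weighted, naive)                # 476.0 and 448.0 GB/s
```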
 
What the fuck is this garbage?



Neither of these options is true. Jesus. Christ.

XSX uses different memory channels to access the different chip sizes. The access of every channel is scheduled by the memory controller. These different channels can be accessed concurrently.

There are ten 32-bit chips, leading to a 320-bit bus.

10 GB of this is accessible at once across the 320-bit bus at 560 GB/s.

6 GB can only be accessed across a 192-bit bus at 336 GB/s because only 6 physical chips are connected there.

When you are accessing the 6 GB at 336 GB/s, you can still access the remaining 4 x 1 GB chips across the unused 128-bit bus for your other 224 GB/s.

It all depends on where the accesses are. You can use your CPU in the "slow" section of memory and still use the rest of the memory for the GPU and still hit the (theoretical) 560 GB/s, patterns permitting.

"In-depth analysis" my fucking arse. My god, how has this bullshit got a thread?

C'mon, you know why. Anything that can paint the XSX in a negative light just to make the PS5 look better by comparison is given the go-ahead. Rather than, 'ya know, actual fair and balanced discussions of the systems that give them credit where it's due.

These people were never interested in viewing both systems in an equal, honest light from the get-go. Tactics and talking points have changed, but the end goal remains the same. I can't believe I gave them the benefit of the doubt early on.

Have fun debunking the rabid, disingenuous Sony fanboys on that; I'm not wasting my time there. And I can't wait for someone to label me an Xbox fanboy for simply pointing out the obvious, even though I was very partial to both systems in numerous aspects in previous discussions (and still am). It'll happen anyway; I don't care anymore.
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
C'mon, you know why. Anything that can paint the XSX in a negative light just to make the PS5 look better by comparison is given the go-ahead. Rather than, 'ya know, actual fair and balanced discussions of the systems that give them credit where it's due.

These people were never interested in viewing both systems in an equal, honest light from the get-go. Tactics and talking points have changed, but the end goal remains the same. I can't believe I gave them the benefit of the doubt early on.

Have fun debunking the rabid, disingenuous Sony fanboys on that; I'm not wasting my time there. And I can't wait for someone to label me an Xbox fanboy for simply pointing out the obvious, even though I was very partial to both systems in numerous aspects in previous discussions (and still am). It'll happen anyway; I don't care anymore.

You wanna talk about the rabid Xbox fanboys too, who discredit anything positive about the PS5 and how it's engineered?
 

Fun Fanboy

Banned
C'mon, you know why. Anything that can paint the XSX in a negative light just to make the PS5 look better by comparison is given the go-ahead. Rather than, 'ya know, actual fair and balanced discussions of the systems that give them credit where it's due.

These people were never interested in viewing both systems in an equal, honest light from the get-go. Tactics and talking points have changed, but the end goal remains the same. I can't believe I gave them the benefit of the doubt early on.

Have fun debunking the rabid, disingenuous Sony fanboys on that; I'm not wasting my time there. And I can't wait for someone to label me an Xbox fanboy for simply pointing out the obvious, even though I was very partial to both systems in numerous aspects in previous discussions (and still am). It'll happen anyway; I don't care anymore.
I remember some on the webz saying power doesn't matter! It's all about that SSD! But for some reason lots of opinion pieces are coming out trying to paint the PS5 as more powerful. Lol. We're only about to start April! Long time till November.
 

JLB

Banned
I do believe the Xbox is a good console; I just think the PS5 will be the better-engineered device.
Cerny made it clear that he wanted fewer bottlenecks and more of a focus on optimisation and innovation.

Look at what they are doing with all the bandwidth, SSD, triggers, audio, etc.

And you think it'll be better engineered because that's what you feel in your heart, right? /jk
 

SonGoku

Member
A developer explained the SEX memory configuration trade-offs.
Apparently bandwidth takes a disproportionate hit whenever the CPU accesses the 6 GB pool, but this can be mitigated by switching access between pools on a cycle-by-cycle basis. The end result is that it's still a loss in performance, but nowhere near as bad as the blog post made it seem.
[screenshot]

Here the dev explains the kind of performance hit bandwidth takes on the SEX compared to the PS5. Both consoles end up with the same bandwidth proportionate to their computational targets (GPU performance/needs).
[screenshot]

[screenshot]

[screenshot]
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
A developer explained the SEX memory configuration trade-offs.
Apparently bandwidth takes a disproportionate hit whenever the CPU accesses the 6 GB pool, but this can be mitigated by switching access between pools on a cycle-by-cycle basis. The end result is that it's still a loss in performance, but nowhere near as bad as the blog post made it seem.
[screenshot]


Here the dev explains the kind of performance hit bandwidth takes on the SEX compared to the PS5. Both consoles end up with the same bandwidth proportionate to their computational targets (GPU performance/needs).

[screenshot]

[screenshot]

[screenshot]

This is what we need to read. Thanks for posting this.
 

FranXico

Member
A developer explained the SEX memory configuration trade-offs.
Apparently bandwidth takes a disproportionate hit whenever the CPU accesses the 6 GB pool, but this can be mitigated by switching access between pools on a cycle-by-cycle basis. The end result is that it's still a loss in performance, but nowhere near as bad as the blog post made it seem.
[screenshot]

Here the dev explains the kind of performance hit bandwidth takes on the SEX compared to the PS5. Both consoles end up with the same bandwidth proportionate to their computational targets (GPU performance/needs).
[screenshot]

[screenshot]

[screenshot]
So both consoles actually have adequate bandwidth for their GPUs.
 
Why are excuses being made for Sony on a daily basis? Show the fucking thing and show some games, instead of force-feeding us this pandering BS praising its super-secret powers.
Are they really excuses if these "people", "devs", "websites" - whoever it is, whatever side they're pushing - aren't affiliated with or owned by Xbox/PS? If it was coming directly from Xbox/PS then it would be pandering BS, but it's not. You may have a deep understanding of hardware engineering, but a lot of gamers don't. These pandering articles/write-ups/breakdowns know that, and know that people are kinda interested in this stuff. So they write them. It may not be for you; you may have already made up your mind, or, like I said, you may have a much deeper understanding of hardware engineering than the average person does.

These things won't stop. When the systems and games are released, they still won't stop. It will be a generation-long argument: approach A vs approach B.
 

quest

Not Banned from OT
It drops 2% not 10.
Where did you get this magical chart no one else has? Sony did not commit to any official numbers on the downclock. It was pure PR, generic terms with zero numbers - the opposite of the SSD. He never said 2%; he said "minor", "a couple", etc., but gave zero hard numbers. If it was really 2% there'd be nothing to hide and they'd post the numbers like they did for the SSD. You can't post only the good and call it transparency. Microsoft laid out the good, the bad and the ugly. Sony needs to follow and give us the same.
 

bitbydeath

Member
Don't forget “no bottlenecks”.



Based designer Cerny.

Only he is capable of removing bottlenecks when designing a console.

I bet those dumbass MS engineers didn't even think about them bottlenecks. They just put some PC parts together and that's it. Hahahaha, poor ignorant dumbasses.

C’mon now, if removing all bottlenecks were so easy it would have been done generations ago.

If true (not saying it is) then this is a huge innovation for gaming.
 

Shin

Banned
It drops 2% not 10.
Where do you get 2% from? A couple of percent in frequency could be anything, and besides that GDC video there isn't any article floating around.
In the section where Cerny discussed SmartShift, he only mentioned dropping power by 10% (power can mean consumption or absolute, hence the bold), then moved on to 3D audio.
"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," he explains.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
C’mon now, if removing all bottlenecks were so easy it would have been done generations ago.

If true (not saying it is) then this is a huge innovation for gaming.
How would anyone even know now whether the PS5 or XSX has any bottlenecks? All console vendors try to prevent bottlenecks based on their estimations of what will be required over the next five or so years. But they cannot know whether they are right.
 

bitbydeath

Member
How would anyone even know now whether PS5 or XSX has any bottlenecks? All console vendors try to prevent bottle necks based on their estimations what is required over the next five or so years. But they cannot know whether they are right.

Every generation to date has had bottlenecks. Developers would know, because without bottlenecks their calls are instant as opposed to queued.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
Every generation to date has had bottlenecks. Developers would know, because without bottlenecks their calls are instant as opposed to queued.
The needs of developers change over time, and thus the hardware component holding back the overall architecture is not obvious from the start. A well-rounded system without any bottlenecks does not mean a system with unlimited resources. In a well-balanced system, some requested data might still not be instantly available. That lies in the nature of multi-layered memory setups (registers, RAM, HDD/SSD/disc).
 

bitbydeath

Member
The needs of developers change over time, and thus the hardware component holding back the overall architecture is not obvious from the start. A well-rounded system without any bottlenecks does not mean a system with unlimited resources. In a well-balanced system, some requested data might still not be instantly available. That lies in the nature of multi-layered memory setups (registers, RAM, HDD/SSD/disc).

Not unlimited resources, but rather making the most out of the hardware. Data is shared instantly with no bottlenecks; an example of that is how the PS5 has no load times. Nothing is queued and everything is displayed instantly.
 
Not unlimited resources, but rather making the most out of the hardware. Data is shared instantly with no bottlenecks; an example of that is how the PS5 has no load times. Nothing is queued and everything is displayed instantly.
Yeah we'll have to see about that.
 

SonGoku

Member
Where do you get 2% from? A couple of percent in frequency could be anything, and besides that GDC video there isn't any article floating around.
In the section where Cerny discussed SmartShift, he only mentioned dropping power by 10% (power can mean consumption or absolute, hence the bold), then moved on to 3D audio.
  1. Where did you get 9.2 TF from?
  2. 2% is closer to "a couple" than 10% is, and that's for the worst case, btw, not the typical one.
  3. The PS5's variable frequency isn't based on SmartShift; it's an entirely different mechanism.
  4. SmartShift is an added bonus to squeeze out "a few more pixels" and will likely be used to maintain peak performance (10.28 TF) under heavy GPU loads.
  5. The CPU & GPU are capped, meaning they have the power budget to go higher; there's a buffer of power available.
 

LordKasual

Banned
^^
I love it, we're using the laws of hydraulics to explain the PS5 now.

lol, literally nothing else works for these morons who refuse to actually research anything. If this variable frequency technology were used by the new Xbox, I assure you the people who keep asking these stupid-ass questions would be on the other side of the fence.

What does "overall" mean? Even if you account for the slower RAM, and calculate a weighted average, you end up at 476 GB/s. If you calculate it in a dumb way by simply adding them and dividing them by two, you end up with 448GB/s which is exactly the same as the PS5. How do you arrive at the conclusion that the PS5 bandwidth is overall greater than the XSX?

And even then... Things don't work that way in practice. In practice, the majority of games running at 4K today do not exceed 6GB of VRAM. By that standard, 10GB of fast RAM is plenty for the next couple of years. In the beginning, developers can simply put everything in the fast RAM and be done with it. And developers have room to optimize for the slower RAM as more RAM space is required.

Shmunter Shmunter , see, nothing is obvious lol

As for you Ascend Ascend , here you go.

And it makes no sense to rate the specs of current-gen titles against the limits of next-gen titles. Today's 4K games don't exceed 6 GB of RAM, but that's likely to change with new systems that have vastly different limitations on what you can and can't do.


But you're right, it ISN'T going to matter for most games... again... 1st-party titles are going to see the benefits of the nuances of these designs.
 
Where do you get 2% from? A couple of percent in frequency could be anything, and besides that GDC video there isn't any article floating around.
In the section where Cerny discussed SmartShift, he only mentioned dropping power by 10% (power can mean consumption or absolute, hence the bold), then moved on to 3D audio.

What? 'Couple' literally means two. I remember having this exact same post-and-reply exchange with someone...

But yes, the GPU may drop by 2%, or 3%, or 4%. It's highly unlikely it will drop anywhere close to 10%, as you have to consider that they can drop the CPU by a few percent to save power on that side too (Cerny's comment was about both the CPU and GPU), and with SmartShift that power can be fed to the GPU.

It seems unlikely to me that many games will need to max the 3.5 GHz CPU frequency and the 2.23 GHz GPU frequency at the same time outside of spikes, so a 100-200 MHz reduction in CPU clocks can/will enable 2.23 GHz GPU clocks more or less 100% of the time. So it's a 10.3 TFLOP console any way you like it.
 
Last edited:
  1. Where did you get 9.2 TF from?
  2. 2% is closer to "a couple" than 10% is, and that's for the worst case, btw, not the typical one.
  3. The PS5's variable frequency isn't based on SmartShift; it's an entirely different mechanism.
  4. SmartShift is an added bonus to squeeze out "a few more pixels" and will likely be used to maintain peak performance (10.28 TF) under heavy GPU loads.
  5. The CPU & GPU are capped, meaning they have the power budget to go higher; there's a buffer of power available.
In your opinion, how do you think this 'design' came to exist, and why?
 

Shin

Banned
What? 'Couple' literally means two.
That's the thing: people are taking it literally when it was a figure of speech; otherwise he would have said 2%.
The whole showing breezed over things and probably left as many questions on the table as it answered.
The CPU side of things shouldn't be a worry - I believe it is a Zen 2 8-core/16-thread after all - which in turn bodes well for the GPU side of things as well.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
We have no idea what the drops will be. Cerny was estimating in the GDC talk. He also estimated 4K would require 8 TF of power and was of course very wrong.

Go read a book or something. Or better yet, learn how to properly listen to a talk.

If you feel 11.8% is not significant, I'd suggest you give me 11.8% of your monthly wage.

It's very small and barely noticeable. Why compare it to money? That's a terrible comparison. You won't even be able to tell the difference without a digital magnifying glass.

In your opinion, how do you think this 'design' came to exist, and why?

Y'all are some weirdos. Cerny literally answered this question for you. So why are you asking it here?
 
Last edited: