
The PlayStation 5 GPU Will Be Supported By Better Hardware Solutions, In-Depth Analysis Suggests

Digital Foundry: Fewer CUs at a higher frequency bring better results than going wide with more CUs at lower frequencies. (Not an exact quote)
XboxGAF: There's no difference

Digital Foundry: "Solid state storage directly into the game may make a big difference to the experience we're going to be enjoying in the next generation."

XboxGAF: It's a pipe dream. It's nothing more than "secret sauce". It will offer nothing more than 1-2 seconds off load times

I don't know why these guys go out of their way to say it's not true. It's crazy.
You do understand that MS didn't go wide and slow, as Sony fans like to claim; they went wide and fast, right? Some bullshit variable numbers that Sony gave no real range for don't change the fact that the XSX has both a big number of CUs and that those CUs run at a very high and CONSTANT clock speed (1.825 GHz). Also, DF members have explained their view of these consoles again and again, to the point where Sony fans basically forced them to stop commenting on next-gen consoles altogether.
 
Digital Foundry: Fewer CUs at a higher frequency bring better results than going wide with more CUs at lower frequencies. (Not an exact quote)
XboxGAF: There's no difference

Digital Foundry: "Solid state storage directly into the game may make a big difference to the experience we're going to be enjoying in the next generation."

XboxGAF: It's a pipe dream. It's nothing more than "secret sauce". It will offer nothing more than 1-2 seconds off load times

I don't know why these guys go out of their way to say it's not true. It's crazy.

Wait, the narrative yesterday was that DF is a puppet on Microsoft's payroll; now you're saying we can trust them again? Console warz are so confusing.
 

DForce

NaughtyDog Defense Force
You do understand that MS didn't go wide and slow, as Sony fans like to claim; they went wide and fast, right? Some bullshit variable numbers that Sony gave no real range for don't change the fact that the XSX has both a big number of CUs and that those CUs run at a very high and CONSTANT clock speed (1.825 GHz). Also, DF members have explained their view of these consoles again and again, to the point where Sony fans basically forced them to stop commenting on next-gen consoles altogether.
1.825 GHz is slower than 2.23 GHz

There's only one console people consider to have a ridiculously high clock speed, and that's the PS5.

I don't know why you guys feel the need to say "Sony fans" are saying this when it's coming DIRECTLY from DF and Mark Cerny. You are saying, "They're lying. Let me tell you how it really works."

Here is a direct quote from Digital Foundry

So if you increase the frequency of the GPU, your caches have more bandwidth, and rasterization rate improves in step with the clock speed. So this is kind of like why they wanted to push the frequencies there, because you can get much more out of the GPU without having to go wider.

Without having to go wider is the key point here. XSX frequencies are higher than what was reported, but they're nowhere near the PS5's high clock, and they're having to go wider.
 
I never believed there would come a time when variable clock speeds, with no real range given by the console manufacturer, and a GPU with a low CU count would actually be considered a... positive, by the same people who considered it a huge negative just a few days ago, no less. A 9 TF console masquerading as a 10 TF console is a good thing. Whatever...
 
1.825 GHz is slower than 2.23 GHz

There's only one console people consider to have a ridiculously high clock speed, and that's the PS5.

I don't know why you guys feel the need to say "Sony fans" are saying this when it's coming DIRECTLY from DF and Mark Cerny. You are saying, "They're lying. Let me tell you how it really works."

Here is a direct quote from Digital Foundry



Without having to go wider is the key point here. XSX frequencies are higher than what was reported, but they're nowhere near the PS5's high clock, and they're having to go wider.
This is actually referring to two GPUs with the same TF number, not one at 12.2 and the other at maybe 10.
 
This is actually referring to two GPUs with the same TF number, not one at 12.2 and the other at maybe 10.
Yeah, there seems to be some delusion floating around that the PS5's higher clocks somehow make up for 16 fewer CUs. Yes, higher clocks increase the bandwidth a bit, but they're not going to overcome the sheer amount of hardware, not even close.

Those clocks and that CU count are what net it a peak of 10.28 teraflops; Microsoft's clocks and their CUs are what net them a fixed 12.155 teraflops. They don't understand this.
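For reference, both teraflop figures fall straight out of the standard RDNA peak-FP32 formula. A minimal sketch, using the officially stated clocks (the XSX's fixed GPU clock is 1.825 GHz):

```python
# Peak FP32 throughput: CUs * 64 shaders per CU * 2 FLOPs per cycle (FMA) * clock
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPs -> TFLOPs

ps5 = peak_tflops(36, 2.23)    # 36 CUs at up to 2.23 GHz (variable)
xsx = peak_tflops(52, 1.825)   # 52 CUs at a fixed 1.825 GHz

print(f"PS5: {ps5:.2f} TF, XSX: {xsx:.2f} TF")  # PS5: 10.28 TF, XSX: 12.15 TF
```

This rounds to the commonly quoted 12.15 TF for the Series X; the 12.155 figure in the post above appears to be a rounding variant of the same arithmetic.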
 
Hahaha, you make a mockery of everything that is good and decent. You are paralysed by irrational fear. Your foolish way will come to an end soon.
No, there's literally no logic in leaning into Sony's hardware for some kind of computational win. It's unadulterated stupidity, it's weaker hardware across the board. They've both eliminated the bottlenecks in their systems, but every computing aspect of Microsoft's system is stronger.

It's not even a question.
 

Shmunter

Member
No, there's literally no logic in leaning into Sony's hardware for some kind of computational win. It's unadulterated stupidity, it's weaker hardware across the board. They've both eliminated the bottlenecks in their systems, but every computing aspect of Microsoft's system is stronger.

It's not even a question.
I was joking around bro. Gee, why so serious?

Take the L and move on.
 

Shmunter

Member
Yeah, except facts about computational hardware back up what I'm saying, while literally nothing backs up any of your assertions, i.e. delusions.
Nobody knows what you're saying. That's your first mistake. Acknowledging one's mistakes is the first step to recovery.
 

DForce

NaughtyDog Defense Force
Wait, the narrative yesterday was that DF is a puppet on Microsoft's payroll; now you're saying we can trust them again? Console warz are so confusing.
And people said they were damage controlling for Sony by saying there's more than just the number of teraflops.
 

martino

Member
Yes, that's the reason why I quoted it.

10.2 with fewer CUs and a higher clock > 10.2 with more CUs at a lower clock.

That's the point. But you have people in here denying this fact.
What's being denied is that the advantage this gives in some same-TFLOP scenarios will also be an advantage over more TFLOPs in any scenario (involving these consoles, of course).
 

mejin

Member
From all I've read till now, multiplats will be similar.
Outside of a little boost in resolution and ray tracing favouring Xbox, I don't think we'll see much difference. The PS5 could have more detail or higher-quality assets than the XSX version, even if the resolution is worse.
 
Sure it was bud, sure it was lol

Look, seriously though, the Series X will be great and will play every XB1 game (Gears, Halo, Forza, Crackdown) scaled up to the best resolutions and frame rates.

But nearly everyone is looking forward to true next-gen experiences, which we'll only get on PS5 because of MS's no-exclusives policy for the first 1-2 years. The PS5's laser focus on removing performance bottlenecks will be on display in these games, no doubt.
 

CJY

Banned
Yeah, it could be some horseshit; I'm ignorant about specs. So I won't, I mean I can't, really talk about it. Just a wait-and-see approach.
The XSX CPU/GPU (APU) was built for xCloud first, to be able to run four Xbox One games concurrently for streaming to people's mobile phones. It's certainly cost-effective, and it's the future of Xbox.

Putting that same APU into the XSX was just an afterthought. The specs might be decent, but if you care about console gaming, you wouldn't support Xbox's practices. They are out to destroy console gaming.
 
Good thing it doesn't apply to me, as I don't have a current-gen machine and post news for all 3 platforms; I leave the tag-adding to the mentally ill.

Well, for some reason, even if you don't have any console, the good thing is what you said about others (me included):

It is what it is, no matter how much people sugar-coat or try and spin.

But I surely didn't spin anything.
 

M1chl

Currently Gif and Meme Champion
 

Hobbygaming

has been asked to post in 'Grounded' mode.
The thing is, though, even if only the 10 GB can be accessed at 560 GB/s, with 224 GB/s for the remaining 3.5 GB, it will still perform better than Sony's 13-14 GB at 448 GB/s.

The RAM is split, and it's the video RAM which requires and benefits from higher speeds.

They're saying it's not really 560 GB/s; they're saying the asymmetric setup of the Series X's RAM bottlenecks the bandwidth.
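The arithmetic the two posts are implicitly arguing over can be sketched as a naive capacity-weighted average, using the figures from the post above (the 224 GB/s figure for the 3.5 GB game-visible remainder is the poster's; the full 6 GB pool is rated 336 GB/s). This average is exactly what the reply disputes, since a single bus cannot serve both pools in the same cycle:

```python
def weighted_bw(pools):
    """Capacity-weighted average bandwidth; pools = [(GB, GB/s), ...]."""
    total_gb = sum(gb for gb, _ in pools)
    return sum(gb * bw for gb, bw in pools) / total_gb

xsx_naive = weighted_bw([(10, 560), (3.5, 224)])  # XSX game-visible memory
ps5_flat = 448                                    # uniform across all 16 GB

print(f"XSX naive average: {xsx_naive:.0f} GB/s vs PS5 flat: {ps5_flat} GB/s")
```

On this naive average the split pool still edges out 448 GB/s; the dispute is over whether sustained access actually behaves like the average.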
 

SleepDoctor

Banned
Look, seriously though, the Series X will be great and will play every XB1 game (Gears, Halo, Forza, Crackdown) scaled up to the best resolutions and frame rates.

But nearly everyone is looking forward to true next-gen experiences, which we'll only get on PS5 because of MS's no-exclusives policy for the first 1-2 years. The PS5's laser focus on removing performance bottlenecks will be on display in these games, no doubt.


You don't know left from right as far as this goes. Definitely not someone I'd listen to. Take your fantasy war up with someone as clueless as you who might buy your fairy tales.

It's amazing how fast some of you forget that your own post history will expose you lol.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Which is why the PS5 is so interesting.
Mark Cerny is an actual genius, so I'm interested to see if he has come up with something no one else has thought of before.



Revolutionary: 3rd-party devs saying it's the most exciting hardware in 20 years, a Naughty Dog dev saying it's the biggest leap of his career, superior in a lot of ways.

I'm so excited to see what Cerny has done with the PS5
 

LordKasual

Banned
Then how does that equate to 7.5 GB? Are we saying that only now are we going to use interleaving? Because even on PCs right now, with all their inefficiencies, 4K rarely goes over 6 GB of total RAM usage. By this logic, only something over 3 GB would be available on a 6 GB graphics card.

It all sounds like a bunch of stretches to downplay the strengths of the XSX and make the PS5 sound better than it really is.


There is still no explanation of why the OS using 2.5 GB leaves 7.5 GB of the fast modules available, because that is how it is written in the article.

The PS5's memory bandwidth IS overall greater than the XSX's. That's just a fact.

But I don't give a shit about downplaying the XSX; I already said the XSX is going to be stronger, and I also said nobody will likely be able to tell that much of a difference.

Then why is it incapable of running both the CPU and GPU at their nominal clocks? A problem that has never been a thing for any previous console?

Do you not know how to read? I just explained this to a poster above you in the quoted post.

And to answer your question, it's because it does not need to.
 

Panajev2001a

GAF's Pleasant Genius
Microsoft has had Atmos for 2 1/2 years and no one says shit; it's all fake, people are fake.

What people should be excited about are things like Microsoft's HDR feature that automatically maps out any game and applies HDR at a system level. That's a dream feature that's incredibly useful, and yet people are virtually silent.

What have people been saying for the last 4 years? That HDR is the biggest differentiator this generation; now it can be applied to everything and no one says anything.

Fake-ass people everywhere.
HDR everywhere is a big deal, and if they got it working well in some way, mad props to them.
HDR applied at a system level is, in the general case, a process not that much different from your TV's HDR mode for non-HDR content, though. They have more data to play with, but right now it seems to be an AI-driven feature working in screen space. I'd like to be proven wrong.
 
It's going to be interesting to see how the CPU/GPU power shifting works out in real life. I have to wonder if, to make maximum use of it, devs will need to craft their games in particular ways so as not to put the CPU and GPU into full drive at the same time.
 
You don't know left from right as far as this goes. Definitely not someone I'd listen to. Take your fantasy war up with someone as clueless as you who might buy your fairy tales.

It's amazing how fast some of you forget that your own post history will expose you lol.

You're the most insecure Xbox fanboy on this site. I was being nice to you, but you respond with emotion every time 🤣

It's ok lad, you enjoy your upscaled 12 TFLOP games, I won't stop you!
 

LordKasual

Banned
From an Era post by nib95:


"Microsoft is touting the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s asymmetric configuration as a bonus, but it's sort of not. We've had this specific situation at least once before in the form of the Nvidia GTX 650 Ti, and a similar situation in the form of the 660 Ti. Both of those cards suffered from an asymmetrical configuration, affecting memory once the "symmetrical" portion of the interface was "full".



[Image: RAM configuration graphic]


Interleaved memory configurations for the SX's asymmetric memory configuration, an averaged value and the PS5's symmetric memory configuration... You can see that, overall, the PS5 has the edge in pure, consistent throughput...

Now, you may be asking what I mean by "full". Well, it comes down to two things: first is that, unlike what some commentators might believe, the maximum bandwidth of the interface is limited by the 320-bit controllers and the matching 10 chips x 32-bit x 14 Gbps interface of the GDDR6 memory.

That means that the maximum theoretical bandwidth is 560 GB/s, not 896 GB/s (560 + 336). Secondly, memory has to be interleaved in order to function on a given clock timing to improve the parallelism of the configuration. Interleaving is why you don't get a single 16 GB RAM chip, instead we get multiple 1 GB or 2 GB chips because it's vastly more efficient. HBM is a different story because the dies are parallel with multiple channels per pin and multiple frequencies are possible to be run across each chip in a stack, unlike DDR/GDDR which has to have all chips run at the same frequency.

However, what this means is that you need to have address space symmetry in order to have interleaving of the RAM, i.e. you need to have all your chips presenting the same "capacity" of memory in order for it to work. Looking at the diagram above, you can see the SX's configuration: the first 1 GB of each RAM chip is interleaved across the entire 320-bit memory interface, giving rise to 10 GB operating with a bandwidth of 560 GB/s. But what about the other 6 GB of RAM?

Those two banks of three chips either side of the processor house 2 GB per chip. How does that extra 1 GB get accessed? It can't be accessed at the same time as the first 1 GB because the memory interface is saturated. What happens, instead, is that the memory controller must instead "switch" to the interleaved addressable space covered by those 6x 1 GB portions. This means that, for the 6 GB "slower" memory (in reality, it's not slower but less wide) the memory interface must address that on a separate clock cycle if it wants to be accessed at the full width of the available bus.

The fallout of this can be quite complicated depending on how Microsoft have worked out their memory bus architecture. It could be a complete "switch" whereby on one clock cycle the memory interface uses the interleaved 10 GB portion and on the following clock cycle it accesses the 6 GB portion. This implementation would have the effect of averaging the effective bandwidth for all the memory. If you average this access, you get 392 GB/s for the 10 GB portion and 168 GB/s for the 6 GB portion for a given time frame but individual cycles would be counted at their full bandwidth.

However, there is another scenario with memory being assigned to each portion based on availability. In this configuration, the memory bandwidth (and access) is dependent on how much RAM is in use. Below 10 GB, the RAM will always operate at 560 GB/s. Above 10 GB utilisation, the memory interface must start switching or splitting the access to the memory portions. I don't know if it's technically possible to actually access two different interleaved portions of memory simultaneously by using the two 16-bit channels of the GDDR6 chip but if it were (and the standard appears to allow for it), you'd end up with the same memory bandwidths as the "averaged" scenario mentioned above.

If Microsoft were able to simultaneously access and decouple individual chips from the interleaved portions of memory through their memory controller then you could theoretically push the access to an asymmetric balance, being able to switch between a pure 560 GB/s for 10 GB RAM and a mixed 224 GB/s from 4 GB of that same portion and the full 336 GB/s of the 6 GB portion (also pictured above). This seems unlikely to my understanding of how things work and undesirable from a technical standpoint in terms of game memory access and also architecture design.

In comparison, the PS5 has a static 448 GB/s bandwidth for the entire 16 GB of GDDR6 (also operating at 14 Gbps, across a 256-bit interface). Yes, the SX has 2.5 GB reserved for system functions and we don't know how much the PS5 reserves for similar functionality, but it doesn't matter - the Xbox SX either has only 7.5 GB of interleaved memory operating at 560 GB/s for game utilisation before it has to start "lowering" the effective bandwidth of the memory below that of the PS5... or the SX has an averaged mixed memory bandwidth that is always below that of the baseline PS5.

Either option puts the SX at a disadvantage to the PS5 for more memory-intensive games, and the latter puts it at a disadvantage all of the time."


Interesting read. We technically already knew this was going to be the spread, but the option of it just being all-around worse is interesting.
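The averaged figures in the quote (392 GB/s and 168 GB/s) can be reproduced in a couple of lines under one reading of its "complete switch" scenario: the bus alternates cycles between the two interleaved regions, and the four chips not backing the 6 GB region (128-bit, 224 GB/s) are assumed to keep serving the 10 GB region on the off cycle. That residual-bandwidth assumption is mine, but it is the split that yields the quoted numbers:

```python
# Per-region average bandwidth under alternating-cycle access to the
# XSX's two interleaved regions (figures from the quoted analysis).
FAST_FULL = 560      # 10 chips x 32-bit x 14 Gbps
SLOW_FULL = 336      # 6 chips x 32-bit x 14 Gbps
FAST_RESIDUAL = 224  # the other 4 chips while the 6 GB region is served

fast_avg = (FAST_FULL + FAST_RESIDUAL) / 2  # 392.0 GB/s for the 10 GB region
slow_avg = SLOW_FULL / 2                    # 168.0 GB/s for the 6 GB region
print(fast_avg, slow_avg)
```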
 

TBiddy

Member
Interesting read. We technically already knew this was going to be the spread, but the option of it just being all-around worse is interesting.

I think that post has been doing the rounds for a week now. That doesn't make it true, though.
 

Shmunter

Member
From an Era post by nib95:




Interesting read. We technically already knew this was going to be the spread, but the option of it just being all-around worse is interesting.

It has indeed always been obvious, and gracefully putting out the info objectively is admirable, but I hope there is no need to rub fans' faces in it. Being above that is what separates the grown-ups from the children.
 

SleepDoctor

Banned
You're the most insecure Xbox fanboy on this site. I was being nice to you, but you respond with emotion every time 🤣

It's ok lad, you enjoy your upscaled 12 TFLOP games, I won't stop you!

Emotion? Lol, interesting.

If I'm emotional, then that must make you mentally ill, cuz I'm not the one passionately fighting some brand war over a piece of plastic 🤣.
 

RespawnX

Member
No, the only propaganda is tflops = performance. Yes, it's an indicator, but not a very good one when each APU is heavily customized. No one ever judges graphics cards on tflops; it's just you console gamers who are new to the game that do this.

Of course it's the narrative. Both consoles share the same GPU architecture, so it's logical to compare their output performance on this basis. The GPU remains the main factor in rendering a game. We know from RDNA 1.0 that you need to overclock your frequency by around 20% to compensate for a 10% CU difference; there are plenty of comparisons on the web, and the best example is an overclocked 5700 vs. a stock 5700 XT. Now we are talking RDNA 2.0, and the main focus is AI. It's "only" an evolution of RDNA 1.0, but let's assume it's a generational leap and performs around 10% better per clock than RDNA 1.0. In the GPU world, 10% higher performance per clock is a huge leap. Even if you consider this best-case GPU evolution, the Xbox Series X's output performance remains around 20-25% higher than the PS5's. That assumes the PS5 can maintain 2.23 GHz all the time, which I doubt, but with a very good thermal design 2-2.1 GHz should be possible.
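The 20-25% range can be roughly bracketed with the standard peak-compute formula under the post's sustained-clock scenarios. This is only an illustration of the poster's arithmetic (setting aside the per-clock IPC question), not a performance prediction:

```python
def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000  # peak FP32 TFLOPs for an RDNA GPU

xsx = tflops(52, 1.825)            # fixed clock
for ps5_clock in (2.23, 2.1):      # full boost vs the post's sustained guess
    gap = 100 * (xsx / tflops(36, ps5_clock) - 1)
    print(f"PS5 @ {ps5_clock} GHz: XSX ahead by {gap:.0f}%")
# ~18% at full boost, ~26% at a sustained 2.1 GHz
```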

So, that's all we know based on the known technology. That's a good advantage for the Xbox Series X, but both platforms have a lot of processing power, and in comparison to last gen's launch these are really powerful machines. The last time we got this kind of power relative to the technology standard of the day was with the Xbox 360. The biggest leap of the PS4 and Xbox One was the architecture leap, not the raw performance.

So, speculation goes mostly around the SSD, and yes, it's all speculation. You can't compare it exactly to a PC with plenty of RAM, because RAM is still faster. The only game I know of which takes advantage of an SSD is Star Citizen; in fact, the game requires you to run it on an SSD. And even with an SSD that is 1/5 the speed of the Xbox Series X's, we are getting impressive results. Both machines have very fast SSDs, and there is not a single game today which could be representative of what may be possible with such capabilities.

It's nothing more than speculation, and I doubt we are going to see this evolutionary leap from SSDs before the end of next year. This is also more or less the point where Microsoft is going to say goodbye to cross releases with the Xbox One. I'm pretty confident that Sony is going to make the best of their fast SSD; at least Guerrilla Games are going to show us some magic. What that magic could look like is up to the stars. On multiplatform, things get a bit more complicated; SSDs are only slowly becoming standard on PC. That should change with the new consoles, and HDDs should now definitely die for gaming. Meanwhile, I expect games to need 16-32 GB of RAM to compensate and buffer the content which on consoles can simply be streamed into the GDDR. I am very excited to see what solutions the engine developers will come up with.

However, I guess it will take a few years until we really see the potential of NVMe. At that point the GPU performance of the consoles will be outdated and both platforms will be running at their graphical limits. That's okay; that's why we got the PS4 Pro and Xbox One X. I strongly suspect that Sony will go for the same NVMe in their "PS5 Pro", if we are going to see such a console. From this point their system is really designed for the future of gaming. Meanwhile, I expect Microsoft either to upgrade their SSD or take a step back; this will depend on the state of streaming. But I don't expect streaming to overtake consoles in the next 2-3 years. So let's be excited for the new consoles and the generational leap.
 

LordKasual

Banned
It has indeed always been obvious, and gracefully putting out the info objectively is admirable, but I hope there is no need to rub fans' faces in it. Being above that is what separates the grown-ups from the children.

Actually, it hasn't, because what's been kicked around GAF a lot is the idea that the overall bandwidth of the XSX would be higher, thus more or less negating some of the PS5's advantage. Not true, but definitely a sentiment. Nonetheless, I find this to be on-topic.

The more I learn about the PS5, the more elegant the hardware design seems to be.

Looking at where both MS and Sony decided to cut corners is pretty interesting to me.
 