
Matt weighs in on PS5 I/O, PS5 vs XSX and what it means for PC.

chilichote

Member
PS5 has 10.28 TFLOPs MAX
XSX has 12.1 TFLOPs ALWAYS, 100%, sustained.


Hahaha, no, you don't get it! That's only the theoretical TF, no more, no less. You have to fill all of the CUs with work to get there. But that is very hard to achieve, and that's the reason why Cerny prefers fewer CUs at higher clocks: simply because they are easier to fill with meaningful work!
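To put numbers on the "theoretical" part (nothing exotic here, just the public specs plugged into the standard peak-FLOPS formula):

# Peak FP32 throughput = CUs x 64 ALUs per CU x 2 ops per clock (FMA) x clock.
# This is a ceiling, not a promise; real utilisation is the whole debate here.
def peak_tflops(cus, clock_ghz, alus_per_cu=64, ops_per_clock=2):
    return cus * alus_per_cu * ops_per_clock * clock_ghz / 1000

print(peak_tflops(36, 2.23))   # PS5: ~10.28 TF at its max boost clock
print(peak_tflops(52, 1.825))  # XSX: ~12.15 TF at its fixed clock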
 

longdi

Banned
Does the PS5's custom CU size apply to the Series X? I know the Series X and the Xbox One X die sizes are nearly identical

We don't know, but Mark was saying the CUs are bigger in RDNA 2, and the best guess is that the same holds for the Series X.

CUs make up most of the die size.
 

SSDfan

Neo Member
I think the PS5 can stream data per frame like the UE5 demo, and the PS5-only games Sony makes that do this will look a gen above anything on the XSX.

There will be two classes of game visuals next gen.

Do you think 20 TF would have been able to run that UE5 demo at that detail? No.

The PlayStation 5 is, without a shadow of a doubt, not only the fastest but also the smartest and most innovative platform ever built.

The "best architecture in history" is not a vain claim. The PS5 will redefine how "powerful" is measured, and many will be totally confused.

The PS5 was built under a different and unique philosophy; it's not supposed to be judged by the same old criteria. It will completely revolutionize the game development environment, letting creators express their vision to its fullest. With speed as its core philosophy, which is far more demanding and necessary these days, devs will finally be free and no longer forced to put in limitations and use old gamey tricks, as the SSD and super-customized I/O allows them to stream the whole scenario with all the detail properly loaded in the blink of an eye.
 

Bryank75

Banned
Ability to run sustained 100% is different from running 100% all the time
Stop attempting to fudge matters; you know it all too well.
But you are missing my point, I think... since a console only hits peak performance very rarely, it only needs to stay there for a short while. Meaning the difference should, in practical terms, be far less than what the figures suggest.

Of course, all of that also depends on memory, where the XSX has 10 GB at 560 GB/s and 3.5 GB at 336 GB/s vs the PS5's 16 GB at 448 GB/s, and they have yet to state how much of that is dedicated to games / UI etc.

I don't think we can compare like for like yet. We need a bit more info still.
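For a rough sense of scale in the meantime, here's a crude linear model of the XSX's split memory pools; the 80/20 traffic split is an assumption I'm making up purely for illustration, since the real mix depends entirely on the game:

# Crude model: effective average bandwidth if a given fraction of memory
# traffic hits the fast 10 GB pool. The 0.80 split is assumed, not measured.
fast_bw, slow_bw = 560, 336   # GB/s, the XSX's two pools
mix_fast = 0.80               # hypothetical fraction of traffic to the fast pool
print(mix_fast * fast_bw + (1 - mix_fast) * slow_bw)   # ~515 GB/s vs the PS5's flat 448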
 

Ascend

Member
Hahaha, no, you don't get it! That's only the theoretical TF, no more, no less. You have to fill all of the CUs with work to get there. But that is very hard to achieve, and that's the reason why Cerny prefers fewer CUs at higher clocks: simply because they are easier to fill with meaningful work!
That is true. But remember, before this we had GCN. GCN had particularly bad scaling, and I suspect they did this to avoid the issues GCN had. RDNA does not scale so badly, which is why the 40 CU 5700 XT performs almost the same as the 64 CU Radeon VII. Most likely Sony wanted to avoid this and went with a narrower GPU, expecting a limitation that no longer seems relevant. That is a possibility; I am not saying it IS like that. There are still benefits to the high clock speed.

And one important thing people forget is... console makers design their next console based on the negative feedback they have received. All of them received feedback about the weak CPUs and the slow storage, and they built their consoles around that.
Sony got feedback about the PS4 being loud, so they designed the PS5 to avoid that, hence SmartShift. MS got that feedback with the X360, so they already fixed that for the Xbox One. They're not going to make the same mistake again, so they made a fridge.

But both are custom
Custom in their feature set. But the compute capabilities are exactly the same. Considering that the UE5 demo was said to use primitive shaders, it may well be that the PS5 has primitive shaders while the XSX has mesh shaders, as an example of a differing feature set.
SmartShift is another feature. There are multiple "add-ons", so to speak, that are custom. But the core compute unit design is the same for both consoles, and that is what the TFLOPS are based on.
 

Degree

Banned
The PlayStation 5 is, without a shadow of a doubt, not only the fastest but also the smartest and most innovative platform ever built.

The "best architecture in history" is not a vain claim. The PS5 will redefine how "powerful" is measured, and many will be totally confused.

The PS5 was built under a different and unique philosophy; it's not supposed to be judged by the same old criteria. It will completely revolutionize the game development environment, letting creators express their vision to its fullest. With speed as its core philosophy, which is far more demanding and necessary these days, devs will finally be free and no longer forced to put in limitations and use old gamey tricks, as the SSD and super-customized I/O allows them to stream the whole scenario with all the detail properly loaded in the blink of an eye.

Can you please stop with this console warring nonsense? Also, your username checks out. lol

What is so smart about it? Is that why devs already have issues with the PS5 at the beginning of the gen? They have to throttle the CPU to make sure the GPU can run at a sustained clock:

Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core.

Throttling the CPU is not a huge problem at the beginning of the gen, because games are still designed with Jaguar in mind, but soon enough, when next-gen-only games arrive, devs can't just throttle the CPU to sustain the GPU clock. There will be games that need 100% of the CPU clock.

By this, I would say, PS5 is one of the worst architectures.
 
Well, the final version of UE5 won't be released until late 2021, so add a couple of years after that before we see games taking proper advantage of the new stuff UE5 will offer. Late 2023, maybe. Consider also that during the first couple of years of the generation, many 3rd-party games will be cross-gen, so they won't use these features at all, to make sure the game works well on the previous gen.

At least that's what MS announced they will do with the games they publish. Sony instead said they will bet on PS5-only games, so I think we may see stuff like that sooner in PS5 exclusives, even if some early PS5 Sony games may be delayed PS4 projects.
It's not late 2021 that UE5 releases; it's in the first months of 2021 😉
 

CurtBizzy

Member
That is true. But remember, before this we had GCN. GCN had particularly bad scaling, and I suspect they did this to avoid the issues GCN had. RDNA does not scale so badly, which is why the 40 CU 5700 XT performs almost the same as the 64 CU Radeon VII. Most likely Sony wanted to avoid this and went with a narrower GPU, expecting a limitation that no longer seems relevant. That is a possibility; I am not saying it IS like that. There are still benefits to the high clock speed.

And one important thing people forget is... console makers design their next console based on the negative feedback they have received. All of them received feedback about the weak CPUs and the slow storage, and they built their consoles around that.
Sony got feedback about the PS4 being loud, so they designed the PS5 to avoid that, hence SmartShift. MS got that feedback with the X360, so they already fixed that for the Xbox One. They're not going to make the same mistake again, so they made a fridge.


Custom in their feature set. But the compute capabilities are exactly the same. Considering that the UE5 demo was said to use primitive shaders, it may well be that the PS5 has primitive shaders while the XSX has mesh shaders, as an example of a differing feature set.
SmartShift is another feature. There are multiple "add-ons", so to speak, that are custom. But the core compute unit design is the same for both consoles, and that is what the TFLOPS are based on.
I understand, but Sony built the GPU with AMD; the CU size might be exclusive.
 

Degree

Banned
It's hard to believe it's the same case for the XSX because of the die size

What? No, the XSX can run at 100% sustained, all the time, under all circumstances:

But up until now at least, the focus has been on the GPU, where Microsoft has delivered 12 teraflops of compute performance via 3328 shaders allocated to 52 compute units (from 56 in total on silicon, four disabled to increase production yield) running at a sustained, locked 1825MHz. Once again, Microsoft stresses the point that frequencies are consistent on all machines, in all environments. There are no boost clocks with Xbox Series X.

 

Bryank75

Banned
Can you please stop with this console warring nonsense? Also, your username checks out. lol

What is so smart about it? Is that why devs already have issues with the PS5 at the beginning of the gen? They have to throttle the CPU to make sure the GPU can run at a sustained clock:



Throttling the CPU is not a huge problem at the beginning of the gen, because games are still designed with Jaguar in mind, but soon enough, when next-gen-only games arrive, devs can't just throttle the CPU to sustain the GPU clock. There will be games that need 100% of the CPU clock.

By this, I would say, PS5 is one of the worst architectures.
I don't understand why you think he is 'console warring' when he says it's the best architecture ever; he is referencing multiple sources that have spoken on the subject.
 

Ascend

Member
what's misleading about it? The console has 16 more compute units than the PS5 and that is over 40% more than 36. Simple math.
This is true. The XSX has 44% more CUs than the PS5. But this is only a meaningful metric if the CUs ran at the same clock speed. Accounting for the clock speeds, it's roughly an 18% advantage rather than 44%.
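The quick arithmetic behind that, using the public CU counts and clocks:

# CU-count ratio vs clock-adjusted compute ratio (public figures).
xsx_cus, xsx_mhz = 52, 1825   # fixed clock
ps5_cus, ps5_mhz = 36, 2230   # max boost clock

print(xsx_cus / ps5_cus)                           # ~1.44 -> "44% more CUs"
print((xsx_cus * xsx_mhz) / (ps5_cus * ps5_mhz))   # ~1.18 -> ~18% more peak compute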

By this, I would say, PS5 is one of the worst architectures.
Well, this is a bit of an exaggeration... But if I recall correctly, Cerny did mention that, as the console nears the end of its life cycle, it would be strained more and would therefore likely need to lower clocks to stay within its power limit.
 

SSDfan

Neo Member
Can you please stop with this console warring nonsense? Also, your username checks out. lol

What is so smart about it? Is that why devs already have issues with the PS5 at the beginning of the gen? They have to throttle the CPU to make sure the GPU can run at a sustained clock:



Throttling the CPU is not a huge problem at the beginning of the gen, because games are still designed with Jaguar in mind, but soon enough, when next-gen-only games arrive, devs can't just throttle the CPU to sustain the GPU clock. There will be games that need 100% of the CPU clock.

By this, I would say, PS5 is one of the worst architectures.

Warring?

This is a baseless and unfair accusation.

I did not make a single mention of any other platform but the PS5, let alone downplay them with FUD like overheating, 9.2 TF, no RDNA 2, and so forth.

If you got upset over a nick and a post that was strictly meant to praise my platform of choice without downplaying yours, you should look at yourself in the mirror, buddy. The problem here might be on you.

Shalom, namaste, salaam alaikum
 

Panajev2001a

GAF's Pleasant Genius
But if I recall correctly, Cerny did mention that, as the console nears the end of its life cycle, it would be strained more and would therefore likely need to lower clocks to stay within its power limit.

He just said it is a scenario developers would possibly face more later down the line, or rather people took it that way from his talk and the DF interview; I do not recall the exact quote and attribution, so if you can link it / quote it, that would be great. Still, the point made sense: as people dig deeper and deeper into the platform they may run into peaks more often, but those are not sustained over a long period of time, the console can adjust and shift power very fast (1 ms latency), and it is not beyond devkit tools to rise in quality and sophistication over time and help developers optimise those peaks out.
 

Lethal01

Member
This would apply to both systems, however, so seeing it usually only used in reference to the PS5 is baffling. Depending on the extent of the ML customizations done on the XSX's GPU, they could use a combination of that and the SSD I/O to achieve effective parity with what Sony's SSD I/O alone seems to provide. For most use cases, anyhow.

It's baffling to you because you seem to believe that the XSX SSD is close to parity with the PS5's. But developers in general say otherwise. The head of Epic says otherwise, and the dev this thread is discussing is saying that the PS5 feels like what you would have expected from a heavily SSD-focused mid-gen update 4 years from now.

We cannot really say anything about the real-world performance since we cannot test it ourselves; the people who have are saying that despite the XSX GPU being extremely impressive, the PS5's is a world ahead.

Being able to save on VRAM usage does apply to both SSDs, but for all we know the PS5 could let them save 10x more.
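We can't put a real number on the VRAM point, but as a back-of-envelope (raw throughput only, ignoring compression and how an engine actually uses the bandwidth):

# Back-of-envelope: raw data each SSD could deliver inside a single 30 fps frame,
# which bounds how much can be fetched just-in-time instead of held in RAM.
frame_s = 1 / 30
for name, gb_per_s in [("PS5", 5.5), ("XSX", 2.4)]:
    print(name, round(gb_per_s * frame_s * 1024), "MB per frame")   # ~188 vs ~82 MB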
 
It's baffling to you because you seem to believe that the XSX SSD is close to parity with the PS5's. But developers in general say otherwise. The head of Epic says otherwise, and the dev this thread is discussing is saying that the PS5 feels like what you would have expected from a heavily SSD-focused mid-gen update 4 years from now.

We cannot really say anything about the real-world performance since we cannot test it ourselves; the people who have are saying that despite the XSX GPU being extremely impressive, the PS5's is a world ahead.

Being able to save on VRAM usage does apply to both SSDs, but for all we know the PS5 could let them save 10x more.

So now you're making shit up to stir the console warring.
 

Panajev2001a

GAF's Pleasant Genius
It's baffling to you because you seem to believe that the XSX SSD is close to parity with the PS5's. But developers in general say otherwise. The head of Epic says otherwise, and the dev this thread is discussing is saying that the PS5 feels like what you would have expected from a heavily SSD-focused mid-gen update 4 years from now.

We cannot really say anything about the real-world performance since we cannot test it ourselves; the people who have are saying that despite the XSX GPU being extremely impressive, the PS5's is a world ahead.

Being able to save on VRAM usage does apply to both SSDs, but for all we know the PS5 could let them save 10x more.

If, in the pursuit of using the GPU's transistors to run machine-learning algorithms to upscale their assets, they squander a good chunk of their FLOPS advantage to match Sony's SSD advantage (I say "in the pursuit" because applying that to whole classes of games as a generic solution is another "power of the cloud" scenario, IMHO), that seems to say Sony made a very, very smart decision with their SSD and I/O HW.
 
Hahaha, no, you don't get it! That's only the theoretical TF, no more, no less. You have to fill all of the CUs with work to get there. But that is very hard to achieve, and that's the reason why Cerny prefers fewer CUs at higher clocks: simply because they are easier to fill with meaningful work!

This "very hard" myth is false; that's one of the points in the presentation Cerny PR'd up somewhat to try shifting the narrative away from such a thing. The truth is GPUs are pretty easy to saturate with work, that's just how they are by design. They are highly parallelized computational architectures, like DSPs on steroids. In fact, they've been replacing DSPs for years due to their advantages (there's still a few areas where DSPs have some edges, such as power consumption and cost-for-performance).

You can look at most GPU benchmarks today and see that, for architecturally similar cards, the card with more hardware gets better results in almost every single instance, across multiple categories. So the "it's hard to parallelize for GPUs" myths was a PR spin by Cerny in an otherwise informative presentation to try cutting away at one of the perceived weaknesses of their system, simple as that.

Smart developers can reasonably keep a wider net of CUs occupied, especially given the frontend improvements AMD have made with RDNA2. And trust me, those improvements are definitely there, otherwise they would not be able to create these massive GPUs (both them and AMD, let alone Intel with that giant XE GPU) in the first place.

If, in the pursuit of using the GPU's transistors to run machine-learning algorithms to upscale their assets, they squander a good chunk of their FLOPS advantage to match Sony's SSD advantage (I say "in the pursuit" because applying that to whole classes of games as a generic solution is another "power of the cloud" scenario, IMHO), that seems to say Sony made a very, very smart decision with their SSD and I/O HW.

Neither you nor any of us on the forum has any idea how much of the GPU is being utilized for those upscaling asset tasks. We also don't know if they have made customizations to the GPU specifically for those tasks (it's very likely they have), thus taking some of the workload off of the general compute units for that.

At the end of the day, MS's approach in that regard and Sony's approach are both valid options, but MS's offers more flexibility depending on the needs of the underlying game design and programming techniques. I.e., if a game isn't relying on a large set of unique visual assets of insane size, those GPU resources can potentially be put to use in different ways/tasks (to varying degrees).

It's baffling to you because you seem to believe that the XSX SSD is close to parity with the PS5's. But developers in general say otherwise. The head of Epic says otherwise, and the dev this thread is discussing is saying that the PS5 feels like what you would have expected from a heavily SSD-focused mid-gen update 4 years from now.

We cannot really say anything about the real-world performance since we cannot test it ourselves; the people who have are saying that despite the XSX GPU being extremely impressive, the PS5's is a world ahead.

Being able to save on VRAM usage does apply to both SSDs, but for all we know the PS5 could let them save 10x more.

So every single "developer" has put out a public statement on this? Every single developer is specifically working with next-gen devkits, on next-gen projects, with next-gen API dev tools, in areas of game design where they need to actually utilize that particular hardware?

Or are we talking about some developers who have a preference for the PS5 in a certain way that isn't too different from other developers who have a preference for the XSX in certain other ways? I guess we should take their opinions as absolute statements of fact now? No, that's not how critical thinking actually works.

The head of Epic hasn't "said otherwise"; he is not legally permitted to say such things publicly. Him being positive about the PS5's SSD I/O is not an automatic indictment of the competitor's SSD I/O, regardless of anything he has said, since impartial minds can see what he's saying for what it is (and understand both the historical precedent of Epic demos on PS consoles and any potential backend PR between Epic and Sony regarding the demo).

Matt is ultimately one person, a single opinion. It's not absolute. I'm sure he feels the way he does earnestly, but again, it's more or less his opinion, even if there are aspects of it based in probable truth. Also, it's funny that you are understating Matt's own comments regarding the XSX's GPU; going by his words, it would be inferred it's potentially magnitudes ahead in particular areas. I can speculate what those areas would be, but I don't know if I would state it to the degree Matt's own comments implied.

Where is this "10x" multiplier coming from? What factors are you considering here? What aspects of the tech, and how do they work in relation to each other? What established performance metrics, formulas, etc.? If I'm not allowed to throw around random spec claims/benefits without detailing how those figures are reached and what methods/factors are used to arrive at them, I'm definitely not letting others slip away with suspect claims of their own x3.
 

geordiemp

Member
Custom in their feature set. But the compute capabilities are exactly the same. Considering that the UE5 demo was said to use primitive shaders, it may well be that the PS5 has primitive shaders while the XSX has mesh shaders, as an example of a differing feature set.
SmartShift is another feature. There are multiple "add-ons", so to speak, that are custom. But the core compute unit design is the same for both consoles, and that is what the TFLOPS are based on.

Most of the silicon functionality is LIKELY the same for RDNA 2, and each console maker calls the DRIVERS different names...

If Sony or MS designed some of their own silicon, they would shout about it and call it their hardware design... rather than referring to the API / driver feature set.
 

geordiemp

Member
This "very hard" myth is false; that's one of the points in the presentation Cerny PR'd up somewhat to try shifting the narrative away from such a thing. The truth is GPUs are pretty easy to saturate with work, that's just how they are by design. They are highly parallelized computational architectures, like DSPs on steroids. In fact, they've been replacing DSPs for years due to their advantages (there's still a few areas where DSPs have some edges, such as power consumption and cost-for-performance).

You can look at most GPU benchmarks today and see that, for architecturally similar cards, the card with more hardware gets better results in almost every single instance, across multiple categories. So the "it's hard to parallelize for GPUs" myths was a PR spin by Cerny in an otherwise informative presentation to try cutting away at one of the perceived weaknesses of their system, simple as that.

Smart developers can reasonably keep a wider net of CUs occupied, especially given the frontend improvements AMD have made with RDNA2. And trust me, those improvements are definitely there, otherwise they would not be able to create these massive GPUs (both them and AMD, let alone Intel with that giant XE GPU) in the first place.

You're correct on parallelisation... but I would not doubt Cerny too much; he was on the subject of lots of small triangles, I recall, and efficiency...

Funny that the UE5 demo used lots of small triangles. Coincidence? Maybe... but too many have doubted a lot of the stuff he said, and he was smiling...

I probably need to watch it again, as his comments need to be taken in context; a rising tide lifts all boats.
 

Panajev2001a

GAF's Pleasant Genius
We also don't know if they have made customizations to the GPU specifically for those tasks (it's very likely they have), thus taking some of the workload off of the general compute units for that

You want to have your cake and eat it too... now we have secret extra HW that makes ML tasks free (or rather, as they said in the DF piece, they extended the shader vector ALUs to process INT8 and INT4 at a comparatively increased rate, like an extension of FP16 RPM), and the same SSD speed or better (thanks to magic), and shader supremacy, and RT supremacy... it is a bit of fantasy or wishful thinking at this point, sorry.
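For what it's worth, the ML figures MS themselves quoted fall straight out of that ALU extension; each halving of precision doubles ops per clock, no secret parallel HW required:

# Rates implied by packing narrower types into the same vector ALUs.
fp32_tflops = 12.15
print(fp32_tflops * 2)   # FP16 (RPM): ~24.3 TFLOPS
print(fp32_tflops * 4)   # INT8:       ~48.6 TOPS (MS quotes ~49)
print(fp32_tflops * 8)   # INT4:       ~97 TOPS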
 

FranXico

Member
The truth is GPUs are pretty easy to saturate with work, that's just how they are by design.
GPUs are first and foremost designed for parallelized work, and they can be saturated, but in reality they are hardly ever saturated for prolonged periods of time. In fact, if you try to force sustained saturation on a GPU, it will eventually throttle down to prevent overheating.
 
You want to have your cake and eat it too... now we have secret extra HW that makes ML tasks free (or rather, as they said in the DF piece, they extended the shader vector ALUs to process INT8 and INT4 at a comparatively increased rate, like an extension of FP16 RPM), and the same SSD speed or better (thanks to magic), and shader supremacy, and RT supremacy... it is a bit of fantasy or wishful thinking at this point, sorry.

Take off your fanboy goggles and try seeing this from a business-sensibility POV. DF doesn't have all the information on XSX's customizations; no one does. They might know some of them, or could even be inferring how certain aspects work based on logical conclusions, but they don't have the specific hard specs yet. Otherwise they'd likely have mentioned them, and we wouldn't need to wait until August for a system architecture deep dive on the system itself.

Again, you're just exposing the unspoken notion (and double-standard) of "well Sony can make customizations to their hardware but MS can't" because...reasons? When Sony mentioned they added GPU cache scrubbers, I only questioned that for maybe a second and then did some research, and started to see how and why they would make that type of customization. I didn't cling onto it as an impossibility. The same can be said about many of their other hardware customizations.

There's nothing preventing Sony or MS from having made specific customizations to target their hardware needs, and there are in fact rumors suggesting that, yes, both of them have done this. I have no idea why you are mentioning "same SSD speed or better", because those of us who have tried looking at the potential benefits of XvA never once suggested it would outright match or exceed Sony's SSD I/O solution, merely that it could punch above its weight and give overall performance closer than what the paper specs imply. A literal common-sense conclusion, but you're attacking a strawman that never existed.

If the system has more CUs and has some customizations for DXR RT on the hardware/software end (while also having the larger GPU), then that's simply what it is. I never referred to those things as 'supremacy'; you're using intentionally inflammatory, colorfully exaggerated language to suggest the people acknowledging those particular advantages are doing so with some agenda to crush the other system. That's not how I operate; I don't care about that. I enjoy the hardware features on both of these systems, but I'm also a realist.


GPUs are first and foremost designed for parallelized work, and they can be saturated, but in reality they are hardly ever saturated for prolonged periods of time. In fact, if you try to force sustained saturation on a GPU, it will eventually throttle down to prevent overheating.

You aren't disproving anything I posted by writing this; it just adds to and supplements my points. I think it's fairly well understood that GPUs don't operate at peak saturation for prolonged periods... but silicon can't operate at peak power load for prolonged periods either, or it will eventually overheat or even suffer damage, and that's irrespective of what cooling system is being used or how you dynamically shift the power budget around.

Physics gonna physics.
 
What are you smoking? Lol. You only need a notebook to run the UE5 demo with the same amount of detail at 1440p@40fps. The XSX will easily run it at a higher res. Stop being so delusional.
And STOP with this stupid narrative that XSX means last-gen Jaguar-based games and PS5 is the only platform on this planet to have next-gen visuals.
This is ridiculous.



All of this doesn't matter at all. Developers will not build around a single SSD. They will use the PC SSD as the lowest common denominator, and that's it. So we will see better loading times, but that's it.
Again, the UE5 demo could run on a notebook at 40 fps, according to Epic engineers in China.



What? No, the XSX has way more TFLOPS, and they're sustained 100% of the time. PS5 clocks are variable; I think devs will just use the base/lowest possible clock to make sure performance is consistent.
But we will see as soon as the games arrive.
 

Degree

Banned
The PlayStation 5 is, without a shadow of a doubt, not only the fastest but also the smartest and most innovative platform ever built.

The "best architecture in history" is not a vain claim. The PS5 will redefine how "powerful" is measured, and many will be totally confused.

The PS5 was built under a different and unique philosophy; it's not supposed to be judged by the same old criteria. It will completely revolutionize the game development environment, letting creators express their vision to its fullest. With speed as its core philosophy, which is far more demanding and necessary these days, devs will finally be free and no longer forced to put in limitations and use old gamey tricks, as the SSD and super-customized I/O allows them to stream the whole scenario with all the detail properly loaded in the blink of an eye.

Sounds like a fanboy's wet dream.
How much do you get paid by Sony to spread this marketing crap? Seriously?
 

chilichote

Member
I'm not sure some of you here can actually be serious.

Do you really think Sony and Cerny would build a whole console around the idea that fewer CUs at higher clocks can be used more efficiently, a design some even think is more expensive, if they could have had the same thing as the XSX for less cost?

I think that if the PS5, with its underlying technology, also had 12 TF, we would be hearing completely different things from the developers.

Exciting times are ahead^^
 

Ascend

Member
GPUs are first and foremost designed for parallelized work, and they can be saturated, but in reality they are hardly ever saturated for prolonged periods of time. In fact, if you try to force sustained saturation on a GPU, it will eventually throttle down to prevent overheating.
This is true. But this holds for all GPUs; one can't apply it only to the PS5 because of its power management. Tracing back these arguments, it goes back to the benefits of the higher clock on the PS5. And I don't see how higher clocks change this.
 

Panajev2001a

GAF's Pleasant Genius
Take off your fanboy goggles and try seeing this from a business-sensibility POV. DF doesn't have all the information on XSX's customizations; no one does.

That was the interview with the XSX architects, and that description was their own description of the ML enhancements you talk about. There has been no sign of additional parallel HW, not even the slightest AMD RDNA-related patent, that supports your scenario beyond a "Why not? How do you know? Maybe it is there, who's to say?" argument.

Not to mention the irony of talking about fanboy goggles while producing another wall of text claiming yet another secret trump card to make the case for the XSX winning in all scenarios... now it is secret extra HW, which must be there because otherwise Sony has a win that cannot be magically closed or hand-waved away.
 

Panajev2001a

GAF's Pleasant Genius
This is true. But this holds for all GPUs; one can't apply it only to the PS5 because of its power management. Tracing back these arguments, it goes back to the benefits of the higher clock on the PS5. And I don't see how higher clocks change this.

Efficiency over a wide variety of workloads, as GPUs become more and more flexible (and CPU-like), is still hard to get; hence you still see pretty major architectural changes between GCN revisions, and from GCN to RDNA especially. Look at when AMD went from VLIW to SIMT/scalar processing in shaders (what would be the need if parallelism were "never" a problem), splitting wavefronts from 64 work-items down to 32 (which also lowers the penalty from divergence / dynamic branching in shaders), etc...

A higher clock speed in a way lets you bypass that part of the problem (parallel-work efficiency hurts you less with a narrower GPU) and ensures that other parts that may get bottlenecked (like geometry processing) are not.
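As a toy illustration of the divergence point (not RDNA-specific, and the 10% branch probability is invented purely for illustration): a wave only skips one side of a branch when all its lanes agree, and wider waves agree less often:

import random

# Fraction of waves that stay coherent (all lanes take the same branch path)
# when each lane takes the branch independently with probability p.
def coherent_fraction(wave_size, p=0.10, trials=100_000):
    coherent = 0
    for _ in range(trials):
        lanes = [random.random() < p for _ in range(wave_size)]
        coherent += all(lanes) or not any(lanes)   # no divergence in this wave
    return coherent / trials

print(coherent_fraction(32))   # wave32: ~3% of waves avoid running both paths
print(coherent_fraction(64))   # wave64: ~0.1%; divergence is almost guaranteed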
 

FranXico

Member
This is true. But this holds for all GPUs; one can't apply it only to the PS5 because of its power management. Tracing back these arguments, it goes back to the benefits of the higher clock on the PS5. And I don't see how higher clocks change this.
For sure, this is true for both the PS5 and the XSX. It means that 12.1 TF is every bit as theoretical as 10.28 TF. It doesn't matter if the clock is variable or fixed; you can't push and sustain maximum load for an arbitrary amount of time.
 

Lethal01

Member
What is this "10x" multiplier coming from? What factors are you considering here? What aspects of the tech and how they work in relation to each other? What established performance metrics, formulas, etc? If I'm not allowed to throw around random spec claims/benefits without detailing how those figures are reached and what methods/factors are being used to arrive at them, I'm definitely not letting others try slipping away with suspect claims of their own x3.

You took that 10x multiplier remark far too literally; it's not a claim. The entire point I was making is that we don't know, and we need proper testing to see just how much effect the difference between the PS5 and XSX SSDs could have on something like VRAM. The point was that we can't put a number on it, so saying that they can both do it is silly.

So every single "developer" has put out a public statement on this?
But developers in general say otherwise.

There's no point to this if you are misreading what I say this badly. IN GENERAL, we are seeing developers saying that the PS5 storage architecture is more impressive than the XSX's, and that it's paired with an SSD that's just faster. You are free to have a hunch that most developers speaking on it are uninformed, paid, or biased, but the point is that we have more reason to believe the PS5 SSD is far above the XSX's than to believe they are near the same level. Why not base speculation on what's most likely to be true?
 

Lethal01

Member
So now you're making shit up to stir the console warring.
Once again, me saying 10x more is to point out that we DON'T know what kind of difference the gap between the SSDs could amount to. So saying they will do the same thing is as silly as randomly saying that the PS5 SSD will let it save 6 GB of RAM.
 

sircaw

Banned
Sounds like a fanboy's wet dream.
How much do you get paid by Sony to spread this marketing crap? Seriously?

There have been numerous sources from developers praising the PS5 architecture. It's now a fact, end of story, finished. DONE DEAL.

Does that mean the new Xbox console is bad or going to be a piece of junk? No, it's just a more standard, conventional piece of kit.

Microsoft has built the best of what is available to us now.

However, it's very obvious that Sony's vision is in a completely different direction from Microsoft's.
If both machines had the same amount of TFLOPS then yes, Microsoft would have been in some serious trouble with the advancements Sony has made. Microsoft has gone the power route. This advantage should prove useful in some tests against the PS5, i.e. ray tracing/resolution.

In the end, Xbox fans should be really, really happy, as they are getting a state-of-the-art, high-performing PC in console form, but you really need to start shifting your focus towards your own exclusive games.

These comparisons between consoles, I am afraid to say, are only going to end in disappointment when you realise you can hardly spot the difference without an ultra-zoomed still shot from Digital Foundry telling you what to look for. Resolutions are going to be high as standard, so diminishing returns are now a real thing.

In regards to Sony's first-party studios, I am afraid to say they will not be beaten. Their studios have had years and years of experience making the best games on the planet, and being able to call upon Sony's new advancements/tools, the gap in quality will grow even larger.

I am not trying to be mean to Xbox fans, but Microsoft is taking a more quantity-than-quality approach because of Game Pass.

This philosophy is exactly the opposite of how Sony is trying to march forward.

Xbox fans, you really need to shift your focus away from this constant warring with Sony.

Enjoy your tremendous piece of kit; it's unbelievably good. But this constant warring with Sony is going to leave a lot of you really, really upset and disappointed when Sony shows their exclusive line-up.

No amount of hype, buzzwords, or bullshit marketing is going to change this; it's games that matter.
 

killatopak

Member
This "very hard" myth is false; that's one of the points in the presentation Cerny PR'd up somewhat to try shifting the narrative away from such a thing. The truth is GPUs are pretty easy to saturate with work, that's just how they are by design. They are highly parallelized computational architectures, like DSPs on steroids. In fact, they've been replacing DSPs for years due to their advantages (there's still a few areas where DSPs have some edges, such as power consumption and cost-for-performance).

You can look at most GPU benchmarks today and see that, for architecturally similar cards, the card with more hardware gets better results in almost every single instance, across multiple categories. So the "it's hard to parallelize for GPUs" myths was a PR spin by Cerny in an otherwise informative presentation to try cutting away at one of the perceived weaknesses of their system, simple as that.
I agree with every point you made.

I just want to amend that Cerny didn't say it was hard to saturate a higher CU count, just that it is easier to saturate a lower one.

The real-world effect is probably that developers, especially third-party ones, will be able to squeeze more performance out of the Series X as the gen goes on.

What I personally think will happen is that third parties will max out the 10.2 TF PS5 around mid-gen, while they'd max out the XSX's 12 TF a year or two later. This is in terms of GPU.

Don't worry, PS5 fans, the same principle will probably apply to the SSD as PCs inevitably catch up to XSX speeds and later PS5 speeds, which means there's no need to purposely gimp XSX and PS5 third-party games just to make them work on a larger number of devices.
 