
AMD Oberon PlayStation 5 SoC Die Delidded and Pictured

MonarchJT

Banned
Why are you talking about RT when his example didn't mention RT at all? It just looks like you missed the point below completely.

Hypothesis: Series X has 20% more pure power than PS5, so Series X should perform ~20% better in identical scenarios.

Test scenario(s): same game, same resolution, same graphical fidelity level

Results: Series X and PS5 perform almost identically in some of these scenarios


That is the point he was making, nothing about RT. It is just weird that you even bring that up when it is not mentioned.


The interesting thing is: when one system has more theoretical performance but they perform similarly in the same kinds of situations, why is that?

Is it because the PS5's faster clocks allow higher utilization of the theoretical performance maximum, or something else?

And if it is that, then does Series X have more future potential for devs to get more out of the system, assuming the PS5 as a system is easier to nearly max out?
The reality is that it has been proven multiple times that the XSX has more grunt... in fact most of the new releases push more pixels on Xbox. Lowering the resolution, it is very likely that it would also have better performance in terms of FPS, since even while pushing more pixels the difference is only 0.something in terms of FPS.
 
Last edited:

Lysandros

Member
Hypothesis: Series X has 20% more pure power than PS5, so Series X should perform ~20% better in identical scenarios.



And if it is that, then does Series X have more future potential for devs to get more out of the system, assuming the PS5 as a system is easier to nearly max out?
That's the thing: XSX doesn't have '20% more pure power' or 'power (as a whole)'. It has 18% more compute power (along with texel fill rate) than PS5 while actually being in deficit by around 20% in other GPU 'power' metrics tied to fixed-function units due to the frequency difference. Thus expecting a consistent 20% difference in resolution or FPS is illogical to begin with.
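To put rough numbers on that (a back-of-the-envelope sketch using the publicly stated specs, not a measurement):

```python
# Peak-throughput ratios from the public specs.
# FP32 FLOPs per CU per clock = 64 shader ALUs x 2 (an FMA counts as 2 ops).
XSX_CUS, XSX_CLK = 52, 1.825e9   # Series X: 52 CUs at a fixed 1825 MHz
PS5_CUS, PS5_CLK = 36, 2.233e9   # PS5: 36 CUs at up to ~2233 MHz

xsx_tflops = XSX_CUS * 64 * 2 * XSX_CLK / 1e12   # ~12.1 TF
ps5_tflops = PS5_CUS * 64 * 2 * PS5_CLK / 1e12   # ~10.3 TF
print(f"Compute ratio (XSX/PS5): {xsx_tflops / ps5_tflops:.2f}")   # ~1.18

# Fixed-function blocks that exist in equal counts on both chips
# (rasterizers, ROPs, command processor) scale with clock alone:
print(f"Fixed-function ratio (PS5/XSX): {PS5_CLK / XSX_CLK:.2f}")  # ~1.22
```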

And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor PS5's 'deep' design more. There was also a post by Matt Hargett on Twitter suggesting that optimized code (with a higher cache hit rate) will benefit PS5's faster cache subsystem more. Furthermore, I really don't think that PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.
 
Last edited:

assurdum

Banned
The reality is that it has been proven multiple times that the XSX has more grunt... in fact most of the new releases push more pixels on Xbox. Lowering the resolution, it is very likely that it would also have better performance in terms of FPS, since even while pushing more pixels the difference is only 0.something in terms of FPS.
Not necessarily. A faster-clocked GPU gets a bigger FPS boost at lower resolution than a slower one with more CUs. You just have to look at GPU benchmarks on PC. More pixels does not equal more power in an absolute way.
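A toy frame-time model illustrates why a higher clock helps relatively more as resolution drops (the numbers below are made up purely for illustration, not taken from any benchmark):

```python
# Toy model: per-pixel shading cost scales with total compute (CUs x clock),
# while per-frame fixed-function/overhead cost scales with clock alone.
def frame_ms(pixels, cus, clock_ghz, shade_cost=2.3e-4, fixed_cost=8.0):
    shading = pixels * shade_cost / (cus * clock_ghz)  # compute-bound part
    fixed = fixed_cost / clock_ghz                     # clock-bound part
    return shading + fixed

for label, px in [("4K", 3840 * 2160), ("1080p", 1920 * 1080)]:
    wide_slow = frame_ms(px, cus=52, clock_ghz=1.825)    # wider, lower clock
    narrow_fast = frame_ms(px, cus=36, clock_ghz=2.233)  # narrower, higher clock
    print(f"{label}: 52 CU @ 1.825 GHz = {wide_slow:.1f} ms, "
          f"36 CU @ 2.233 GHz = {narrow_fast:.1f} ms")

# In this toy model the wide GPU leads by ~11% at 4K but only ~1% at 1080p,
# i.e. the faster-clocked part gains proportionally more from dropping resolution.
```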
 
Last edited:
That's the thing: XSX doesn't have '20% more pure power' or 'power (as a whole)'. It has 18% more compute power (along with texel fill rate) than PS5 while actually being in deficit by around 20% in other GPU 'power' metrics tied to fixed-function units due to the frequency difference. Thus expecting a consistent 20% difference in resolution or FPS is illogical to begin with.

And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor PS5's 'deep' design more. There was also a post by Matt Hargett on Twitter suggesting that optimized code (with a higher cache hit rate) will benefit PS5's faster cache subsystem more. Furthermore, I really don't think that PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.
Bingo.
 

MonarchJT

Banned
Not necessarily. A faster-clocked GPU gets a bigger FPS boost at lower resolution than a slower one with more CUs. You just have to look at GPU benchmarks on PC. More pixels does not equal more power in an absolute way.
99.9% of the time, yes they are... absolutely yes.
There are thousands of PC benchmarks from decades of testing that prove this. The single or couple of frame dips that we are seeing in certain games are very, VERY likely related to platform optimization and not what you think. Obviously a higher-clocked GPU more easily compensates for the optimization.
 
Last edited:

Hobbygaming

has been asked to post in 'Grounded' mode.
RDNA 2 features have practically not even been used yet... we will see a big change as soon as they switch from primitive shaders and mesh shaders take hold
I just got déjà vu of waiting for Xbox to switch their API to DX12 for a massive performance gain on Xbox One

I think people are waiting for a boat that will never arrive at the dock, hoping that the XSX will just start to widen the gap
 

MonarchJT

Banned
I just got déjà vu of waiting for Xbox to switch their API to DX12 for a massive performance gain on Xbox One

I think people are waiting for a boat that will never arrive at the dock, hoping that the XSX will just start to widen the gap
After games start to release using that new tech we will know... mesh shaders + VRS tier 2 + SFS benchmarks give me reason to think the gap will widen
 
Last edited:

Lysandros

Member
I just got déjà vu of waiting for Xbox to switch their API to DX12 for a massive performance gain on Xbox One

I think people are waiting for a boat that will never arrive at the dock, hoping that the XSX will just start to widen the gap
I think in order to widen it, there should be a consistent/unquestionable gap to begin with. As of now the machines are trading blows.
 
After games start to release using that new tech we will know... mesh shaders + VRS tier 2 + SFS benchmarks give me reason to think the gap will widen
In the future devs will learn to better use both machines, not only Xbox. Xbox has mesh shaders and VRS (already in some games) and other stuff. By the way, SFS will not improve performance, but whatever.

On PS5, devs will also learn to use primitive shaders and the custom I/O hardware (which will actually improve framerate in I/O-heavy games, and we know games are more and more I/O limited); maybe they will learn to use Tempest to offload some CPU tasks, and the ID Buffer could help with CBR or TAA. All those things are real custom hardware, not exclusive API features like "DirectX 12 Ultimate".

By the way, for now most games are cross-gen, so they won't usually use the PS5-specific stuff but can actually use some Xbox Series-specific stuff (like VRS) due to the nature of the Xbox APIs.
 
Last edited:

assurdum

Banned
99.9% of the time, yes they are... absolutely yes.
There are thousands of PC benchmarks from decades of testing that prove this. The single or couple of frame dips that we are seeing in certain games are very, VERY likely related to platform optimization and not what you think. Obviously a higher-clocked GPU more easily compensates for the optimization.
So why does Nvidia point more to DLSS tech than to higher native pixel counts for GPU performance? Now I want to hear how low you can go.
 
Last edited:

Hobbygaming

has been asked to post in 'Grounded' mode.
After games start to release using that new tech we will know... mesh shaders + VRS tier 2 + SFS benchmarks give me reason to think the gap will widen
For games available on both consoles it surely won't happen, as most studios won't make two different versions of one game to release at the same time

Also, PS5 has its own things that will keep it competing on the performance front

The customized GE, I/O, cache scrubbers (fewer cache misses), and the 12-channel SSD with 8 priority levels
 

assurdum

Banned
After games start to release using that new tech we will know... mesh shaders + VRS tier 2 + SFS benchmarks give me reason to think the gap will widen
Man, what a rude awakening is waiting for you. The more the engines push the consoles, the closer the native resolutions will be between the two (if they don't use the same one outright). Prepare yourself. Just look at the 30 FPS modes in the games already released.
 
Last edited:

MonarchJT

Banned
So why does Nvidia point more to DLSS tech than to higher native pixel counts for GPU performance? Now I want to hear how low you can go.
Wtf... exactly because finding a way to push fewer pixels (with the resolution then improved through DLSS) saves a lot of performance that can be spent elsewhere. There are billions of benchmarks around, educate yourself.
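The pixel math behind that argument is straightforward (illustrative only):

```python
# Pixels actually shaded when rendering at a lower internal resolution and
# upscaling (DLSS-style) versus rendering natively at 4K.
native_4k = 3840 * 2160        # ~8.29 million pixels
internal_1440p = 2560 * 1440   # ~3.69 million pixels
internal_1080p = 1920 * 1080   # ~2.07 million pixels

print(f"1440p -> 4K shades {internal_1440p / native_4k:.0%} of the pixels")  # ~44%
print(f"1080p -> 4K shades {internal_1080p / native_4k:.0%} of the pixels")  # 25%
```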
 
Last edited:

MonarchJT

Banned
Man, what a rude awakening is waiting for you. The more the engines push the consoles, the closer the native resolutions will be between the two (if they don't use the same one outright). Prepare yourself. Just look at the 30 FPS modes in the games already released.
Lol... this is what fanboys like you are hoping for... and I think the marketing has worked great on you. If there are substantial differences in resolution already in the first year, I don't know how this situation would change... if not get worse.
And I'm only being this rude because you called me an "idiot" without even knowing who I am or what I do in life, and without getting a ban for it.
 
Last edited:

Hobbygaming

has been asked to post in 'Grounded' mode.
Lol... this is what fanboys like you are hoping for... and I think the marketing has worked great on you. If there are substantial differences in resolution already in the first year, I don't know how this situation would change... if not get worse.
And I'm only being this rude because you called me an "idiot" without even knowing who I am or what I do in life, and without getting a ban for it.
That's the thing, resolutions and framerates have been similar up to this point. There haven't been many stark differences. If anything, the PS5 has proved it can have some advantages in some areas.
 
Last edited:

MonarchJT

Banned
That's the thing, resolutions and framerates have been similar up to this point. There haven't been many stark differences. If anything, the PS5 has proved it can have some advantages in some areas.
They are not similar... this is the point... there are differences... substantial differences in resolution while maintaining practically comparable framerates.
This can only go one way... and it's worse for the weaker machine.
It's not about brand, it's about hardware.
 
Last edited:

MonarchJT

Banned
For games available on both consoles it surely won't happen, as most studios won't make two different versions of one game to release at the same time

Also, PS5 has its own things that will keep it competing on the performance front

The customized GE, I/O, cache scrubbers (fewer cache misses), and the 12-channel SSD with 8 priority levels
Yes, all these customized things get the console pushing 0.1-0.2% more FPS at a lower resolution. Mesmerizing.
 
Last edited:

Hobbygaming

has been asked to post in 'Grounded' mode.
They are not similar... this is the point... there are differences... substantial differences in resolution while maintaining practically comparable framerates.
This can only go one way... and it's worse for the weaker machine.
It's not about brand, it's about hardware.
Is someone really going to care that one game looks slightly better when zoomed in 400%?

It really won't matter to most, and upscaling tech is improving constantly
 

rnlval

Member
Why are you talking about RT when his example didn't mention RT at all? It just looks like you missed the point below completely.

Hypothesis: Series X has 20% more pure power than PS5, so Series X should perform ~20% better in identical scenarios.

Test scenario(s): same game, same resolution, same graphical fidelity level

Results: Series X and PS5 perform almost identically in some of these scenarios


That is the point he was making, nothing about RT. It is just weird that you even bring that up when it is not mentioned.


The interesting thing is: when one system has more theoretical performance but they perform similarly in the same kinds of situations, why is that?

Is it because the PS5's faster clocks allow higher utilization of the theoretical performance maximum, or something else?

And if it is that, then does Series X have more future potential for devs to get more out of the system, assuming the PS5 as a system is easier to nearly max out?
[screenshot]

You're blind.


I counter with the following screenshot, which shows larger-scale ray-traced reflections.
[screenshot]
 
Last edited:

rnlval

Member
That's the thing: XSX doesn't have '20% more pure power' or 'power (as a whole)'. It has 18% more compute power (along with texel fill rate) than PS5 while actually being in deficit by around 20% in other GPU 'power' metrics tied to fixed-function units due to the frequency difference. Thus expecting a consistent 20% difference in resolution or FPS is illogical to begin with.

And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor PS5's 'deep' design more. There was also a post by Matt Hargett on Twitter suggesting that optimized code (with a higher cache hit rate) will benefit PS5's faster cache subsystem more. Furthermore, I really don't think that PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.
Arguing against XSX having "20% more pure power" is flawed when increasing the CU count also scales texel and ray-tracing processing via the compute path. Mesh shaders / NGGP are processed via shaders, which enables geometry processing to scale with increased CU count.

XSX's 52 CUs include more total SRAM storage (wavefront queues, registers, L0 cache, local data share, shader instruction cache), and the 320-bit bus leads to a 5 MB L2 cache configuration.

[image]


Both XSX and RTX Ampere GPUs have a higher bias towards the compute engine path.
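For reference, the raw bandwidth figures implied by those bus widths (public specs, GDDR6 at 14 Gbps per pin):

```python
# Raw GDDR6 bandwidth = bus width x per-pin data rate.
def bandwidth_gb_s(bus_bits, gbps_per_pin=14):
    return bus_bits * gbps_per_pin / 8  # GB/s

print(f"PS5 (256-bit):           {bandwidth_gb_s(256):.0f} GB/s")  # 448 GB/s
print(f"XSX fast pool (320-bit): {bandwidth_gb_s(320):.0f} GB/s")  # 560 GB/s (10 GB)
print(f"XSX slow pool (192-bit): {bandwidth_gb_s(192):.0f} GB/s")  # 336 GB/s (6 GB)
```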
 
Last edited:

rnlval

Member
In the future devs will learn to better use both machines, not only Xbox. Xbox has mesh shaders and VRS (already in some games) and other stuff. By the way, SFS will not improve performance, but whatever.

On PS5, devs will also learn to use primitive shaders and the custom I/O hardware (which will actually improve framerate in I/O-heavy games, and we know games are more and more I/O limited); maybe they will learn to use Tempest to offload some CPU tasks, and the ID Buffer could help with CBR or TAA. All those things are real custom hardware, not exclusive API features like "DirectX 12 Ultimate".

By the way, for now most games are cross-gen, so they won't usually use the PS5-specific stuff but can actually use some Xbox Series-specific stuff (like VRS) due to the nature of the Xbox APIs.
FYI, PS5's Tempest CU (an AMD DSP based on a compute-shader-only CU design) has about 100 GFLOPS.
 
Last edited:
Thanks to AMD putting defective PS5 SoCs on the market, people can run tests on them.

Remember that big controversy about the FPU? Everyone can see that Sony changed something inside the Zen 2 FPU, but the question remained: were those changes for better or worse?
Well, it looks like it was for the worse (IMO):




But why would they do this if it was NOT a matter of space (there's unused space right there beside the FPUs)?

 

assurdum

Banned
Thanks to AMD putting defective PS5 SoCs on the market, people can run tests on them.

Remember that big controversy about the FPU? Everyone can see that Sony changed something inside the Zen 2 FPU, but the question remained: were those changes for better or worse?
Well, it looks like it was for the worse (IMO):




But why would they do this if it was NOT a matter of space (there's unused space right there beside the FPUs)?


Honest question: can you point me to a game on PS5 which performs worse in CPU-limited scenarios compared to the XSX (if I'm not wrong, the XSX doesn't have such a cut)? Because from what we have seen so far, the CPU modification on PS5 seems far from being for the 'worse'.
 
Last edited:

Great Hair

Banned
Thanks to AMD putting defective PS5 SoCs on the market, people can run tests on them.

Remember that big controversy about the FPU? Everyone can see that Sony changed something inside the Zen 2 FPU, but the question remained: were those changes for better or worse?
Well, it looks like it was for the worse (IMO):




But why would they do this if it was NOT a matter of space (there's unused space right there beside the FPUs)?


"Who cares, have you seen the Vanguard yet or me with sexy PS5 graphics?!"
 
Honest question: can you point me to a game on PS5 which performs worse in CPU-limited scenarios compared to the XSX (if I'm not wrong, the XSX doesn't have such a cut)? Because from what we have seen so far, the CPU modification on PS5 seems far from being for the 'worse'.

That's the real mystery: how can hardware with FEWER features perform better?
One possibility is that it's just brute-forcing while the missing features aren't being properly used by the developers, meaning that when they are, the SeX will step it up. :messenger_pensive:
 

assurdum

Banned
That's the real mystery: how can hardware with FEWER features perform better?
One possibility is that it's just brute-forcing while the missing features aren't being properly used by the developers, meaning that when they are, the SeX will step it up. :messenger_pensive:
It's not a mystery. Some CPU parts on a console are unnecessary. It's not a PC. If I'm not wrong, even Cerny said that in Road to PS5.
 
Last edited:
Seems people are still struggling to comprehend the benefits of the "Magic SSD" when it comes to RAM utilisation and the effect it has on graphics. Here are some quotes from third-party and first-party developers on this subject.

"So Nanite enabled the artists to build a scene with geometric complexity that would have been impossible before. There are tens of billions of triangles in that scene and we simply couldn't have them all in memory at once, so what we end up doing is streaming in triangles as the camera is moving through the environment, and the I/O capabilities of the PS5 are one of the key hardware features that enable us to achieve that level of realism."

“Rendering micropolygons resulting from a 20 billion-polygon scene is hard enough. But actually being able to get that data into memory is a critical challenge, and as a result of the years of discussions and efforts leading up to that, it was a perfect opportunity to partner [with Sony] to show that effort finally coming to fruition with pixels on the screen.”

Nick Penwarden, VP of Engineering at Epic Games, on the UE5 demo.

"You could render a version of this [demo on a system with an HDD], it would just be a lot lower detail."

Tim Sweeney, on the UE5 demo and SSD vs HDD.


This is one of the most popular engines among both third- and first-party developers; that says enough, but let's keep going...

"It absolutely affects the amount of data we're able to push... we can load chunks right as you turn the corner, not halfway down the hallway. Which basically allows us to say we need less stuff in memory at all times, so we can take the areas we're pushing and up the detail and up the textures, because they can all come in on time before the player gets there."


Bluepoint developer on SSD streaming, via a DF interview.

There's plenty more where those came from.
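To put those streaming claims into a per-frame budget (rough arithmetic using Sony's stated 5.5 GB/s raw and ~8-9 GB/s typical compressed figures):

```python
# Roughly how much data the PS5 I/O path could deliver per rendered frame.
raw_gb_s = 5.5          # stated raw SSD throughput
compressed_gb_s = 8.5   # midpoint of the "typically 8-9 GB/s" compressed claim

for fps in (30, 60):
    raw_mb = raw_gb_s * 1000 / fps
    comp_mb = compressed_gb_s * 1000 / fps
    print(f"{fps} fps: ~{raw_mb:.0f} MB raw / ~{comp_mb:.0f} MB decompressed per frame")
```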
 
Last edited:
It's not a mystery. Some CPU parts on a console are unnecessary. It's not a PC. If I'm not wrong, even Cerny said that in Road to PS5.

I know, but in this case it's hard to make sense of.
The FPU still retains the Zen 2 FPU capabilities; it's 256-bit AVX capable. But then Cerny crippled it to only do half the work per clock? Why? What is the benefit? Why was this work needed? Does he mean that it would be bad if he had not limited the FPU? Why exactly, and how did they get to this conclusion? What did the system gain by doing this?
Me, and I believe many others, wish to hear more specific answers.
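For a sense of what "half the work per clock" would mean in peak numbers, here is a rough sketch; note the halving itself is exactly the unconfirmed claim being argued about, and the clock/FLOP figures are the usual textbook Zen 2 values:

```python
# Peak FP32 throughput of 8 Zen 2 cores at 3.5 GHz.
# Full-rate Zen 2: 2x 256-bit FMA pipes = 32 FP32 FLOPs per core per clock.
# Hypothetical half-rate 256-bit implementation: 16 FLOPs per core per clock.
cores, clk_ghz = 8, 3.5
full_rate = cores * 32 * clk_ghz   # ~896 GFLOPS
half_rate = cores * 16 * clk_ghz   # ~448 GFLOPS
print(f"Full-rate 256-bit: ~{full_rate:.0f} GFLOPS, half-rate: ~{half_rate:.0f} GFLOPS")
```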
 

MonarchJT

Banned
I know, but in this case it's hard to make sense of.
The FPU still retains the Zen 2 FPU capabilities; it's 256-bit AVX capable. But then Cerny crippled it to only do half the work per clock? Why? What is the benefit? Why was this work needed? Does he mean that it would be bad if he had not limited the FPU? Why exactly, and how did they get to this conclusion? What did the system gain by doing this?
Me, and I believe many others, wish to hear more specific answers.
I think 256-bit AVX is so fucking power hungry and stressful for the CPU. Cerny talked about how the PS5 downclocks itself based on workload and not on temperature. I think it's a way to lighten the power consumption of the console and also avoid downclocking. It's the only thing that comes to my mind.
 
Last edited:

twilo99

Member
That's the real mystery: how can hardware with FEWER features perform better?
One possibility is that it's just brute-forcing while the missing features aren't being properly used by the developers, meaning that when they are, the SeX will step it up. :messenger_pensive:

Well, even if that were the case, there is no way to know what 3rd-party developers are actually doing... a 3rd-party developer can decide to never use the full potential of either system.

If you want to look at what these things are capable of, you have to look at what 1st-party studios can do... unfortunately.
 
Last edited:
I think 256-bit AVX is so fucking power hungry and stressful for the CPU. Cerny talked about how the PS5 downclocks itself based on workload and not on temperature. I think it's a way to lighten the power consumption of the console and also avoid downclocking. It's the only thing that comes to my mind.

Zen 2 runs 256-bit AVX2 at full clock, and the arch doesn't use much energy anyway, so I tend to think that this was unnecessary for protecting the GPU clock.
I think that Cerny used other reasoning to make this decision, like having plenty of data showing that developers aren't and won't be using AVX2 that much, and that this will not hurt CPU performance in the long run. I disagree with this; I feel that people minimize the CPU side too much and put all the weight of the game on the GPU.
 
Last edited:
Zen 2 runs 256-bit AVX2 at full clock, and the arch doesn't use much energy anyway, so I tend to think that this was unnecessary for protecting the GPU clock.
I think that Cerny used other reasoning to make this decision, like having plenty of data showing that developers aren't and won't be using AVX2 that much, and that this will not hurt CPU performance in the long run. I disagree with this; I feel that people minimize the CPU side too much and put all the weight of the game on the GPU.
I personally believe Cerny knows more than us about this subject and made the best decision he could have done at that point in time. Only time will tell if he's correct.
 

IntentionalPun

Ask me about my wife's perfect butthole
Seems people are still struggling to comprehend the benefits of the "Magic SSD" when it comes to RAM utilisation and the effect it has on graphics. Here are some quotes from third-party and first-party developers on this subject.

"So Nanite enabled the artists to build a scene with geometric complexity that would have been impossible before. There are tens of billions of triangles in that scene and we simply couldn't have them all in memory at once, so what we end up doing is streaming in triangles as the camera is moving through the environment, and the I/O capabilities of the PS5 are one of the key hardware features that enable us to achieve that level of realism."

“Rendering micropolygons resulting from a 20 billion-polygon scene is hard enough. But actually being able to get that data into memory is a critical challenge, and as a result of the years of discussions and efforts leading up to that, it was a perfect opportunity to partner [with Sony] to show that effort finally coming to fruition with pixels on the screen.”

Nick Penwarden, VP of Engineering at Epic Games, on the UE5 demo.

"You could render a version of this [demo on a system with an HDD], it would just be a lot lower detail."

Tim Sweeney, on the UE5 demo and SSD vs HDD.



The PS5 SSD is awesome, and the I/O complex in general... but all of this UE5 stuff was essentially marketing fluff... those statements are essentially lies, really, as nobody is pushing 20 billion polygons around. Nanite optimizes that down to ~20 million, which is a lot, but the whole thing was a lot of BS'ing on Epic's part.

The entire point of Nanite is that it is extremely memory efficient... more detail with smaller-sized models.

Now textures? PS5 can definitely throw more texture data around.
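The rough arithmetic behind that point (illustrative):

```python
# Nanite aims for roughly pixel-sized triangles, so the triangle count that
# is actually rendered tracks the output resolution, not the source assets.
pixels_4k = 3840 * 2160     # ~8.3 million pixels
drawn_triangles = 20e6      # the ~20 million figure mentioned above
source_triangles = 20e9     # the "20 billion polygon scene" marketing figure

print(f"Drawn triangles per 4K pixel: ~{drawn_triangles / pixels_4k:.1f}")              # ~2.4
print(f"Share of source geometry actually rendered: {drawn_triangles / source_triangles:.1%}")  # 0.1%
```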
 

rnlval

Member
Zen 2 runs 256-bit AVX2 at full clock, and the arch doesn't use much energy anyway, so I tend to think that this was unnecessary for protecting the GPU clock.
I think that Cerny used other reasoning to make this decision, like having plenty of data showing that developers aren't and won't be using AVX2 that much, and that this will not hurt CPU performance in the long run. I disagree with this; I feel that people minimize the CPU side too much and put all the weight of the game on the GPU.
[image]


AVX units have a higher contribution to the thermal density.

Honest question: can you point me to a game on PS5 which performs worse in CPU-limited scenarios compared to the XSX (if I'm not wrong, the XSX doesn't have such a cut)? Because from what we have seen so far, the CPU modification on PS5 seems far from being for the 'worse'.
Most released games to date are cross-generation titles that still factor in Jaguar's 128-bit AVX v1 subset or 128-bit SSE 1/2/3/4. An Intel Sandy Bridge Core i7-2600K with 256-bit AVX v1 can still play 2021-era games.

Full 256-bit AVX v2 usage is useful for larger-scale PhysX-style interactivity, and it wouldn't work on Jaguar's 128-bit AVX v1 subset. Crackdown 3's wide-scale interactive physics are powered by Intel Xeon Haswell-E/Broadwell-E-era 256-bit AVX v2.
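For scale, the commonly cited peak vector-throughput figures for the two CPU generations look like this (napkin math, assuming Jaguar's 8 FLOPs per core per clock with no FMA, and full-rate Zen 2 at 32):

```python
# Peak FP32 vector throughput, commonly cited figures.
jaguar_gflops = 8 * 1.6 * 8    # PS4-era Jaguar: 8 cores @ 1.6 GHz -> ~102 GFLOPS
zen2_gflops = 8 * 3.5 * 32     # full-rate Zen 2: 8 cores @ 3.5 GHz -> ~896 GFLOPS
print(f"Jaguar: ~{jaguar_gflops:.0f} GFLOPS, Zen 2 (full AVX2 rate): ~{zen2_gflops:.0f} GFLOPS")
```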
 
Last edited:

onesvenus

Member
I know, but in this case it's hard to make sense of.
The FPU still retains the Zen 2 FPU capabilities; it's 256-bit AVX capable. But then Cerny crippled it to only do half the work per clock? Why? What is the benefit? Why was this work needed? Does he mean that it would be bad if he had not limited the FPU? Why exactly, and how did they get to this conclusion? What did the system gain by doing this?
Me, and I believe many others, wish to hear more specific answers.
As rnlval pointed out, FPUs are the hottest components in a CPU, so I'm sure that's your reason there.
 

PaintTinJr

Member
Thanks to AMD putting defective PS5 SoCs on the market, people can run tests on them.

Remember that big controversy about the FPU? Everyone can see that Sony changed something inside the Zen 2 FPU, but the question remained: were those changes for better or worse?
Well, it looks like it was for the worse (IMO):




But why would they do this if it was NOT a matter of space (there's unused space right there beside the FPUs)?


Assuming that the real PS5 doesn't have a different setup to the nth-level defect boards AMD is selling (I personally don't think those boards are representative, and they were probably rejected by Sony for more than just PS5-specific defects), the constant boost-clocking situation with the PS5 required them to evenly distribute heat dissipation across the entire SoC, IIRC, going by what Cerny said, so it is probably in part down to that.

My other theory would be that they only needed that many FPUs for some CPU-based RT, because unless you are doing deep algorithms that provide incoherent RT effects (with updates spread over multiple frame intervals) that allow all FPUs to be used heavily at the same time, there's only a need for as much hardware as you can exploit for an effect in the 33.3 ms, 16.6 ms or 8.3 ms you have to do the compute and composite the result into the GPU's backbuffer before flipping it to the frontbuffer.
 

onQ123

Member
Why do people say PS5 has 64 ROPs when the die shot shows 72? Is this just random guessing that they are only using 64 of them, or is there a real source saying they only use 64?
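For what it's worth, if 64 of the 72 ROPs visible on the die are active (spares presumably there for yield, which is an assumption, not something the die shot confirms), the peak pixel fill rates would compare like this:

```python
# Peak pixel fill rate = active ROPs x clock (napkin math).
ps5_fill = 64 * 2.233   # ~142.9 Gpixels/s at PS5's max clock
xsx_fill = 64 * 1.825   # ~116.8 Gpixels/s at XSX's fixed clock
print(f"PS5: {ps5_fill:.1f} Gpix/s, XSX: {xsx_fill:.1f} Gpix/s")
```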
 

Boglin

Member
Why do people say PS5 has 64 ROPs when the die shot shows 72? Is this just random guessing that they are only using 64 of them, or is there a real source saying they only use 64?
That's odd. I wonder if it's a labeling mistake or if people really did accidentally overlook it.
 

Sosokrates

Report me if I continue to console war
That's the thing: XSX doesn't have '20% more pure power' or 'power (as a whole)'. It has 18% more compute power (along with texel fill rate) than PS5 while actually being in deficit by around 20% in other GPU 'power' metrics tied to fixed-function units due to the frequency difference. Thus expecting a consistent 20% difference in resolution or FPS is illogical to begin with.

And going by Cerny's statement that "when the triangles are small it's more difficult to feed the CUs with useful work", future next-gen titles with more complex geometry can actually favor PS5's 'deep' design more. There was also a post by Matt Hargett on Twitter suggesting that optimized code (with a higher cache hit rate) will benefit PS5's faster cache subsystem more. Furthermore, I really don't think that PS5's Geometry Engine and I/O complex are even remotely close to being maxed out.

This 18% does not take into account the fluctuations of the PS5's GPU clock speed. The XSX, having a static 1825 MHz GPU clock, will have some advantages beyond the raw compute advantage.
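A quick illustration of how much a downclock would move the compute gap (the reduced PS5 clocks below are hypothetical; Sony hasn't published a worst-case figure):

```python
# XSX compute is fixed; PS5 compute scales with whatever clock it sustains.
xsx_tf = 52 * 128 * 1.825 / 1000           # 52 CUs x 128 FLOPs/clock -> ~12.15 TF
for clk in (2.233, 2.1, 2.0):              # max clock plus two hypothetical dips
    ps5_tf = 36 * 128 * clk / 1000
    print(f"PS5 @ {clk:.3f} GHz: {ps5_tf:.2f} TF -> XSX ahead by {xsx_tf / ps5_tf - 1:.0%}")
```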
 