
AMD Oberon PlayStation 5 SoC Die Delidded and Pictured

Of course L3 CPU cache is shared. It's also shared on the PS4 CPU. The problem is the high latency if one CCX wants to access the other CCX's L3 cache.

When people talk about Zen 3's shared L3 cache, they mean low latency for all L3 accesses from any CCX.
 

Zathalus

Member
Of course L3 CPU cache is shared. It's also shared on the PS4 CPU. The problem is the high latency if one CCX wants to access the other CCX's L3 cache.

When people talk about Zen 3's shared L3 cache, they mean low latency for all L3 accesses from any CCX.
Yeah, but the die shots we have already confirm that is not the case. Same cache setup as XSX/S and regular Zen 2.

 

jroc74

Phone reception is more important to me than human rights
There's no L3 cache for the GPU.

It's ok that the PS5 is a bit less powerful than the XSX, you can stop looking for "secret sauce".

I'm shutting down console war nonsense. PS5 fanboys are doing the exact same nonsense Xbox fanboys did when it turned out the Xbox One was a less powerful system than the PS4 last time around. There are no secret sauces yet to be found; we have known the full specs of both systems for months/years at this point.
Please, have several seats.

...you are the one that brought up MS and XSX....

Christ...

Forget your console war nonsense...I'm just still shocked 'the PS5 isn't RDNA 2' is still a thing...it's almost Sept 2021.

Ppl need to let that dumb ass narrative go....
 
Last edited:

Tripolygon

Banned
Of course L3 CPU cache is shared. It's also shared on the PS4 CPU. The problem is the high latency if one CCX wants to access the other CCX's L3 cache.

When people talk about Zen 3's shared L3 cache, they mean low latency for all L3 accesses from any CCX.
The CPU has its own local L3 cache for each CCX. What they are proposing here is an L3 cache shared across the whole SoC.

My uneducated guess at this point is that it's the cache for the I/O block and not necessarily an L3 cache.
 
I did this:

[annotated comparison of the PS5 and XSX die shots]


No one would be discussing this if Sony had been open and told us the details, but anyway, look carefully.
On the SeX die shot you can see some circuitry between the memory controllers, right? Now look at the PS5 die shot: can't you see that BETWEEN that circuitry that exists on the SeX there's something more?


EDIT: what the fuck are people talking about? CPU L3 "shared across the whole SoC"? "Global L2"? FUCK THOSE PEOPLE, we are wasting too many words on confusion and insanity.
 
Last edited:

Md Ray

Member
Of course L3 CPU cache is shared. It's also shared on the PS4 CPU. The problem is the high latency if one CCX wants to access the other CCX's L3 cache.

When people talk about Zen 3's shared L3 cache, they mean low latency for all L3 accesses from any CCX.
I don't think PS4 has a shared cache. There's also no L3 cache on PS4 CPU. It tops out at L2, and each cluster (4 Jaguar cores per cluster) has access to 2MB of L2$, for a total of 4MB (2x 2MB) for 8 cores.
 
Last edited:
I don't think PS4 has a shared cache. There's also no L3 cache on PS4 CPU. It tops out at L2, and each cluster (4 Jaguar cores per cluster) has access to 2MB of L2$, for a total of 4MB (2x 2MB) for 8 cores.
Yes, you are right, it's L2 on PS4. I meant the shared cache (so L2) on Jaguar, which works on the same principle as the shared L3 cache on the Zen 2 powered consoles. Each quad-core Jaguar cluster has access to the other cluster's L2 cache, albeit with much higher latency.

But logically the cache is actually shared. If you don't care about latency, you can program the CPU as if all the L2 (PS4) cache were shared between the 7 CPU cores, but of course you shouldn't do it.
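To make the point concrete, here's a tiny Python sketch of the layout described above. The cluster counts and cache sizes come from the posts; the cycle counts are purely illustrative assumptions, not measured figures.

```python
# PS4 Jaguar CPU layout as described above: 2 clusters x 4 cores, 2 MB of L2 per cluster.
# The latency numbers are illustrative assumptions only.
CORES_PER_CLUSTER = 4
L2_PER_CLUSTER_MB = 2
TOTAL_L2_MB = 2 * L2_PER_CLUSTER_MB      # 4 MB across the whole CPU

LOCAL_L2_CYCLES = 25                     # assumed: hit in your own cluster's L2
REMOTE_L2_CYCLES = 190                   # assumed: line has to come from the other cluster

def l2_latency(core_id: int, home_cluster: int) -> int:
    """Assumed cycles for core `core_id` to read a line homed in `home_cluster`'s L2."""
    own_cluster = core_id // CORES_PER_CLUSTER
    return LOCAL_L2_CYCLES if own_cluster == home_cluster else REMOTE_L2_CYCLES

print(l2_latency(0, 0))   # core 0 hitting its own cluster's L2   -> 25
print(l2_latency(0, 1))   # core 0 pulling from the other cluster -> 190
```

Same idea as the Zen 2 consoles, just with L2 instead of L3: the capacity is technically reachable from every core, but you really don't want to lean on the remote half.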

 
The CPU has its own local L3 cache for each CCX. What they are proposing here is an L3 cache shared across the whole SoC.

My uneducated guess at this point is that it's the cache for the I/O block and not necessarily an L3 cache.
That could be possible, but some people seem to want to argue it's Infinity Cache for the GPU, which just isn't the case.

Several people, including NX Gaming, seem pretty certain it's an L3$ shared between the two CPU CCX clusters, which would make it a customization lifted from the Zen 3 roadmap.

I did this:

[annotated comparison of the PS5 and XSX die shots]


No one would be discussing this if Sony had been open and told us the details, but anyway, look carefully.
On the SeX die shot you can see some circuitry between the memory controllers, right? Now look at the PS5 die shot: can't you see that BETWEEN that circuitry that exists on the SeX there's something more?


EDIT: what the fuck are people talking about? CPU L3 "shared across the whole SoC"? "Global L2"? FUCK THOSE PEOPLE, we are wasting too many words on confusion and insanity.

It could still just be a slightly different configuration of the memory controllers. PS5 has a different bus configuration than Series X because its bus is narrower (256-bit vs. 320-bit). Also, according to the Infinity Cache block diagrams available, you wouldn't partition the cache between the memory controllers:

[RDNA 2 GPU block diagram showing Infinity Cache placement]

On that PS5 die shot, there's no block of cache sitting above the memory controllers, between them and a fabric interconnect to the rest of the GPU logic, like the one seen above.

I'll try to see if there are better reference shots (and hopefully something similar to this for a PC RDNA 2 GPU die around somewhere) for comparison, but just going off what you have linked and this image of an RDNA 2 GPU with Infinity Cache, there's no similarity between it and PS5 to suggest the latter has any IC.
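For what it's worth, here's the raw bandwidth those bus widths imply, assuming the 14 Gbps (14000 MT/s) GDDR6 both consoles use; a quick back-of-the-envelope Python check, nothing more:

```python
# GDDR6 bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(gddr6_bandwidth_gbs(256, 14))   # PS5: 256-bit -> 448.0 GB/s
print(gddr6_bandwidth_gbs(320, 14))   # XSX: 320-bit -> 560.0 GB/s (to its 10 GB fast pool)
```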
 
Last edited:

Garani

Member
Can't be salty when I own both because I buy everything. :messenger_tears_of_joy:

Keanu Reeves Thank You GIF by NETFLIX

And you come here to bitch about how XSX is better than PS5 when no one, and I mean NO ONE, even mentioned the XSX or even debated about which SoC is better?

You and X Xyphie need to cool down and just get the perspective right: there is a better dieshot of the PS5 SoC and people talk about what they see. That's it. There is no hidden agenda. No console warring. Just a plain discussion about a picture.
 
That could be possible but some people seem to want to argue it's Infinity Cache for the GPU, which just isn't the case.

Several people include NX Gaming seem pretty certain it's a L3$ shared between the two CPU CCX clusters, which would make it a customization lifted from the Zen 3 roadmap.

NO!
What people mean by "shared" is that the L3 is made of smaller slices that are connected together. Each core can access the slice nearest to it very fast, with minimal latency, and the distant slices at much lower speed and higher latency. Zen 3 is "unified": it's a single monolithic slice, and there's virtually no difference in latency for the cores to access all of it. People were wishing that Sony had worked closely enough with AMD to pick features from their future roadmap to improve the current products available when building the PS5, but this didn't happen; PS5's L3 cache is "partitioned" and "shared" like the old Zen 2 L3.

People here are really confused and mixing up CPU cache and GPU cache and misinterpreting meanings.
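If it helps untangle the confusion, here's a rough Python model of exactly what's being described: Zen 2's two-CCX L3 versus Zen 3's unified pool. The core count per CCX is the public figure; the cycle numbers are just illustrative assumptions.

```python
# Zen 2: 8 cores split into two 4-core CCXs, each with its own L3 slice;
# reaching the other CCX's slice means a trip over the fabric.
# Zen 3: all 8 cores see one unified L3 at roughly flat latency.
# Cycle counts below are illustrative assumptions, not measurements.
ZEN2_CORES_PER_CCX = 4

def zen2_l3_latency(core: int, line_home_ccx: int) -> int:
    local_ccx = core // ZEN2_CORES_PER_CCX
    return 40 if local_ccx == line_home_ccx else 110   # remote hit crosses the fabric

def zen3_l3_latency(core: int, line_home_ccx: int) -> int:
    return 46                                          # single pool: same cost for every core

for core in (0, 7):
    print(core, zen2_l3_latency(core, 0), zen3_l3_latency(core, 0))
# core 0: 40 vs 46, core 7: 110 vs 46 -- the "unified" win is latency, not extra capacity
```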
 
Last edited:
And you come here to bitch about how XSX is better than PS5 when no one, and I mean NO ONE, even mentioned the XSX or even debated about which SoC is better?

You and X Xyphie need to cool down and just get the perspective right: there is a better dieshot of the PS5 SoC and people talk about what they see. That's it. There is no hidden agenda. No console warring. Just a plain discussion about a picture.

Move on buddy, I responded to a false/misleading statement. Why so in your feelings? I'm barely paying attention to this thread anymore after I said what I said. I only respond now because of how upset you seem. And the statement I responded to clearly referenced series x by way of a direct comparison on framerates. This is now my final response in this thread. Enjoy.

Leaving See Ya GIF by MOODMAN
 
Move on buddy, I responded to a false/misleading statement. Why so in your feelings? I'm barely paying attention to this thread anymore after I said what I said. I only respond now because of how upset you seem. And the statement I responded to clearly referenced series x by way of a direct comparison on framerates. This is now my final response in this thread. Enjoy.

Leaving See Ya GIF by MOODMAN

Except the statement you replied to wasn't false or misleading at all. It was spot on.

But console warriors will continue cherry-picking data to fit their narrative/agenda.
 
Last edited:

Dream-Knife

Banned
"The GDDR6X memory controller and PHY (physical layer) interface can be seen on the edge of the SoC, which is where the data enters and exits the processor. Looking closely, you can see the L3 cache there as well."

They are clearly talking about L3 cache between the GDDR6X controller and PHY on the edge of the SoC.
Not the CPU's L3 cache, which is more in the middle left of the chip, far away from the GDDR6X and PHY lol
GDDR6X? RDNA 2 uses GDDR6, not X.
 

Lysandros

Member
I must say it: I think that using the Xbox One/PS4 example as an analogy for the PS5/XSX situation is laughable. We have plenty of titles with slightly higher average resolutions, or higher framerates at the same resolution, on PS5. Was this the case with the Xbox One? And contrary to the majority, I am struggling to call a console 'more powerful' when every single component of its GPU (from the CUs to the rasterizers, caches, geometry engine, command processor, ACEs, etc.) is unquestionably ~20% slower than its counterpart. I am refusing to ignore hard facts which put PS5's GPU ahead of XSX's in some metrics relevant to game performance. One thing that I can easily say/accept is that XSX has slightly more 'compute' power and game engines are 'generally' compute bound.
 
Last edited:

Dream-Knife

Banned
I must say it: I think that using the Xbox One/PS4 example as an analogy for the PS5/XSX situation is laughable. We have plenty of titles with slightly higher average resolutions, or higher framerates at the same resolution, on PS5. Was this the case with the Xbox One? And contrary to the majority, I am struggling to call a console 'more powerful' when every single component of its GPU (from the CUs to the rasterizers, caches, geometry engine, command processor, ACEs, etc.) is unquestionably ~20% slower than its counterpart. I am refusing to ignore hard facts which put PS5's GPU ahead of XSX's in some metrics relevant to game performance. One thing that I can easily say/accept is that XSX has slightly more 'compute' power and game engines are 'generally' compute bound.
You know that higher clocks don't always mean more performance, right? It's actually pretty common for the weaker cards to have higher clocks within a given architecture. The 6600 XT, which is basically the PS5 with Infinity Cache, has a game clock of 2359 MHz, yet is much weaker than a 6900 XT, which has a game clock of 2015 MHz.
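The compute numbers make the same point. FP32 throughput is CUs x 64 lanes x 2 ops x clock; plugging in the game clocks quoted above and the public CU counts (32 for the 6600 XT, 80 for the 6900 XT), a quick sketch:

```python
# FP32 throughput = CUs x 64 shader lanes x 2 ops per clock (FMA) x clock.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000   # GFLOPS -> TFLOPS

print(round(fp32_tflops(32, 2.359), 2))   # RX 6600 XT at its 2359 MHz game clock -> ~9.66 TF
print(round(fp32_tflops(80, 2.015), 2))   # RX 6900 XT at its 2015 MHz game clock -> ~20.63 TF
```

So despite the lower clock, the 6900 XT has roughly twice the shader throughput; the clock difference is dwarfed by the CU count.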
 

rnlval

Member
Infinity Cache is actually configurable in size.
The 6800 has the big one, but AMD said the smaller GPUs will have less.

For example, the RX 6600 only has 32 MB of Infinity Cache.
The RX 6600 also has a reduced 128-bit bus with 32 MB of Infinity Cache.

RX 6800 has 256-bit bus with 128 MB Infinity Cache and 4 MB L2 cache.
RX 6700 has 192-bit bus with 96 MB Infinity Cache and 3 MB L2 cache.
RX 6600 has 128-bit bus with 32 MB Infinity Cache and 2 MB L2 cache.

"The GDDR6X memory controller and PHY (physical layer) interface can be seen on the edge of the SoC, which is where the data enters and exits the processor. Looking closely, you can see the L3 cache there as well."

They are clearly talking about L3 cache between the GDDR6X controller and PHY on the edge of the SoC.
Not the CPU's L3 cache, which is more in the middle left of the chip, far away from the GDDR6X and PHY lol
GDDR6X? That's wrong. PS5 has GDDR6-14000 MT/s.

 

FireFly

Member
I must say it: I think that using the Xbox One/PS4 example as an analogy for the PS5/XSX situation is laughable. We have plenty of titles with slightly higher average resolutions, or higher framerates at the same resolution, on PS5. Was this the case with the Xbox One? And contrary to the majority, I am struggling to call a console 'more powerful' when every single component of its GPU (from the CUs to the rasterizers, caches, geometry engine, command processor, ACEs, etc.) is unquestionably ~20% slower than its counterpart. I am refusing to ignore hard facts which put PS5's GPU ahead of XSX's in some metrics relevant to game performance. One thing that I can easily say/accept is that XSX has slightly more 'compute' power and game engines are 'generally' compute bound.
It's not just a compute advantage in favour of the XSX. It's also a texture rate, ray/triangle intersection rate and bandwidth advantage.
 

rnlval

Member
Of course L3 CPU cache is shared. It's also shared on the PS4 CPU. The problem is the high latency if one CCX wants to access the other CCX's L3 cache.

When people talk about Zen 3's shared L3 cache, they mean low latency for all L3 accesses from any CCX.
Sorry, that's wrong. PS4's Jaguar quad-core has L2 cache. NO L3 cache.
 

Lysandros

Member
You know that higher clocks don't always mean more performance, right? It's actually pretty common for the weaker cards to have higher clocks within a given architecture. The 6600 XT, which is basically the PS5 with Infinity Cache, has a game clock of 2359 MHz, yet is much weaker than a 6900 XT, which has a game clock of 2015 MHz.
The 6600 XT is a 10.6 TF card with 256 GB/s of RAM bandwidth along with 32 MB of IC and 64 ROPs. The 6900 XT is a 23 TF GPU with 512 GB/s of bandwidth, 128 MB of IC and 128 ROPs... You think this is even remotely comparable to the XSX/PS5 situation? I hope that's a joke.

Edit: Furthermore, contrary to the example cited, PS5 and XSX have exactly the same number of SEs (2 Shader Engines), which means the same number of fixed-function units (not CUs). These units run at ~20% higher frequency in PS5's case and are thus more capable.
 
Last edited:

rnlval

Member
Type? It is called L3 because it is a level higher than L2, which is higher than L1, which is higher than L0.
You start from the level closest to the core, and each new level of cache receives a number.
There's not that much difference between them in terms of how the silicon looks.

So L3, L2, L1 relates more to the place/level of the cache in the hierarchy.

AMD 4700S (PS5) APU Cinebench R15 and R20 benchmarks: the PS5's Zen 2 CPU is slower than the Ryzen 7 4750G Pro's Zen 2 CPU.

[Cinebench R15/R20 benchmark screenshot]


Real Zen 3 from an APU, i.e. the Ryzen 7 5700G:

[Ryzen 7 5700G Cinebench screenshots]


Where are the Zen 3 unified L3 cache results from the AMD 4700S APU (PS5)?
 

rnlval

Member
The 6600 XT is a 10.6 TF card with 256 GB/s of RAM bandwidth along with 32 MB of IC and 64 ROPs. The 6900 XT is a 23 TF GPU with 512 GB/s of bandwidth, 128 MB of IC and 128 ROPs... You think this is even remotely comparable to the XSX/PS5 situation? I hope that's a joke.

Edit: Furthermore, contrary to the example cited, PS5 and XSX have exactly the same number of SEs (2 Shader Engines), which means the same number of fixed-function units (not CUs). These units run at ~20% higher frequency in PS5's case and are thus more capable.
The XSX GPU has 5 MB of L2 cache, which is 25% more than the PS5's 4 MB of L2 cache.
 

Lysandros

Member
It's not just a compute advantage in favour of the XSX. It's also a texture rate, ray/triangle intersection rate and bandwidth advantage.
I didn't say that XSX's only advantage is compute, and I equally didn't mention every PS5 GPU advantage, like higher pixel fill rate, higher triangle throughput, and a higher amount of cache (at higher bandwidth) available per CU. It's just that XSX's compute advantage is, I think, the most relevant and most well-known one, being tied to the very famous TF number. I find the RAM bandwidth situation somewhat debatable with XSX's complex faster/slower pools, but I won't argue about that now.
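To put a number on the fill-rate point: both GPUs have 64 ROPs, so peak pixel fill rate scales directly with clock. Using the commonly cited clocks (up to 2.23 GHz for PS5, 1.825 GHz for XSX), a quick sketch:

```python
# Peak pixel fill rate = ROPs x clock; both consoles have 64 ROPs.
def gpixels_per_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

print(gpixels_per_s(64, 2.230))   # PS5 at up to 2.23 GHz -> ~142.7 Gpixels/s
print(gpixels_per_s(64, 1.825))   # XSX at 1.825 GHz      -> 116.8 Gpixels/s
```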
 

Inviusx

Member
Imagine for a moment the thousands of man-hours in R&D, design and manufacturing that went into such an intricate and beautiful piece of technology, and then getting 60 fps with drops at 1440p.
 

Loope

Member
Did we learn anything new from this?
Besides the fact that it might or might not have an L3 cache for the GPU?

Only that apparently 'xbots' has been replaced with 'xtards', but given the person that made the comment, he will soon have another name for Xbox fanboys.
 