
Next-Gen PS5 & XSX |OT| Console tEch threaD


Nowcry

Member
In that case bandwidth per TF would be a little higher on PS5.

It is not that simple; it depends on the caches, and a higher clock makes the caches work better.

The PS5's cache is possibly bigger than the XSX's, and with the cache scrubbers it will possibly get more hits, which could mean it needs a lot less bandwidth than the competition. And the most important thing remains: the XSX would have 16 CUs per Shader Array sharing caches between them, compared to 9 CUs per array on PS5, so PS5 should get a higher hit rate in any case.

That would also explain the UI being capable of showing your friend's shared screen, or the game guides included in the PS+ UI.

Until the 28th we will not know anything for certain, but going by 16 CUs per Shader Array vs 9 CUs per Shader Array, we can safely assume there will be more cache hits on PS5.
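To put some very rough numbers on that idea (purely illustrative; the request rate and hit rates below are assumptions, not real PS5/XSX figures):

```python
# Back-of-envelope sketch: only cache misses have to go out to GDDR6,
# so a higher hit rate directly lowers the DRAM bandwidth a GPU needs.
# All numbers are assumptions for illustration, not measured console data.

def dram_bandwidth_needed(request_rate_gbs: float, hit_rate: float) -> float:
    """DRAM traffic is the fraction of requests that miss in cache."""
    return request_rate_gbs * (1.0 - hit_rate)

demand = 600.0  # GB/s of raw GPU memory requests (assumed)

for label, hit_rate in [("16 CUs sharing an array cache (assumed 30% hits)", 0.30),
                        ("9 CUs sharing an array cache (assumed 40% hits)", 0.40)]:
    print(f"{label}: ~{dram_bandwidth_needed(demand, hit_rate):.0f} GB/s needed from DRAM")
```

With those made-up hit rates the narrower array would need roughly 60 GB/s less from DRAM, which is the kind of effect being argued here.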

And then there would be the problem of devs having to deal with mid-game system interruptions and figuring out how to handle them. That is not easy to do and it usually causes problems; devs do not like that, for sure.
 
Last edited:

DaGwaphics

Member
PC parts have many uses: some might excel at games, some at ray tracing, some at gaming at 1440p, some at 4K, some at server applications or ML. In 9 days we will find out what the specs are, how AMD markets them, and what they are good at.

True. Just from a marketing standpoint, I don't see an N21 product in the lineup that doesn't outperform all variants of the N22 lineup. If the N21 Lite gets released, I assume that the higher bandwidth and increased CU count allow it to outperform N22 by a measurable amount. At least that lineup would make sense: N21 > N21 Lite > N22 > N22 Lite, etc. If N22 beats N21 Lite, it seems like N22 derivatives should be able to fill out the lineup (unless N21 Lite is a mobile part or something like that). Nvidia has done some crazy binning with their parts at times, but even then they generally ensure that memory bandwidth and stream processors follow a trend that avoids confusion.
 

user1337

Member
Just got this email. Win an Xbox every day (luck pending).

(attached images of the email)
 

Bo_Hazem

Banned

Why did Sony coat the heatsink with "Silver"?

Liquid metal, like gallium, can attack certain metals. So they coated it with silver just around the APU area, IIRC:

(embedded video)
What would Gallium do to Aluminium (aka aluminum)?

(embedded video)
More on it:

(embedded video)
Another intelligent solution from Mr. Yasuhiro Ootori and Sony.
 
Last edited:

Lysandros

Member
It is not that simple; it depends on the caches, and a higher clock makes the caches work better.

The PS5's cache is possibly bigger than the XSX's, and with the cache scrubbers it will possibly get more hits, which could mean it needs a lot less bandwidth than the competition. And the most important thing remains: the XSX would have 16 CUs per Shader Array sharing caches between them, compared to 9 CUs per array on PS5, so PS5 should get a higher hit rate in any case.

That would also explain the UI being capable of showing your friend's shared screen, or the game guides included in the PS+ UI.

Until the 28th we will not know anything for certain, but going by 16 CUs per Shader Array vs 9 CUs per Shader Array, we can safely assume there will be more cache hits on PS5.
I know about all that, thanks for the reminder. :) I was talking about the RAM bandwidth only, going by your theoretical numbers and excluding cache bandwidth. Real-world bandwidth performance of the system as a whole would certainly be a different matter.
 

Ptarmiganx2

Member
Because we know the specs of both systems. We'll know in a couple of weeks when we compare games on both systems. I could be wrong of course, wouldn't be the first time.
You must have info the rest of us don't have access to. We still have no real details on the PS5 APU other than basic specs. Is the PS5 using Infinity cache?
 
Last edited:

Neo_game

Member
10GB will be accessible at 560GB/s and the remaining 6GB at 336GB/s.

In use, the XSX will have 10GB of 560GB/s memory, and 3.5GB of 336 GB/s memory, as 2.5GB is reserved by the OS:

As others have already mentioned, it can only access the faster pool at 560GB/s. So for games using 10GB or less the Xbox has an advantage, but if a game uses more than that there is going to be a bottleneck, and there is no way around it.
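For anyone wondering where those figures come from, here is the usual arithmetic, assuming the commonly reported layout of ten 14 Gbps GDDR6 chips on a 320-bit bus (six 2GB chips and four 1GB chips); treat the sketch as illustrative:

```python
# Sketch of the widely reported XSX memory layout: ten GDDR6 chips at
# 14 Gbps on 32-bit channels (56 GB/s each), six of them 2 GB, four 1 GB.

PER_CHIP_GBS = 14 * 32 / 8            # 14 Gbps * 32-bit channel = 56 GB/s
chips_gb = [2] * 6 + [1] * 4          # capacity of each chip in GB

# The first GB of every chip is interleaved across all ten channels.
fast_pool_gb = len(chips_gb)                              # 10 GB "GPU optimal"
fast_pool_gbs = PER_CHIP_GBS * len(chips_gb)              # 560 GB/s

# The extra GB exists only on the six 2 GB chips, so it spans six channels.
slow_pool_gb = sum(c - 1 for c in chips_gb)               # 6 GB "standard"
slow_pool_gbs = PER_CHIP_GBS * sum(1 for c in chips_gb if c == 2)  # 336 GB/s

print(f"{fast_pool_gb} GB @ {fast_pool_gbs:.0f} GB/s")
print(f"{slow_pool_gb} GB @ {slow_pool_gbs:.0f} GB/s "
      f"({slow_pool_gb - 2.5} GB left for games after the 2.5 GB OS reserve)")
```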
 
Last edited:
As others have already mentioned, it can only access the faster pool or the slower pool of memory at once, so for games using 10GB or less the Xbox has an advantage, but if a game uses more than that there is going to be a bottleneck, and there is no way around it.

I think the big question is whether games will need more than 10GB for next gen. But then maybe developers will just limit themselves to those 10GB so they don't have to deal with that bottleneck.
 

Esppiral

Member
All those deals are only for North America, aren't they? They always leave non-North Americans out... Europe deserves some love.
 
I think the big question is whether games will need more than 10GB for next gen. But then maybe developers will just limit themselves to those 10GB so they don't have to deal with that bottleneck.
I think the problem will be if a dev wants to use more than 10GB of VRAM on the XSX; it's kind of natural that you'd use the RAM with higher bandwidth for the GPU.
 

DaGwaphics

Member
As others have already mentioned, it can only access the faster pool or the slower pool of memory at once, so for games using 10GB or less the Xbox has an advantage, but if a game uses more than that there is going to be a bottleneck, and there is no way around it.

Just by the design of the system, I doubt a game gets released on XSX where more than 10GB is used for VRAM. Time will tell, but it just doesn't look like a workable solution; basically there is a hard limit there. Bandwidth from the slow pool should definitely not be a bottleneck: there is still a lot more bandwidth there than a CPU or sound chip would traditionally have access to. They don't really need anywhere near that amount, and they will never have exclusive access to memory for an entire second (unless there is no video output for some reason, maybe loading AI or something procedural on initial game boot).

The proof is in the pudding, though; I make no purchases without seeing side-by-side comparisons of identical software.
 
I think the problem will be if a dev wants to use more than 10GB of VRAM on the XSX; it's kind of natural that you'd use the RAM with higher bandwidth for the GPU.

I guess they will put the tasks that don't really require that bandwidth into the slower one. But that does sound like it would make development more difficult.
 
I guess they will put the tasks that don't really require that bandwidth into the slower one. But that does sound like it would make development more difficult.
Yep, but if you keep accessing both pools at the same time, the faster one will take a hit too (at least that's how I think it works), so the things that need higher bandwidth are affected by the one thing that doesn't. I guess it'll be really tricky to find a balance if you need more than 10GB. But hey... Nvidia also has a card with 10GB, I guess we'll be fine.
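A crude way to model that hit (this time-slicing view is my own assumption about how the contention works, not something confirmed, and the traffic numbers are made up):

```python
# Crude time-slicing model of the shared memory bus: cycles spent serving
# slow-pool (CPU/audio) traffic are unavailable to the fast pool, so the
# fast pool's *effective* bandwidth drops. Traffic figures are assumptions.

FAST_GBS, SLOW_GBS = 560.0, 336.0

def effective_fast_bandwidth(slow_traffic_gbs: float) -> float:
    """Time share eaten by slow-pool requests comes straight off the fast pool."""
    slow_time_share = slow_traffic_gbs / SLOW_GBS
    return FAST_GBS * (1.0 - slow_time_share)

for slow_traffic in (0, 30, 60, 100):   # GB/s of assumed slow-pool traffic
    print(f"{slow_traffic:3d} GB/s to the slow pool -> fast pool ~"
          f"{effective_fast_bandwidth(slow_traffic):.0f} GB/s effective")
```

So even modest slow-pool traffic shaves something off the 560GB/s figure in this model, which is roughly the balancing act described above.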
 

SlimySnake

Flashless at the Golden Globes
According to Richard's tests, the Series X can hit 211W when running Gears 5.

It seems Dirt 5 isn't pushing the console hard enough, because it is only at 165W.

Gears 5 is getting a Series X-optimized patch by a first-party team, so they are likely pushing the CPU and GPU to the max.

220W is where I expected the XSX to land. PS5 should be a bit above that, maybe 230-245W.
 
Last edited:

geordiemp

Member
True. Just from a marketing standpoint, I don't see an N21 product in the lineup that doesn't outperform all variants of the N22 lineup. If the N21 Lite gets released, I assume that the higher bandwidth and increased CU count allow it to outperform N22 by a measurable amount. At least that lineup would make sense: N21 > N21 Lite > N22 > N22 Lite, etc. If N22 beats N21 Lite, it seems like N22 derivatives should be able to fill out the lineup (unless N21 Lite is a mobile part or something like that). Nvidia has done some crazy binning with their parts at times, but even then they generally ensure that memory bandwidth and stream processors follow a trend that avoids confusion.

There is no mention of any other Lite besides the 56 CU Navi 21 Lite, and it's the only RDNA2 part below 2 GHz. That's why I suspect it's made on a slightly different process or assembly, and not just a clock setting and a TIM upgrade.

A large L2 cache will be better for ray tracing; it depends what the intended use is. You can't rank them like that. One might be better for CAD or office work, one for gaming, one for gaming with a ray tracing option; you just don't know.

Also, the 2.4 GHz part will need better cooling, although the socket power for the 56 CU part is higher than for the 40 CU part; however, HBM is also mentioned, so I give up.

Note that even the 32 CU part has an estimated 64 MB of Infinity Cache, which is at least 600% faster to access than RAM, so we have to wait and see what the deal is.

Note the PC Infinity Cache estimates are not mine. I would be surprised if PS5 has 16 MB, so take that for whatever it's worth.
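For a sense of why that cache matters, here is a rough effective-bandwidth sketch; the RAM figure and hit rates are pure assumptions, and I'm loosely reading the quoted "600% faster" estimate as about 6x:

```python
# Rough effective-bandwidth estimate for a GPU with a large on-die cache.
# RAM bandwidth and hit rates are assumptions; the cache speed follows the
# "at least 600% faster than RAM" estimate quoted above (read as ~6x).

RAM_GBS = 512.0             # assumed GDDR6 bandwidth for a hypothetical PC part
CACHE_GBS = RAM_GBS * 6.0   # ~6x RAM speed

def effective_bandwidth(hit_rate: float) -> float:
    """Average bandwidth seen by the GPU for a given cache hit rate."""
    return hit_rate * CACHE_GBS + (1.0 - hit_rate) * RAM_GBS

for hit_rate in (0.0, 0.3, 0.5):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(hit_rate):.0f} GB/s effective")
```

Even a modest hit rate multiplies the bandwidth the shaders actually see, which is why the cache size estimates matter more than the raw GDDR6 number.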
 
Last edited:

Deleted member 775630

Unconfirmed Member
You must have info the rest of us don't have access to. We still have no real details on the PS5 APU other than basic specs. Is the PS5 using Infinity cache?
Yeah, that's suddenly going to make a crazy amount of difference, and everything from the CPU, GPU and RAM gap will be overcome. You guys sound like MisterX right before this gen launched.
 

Zathalus

Member
Still not relevant, unless you think Gears represents what that hardware is capable of when pushed close to 100%, i.e. that it stresses the system enough.
It's been patched to push beyond PC Ultra settings and take advantage of newer RDNA 2 features. Gears of War 5 at those settings is enough to stress 2080 to 2080 Ti class GPUs. It's evident from the tests themselves, given that Gears of War 5 draws about 50W more power than Dirt 5.
 
So, do we (when I say "we" I mean "you") know enough about the new generation to make an educated guess on which one is more powerful?

As someone who loves gaming but, admittedly, isn't up to speed on the tech side of things, is it fair to say that how the components are brought together and made to work in harmony is going to be the key here?
 

SlimySnake

Flashless at the Golden Globes
In the next chapter, see Richard lick the plastic of the XSX and describe it as the tastiest console ever made.

I mean, seriously, who tests hardware using BC mode and thinks it's relevant for a stress test?
He tested Gears 5, which has been optimized to run at PC Ultra settings with new UE4 features like realtime software-based GI; basically the RTX 2080 settings, so you can be sure the GPU is being taxed. The CPU maybe not so much, since last-gen games don't tax PC CPUs at all, but that's maybe an extra 10W or so. Maybe 20W.

The RTX 2080 is basically a 225W GPU on its own. The fact that the XSX can do this at 210W for the entire system is very impressive; usually an RTX 2080 desktop can easily consume over 300W.

I really don't understand the fascination gamers have with the heat being produced. Who gives a shit if you are getting 2080 performance in a quiet-as-fuck console? You cannot have a powerful 12 TFLOPS GPU and not have it produce heat.
 
Last edited:
It's been patched to push beyond PC Ultra settings and take advantage of newer RDNA 2 features. Gears of War 5 at those settings is enough to stress 2080 to 2080 Ti class GPUs. It's evident from the tests themselves, given that Gears of War 5 draws about 50W more power than Dirt 5.
He tested Gears 5, which has been optimized to run at PC Ultra settings with new UE4 features like realtime software-based GI; basically the RTX 2080 settings, so you can be sure the GPU is being taxed. The CPU maybe not so much, since last-gen games don't tax PC CPUs at all, but that's maybe an extra 10W or so. Maybe 20W.

The RTX 2080 is basically a 225W GPU on its own. The fact that the XSX can do this at 210W for the entire system is very impressive; usually an RTX 2080 desktop can easily consume over 300W.

I really don't understand the fascination gamers have with the heat being produced. Who gives a shit if you are getting 2080 performance in a quiet-as-fuck console? You cannot have a powerful 12 TFLOPS GPU and not have it produce heat.
I will put it another way: Gears 5, even with those changes, is far from being one of the three best-looking games you can see right now. Imagine the PS5 and high-end PCs don't exist; in that world the XSX should be able to reach the best graphics in the industry, so yes, it should do a hell of a lot more than Gears 5 with a patch.

I am defending how much more powerful the console is; if you think this is its limit, then the console will be disappointing, which doesn't make sense.

Also, Gears 5 is not optimized in any way for high-end RTX 2000 cards, so of course you have headroom to do more on XSX.
 
Last edited:

ethomaz

Banned

Why did Sony coat the heatsink with "Silver"?

Liquid metal, like gallium, can attack certain metals. So they coated it with silver just around the APU area, IIRC:

(embedded video)
What would Gallium do to Aluminium (aka aluminum)?

(embedded video)
More on it:

(embedded video)
Another intelligent solution from Mr. Yasuhiro Ootori and Sony.

That is probably why it costs more... silver is more expensive than copper.
 