
Digital Foundry - PlayStation 5 Pro specs analysis, also new information

I am a huge fan of Mark Cerny, but this is not his best work.

This console is under NDA; it has not even been announced, let alone tested.

And I read this...

 
Talking about CPU-limited games:



Half of the 3080 is used here; the game is bottlenecked by the 5800X3D, a CPU that is much better than Zen 2.
Do you see the 67ms frame-time spike and the CPU threads going from 15% to 84%? This is bad CPU optimization. A 30% better CPU wouldn't help the Pro here. And again, consoles (notably the PS5) have very different APIs from PC. We have seen games with plenty of "CPU" problems on PC running fine on PS5.
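As a side note on the numbers: a 67 ms frame is easy to put in perspective, and spikes like that are trivial to pick out of a frame-time capture. A minimal sketch; the trace values are invented for the example, only the 67 ms figure comes from the video overlay:

```python
# What a 67 ms frame-time spike means in fps terms, plus a crude spike
# detector. The trace below is hypothetical; only 67 ms is from the video.

frame_times_ms = [16.7, 16.9, 17.1, 67.0, 16.8, 17.0]  # invented capture

print(round(1000 / 67.0, 1))  # 14.9 -> a 67 ms frame is a momentary ~15 fps

avg = sum(frame_times_ms) / len(frame_times_ms)
spikes = [t for t in frame_times_ms if t > 2 * avg]  # crude 2x-average rule
print(spikes)  # [67.0]
```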
 

Loxus

Member
I still think this part of Road to PS5 speaks volumes.

"First, we have a custom AMD GPU based on their "RDNA2" technology. What does that mean? AMD is continuously improving and revising their tech for RDNA2. Their goals were roughly speaking, to reduce power consumption by re-architecting the GPU to put data close to where it's needed. To optimize the GPU for performance and to add new more advanced feature set."

"But that feature set is malleable, which is to say that we have our own needs for PlayStation and that can factor into what the AMD roadmap becomes."

"So collaboration is born."

"If we bring concepts to AMD that are felt to be widely useful, then they can be adopted into RDNA - and used broadly, including in PC GPUs."

"If the ideas are sufficiently specific to what we're trying to accomplish, like the GPU cache scrubbers I was talking about, then they end up being just for us."

"If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console, that means our collaboration with AMD succeeded."

"In producing technology useful in both worlds, it doesn't mean that we, as Sony, simply incorporated the pc part into our console."




The fact that the PS5's GFX ID is 1000 makes me believe the PS5's GPU is the first of the RDNA architecture.


It's not based on anything.

The PS5's GPU was then revised, adding RT.


I would assume the same with the PS5 Pro.
Its GPU isn't based on RDNA3 or 4.
It's its own architecture that adds features due to collaboration.

The same can be said for the PS5 Pro CPU.

Sony would then call it whatever due to marketing.
 

Mr.Phoenix

Member
I mean this example falls flat here. The ROG Ally/Steam Deck etc. are extremely GPU limited. You're talking about 4x ray tracing improvements but then claim the CPU isn't a meaningful difference between the ROG Ally and Steam Deck. Isn't the ROG Ally like a GTX 1650? Of course it won't see much of a meaningful difference between a Zen 4 CPU and a Zen 2 CPU.

Sony should've at least had the decency to put a Zen 3 there. It would turn unstable 30 fps CPU-bound games into rock-solid 30 fps modes and would allow a 40 fps mode to be a thing, or maybe unlocked 45 fps.



[Embedded video, timestamp 10:40]

Regardless, notice how limiting the 3600 is for the 3060 Ti here in Novigrad / Witcher 3 / ray tracing as well, and see how much Zen 3 with higher clocks helps with that. This 3600 literally drops below 30 fps in Novigrad with ray tracing. Zen 3 is a must for ray tracing in 3rd party games, if you ask me.

I am sorry, but I just can't take these things seriously. And I feel the exact same way about stuff like this when gaming on a PC. The problem is not the CPUs, it's the fucking devs. Just look at that video you posted: the CPU has all 12 cores just sitting at around 11-25%, then one core maxing out at 89%+. Then we use that to talk about CPU bottlenecks. I see that and all I see is poor optimization.

And as I have said before, a console will never take the brute-force approach. That's a luxury only PC gamers have; as far as I am concerned, the devs should do a better job. And I am sure stuff like this informs Sony's choices. They have data they can use to track CPU utilization across any number of games. And I am certain that, just like with this video you have posted here, all that data comes back showing them that the CPU is on average running at like 35% all cores combined, or some shit like that.
But what exactly is wrong with their opinions? This Pro compared to the PS4 Pro is such a tiny margin in quality jump, it just proves how useless it is. The sales of the PS4 Pro were 1/10th of the regular PS4. The way things are going, this is gonna sell half that.
I guess this was always going to happen, and that it's happening now is probably a sign that we are at least going in the right direction. Albeit 5-6 years late.

This reminds me of when the 20xx series GPUs came onto the market: compared to the 10xx series before them, everyone thought they were an underwhelming improvement, because their brains couldn't see or understand these things called RT and DLSS.

That's what's happening now. People like you are looking at this and saying... meh, it's only 60%+ better in raw compute than the PS5, while not seeing the 2-4x better RT and the fact that this now has ML-based reconstruction with PSSR. I guess to make people like you happy, Sony should have used up their silicon budget just adding more CUs instead of having better RT cores and adding AI units in each CU. And that is something no one should have to tell DF. I mean, they have been championing DLSS for ages, haven't they?
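For what it's worth, the "60%+ better in raw compute" part is easy to sanity-check. A one-off sketch; the 10.23 TF figure is the standard PS5's official FP32 spec, while the Pro's single-issue figure is an assumption taken from the leak discussed further down the thread:

```python
# Sanity check of the "60%+ better in raw compute" claim.
# 10.23 TF: standard PS5, official FP32 figure.
# 16.7 TF: leaked Pro figure *before* the dual-issue doubling (assumption).
ps5_tf = 10.23
pro_tf_single_issue = 16.7
print(round((pro_tf_single_issue / ps5_tf - 1) * 100))  # 63 -> ~63% more
```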
 

Bojji

Member
Bottlenecked by shitty optimization, not the CPU.
In a scene that has nothing special to render, such low performance can only be a lack of optimization.
It's not like there is a huge amount of detail, or a large number of NPCs, or complex physics, etc.
When devs suck so much at optimizing their own game, no amount of CPU power will ever be enough.

Obviously Capcom is to blame here; the game runs terribly on all common hardware and doesn't look next gen at all. But what can we do? Games like that happen, and a Pro console won't help them in any way.

If GTA6 is limited like that it will be stuck at 30 FPS, and GTA is doing FAR more than this game...

Do you see the 67ms frame-time spike and the CPU threads going from 15% to 84%? […]

PS5 runs this game like shit. And the supposed Pro CPU uplift is 10%.
 
They believe it's because of two main reasons: (1) limited clock speed changes, and (2) architecture size.

"PS5 Pro only has limited clock speed increases (or actual decreases, potentially) and the size of the GPU architecturally has not doubled in the way it did with PS4 Pro."

The CPU clock uplift is just 0.3 GHz, while the calculated GPU clock speed (via the stated TF figure) may actually be lower than the standard PS5's.

"What's curious is that Sony's stated teraflops figure suggests a peak GPU clock speed of 2.18GHz - which is actually a touch slower than the 2.23GHz in the standard PS5. Again, this does suggest either a conservative power limit, retaining the 6nm silicon process technology - or both"

DF have the tech credentials of a crackhead superhero team.

The 33.5 TF figure is dual-issue, and to reach it you'd need a GPU clock of about 2.34 GHz: 56 CUs × 64 shader cores × 2 ops × 2.34 GHz = 16.7 TF, × 2 (dual-issue) = 33.5 TF.

Exactly the figure in the papers.
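A quick sketch for anyone who wants to check the arithmetic both ways; the CU count, clock and dual-issue factor are all leak-based assumptions, not confirmed specs. Notably, DF's 2.18 GHz figure is exactly what falls out if you assume all 60 CUs are active:

```python
# Peak FP32 throughput of an RDNA-style GPU: ALUs x 2 FLOPs (FMA) x clock,
# doubled again if counting dual-issue. All figures here are from the leak.

def tflops(cus, clock_ghz, shaders_per_cu=64, dual_issue=True):
    flops = cus * shaders_per_cu * 2 * clock_ghz * 1e9
    return flops * (2 if dual_issue else 1) / 1e12

def clock_for_tflops(tf, cus):
    # Invert the formula: what clock does a given TF figure imply?
    return tf / tflops(cus, 1.0)

print(round(tflops(56, 2.34), 1))            # 33.5 TF with 56 active CUs
print(round(clock_for_tflops(33.5, 56), 2))  # 2.34 GHz if 56 CUs are active
print(round(clock_for_tflops(33.5, 60), 2))  # 2.18 GHz if all 60 are active (DF's read)
```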

So a supposedly "tech-literate" channel can't even calculate TFs for AMD GPUs, or tell the difference between single-issue and dual-issue TFs? Or know that if it's a 60 CU chip then 4 CUs are likely disabled for yield purposes, because that part of the APU is likely monolithic?

Like, did they "conveniently" forget this just to push a smear campaign against the Pro using FUD? Not a single person corrected them that they should've calculated the numbers with four CUs disabled, like with the PS5 & Series X? I'm pretty sure people like Kepler were saying there'd be disabled CUs on the chip, so not all 60 are active.

What a clown show DF have become.
 

Radical_3d

Member
PS5 runs this game like shit. And the supposed Pro CPU uplift is 10%.
Dragon's Dogma 2 currently has high demands on hardware, with the Steam Deck struggling to run at playable rates, and even a high-end machine with an RTX 4090 paired with an AMD 5800X3D CPU can drop into the 30s in the more densely populated towns.

So, what's the big idea here? Upgrading to a top-of-the-line CPU isn't doing much, is it? Do you want Sony to try to design a monster tower 4,000€ PC, so people who can't multi-thread have more room in their CPU budget, and still fail? People do get that this stuff has to sell at a somewhat mass-market price, don't they?

I'm not ruling out that we spend the rest of the generation with 1943 graphics at 30 fps. What I'm saying is that the proposed solution is not reasonable for a niche mid-range update. You don't want your barely-better PS5? That's fair. But think about the value, potential target and price when criticising.
 

DenchDeckard

Moderated wildly
DF have the tech credentials of a crackhead superhero team. […]

Wait, I thought the rumour was 60 CUs but at a slower 2.18 GHz?
 
DF have the tech credentials of a crackhead superhero team. […]

According to the spec it's a 64 CU chip with 4 disabled = 60 CUs active.

But we will have to wait for confirmation.
 

Zathalus

Member
DF have the tech credentials of a crackhead superhero team. […]
Funny that you are so condescending in this post when the leaks say it is a 64/60 design. So DF were right on the money.
 
Obviously Capcom is to blame here; the game runs terribly on all common hardware and doesn't look next gen at all. […]



PS5 runs this game like shit. And the supposed Pro CPU uplift is 10%.
A PC CPU runs this like shit. A 50% overclock wouldn't be enough for it to run at 60fps on console. A 5800X3D wouldn't be enough for it to be a solid 60fps game on a PS5 Pro. Sony's design is perfectly fine for their goal.

But that's not what DF are saying. They are saying they are disappointed by the 10% overclock, saying it's not enough to double FPS. That's just an unrealistic goal if even a $500 CPU upgrade can't accomplish it, for one uniquely unoptimized game out of 500. Just a BS narrative from them. Look at how the CPU threads are running: 50% average for both CPU and GPU, with very disparate min and max across CPU threads. The game is just badly optimized.
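To put numbers on that: under the generous assumption that a CPU-bound frame rate scales linearly with CPU speed, the rumoured uplift barely moves the needle. The 30 fps baseline is purely illustrative:

```python
# Naive linear scaling of a CPU-bound frame rate with CPU speed.
# Real games scale worse than this (memory latency, engine limits).

base_fps = 30.0     # illustrative CPU-bound frame rate on the base PS5
cpu_uplift = 1.10   # the rumoured ~10% CPU clock bump

print(base_fps * cpu_uplift)  # 33.0 -> nowhere near 60
print(60.0 / base_fps)        # 2.0  -> you'd need a CPU twice as fast
```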

Did they complain like crazy when their X1X was still using that piece of shit Jaguar? Did they make paragraphs, articles upon articles, about the "CPU limited games" that would plague the X1X generation and limit its potential? They are just Xbox and PCMR warriors out there with a single goal: to spread FUD after FUD against Sony's next console. They are 100% doing that now, and they'll keep doing it until the next Xbox. They have just become full propagandists for Microsoft. It's pathetic.
 

Elysium44

Banned
Damn, people in here are really trying to suggest that an 8/16 Zen 2 CPU is the limiting factor when going from 30 to 60 fps. I can see a potential issue at 120fps, but there's no way, unless a game is terribly unoptimised, that the PS5 CPU is going to hold anything back at such low frame rates.

Why, in your opinion, is Starfield 30fps on Xbox when any half-decent CPU from the last five years can run it at 60+?
 
Did they complain like crazy when their X1X was still using that piece of shit Jaguar? […]

Actually, they parroted that "Jaguar Evolved" thing that Microsoft wrote in the spec sheet for the X1X.

LOL

Pure comedy. They should go back to talking about teraflops, that's their thing.
 
Did they complain like crazy when their X1X was still using that piece of shit Jaguar? […]

This.
 

Snake29

RSI Employee of the Year
DF have the tech credentials of a crackhead superhero team. […]

They will be called out hard when GTA 6 ends up with a 60fps mode on both next and current gen consoles.
 

Zathalus

Member
A PC CPU runs this like shit. […]
Relax bro, it was a single section in a video about how the CPU is a bit underwhelming. Not a fucking witch-hunt.
 

shamoomoo

Member
DF have the tech credentials of a crackhead superhero team. […]
Again, the Pro is likely a 64 CU chip with 60 CUs working across 2 shader engines, if that's real.

Also, in the chart comparison that DF made, they compared the number of active units on the PS5 vs the Pro.
 
Lmao no way??

Find the differences:


"All signs point to the upclocked Jaguar cores we find in Xbox One, and Scorpio's CPU set-up is indeed an evolution of that tech, but subject to extensive customisation and the offloading of key tasks to dedicated hardware."

Scorpio is console hardware pushed to a new level

Opinion and analysis from Digital Foundry's Rich Leadbetter.


 
Find the differences: […]
Christ, can’t even make this shit up.
 
Find the differences: […]
The only difference is that you are emotionally invested in the Sony console, and it hurts your feelings when they are not impressed by the Pro specs.

 

Microsoft Xbox One X: the Digital Foundry verdict

The workmanship that's gone into Microsoft's latest console is exceptional. To quadruple graphics power over the original model but to retain essentially the same form factor and the same acoustics points to a level of engineering that really does take console design to the next level. Xbox One X is a beautifully designed little box that does the job assigned to it without taking up much space or making much noise - the latter being our biggest bugbear with PlayStation 4.

Beyond that, what we can definitely say is that the machine is a love letter to the core gamer, with many forward-looking features. The implementation of FreeSync support - something we didn't have time to fully test - is the kind of feature we didn't expect to see until at least the next console generation. Meanwhile, the backwards compatibility features really are superb - if you've stayed with Xbox across the generations, you're in for a real treat here. There's a sense that Microsoft is paying homage to its roots, honouring its past successes and making genuine efforts in curating a great library - all at no cost to the user.



Now PS5 Pro is a piece of shit, of course...

They are not getting an exclusive teardown from Sony, as Cerny doesn't give a fuck about Digital Foundry.
 

Codeblew

Member
The PC doesn't have custom hardware for that. Just pointing out you don't need to use the CPU for it. Decompression can be done on the GPU.
The PS5 has shared memory, so neither the CPU nor the GPU has to decompress anything; the custom hardware does it. You think the engineers just put the custom hardware in the PS5 for nothing?
 

Zathalus

Member
The PS5 has shared memory, so neither the CPU nor the GPU has to decompress anything; the custom hardware does it. You think the engineers just put the custom hardware in the PS5 for nothing?
No. I never said anything of the sort. The I/O block is a great addition to the PS5 that allows for rapid decompression of assets. But you don't need 9 CPU cores on PC to do that; it can be done on the GPU with minimal overhead.
 
Why, in your opinion, is Starfield 30fps on Xbox when any half-decent CPU from the last five years can run it at 60+?

Are you really gonna hit me with Creation Engine 2 here? Steve from Gamers Nexus released an excellent video analysing CPU bottlenecks in Starfield. It was indeed pretty damning: a 4090 paired with an AMD 3600 CPU lost around 50% of its perf (though still > 60fps). It's only a 6/12 chip, but likely somewhat comparable to the Zen 2 8/16 with lower clocks. This was an extreme scenario running at the lowest in-game settings. The more you stress the GPU, the less the CPU becomes the bottleneck. That's why benchmarks of different CPUs show little difference at higher resolutions.

With the above said, the majority of games are nowhere near as heavy on the CPU as Starfield. For the vast majority of games, the CPU is not the limiting factor going from 30fps to 60fps. It's almost always GPU limited. Why is that statement controversial at all?
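That last point is really just a max() of two per-frame costs. A toy sketch with made-up millisecond numbers, purely to show the shape of the effect:

```python
# Each frame costs some CPU time and some GPU time; the slower of the two
# sets the frame rate. CPU cost barely changes with resolution, GPU cost
# grows with it - so high resolutions hide the CPU.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 12.0  # hypothetical CPU cost per frame, same at every resolution
for res, gpu_ms in [("1080p", 8.0), ("1440p", 14.0), ("4K", 28.0)]:
    bound = "CPU" if cpu_ms > gpu_ms else "GPU"
    print(res, round(fps(cpu_ms, gpu_ms), 1), "fps,", bound, "bound")

# 1080p 83.3 fps, CPU bound -> a faster CPU helps here
# 1440p 71.4 fps, GPU bound
# 4K    35.7 fps, GPU bound -> a faster CPU changes nothing
```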
 

James Sawyer Ford

Gold Member
Are you really gonna hit me with Creation Engine 2 here? […]

But it's not next gen unless I can store 1,364,554 hot dog buns in my ship's closet.

 

James Sawyer Ford

Gold Member

Microsoft Xbox One X: the Digital Foundry verdict […]

No mention at all of the Xbox One X's biggest bottleneck, the CPU they didn't change...
 

Codeblew

Member
No. I never said anything of the sort. The I/O block is a great addition to the PS5 that allows for rapid decompression of assets. But you don't need 9 CPU cores on PC to do that, it can be done on the GPU with minimal overhead.
It decompresses at the speed of 9 Zen 2 CPU cores; that doesn't mean you "need" 9 cores to decompress. Still, the point stands: you cannot compare apples to apples between a PC and a PS5 with similar CPU/GPU specs. The PS5 is much more efficient, with custom hardware designed specifically for gaming workloads. This includes the PS5's SoC and motherboard design as well.
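For a feel for where that "9 Zen 2 cores" figure comes from, a back-of-the-envelope sketch. Only the 5.5 GB/s raw SSD figure is from Cerny's Road to PS5 talk; the compression ratio and per-core decode rate are placeholder assumptions, so treat the output as an illustration only:

```python
# Rough throughput estimate behind the "9 Zen 2 cores" claim.

raw_ssd_gbps = 5.5          # PS5 raw SSD bandwidth (Road to PS5 figure)
compression_ratio = 1.6     # assumed average Kraken compression ratio
core_decode_gbps = 1.0      # assumed decode throughput of one Zen 2 core

decompressed_gbps = raw_ssd_gbps * compression_ratio
print(decompressed_gbps)                     # 8.8 GB/s of decompressed data
print(decompressed_gbps / core_decode_gbps)  # ~9 cores at these assumed rates
```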
 

Fafalada

Fafracer forever
Please do. I know the NES had a really old processor, but the goal was to be cheap, not bleeding edge. No idea what you are talking about in the PS1 era. I read that it had many processors, but nothing else.
When the PS1 launched in 1994, its R3000 CPU at 33MHz was about 5x slower than the fastest P5 Pentium (100MHz) available.
Three years in (to compare a similar point in time to today), the PII at 300MHz got to market, which would be something like 18x faster.
The N64 fared marginally better with a 100MHz R4300 (the 4xxx series wasn't really a major generational uplift - still single-issue, in-order - but it did reach much higher clock speeds), though that was also bottlenecked pretty badly by a botched memory subsystem. And by the time we were 3 years into the N64's launch, Athlon CPUs were on the market, so things only looked worse from there.

@Fafalada maybe some of what you are saying regarding cache optimization is true for first-party, but third-party games have been tested on the console CPUs running the standard PC version, and the results are essentially identical to what we see on console. Not a lot of secret sauce and/or magic optimization to be found there. In fact, most evidence points to CPUs running worse on GDDR6 than DDR4 due to latency issues.
I don't have a direct comparison with this gen's consoles, as I've never benchmarked a PS5 hands-on. But I also take issue with benchmarks taken on a different codebase and a different architecture - that just can't be a reliable methodology if the goal is to measure architecture performance.
What I can say is that the PS4 Pro CPU scaled nearly linearly with MHz compared to the PS4, without any special optimisations taken. So at least that class of CPUs was not memory-access bottlenecked.

Now - as I stated before - not every console was this lucky. In fact, if we look back, the PS3/360, PSP and N64 all fared really poorly, performing well below their on-paper spec.
The PPC architecture in the PS3/360 was so problematic that I remember pathological cases where a PS3 CPU could actually run certain things slower than the PS2.
PS2/DC/PS1/Xbox were 'ok' (more cache would have helped, but nothing dramatic). GCN performed well.
 

SonGoku

Member
I think it is the Slim coming for free; optimising for 6 nm is still a cost. I would not trust this process of porting the design to be completely free.

Anyway, doing the entire design on 5nm would be a bigger change, and I do not think Sony considers it worth the cost. Given what they have designed / the target they have (unless it is crazy expensive), they may be right.
Bigger change from what? Remember, the Pro is a new design, not a shrink, so they might as well invest the extra R&D for cost savings down the line on cooling and die size.
I doubt it's on 6nm; there's nothing in the leaked specs that screams 6nm, it's just conjecture from DF. But we'll see, I guess.
 
They will be happy because at least they're not being forced to optimize extreme CPU-bound code to hit 30 fps on 1.6 GHz Jaguar cores.

But they will have no trouble targeting the very same 30 fps on Zen 2 cores as well, which is why some people are getting worked up. If the PS5 Pro had focused on a CPU upgrade while keeping the GPU more or less similar, or with a slight upgrade plus a big upgrade on upscaling, it would've been better for high-framerate enjoyers.

I don't really care about playing games at 30 fps or 60 fps, so I don't actually care what the PS5 Pro ends up with. But it is fun to participate in discussions regardless. If you have to ask me, though, I'd prefer more balanced builds rather than Xbox One X-like builds where the focus is on resolution and graphics (despite myself building a PC that has the mindset of an Xbox One X; I'm just an odd person overall).

If you told them you built a Ryzen 3600 + RTX 4070 rig, they would probably be like "but that CPU will hold that GPU back, why didn't you get something decent and modern that can accompany the 4070 properly?" But if Sony does it, Cerny is a genius, the PS5 is not CPU limited at all, the rules are different, Spider-Man runs at this framerate, 3rd parties suck, GTA 6 sucks anyway, etc. etc.

It is like someone with a 3600 and a 4070 getting 80+ fps in Spider-Man and saying the game is optimized and their rig is fine, and then, when they get heavily bottlenecked in Jedi Survivor, blaming the developer. When in reality, they could've solved the bottleneck by pairing that 4070 with at least a Ryzen 7600. That is the core of the problem. Sometimes you gotta give the GPU the CPU it deserves. Otherwise you're just limiting the build to specific resolution/framerate parameters. Which is okay by itself; you can still get great mileage out of that GPU. But you still squander the potential for high-framerate experiences.

With a 3600 and a 4070, you won't be CPU limited at 4K in the vast majority of titles, especially ones released before 2021, more so if you push ultra settings the 4070 can take. But then you try Jedi Survivor, Hogwarts Legacy, Dragon's Dogma, and you quickly realize this CPU is simply not meant for a 4070, unless you specifically gimp the 4070 and push extreme graphical settings that target 30 fps - all because the CPU can't keep up. Where is the sense in that? Why sacrifice a 4K / optimized settings / DLSS Quality 60 fps experience and go for native 4K, unoptimized ultra ray tracing settings, just so you can saturate the GPU at a 30 fps target? That is what Cerny is doing here by keeping the same CPU and giving the GPU a great ray tracing improvement and a mild raster improvement. It is the exact same logic they had with the Xbox One X and PS4 Pro. It is a mistaken approach, but they keep doing it, because people on consoles really do have no trouble with 30 fps.
All of this is correct. They could shove a 4090 in here and it won't matter because of that CPU. What an awful console.
 

Mr.Phoenix

Member
Why, in your opinion, is Starfield 30fps on Xbox when any half-decent CPU from the last five years can run it at 60+?
You realize DF did a test using the exact XSX CPU (I think it's called the 4700S kit), paired it with a 6700 GPU and ran Starfield, right? And the game was GPU bottlenecked on that setup, not CPU bottlenecked. It ran at an average of 40fps, and in some cases even 60fps, using FSR Quality at 4K. When they switched to FSR Quality at 1440p, it was mostly 60fps everywhere.

And that aside, this is also one of those games that is just not very well optimized; optimization has never been Bethesda's strong suit.

My issue with this whole CPU bottleneck argument is that these CPUs are not even really being used properly. As I said previously, you have these situations where an 8c/16t CPU has only two threads at 90%+ utilization while the rest of them average sub-25% utilization. That is just flat-out poor optimization. And some of you expect Sony to do what? Give devs a CPU that can clock to like 5 GHz to make up for their inability to optimize properly for multicore processors?

If you are a system architect, you are compiling usage data from hundreds of games on your current hardware to determine which areas need improvement. On the CPU side of things, you are not just seeing games struggling at 30-40fps; what you are seeing is CPU utilization. If you see that the games averaging 30-40fps have an average CPU utilization of 40%... how in your right mind do you decide that what you need is a faster CPU?
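A tiny sketch of the utilization pattern described above - two hot threads, the rest mostly idle - and what it does to the average. The per-thread numbers are invented to match that pattern:

```python
# Why "average CPU utilization" can mislead: two pegged threads and
# fourteen mostly idle ones average out to look like plenty of headroom.

threads = [92, 89, 24, 21, 18, 16, 15, 14, 13, 12, 11, 11, 10, 10, 9, 8]

print(round(sum(threads) / len(threads), 1))  # 23.3 -> looks like headroom
print(max(threads))                           # 92  -> still single-thread bound
```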
 