
Digital Foundry - PlayStation 5 Pro specs analysis, plus new information

Panajev2001a

GAF's Pleasant Genius
I also think that the PS5 Pro should be on N5 or N4.
But the reality is that some GPUs in the RDNA3 line do use N6, such as the 7600 and 7600XT.
Ah, I forgot about the existing RDNA3 GPUs already on N6, so that plus the Slim being on N6 settles it. It should have been settled when the Slim was revealed to be on N6, as they follow the same playbook (in a way) as the PS4 Pro and PS4 Slim.

For what PS5 Pro is meant to be (and cost to Sony), N5 was likely never on the cards.
 

Panajev2001a

GAF's Pleasant Genius
If they're relying on WMMA, then that still uses the existing ALUs.
It is very likely that there is some additional HW, alongside a lot of reuse of the existing SIMD units, just as for INT and FP workloads. AMD is just representing the chip the same way nVIDIA does:

[block diagrams: nVIDIA SM vs. AMD CU]


Replace Tensor Core with AI MATRIX Accelerator.
 

Zathalus

Member
It is very likely that there is some additional HW, alongside a lot of reuse of the existing SIMD units, just as for INT and FP workloads. AMD is just representing the chip the same way nVIDIA does:

[block diagrams: nVIDIA SM vs. AMD CU]


Replace Tensor Core with AI MATRIX Accelerator.
Tensor cores are independent units in the SM. You can have the Tensor cores working on one thing while the general SM units work on another. In RDNA3, all AI functions are done by the CUs, even though those CUs can greatly accelerate them. On Nvidia you can do general FP32 while simultaneously doing INT8 on the Tensor cores; an RDNA3 CU can do FP32 or accelerate INT8, but not at the same time. RDNA4/PS5 Pro is unknown and can of course differ from RDNA3; AMD or Sony might have changed it.
 

Panajev2001a

GAF's Pleasant Genius
Tensor cores are independent units in the SM. You can have the Tensor cores working on one thing while the general SM units work on another.
Dispatched in different cycles; the documentation provided does not preclude HW re-use per se, in part or in whole. According to AMD's diagrams the AI Matrix Accelerators are represented the same way, so there is that.

Then again, rumors are now pointing to the AI solution being custom Sony work, so we will see 🤷‍♂️.
 

Elysium44

Banned
If you have a game now that has to devote most calculations to the CPU as the GPU is flat out using 36 CUs for rendering logic and you suddenly say "hey here's another 24 CUs, oh and they're better and they do all this other stuff..."

Why as a dev wouldn't you move some of your logic onto *at least some of these additional CUs* (if you have a job system that can utilise CUs in this fashion)?

This is why I'm waiting for actual game performance instead of looking at the CPU and saying "oh that's terrible"...

Because the game has to run on the lower spec console as well. The underlying code will have to be the same on the PS5 Pro as the base PS5. Just like the Series S dictates what the Series X can do.
 

Panajev2001a

GAF's Pleasant Genius
keeping an already bottlenecked CPU.
What is it already? Is it the low-single-digit percentage of titles coming out on PS5 that seem bottlenecked by CPU logic? That would point to developers being at fault, or having decided to design their games around a 30 FPS budget, rather than the console CPUs being underpowered.

Pro consoles have never done major CPU upgrades so far, and the reason they received a modest frequency boost last time is that the Jaguar cores were actually more of a problem for devs than Zen 2 is, so that needed to be addressed.
 

Panajev2001a

GAF's Pleasant Genius
Because the game has to run on the lower spec console as well. The underlying code will have to be the same on the PS5 Pro as the base PS5.
For RT you can still do that, if you accept having no RT, or more limited RT, on the base console.

If you take Spider-man 2, pick the high-frequency CPU mode (the 1.5% GPU clock-speed drop is probably more than balanced by the faster CUs, much more powerful RT capabilities, etc…), and optimise a bit for the new PSSR upscaling (which might allow you to drop the native resolution just a tad further to compensate for the GPU clock drop, if you really want), you can greatly increase the resolution of the RT effects and the visible detail in them while increasing the visual quality.

I think a Spider-man 2 patch could bring the game's Quality setting to 60 FPS (it already reaches 40 FPS) and still improve IQ and RT effects throughout the game, for example.
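Rough frame-time check on that, assuming the mode is GPU-bound and the rumored ~1.45x raster uplift holds:

$$\frac{1000}{40\ \text{fps}} = 25\ \text{ms}, \qquad \frac{25\ \text{ms}}{1.45} \approx 17.2\ \text{ms} \approx 58\ \text{fps}$$

so a small PSSR-assisted drop in internal resolution only has to claw back the last ~0.5 ms to hit the 16.7 ms (60 FPS) target.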
 

Allandor

Member
Same Zen 2 CPU. 10% clock speed boost.



I'm out.
Don't forget, this CPU is actually at Zen+ speed, as the cache was cut in half for the consoles (as with the mobile Zen 2 CPUs). So a desktop Zen 2 CPU can actually do more per clock. Not going to Zen 3 or 4 looks like a lost opportunity, as those have more IPC and could therefore clock lower to reach the same goal.

The dual-issue stuff for the GPU is more or less like hyperthreading on CPUs: around 10% more performance for only a little more die space. On consoles it can be used a bit more efficiently. But an around 45% uplift is not much; well, 1440p => ~1700p (without other improvements). I really don't know why they are doing this after PSVR2 more or less flopped. The current market situation is not that great. There will be people (like me) who might buy it, but I guess only a fraction will. I can't see a scenario where they reach PS4 Pro levels (because of the price and the small increase).
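(The ~1700p comes from assuming performance scales linearly with pixel count, so resolution scales with the square root of the uplift:

$$1440\text{p} \times \sqrt{1.45} \approx 1734\text{p} \approx 1700\text{p}\ )$$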
But my guess is that future PS5 titles won't offer RT effects anymore until you upgrade.

Currently there are already titles that go below 1080p. This trend of unoptimized games will continue even more on the "amateur" consoles with new hardware.
 

Redneckerz

Those long posts don't cover that red neck boy
So another unbalanced upgrade like the PS4 Pro, because Sony can't make a premium upgrade at a premium price, huh?!

Goodbye GTA6, or any future CPU-intensive game at 60 fps; you had a non-existent good run.
Again, what is it with this "Zen 2 CPU is holding things back" spiel? This case was perfectly valid in the PS4/XBO era since, well, Jaguar was literally a netbook core.

Zen 2, however, is a full desktop core. I am aware of the significant improvements brought by Zen 3 and certainly Zen 4, but I am not buying into the idea that Zen 2 is already holding the current gen significantly back.

Considering the games we have seen on display, I feel there is a lot of untapped potential. Sadly, as feelings don't make a compelling argument, I have to wager that middleware such as UE5 proves to be a significant CPU burden, despite the fact that cross-gen enhanced titles like Metro Exodus Enhanced Edition perfectly highlight what Zen 2 is capable of with less of a hit.

So my question is this: Is Zen 2 actually that much of a bottleneck, or is current-gen middleware just heavier overall and thus less optimized for what it sets out to do?
 

welshrat

Member
Again, what is it with this "Zen 2 CPU is holding things back" spiel? This case was perfectly valid in the PS4/XBO era since, well, Jaguar was literally a netbook core.

Zen 2, however, is a full desktop core. I am aware of the significant improvements brought by Zen 3 and certainly Zen 4, but I am not buying into the idea that Zen 2 is already holding the current gen significantly back.

Considering the games we have seen on display, I feel there is a lot of untapped potential. Sadly, as feelings don't make a compelling argument, I have to wager that middleware such as UE5 proves to be a significant CPU burden, despite the fact that cross-gen enhanced titles like Metro Exodus Enhanced Edition perfectly highlight what Zen 2 is capable of with less of a hit.

So my question is this: Is Zen 2 actually that much of a bottleneck, or is current-gen middleware just heavier overall and thus less optimized for what it sets out to do?

Yeah, it's quite a lot of nonsense. A good indication of that is BF2042: on my Zen 3 5900X all cores are loaded up and it pushes quite high temps in 128-player matches, yet the PS5 handles that same player count just as well. I really don't have any worries about Zen 2 at all, especially with good optimisation. Remember, the PS4 Pro could not even handle 64-player battles without significant frame drops in BF4 and BF5.
 

Rudius

Member
Based on what they are saying, it seems the more likely scenario is that the PS5 Pro will help games run at a more consistent FPS and resolution, but not raise the FPS ceiling.

So if a game is 4K/30 FPS, it will run at a more consistent 30 FPS instead of dropping frames or resolution like we have seen with some PS5 games.
Or if it runs at a very consistent 30 (meaning internally between 30 and 40) they can push it up to 40 with VRR on top.
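40 works so neatly because those modes run in a 120Hz output container: each 25 ms frame spans exactly three refresh intervals, and in frame-time terms 40fps sits exactly halfway between 30 and 60:

$$\frac{1000}{40} = 25\ \text{ms} = 3 \times 8.3\ \text{ms}, \qquad 25\ \text{ms} = \frac{33.3\ \text{ms} + 16.7\ \text{ms}}{2}$$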
 

silent head

Member
The fastest gaming CPU (7800X3D) is just twice as fast as the PS5 Pro CPU (~3600).
[benchmark chart: 7800X3D vs. Ryzen 5 3600]

If a game runs below 30/40fps on the PS5 Pro (due to optimization and engine issues), 95% of PC gamers can't run the same game at 60fps.
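The arithmetic behind that, taking the chart's ~2x figure at face value and assuming near-linear CPU scaling (a generous assumption):

$$30\ \text{fps (CPU-bound on the Pro)} \times 2 \approx 60\ \text{fps}$$

i.e. a title CPU-limited to ~30fps on the Pro would need roughly 7800X3D-class hardware to double it, which only a small slice of PC gamers own.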
 

winjer

Member
The PS4 Pro CPU was 33% more powerful than the base console's; the PS5 Pro's is only 10% more. You don't need to be a master chef to understand the implications.

On the other hand, the Jaguar CPU was woefully underpowered from the get-go.
Just remember that Jaguar ran at 1.6 and then 2.3GHz, at a time when desktop CPUs were doing 4GHz+. And the Jaguar CPU had just a 2-wide pipeline. And it didn't even have L3 cache.
Zen 2 was not high end when it launched, but it wasn't that far off. The PS5 version has lower clocks, but only by 0.5GHz compared to the desktop version. And it still has some L3 cache.
And it has the full 4-wide pipeline of the desktop CPUs.
So although the Jaguar in the PS4 got a bigger jump in clocks, it started from a much lower performance point.

Another thing to consider is that AMD GPUs have a dedicated hardware instruction scheduler, while Nvidia GPUs have a simplified one, resulting in higher driver overhead for the CPU on PC.
Also, a console like the PS5 has an API that is lower level than even DX12 and Vulkan on PC.
And consoles don't have the gigantic amount of bloatware that Windows installs by default.
 
Tensor cores are independent units in the SM. You can have the Tensor cores working on one thing while the general SM units work on another. In RDNA3, all AI functions are done by the CUs, even though those CUs can greatly accelerate them. On Nvidia you can do general FP32 while simultaneously doing INT8 on the Tensor cores; an RDNA3 CU can do FP32 or accelerate INT8, but not at the same time. RDNA4/PS5 Pro is unknown and can of course differ from RDNA3; AMD or Sony might have changed it.
I believe some resources are shared within the SM.
 

Snake29

RSI Employee of the Year
Again what is it with this Zen 2 CPU is holding things back spiel. This case was perfectly valid in the PS4/XBO era since well, Jaguar was literally a netbook core.

Zen 2 however is a full desktop core. I am aware of the significant improvements imposed by Zen 3 and certainly Zen 4, but i am not buying into the concept that Zen 2 is already holding the current-gen significantly back.

Considering the games we have seen on display, i feel there is a lot of untapped potential. Sadly, as i feel (thus feelings) does not make a compelling argument, i have to wager that middleware such as UE5 proves to be a significant CPU burden, despite the fact that cross-gen enhanced titles like Metro Exodus Enhanced Edition perfectly highlight what Zen 2 is capable of with less of a hit.

So my argument is this: Is Zen 2 actually that bottlenecked, or is current-gen middleware just heavier overall and thus less optimized for what it sets out to do?

All these people with their "bla bla bla bla, I'm OUT" are the same people who were hyping up the 12TF, MOST POWERFUL CONSOLE EVER, FULL RDNA2, VRS and all the other Xbox nonsense, which translated into almost nothing!

Even today on the PC side these Zen 2 chips are still not bottlenecking. I'm running a 5800X3D and that thing sits near idle most of the time. Yes, it's Zen 3, but even 3700X CPUs aren't bottlenecking in PC games.
 

ChiefDada

Gold Member
Sorry that my initial video was without ray tracing. But I've managed to find one with ray tracing and still a CPU bottleneck (6:40 and on):

So your videos and your mistake disprove your theory and suggest ray tracing isn't the culprit for the GPU being bottlenecked at 1080p; in fact, when RT is turned on in your initial video, GPU utilization gets maxed out. It is very likely asset streaming causing this issue specifically on PC.
 

Elysium44

Banned
On the other hand, the Jaguar CPU was woefully underpowered from the get-go.
Just remember that Jaguar ran at 1.6 and then 2.3GHz, at a time when desktop CPUs were doing 4GHz+. And the Jaguar CPU had just a 2-wide pipeline. And it didn't even have L3 cache.
Zen 2 was not high end when it launched, but it wasn't that far off. The PS5 version has lower clocks, but only by 0.5GHz compared to the desktop version. And it still has some L3 cache.
And it has the full 4-wide pipeline of the desktop CPUs.
So although the Jaguar in the PS4 got a bigger jump in clocks, it started from a much lower performance point.

Another thing to consider is that AMD GPUs have a dedicated hardware instruction scheduler, while Nvidia GPUs have a simplified one, resulting in higher driver overhead for the CPU on PC.
Also, a console like the PS5 has an API that is lower level than even DX12 and Vulkan on PC.
And consoles don't have the gigantic amount of bloatware that Windows installs by default.

In spite of all that, the PS5 (and Xbox) this gen are struggling to get anywhere near maintaining 60fps in a lot of games. So 10%, while helpful, isn't going to come close to solving the problem.
 
In spite of all that, the PS5 (and Xbox) this gen are struggling to get anywhere near maintaining 60fps in a lot of games. So 10%, while helpful, isn't going to come close to solving the problem.

So, the only benefit is a 10% higher-clocked CPU? We literally have the perfect example between the XSS and XSX: identical CPUs, different memory/GPU configurations. That's like saying "oh, there'll be a marginal benefit by running the same game on the XSX".
 

Allandor

Member
The fastest gaming CPU (7800X3D) is just twice as fast as the PS5 Pro CPU (~3600).
[benchmark chart: 7800X3D vs. Ryzen 5 3600]

If a game runs below 30/40fps on the PS5 Pro (due to optimization and engine issues), 95% of PC gamers can't run the same game at 60fps.
It's not a 3600, as the 3600 has more IPC because of much more cache. A 3600 (or 3700) in the PS5 could do much more because of the low-level API. So it is hard to compare.
 
More power is always welcome in my book; bring it on.
I also wonder if the Pro could encourage developers to make better use of the Series X hardware, since the Pro and X both have a significant number of extra compute units compared to the PS5. The PS5 will still be the main target for developers, but the new Sony dev kits will encourage them to think about how they make use of the extra CUs.
 

GametimeUK

Member
This vid is like food critics, who can't even cook an egg themselves, saying the dish is going to be terrible based on just the raw ingredients they've seen.
So you're criticising their video? Where's your YouTube channel with a million subscribers?
 

Loxus

Member
Tensor cores are independent units in the SM. You can have the Tensor cores working on one thing while the general SM units work on another. In RDNA3, all AI functions are done by the CUs, even though those CUs can greatly accelerate them. On Nvidia you can do general FP32 while simultaneously doing INT8 on the Tensor cores; an RDNA3 CU can do FP32 or accelerate INT8, but not at the same time. RDNA4/PS5 Pro is unknown and can of course differ from RDNA3; AMD or Sony might have changed it.
That's where I think dual-issue comes in.
RDNA3/4 has 4 SIMD32 per CU.

Zenji Nishikawa's 3DGE: What has changed in the Radeon RX 7900 XTX/XT? Exploring the secrets of the Navi 31 generation, which achieved significant performance improvements.
In addition, the added SIMD32 unit does not have exactly the same functions as the original SIMD32 unit. The original SIMD32 unit is labeled "Float/INT/Matrix SIMD32" in the CU block diagram and handles integer vector and matrix operations. The added SIMD32 unit, on the other hand, is labeled "Float/Matrix SIMD32"; it does not support integer operations.
[RDNA3 CU block diagram from the 4Gamer article]


As the image shows, 2 SIMD32 can do (Float or Int) while the other 2 SIMD32 can only do (Float).

So the GPU can do Float and Int at the same time. And this explains why RDNA3 GPUs don't perform as their TFLOPS suggest.

My idea is that only the 2 SIMD32 (Float) are currently at play in RDNA3, and dual-issue comes into play when the AI Accelerators are used.
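For context, here is the paper-TFLOPS arithmetic that dual-issue doubles, using commonly rumored PS5 Pro figures (60 active CUs at ~2.18 GHz; both numbers are assumptions, not confirmed):

$$60\ \text{CUs} \times 64\ \text{lanes} \times 2\ \text{(FMA)} \times 2.18\ \text{GHz} \approx 16.7\ \text{TFLOPS single-issue}$$

Dual-issue doubles that to ~33.5 TFLOPS on paper, but only when instructions can actually be paired, which real shader code often doesn't allow; hence the rated TFLOPS overstating delivered performance.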
 

Elysium44

Banned
So, the only benefit is a 10% higher-clocked CPU? We literally have the perfect example between the XSS and XSX: identical CPUs, different memory/GPU configurations. That's like saying "oh, there'll be a marginal benefit by running the same game on the XSX".

In CPU-bound games that is the case on Xbox, which is why Starfield doesn't do 60fps on XSX.
 

winjer

Member
In spite of all that, the PS5 (and Xbox) this gen are struggling to get anywhere near maintaining 60fps in a lot of games. So 10%, while helpful, isn't going to come close to solving the problem.

In most cases, that is because of the GPU. Not so much the CPU.

Unlike what DF says, the closest comparison to the PS5 CPU would not be a 3600, but rather a 4650G.
This CPU is also Zen 2, but it has the L3 cache cut down to 8+8MB. That's not as low as the PS5 CPU's, but it's closer than the 16+16MB of the 3600.

There aren't many modern reviews, but take a look at these results in UE4 games. And mind you, several games are still being released now using UE4.
I'll post a few results, but there are more in the full review, for FC5, RDR2, GTA5, etc.
As you can see, this CPU has no trouble maintaining performance above 60fps. Quite often it's around 100fps.
Now consider that UE5.4 will bring big threading optimizations that spread the load across more cores, instead of loading just one or two main threads.


[benchmark charts: Ryzen 5 4650G results in several UE4 games]
 

Mr.Phoenix

Member
So PSSR is hardware based and there will be a PSSR core inside each of the 60 CUs on the GPU? Or did I understand that totally wrong in the DF video?

If so, does that mean Sony has potentially made something that can rival DLSS? I hope so; it'll certainly make IQ pop!!
From all indications, the PS5 Pro AIUs are built into each CU. And if so, it does mean that yes, Sony has something that can rival DLSS or, at worst, XeSS.
If you put a graphics card that is 50-60% better into any PC and don't upgrade the CPU, I guarantee you will get much better frame rates. I've seen it and done it time and time again.

Why people think this would be different on the PS5, I'm not sure 🤷‍♂️
Boggles the mind... but then again I am not surprised. No matter what was confirmed, there are people that would have latched onto any thread to dismiss the thing, no matter how little sense that makes. Reading through this thread, you would think that 95% of every game released thus far is grossly CPU-limited and only managing 30fps. Which is the only scenario in which not having upgraded the CPU would be considered a problem.

But that simply couldn't be further from the truth. It's more like 95% of games on the PS5 do have a performance mode, meaning the CPU is already able to handle game logic at 60fps. The only exceptions are games that arguably are just weirdly optimized and have issues on PC too. The reality of the matter is that more and more games ship with a 60fps mode on the base PS5, which means a CPU upgrade wouldn't have been needed for them even on the PS5 Pro.

But ah well...
The more I look into this, the more it looks like the bulk of all the upgrades will be due to the CU count increase and going for a wide GPU.
This is not true at all. For whatever reason, the "CU count" increase and going for a "wide GPU" only account for about a 1.45x increase in raster performance, even though it should be more like 1.6x... go figure.

The bulk of the upgrades will literally come down to things like the built-in AI hardware allowing for PSSR, the better RT hardware, giving devs more RAM, and having more bandwidth.

I am curious to know how you looked at this and arrived at such a wildly different conclusion.
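The back-of-envelope behind "should be more like 1.6x", assuming the rumored 60 vs 36 CU counts at broadly similar clocks:

$$\frac{60\ \text{CUs}}{36\ \text{CUs}} \approx 1.67\times\ \text{raw compute}$$

Wide GPUs rarely scale linearly (bandwidth, occupancy and fixed-function limits eat into it), which is one plausible reading of the quoted ~1.45x rendering figure.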

I can't wait to see this thing in action, because my brain can't work out how we can actually see any performance increase in games (outside of frame generation), only resolution and ray tracing, as surely the CPU is going to be eaten up even more handling the increased ray tracing or PSSR?
Now I don't know if you are being purposefully disingenuous or if you really just don't see it, lol. But I will indulge you...

There is only one scenario where all we'd see are things like better rez and RT: a game that has ONLY a 30fps mode on the base PS5. That would suggest the game is most likely not running higher than 33-35fps internally; it's CPU-bottlenecked.

What I find weird is that the scenario I just described is the exception, not the norm, looking at all the software released for the PS5 thus far. So I don't get why people here are making it look like the best the PS5 has managed is 30fps. Let's put this into perspective: even Alan Wake 2 and Pandora both have 60fps modes on the standard PS5. Hell, even Baldur's Gate 3 has one. In reality, the majority of games released have a 60fps mode. And that is just fact.

So this narrative that the biggest misstep on the planet is not having a better CPU is factually false. And why is this even important? Well, because if the game's logic can already manage 60fps on the PS5, then you will see exactly the kind of improvements the PS5 Pro is designed to give: a 60fps PSSR mode with IQ similar to or better than the standard 30fps fidelity mode.

The PS4 Pro CPU was 33% more powerful than the base console's; the PS5 Pro's is only 10% more. You don't need to be a master chef to understand the implications.
Read above.

In spite of all that, the PS5 (and Xbox) this gen are struggling to get anywhere near maintaining 60fps in a lot of games. So 10%, while helpful, isn't going to come close to solving the problem.
And you know the reason they are struggling to get 60fps is a CPU limit? Or do you just think it is?

This is the PS5 launch all over again. It's what I assume happens when spec evangelists see numbers or specs that just don't rhyme with what they expect them to be. Then, when you start seeing the thing perform, people are left scratching their heads and wondering how it's possible.

I am beginning to realize that a lot of posters here have no idea what kind of specs/stats to even look at to paint a picture of what to expect. And rather than ask questions, they just come out with some of the weirdest, most ignorant hot takes and call it a day, lol.
 

Schmendrick

Member
From all indications, the PS5 Pro AIUs are built into each CU. And if so, it does mean that yes, Sony has something that can rival DLSS or, at worst, XeSS.
Sorry, but this hyperbole can't be left uncommented.
Between having ML hardware acceleration and having really good ML hardware acceleration for a very specific model that does exactly what you want, at runtime, in exactly the quality that you want, lie a few years of R&D and quite a few billions of budget.

Nvidia isn't the technology leader because this stuff is easy to do...
So maybe let's wait until we see results before we praise Sony?
 
This is absolutely nonsense. Completely agree with Alex.

More power consumption just to throw out more pixels and better ray tracing, while keeping an already bottlenecked CPU.

I'm out.
Where does anyone state that the CPU is a bottleneck? Just because it's upped by only 10% doesn't mean the unaltered original CPU wouldn't have done the job.

Graphics are GPU intensive. The CPU, not as much.

I swapped the GPU in my PC, going from a 2070 to a 4070. The CPU is now 4 years old and the only component in my PC of that age.
It's not bottlenecking anything; it's almost sitting idle when gaming.

Just because it's old doesn't mean there isn't any room for more computing, ya know...
 

yamaci17

Member
Never said it was free. What I said is that it was always going to give you a net positive, especially when you are looking at the subject matter. In this case, that subject matter is a 10TF console vs a 16TF console. So it's not a direct apples-to-apples comparison. As in, you are not just trying to take 2160p@30fps on the ogPS5 and put that on another ogPS5. You are taking that and putting it on a PS5 Pro.

So right off the bat, you already are getting a performance advantage. Taking the 1.45x at face value, that means your 2160p@30fps PS5 game becomes a 43.5fps PS5 Pro game, just by running it on the PS5 Pro. Then you are dropping the rez... bla bla bla... I am sure you get the point.

And yes, game by game it will always be different... and as I said elsewhere (can't remember where), that has mostly to do with how the game's engine handles post-processing, or all rendering after the reconstruction pass. Some games are top-heavy, so most of the rendering is done pre-reconstruction at the lower rez; in these cases you would notice there isn't much of a hit/cost. Other games are bottom-heavy, with lots of post-processing done at the post-reconstruction output rez. As you can imagine... that would be expensive.

But all that work was not necessary; all you have proved is that this varies on a game-by-game basis. What you needed to do was take the work of Sony, Nvidia and AMD: all parties have literally spelt out exactly what DLSS, FSR and PSSR cost. How devs use that... is a different matter entirely.
net positive compared to native.

it won't be a net positive for uninformed console users like Slimysnake himself

he sees "720p" internal upscaled to 4k and thinks why console needs a "9x" pixel reduction to get 2x framerate. people go on to talk about how ps5 is a 720p console

native 4k 20 FPS
native 1440p 41 FPS
native 1080p 70 FPS
1440p dlss quality 68 FPS

4k dlss quality (internal 1440p) 34 FPS
4k dlss performance (internal 1080p) 50 FPS
4k dlss ultra performance (internal 720p) 70 FPS


then he goes on youtube, looks at native 1080p output benchmarks and says "this ps5-equivalent gpu can hit 60 fps at 1080p, so why is the ps5 going down to 720p!11". as you can see from the list, 720p upscaled to 4k is as heavy as running the game at native 1080p (and it also looks better than native 1080p), but the uninformed user will just think "oh, you're getting 60 fps at 720p"
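to put numbers on the list above: compare the modes with identical internal resolution and the difference is the roughly fixed cost of the upscale pass plus post-processing at the 4k output:

$$\frac{1000}{50} - \frac{1000}{70} \approx 20.0 - 14.3 = 5.7\ \text{ms} \quad \text{(4k dlss performance vs native 1080p)}$$
$$\frac{1000}{34} - \frac{1000}{41} \approx 29.4 - 24.4 = 5.0\ \text{ms} \quad \text{(4k dlss quality vs native 1440p)}$$

a consistent ~5-6 ms overhead, which is exactly why 4k dlss ultra performance (internal 720p) lands on the same 70 fps as native 1080p.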

it is unfortunate, but there's not much we can do about it. it is the duty of DF and the like to explain this to the masses, so that developers can target 4k upscaling instead of 1440p upscaling (which will be more blurry, no matter what)




this is the reason upscaling gets hate from a certain group of people: they see absurdly low internal resolutions and get angry. i don't know how we can overcome this

even on PC most people dismiss 4K gaming because they don't want to play at "internal 1080p" and would rather play proudly at "native 1440p". go figure
 
All these people with their "bla bla bla bla, I'm OUT" are the same people who were hyping up the 12TF, MOST POWERFUL CONSOLE EVER, FULL RDNA2, VRS and all the other Xbox nonsense, which translated into almost nothing!

Even today on the PC side these Zen 2 chips are still not bottlenecking. I'm running a 5800X3D and that thing sits near idle most of the time. Yes, it's Zen 3, but even 3700X CPUs aren't bottlenecking in PC games.
Same with my 4 year old i9. GPU is where it's at. CPU is great for Excel sheets.
 

Elysium44

Banned
Ok. Well, thank you for confirming that you don't. Carry on I guess.

It is common knowledge, and it is easy to find Digital Foundry and other experts saying so. We can also see for ourselves that the consoles frequently choke on games which run at 60fps easily on modest PC CPUs.
 
It is common knowledge, and it is easy to find Digital Foundry and other experts saying so. We can also see for ourselves that the consoles frequently choke on games which run at 60fps easily on modest PC CPUs.
No, it's not. You and DF are making assumptions which you confuse with common knowledge, since CPU load isn't measurable on the PS5. Not by you, not by DF.

Phoenix is right. 3D is GPU intensive. NOT CPU.

Now, if we were to go back to the gaming stone age, then yes. But hardware rendering has existed for roughly 30 years.
Before, say, 3dfx cards, everything was done by the CPU (software rendering). So all rendering was done based on instructions sent to the CPU.

But that was literally almost three decades ago, when the term GPU didn't even exist.

Hence the name.


Tell me this:

I swapped an old GPU for a new one. FPS and quality are MUCH better than before. Yet my CPU is still almost running idle when playing at native 4K, as it was when I played at 1440p with my old GPU.
CPU usage went up from 10% load during gaming to 12-15%, which leaves me with at least 85% headroom.

The CPU isn't an issue, but everyone starts crying when they see numbers they don't want to see. The CPU is and will be fine.
 

foamdino

Member
Because the game has to run on the lower spec console as well. The underlying code will have to be the same on the PS5 Pro as the base PS5. Just like the Series S dictates what the Series X can do.
Agreed; however, the additional capabilities can be put to use to make games run at higher framerates/resolutions on the Pro consoles - just as the X has higher frames/res compared to the S.
 

Radical_3d

Member
So either you and DF are stuck halfway through the 90's knowledge-wise, or it's a lack of it.
It's more a hyper-focus thing than a lack of knowledge. The problem is not the unoptimized games that only hit 30fps, like Starfield (we came from Jaguar, ffs); those are a tiny fraction of the games released nowadays. 95% of the issues in DF's own video analyses are that the 60fps mode is a blurry mess. That's what this console is going to fix. And yes, it's only going to sell among us, the very hardcore gamers. It's not worth risking compatibility issues and increasing the BOM of the Pro because devs with the same track record as Bethesda can't multi-thread. Everyone else is fine. The top 2 games in DF's best-graphics awards last year had 60fps modes on consoles, people.
 

mansoor1980

Gold Member
The Series S suffers from lower texture detail compared to the Series X;
the PS5 base and Pro will have the same texture detail settings.
 
Agreed; however, the additional capabilities can be put to use to make games run at higher framerates/resolutions on the Pro consoles - just as the X has higher frames/res compared to the S.
Yup, this can be done with config files which unlock the extra settings.
If HWID = consoleX then load configX.
Same as how the 60fps performance and 30fps quality modes currently work. It's a config file, nothing else.
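To make that concrete, a minimal C++ sketch of the idea (all names are hypothetical: ConsoleTier, detect_console_tier and the .cfg paths are made up, and a real console SDK would expose its own HWID query):

```cpp
#include <cstdio>
#include <string>

// Hypothetical tiers; real platforms report a hardware ID via their SDK.
enum class ConsoleTier { Base, Pro };

ConsoleTier detect_console_tier() {
    // Stub: a real build would query the platform SDK's HWID here.
    return ConsoleTier::Base;
}

// One settings file per tier; the performance/quality split works the same way.
std::string config_for(ConsoleTier tier) {
    switch (tier) {
        case ConsoleTier::Pro:  return "settings_pro.cfg";   // higher res/RT caps unlocked
        case ConsoleTier::Base: return "settings_base.cfg";  // conservative caps
    }
    return "settings_base.cfg";  // unreachable; keeps compilers happy
}

int main() {
    std::printf("loading %s\n", config_for(detect_console_tier()).c_str());
}
```

Point being: the tier-specific bits are data, not divergent code, so "the underlying code has to be the same" and "the Pro still unlocks higher settings" can both be true at once.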
 
It's more a hyper-focus thing than a lack of knowledge. The problem is not the unoptimized games that only hit 30fps, like Starfield (we came from Jaguar, ffs); those are a tiny fraction of the games released nowadays. 95% of the issues in DF's own video analyses are that the 60fps mode is a blurry mess. That's what this console is going to fix. And yes, it's only going to sell among us, the very hardcore gamers. It's not worth risking compatibility issues and increasing the BOM of the Pro because devs with the same track record as Bethesda can't multi-thread. Everyone else is fine. The top 2 games in DF's best-graphics awards last year had 60fps modes on consoles, people.
I see where you're going with this and I'll walk with ya.

People clutch at certain aspects, seeing numbers they don't like, but don't have the know-how to understand those numbers or how they will affect them.

"Only a 10% CPU increase. PS5 Pro BAD!!!" Ehhhh, no!? The PS5 CPU is fine, with extra headroom.
 