
Digital Foundry The PS5 GPU in PC Form? Radeon RX 6700 In-Depth - Console Equivalent PC Performance?

Bojji

Member
Here is me running Cyberpunk at 720p internal resolution using dlss performance at 1440p.

Notice the GPU usage at 87%. And yet the game is at 51 fps. Why? Clearly, the GPU isn't maxed out. You guys seem to think that CPUs can't be bottlenecked in the 50s and that CPU limits only come into play at 100+ fps. I can find you similar bottlenecks in Starfield and Star Wars. Hell, Avatar has a CPU benchmark built in and shows exactly how much the CPU is taxed during combat encounters. I could run the other two benchmarks at a locked 4K 60 fps using DLSS Quality, but the CPU benchmark was consistently in the 50s. And this CPU shits on most Zen 2 and Zen 3 CPUs in its class. The PS5 CPU is way worse than those Zen 2 and Zen 3 CPUs and is holding back the GPU.

3z0MiNb.jpg

This is what a CPU limit looks like: Starfield running at 1080p with DLSS set to Quality, which means 720p internal. I'm switching to Ultra Performance, so the resolution is ~360p or something, and the FPS stays the same:



If you are CPU limited, changing resolution won't change FPS.
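A toy model of why that is, assuming a fixed CPU cost per frame and a GPU cost that scales with pixel count (the numbers are made up for illustration, not measured from any game):

```python
# Toy frame-time model: a frame takes as long as the slower of the two stages.
# All numbers are made up for illustration, not measured from any game.

def fps(cpu_ms, gpu_ms_at_1440p, pixel_scale):
    gpu_ms = gpu_ms_at_1440p * pixel_scale  # GPU work scales roughly with pixel count
    return 1000.0 / max(cpu_ms, gpu_ms)     # CPU work does not scale with resolution

cpu_ms = 19.5        # ~51 fps worth of CPU work per frame
gpu_ms_1440p = 14.0  # the GPU could do ~71 fps at 1440p if the CPU kept up

for name, scale in [("1440p", 1.0), ("1080p", 0.5625), ("720p", 0.25)]:
    print(name, round(fps(cpu_ms, gpu_ms_1440p, scale), 1), "fps")
# Prints ~51.3 fps at every resolution -> CPU limited, dropping resolution changes nothing.
```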
 

yurinka

Member
Overall, the RX 6700 and PS5 are often a close match, with the RX 6700 typically pulling slightly ahead
System: i9-13900K+32GB DDR5 6000 MT/s
That CPU and memory aren't PS5 equivalents at all. He should use a similar CPU and memory instead.
 

Leonidas

Member
First of all, no one is buying a 6700. And secondly, like I said, the 6600 XT is a closer comparison.
It's selling some on the used market (eBay, etc.), and it was an actual product that some people bought.
The 6700 was used because its CU count is identical to the PS5's, as you already mentioned...

Good thing he also included GPUs that did sell/are selling.

That CPU and memory aren't PS5 equivalents at all. He should use a similar CPU and memory instead.
Weren't meant to be.

You can't put high-latency GDDR6 in a PC, unless you use the 4800S, which then cuts PCIe bandwidth down to a quarter.
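For a sense of scale, here's the rough link-bandwidth math; which exact link width and generation the 4800S kit exposes is an assumption here, the per-lane figures are just the standard PCIe 3.0/4.0 rates:

```python
# Rough PCIe link bandwidth. Per-lane rates are the standard spec figures after
# encoding overhead (PCIe 3.0 ~0.985 GB/s per lane, PCIe 4.0 ~1.97 GB/s per lane).
# Which link the 4800S kit actually exposes to the GPU is an assumption here;
# the point is simply what an x4 link gives up versus a full x16 slot.
PER_LANE_GBPS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (16, 4):
        print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")
# An x4 link has a quarter of the lanes, so roughly a quarter of the bandwidth.
```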
 

Gaiff

SBI’s Resident Gaslighter
First of all, no one is buying a 6700. And secondly, like I said, the 6600 XT is a closer comparison.
The 6600 XT is not closer, though. It only has 8GB, even lower bandwidth with a miserable 128-bit bus and 256GB/s, and fewer TMUs.

The 6700 has 10GB, closer to what the PS5 probably uses, and more bandwidth, though it still doesn't come quite up to the PS5's level, even with the Infinity Cache. It's definitely closer. Aside from the TFLOPs, the PS5 GPU is quite a bit ahead of the 6600 XT in every respect, whereas it has some advantages over the 6700 and the 6700 boasts other advantages over it.
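Rough bandwidth math behind those numbers, using the commonly listed specs (treat them as approximate):

```python
# GDDR6 bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
# Specs below are the commonly listed ones; treat them as approximate.
cards = {
    "RX 6600 XT": (128, 16),  # 128-bit @ 16 Gbps
    "RX 6700":    (160, 16),  # 160-bit @ 16 Gbps
    "PS5":        (256, 14),  # 256-bit @ 14 Gbps, shared with the CPU
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits // 8 * gbps} GB/s")
# RX 6600 XT: 256 GB/s, RX 6700: 320 GB/s, PS5: 448 GB/s
```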
 

sachos

Member
Man, I've been waiting for a video like this for ages! It would have been awesome to see that GPU paired with an equivalent CPU to the PS5, or a comparison of a $400 budget PC vs the PS5, or an $800 budget PC ($400 PS5 + ~7 years of PS Plus). I wish they did more of these kinds of tests.
 

yamaci17

Member
eh. this kind of metacommentary is not conducive to any good discussion. it's no different from accusing each other of console warring. i thought we were above this.

Besides, you of all people should know that those zen 1 and zen 2 ryzens are complete trash. you play most of your pc games at 30-40 fps. you and i have been in dozens of these threads where i'm saying my 3080 is giving me a locked 60 fps while others with zen 2 CPUs complain about unoptimized ports instead of simply realizing that the low-clocked zen 2 CPUs were not a good investment in the long run.

game after game after game, my i7-11700k has beaten its zen 2 counterparts. the ps5 cpu is even worse and acts more like a zen 1 cpu, similar to your 2700x and, according to rich's own analysis, much worse than even that 2700x. I had people asking me how i was running Cyberpunk Path Tracing at 60 fps even at 720p internal resolutions.

This thread is ignoring years of data we have on console 60 fps modes struggling to hit 60 fps, requiring severe downgrades to their resolutions. Those are clear indicators of CPU bottlenecks.


"60 fps requiring severe downgrades to their resolutions"

so this part makes no sense from a cpu bottleneck perspective. like, literally no sense.

ps5 might be cpu bottlenecked but if they keep reducing resolution even further despite an apparent CPU bottleneck, that's a huge developer issue. so i don't think df or any other reviewer has to account for that. i wouldn't account for that

here's one of dogtown's most cpu bound areas, with ray tracing at 1080p dlss quality and 720p dlss ultra performance (237p, ~100k pixels)

kbsAUmt.png

3w0Tixx.png



again you're bringing path tracing into the discussion (the cyberpunk path tracing 60 fps argument), which is actually a bit more CPU bound than regular ray tracing. The reason I target 40 fps is above: I get dangerously close to GPU bound at 1080p dlss quality. so anything targeting 1440p and ray tracing makes my framerate average towards 40 FPS. does it also help with a crap CPU? sure.

but would playing this game at 240p allow me to get a consistent 60 fps? no. it simply can't. it won't help. it won't help on console either.
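For anyone lost in the internal resolutions being thrown around in this thread, the DLSS presets just scale each axis by a fixed factor; a quick sketch (the exact in-game numbers can round a bit differently):

```python
# Approximate per-axis DLSS scale factors; games can round the result slightly differently.
PRESETS = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

def internal(width, height, preset):
    s = PRESETS[preset]
    return int(width * s), int(height * s)

print(internal(2560, 1440, "Performance"))       # (1280, 720) -> the "720p internal at 1440p" case
print(internal(1920, 1080, "Quality"))           # (1280, 720) -> 1080p + Quality = 720p internal
print(internal(1280, 720, "Ultra Performance"))  # (426, 240)  -> the ~237p / ~100k pixel case
```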
 

mrcroket

Member
so this part makes no sense from a cpu bottleneck perspective. like, literally no sense.

ps5 might be cpu bottlenecked but if they keep reducing resolution even further despite an apparent CPU bottleneck, that's a huge developer issue. so i don't think df or any other reviewer has to account for that. i wouldn't account for that

here's one of dogtown's most cpu bound areas, with ray tracing at 1080p dlss quality and 720p dlss ultra performance (237p, ~100k pixels)

kbsAUmt.png

3w0Tixx.png



again you're bringing path tracing into the discussion (the cyberpunk path tracing 60 fps argument), which is actually a bit more CPU bound than regular ray tracing. The reason I target 40 fps is above: I get dangerously close to GPU bound at 1080p dlss quality. so anything targeting 1440p and ray tracing makes my framerate average towards 40 FPS. does it also help with a crap CPU? sure.

but would playing this game at 240p allow me to get a consistent 60 fps? no. it simply can't. it won't help. it won't help on console either.
A bit off-topic, but heck, it's amazing how DLSS is able to transform an NES resolution into a decent-looking 720p image.
 


Summary coming later.

  • TLOU Part I at 4K/PS5 settings sees the PS5 average 36.2fps vs 27.9fps on the RX 6700, a 30% advantage (worked out in the note after this list)
  • That difference can balloon up to 43% in brief sequences
  • RX 6700 is VRAM-limited and Rich had to set texture streaming down to Fast from Fastest to go below the VRAM limit
  • Frontiers of Pandora has the RX 6700 around 10% ahead of the PS5 in Performance Mode
  • RTX 4060 is almost 19% faster than the PS5
  • RTX 2070S is 3% ahead of the PS5
  • RTX 3060 has 88% of the PS5's performance
  • Alan Wake 2 is 6% faster on the PS5 in Quality Mode
  • The RX 6700 is 11% faster than the PS5 in Performance Mode
  • RTX 4060 is 6% faster than the PS5 in Quality Mode, 7% in Performance Mode
  • RTX 2070S is equal to the PS5 in Quality Mode
  • RTX 3060 has 92% of the PS5's performance in Quality Mode. 96% in Performance Mode
  • Hitman 2 on the RX 6700 is 44% faster than on PS5 in RT Mode
  • Monster Hunter Rise has the RX 6700 outperform the PS5 by 32% in the 4K Mode
  • 6700 is 30% faster in the 2700p Mode
  • Cyberpunk 2077 RT Mode sees the PS5 pulling ahead of the RX 6700 by 45% in a GPU-limited scene
  • In Performance Mode, the RX 6700 using the same DRS windows as the PS5 is locked to 60fps whereas the PS5 can drop down to the low 40s
  • Rich suspects that DRS on PS5 is not working properly and is too slow to respond, as evidenced by the fps jumping from 51fps to 60fps in the same area
  • RX 6700 without DRS has performance in line with PS5 with DRS
  • A Plague Tale Requiem has them both average 36.5fps
  • RTX 2070S has 97.5% of the PS5's performance
  • RTX 4060 has 94% of the PS5's performance
  • RTX 3060 has 83% of the PS5's performance
  • Overall, the RX 6700 and PS5 are often a close match, with the RX 6700 typically pulling slightly ahead
  • System: i9-13900K+32GB DDR5 6000 MT/s
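A quick note on how those percentages fall out of the raw fps numbers; the TLOU figures are the only ones quoted above, so they serve as the worked example:

```python
# "X% faster / X% advantage" = (fps_a / fps_b - 1) * 100
# "has X% of the performance" = (fps_a / fps_b) * 100
ps5_fps, rx6700_fps = 36.2, 27.9  # TLOU Part I, 4K / PS5 settings (figures from the summary)

print(f"PS5 advantage: {(ps5_fps / rx6700_fps - 1) * 100:.0f}%")           # ~30%
print(f"RX 6700 relative performance: {rx6700_fps / ps5_fps * 100:.0f}%")  # ~77% of the PS5
```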

So he pairs a monster CPU in a GPU comparison, especially with the performance modes? Really wish there was a version of the PS5 with a 13900K.
 
ya exactly. normally people online and even on these comparisons generalize "PC" as if everyone has the $1500 version. comparisons that actually show more similar hardware are far more interesting
This hardware still doesn't seem similar when the CPU is nearly 3x the PS5's. It's not a stretch to imagine it has some effect, especially in the performance modes of these comparisons. Why not use the 3700X (which is still technically a bit above the PS5 CPU)?
 
Richard makes the observation that the PS5 does slightly better at 4K while the 6700 has a lead in the 60fps modes (at the 06:30 mark). This makes sense: the PS5 has the advantage of higher memory bandwidth, while the 6700 relies on Infinity Cache, which, as AMD themselves pointed out, has a higher miss rate at higher resolutions.
And… the fact that he’s using a 13900k in this comparison
 

Zuzu

Member
So it’s roughly equivalent to a 6700 on average. Pretty nice for 2020 hardware but obviously tech has moved on as it always does. Looking forward to seeing what the Pro offers.
 

Kataploom

Gold Member
What's Richard Leadbetter's obsession with these GPU to console comparisons? He keeps doing them and they're completely useless.
It's great to know whether your build is up to date enough and with which parts you can get console-equivalent performance, and believe me, mid-range PC players like knowing. There are many reasons to get a PC over a console and power isn't even the greatest one, but having enough power to run everything is good.
 

Kataploom

Gold Member
Like I said, just one test. This is all an academic exercise anyway. If the idea is to compare the PS5 GPU then there is no harm in downclocking to get a 1:1 comparison, especially since there is so much variance between games.

The CPU thing is simply inexcusable. I went back to the beginning of the video twice and he simply fails to mention it. Even if he chose the 3600, he should've mentioned it.
Nope, which kinda makes the whole video pointless. Doing a GPU comparison video without discussing the components used and the test methodology is amateur, ngl.
I don't see the problem, even low-end PCs have CPUs more powerful than the consoles'. If anything, PC can overcome some bottlenecks with brute force and you don't even need very expensive parts; any mid-range CPU (say, a 5600X) is miles ahead of the consoles and 32GB of RAM is dirt cheap and pretty standard already.

Even 16GB is mostly good for console-like settings (it wouldn't brute-force bad ports like TLOU or Jedi Survivor, though).

The problem is console hardware is very, VERY compromised on everything but the GPU, so much so that even low-end parts tend to be way better on PC; they had to meet a price target. PC is very powerful at anything CPU-related; even the RAM it uses is way better for CPU-related tasks than the memory consoles use, which is designed mostly for graphics.
 

Zathalus

Member
And… the fact that he’s using a 13900k in this comparison
Except Alan Wake 2 is not heavy on the CPU at all. A CPU similar to the one in the PS5 (3600) can do over 80fps. The XSX, having an almost identical CPU, manages to hold a stable 60fps as well.
 

Darius87

Member
An argument like 'the CPU doesn't matter' is so stupid and unfair even if it's true, but nothing is always 100% true. If it doesn't matter, then why didn't Richard use a CPU close to the PS5's? It should make no difference.
Also, the PS5 has Infinity Cache just in another form; it's called cache scrubbers.
 
they don't, but the GPUs have to work overtime, which is why they have to downgrade the resolution further than they otherwise would.

if uncharted 4 could run at 1080p 30 fps on the ps4 gpu then it should've been able to run at 720p 60 fps, but the modders had to drop the resolution all the way down to 540p just because the jaguar CPUs still couldn't run those games at 720p 60 fps. that's a quarter of the 1080p resolution. we are seeing the same thing this gen with all these native 4k 30 fps modes dropping resolutions all the way down to 1080p because the CPUs can't run them at 1440p, forcing the GPU to work overtime, which means downgrading the GPU load by reducing resolutions.
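the pixel math behind that, for reference (standard 16:9 resolutions assumed):

```python
# Pixel counts behind the "quarter of the resolution" point (standard 16:9 modes).
resolutions = {"1080p": (1920, 1080), "720p": (1280, 720), "540p": (960, 540)}
base = 1920 * 1080

for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px / 1e6:.2f} MP ({px / base:.0%} of 1080p)")
# 720p is ~44% of 1080p's pixels, 540p is exactly 25% -- the drop went well past
# what a purely GPU-side scaling argument would require.
```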
That's a very good point about how CPU limited consoles are when trying to reach 60fps.
It doesn't matter in GPU-limited scenarios
Most games on consoles will be CPU limited at 60fps and above. People do not realize how weak those laptop CPUs are. They are similar to Zen 1 when benchmarked against PC CPUs.
 

buenoblue

Member
What's Richard Leadbetter's obsession with these GPU to console comparisons? He keeps doing them and they're completely useless.
It's to get content and money from old hardware he bought that's sitting on a shelf🤷‍♂️.

As someone who still runs a 2070 super it's quite interesting to me though
 

Bojji

Member
An argument like 'the CPU doesn't matter' is so stupid and unfair even if it's true, but nothing is always 100% true. If it doesn't matter, then why didn't Richard use a CPU close to the PS5's? It should make no difference.
Also, the PS5 has Infinity Cache just in another form; it's called cache scrubbers.

It doesn't matter, as you said; the results would be the same. I don't care to be honest, and he could use a 3600 for example, but I don't get why some people here are so obsessed with this 13900K when it's just rendering as many frames as the 6700 is able to display.
 

Elysium44

Banned
It doesn't matter, as you said; the results would be the same. I don't care to be honest, and he could use a 3600 for example, but I don't get why some people here are so obsessed with this 13900K when it's just rendering as many frames as the 6700 is able to display.

You've answered your own question - it gives the 6700 an unfair advantage. It's why, when PC GPUs are reviewed, they are always tested on the same system with the same motherboard, CPU, and RAM, so there are no other variables to skew the results. You can say it doesn't matter because it's GPU limited; well, if Richard had confidence in that then he would have used a CPU as close to the PS5 as practicable, like a 3600. Better yet, not bothered at all, because the whole thing is stupid.
 

Bojji

Member
You've answered your own question - it gives the 6700 an unfair advantage. It's why, when PC GPUs are reviewed, they are always tested on the same system with the same motherboard, CPU, and RAM, so there are no other variables to skew the results. You can say it doesn't matter because it's GPU limited; well, if Richard had confidence in that then he would have used a CPU as close to the PS5 as practicable, like a 3600. Better yet, not bothered at all, because the whole thing is stupid.

It doesn't give any advantage. I will show you something: look at these tests with different CPUs, a 6700XT rendering Helldivers 2 at 1080p with a 14900K and a 3600X (it's 2% faster than the 3600):


z2O59DY.jpg
oe57AiY.jpg


Notice any difference? :pie_thinking:
 

Elysium44

Banned
We're going around in circles.

Btw Richard is being called out in the YouTube comments for the methodology failures in his comparisons, both for using the powerful CPU and also for clearly not running the games at equivalent settings. The whole thing is a mess and an embarrassment.
 

winjer

Gold Member
It doesn't give any advantage. I will show you something: look at these tests with different CPUs, a 6700XT rendering Helldivers 2 at 1080p with a 14900K and a 3600X (it's 2% faster than the 3600):


z2O59DY.jpg
oe57AiY.jpg


Notice any difference? :pie_thinking:

Although I have been guilty of also posting gamegpu tables, I have to admit their results are not that reliable.
A lot of people have pointed out issues with them, suggesting they probably only test a few configurations and then just scale the rest of the results with some percentage.
And this is one of those cases. I could understand that the 6700XT with a 3600 and a 14900K could have similar average fps.
But the 14900K should have much better 1% lows.
 

hinch7

Member
It doesn't give any advantage. I will show you something: look at these tests with different CPUs, a 6700XT rendering Helldivers 2 at 1080p with a 14900K and a 3600X (it's 2% faster than the 3600):


z2O59DY.jpg
oe57AiY.jpg


Notice any difference? :pie_thinking:
The issue is when you're not so heavily GPU bound and there's an insane gulf in performance between a heavily gimped Zen 2 and a 14th-gen Intel with much higher clock speeds and IPC. That enables the GPU to breathe.

Daft comparison either way, since Rich does have a 4800S, which is literally the CPU+GDDR6 used in the Series consoles and a closer match for the PS5. It also doesn't factor in APIs, drivers, builds, etc., but that goes without saying. But yeah, if the PS5 GPU is being bottlenecked in any way, it would be fairer if the 6700 were matched with a similar system.
 

Bojji

Member
Although I have been guilty of also posting gamegpu tables, I have to admit their results are not that reliable.
A lot of people have pointed out issues with them, suggesting they probably only test a few configurations and then just scale the rest of the results with some percentage.
And this is one of those cases. I could understand that the 6700XT with a 3600 and a 14900K could have similar average fps.
But the 14900K should have much better 1% lows.

1% lows can be on the CPU OR the GPU; when a game has CPU headroom, the 1% lows are entirely on the GPU. As you can see, WHEN CPU limited, the 1% lows are 114FPS for the fastest AMD cards, so why should the 6700XT be affected by the CPU in any way?

CPU limit looks like this:

1440p native:

pZ3bgY4.jpeg


853x480 (DLSS UP):

MKVHJ4p.png


Many people don't understand CPU- and GPU-limited scenarios; that's why there is so much confusion in this thread, plus the usual PS fanboys.
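To illustrate the point, here is a rough sketch of how 1% lows work: they're just the worst slice of the frame times, and whichever stage produces those worst frames owns the number. The figures below are synthetic, not from any game:

```python
import random

# Synthetic frame times: each frame costs whichever stage is slower.
# The CPU has lots of headroom here, so the worst frames all come from the GPU.
random.seed(1)
frame_ms = [max(random.uniform(5.0, 7.0),      # CPU time per frame
                random.uniform(12.0, 18.0))    # GPU time per frame
            for _ in range(5000)]

frame_ms.sort(reverse=True)                    # worst frames first
worst = frame_ms[: len(frame_ms) // 100]       # the slowest 1% of frames
print(f"1% low: {1000.0 / (sum(worst) / len(worst)):.1f} fps")
# One common way to compute it; the result here is set entirely by the GPU,
# so a faster CPU would not move this number.
```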
 

winjer

Gold Member
1% lows are not exactly always CPU dependent; when a game has CPU headroom, the 1% lows are entirely on the GPU. As you can see, WHEN CPU limited, the 1% lows are 114FPS for the fastest AMD cards, so why should the 6700XT be affected by the CPU in any way?

CPU limit looks like this:

1440p native:

pZ3bgY4.jpeg


853x480 (DLSS UP):

MKVHJ4p.png


Many people don't understand CPU- and GPU-limited scenarios; that's why there is so much confusion in this thread, plus the usual PS fanboys.

A couple of years ago, when I switched from a 3700X to a 5800X3D, and I was still using a 2070S, the averages were similar. Just a tad higher on the 5800X3D.
But the immediate difference I noticed was that games ran smoother. Much smoother. The 1% lows were vastly improved.
And it's not the first time I noticed that. I remember when I switched from a Q9550 to a 2600K, the difference in smoothness was night and day.
You are right in saying that the CPU is not the only thing affecting the 1% lows, as vram and pcie bus, and other things can also play a part.
But a 14900k with DDR5 should have much greater 1% lows than a 3600 with DDR4. Even with a GPU like a 6700XT.
 

Bojji

Member
A couple of years ago, when I switched from a 3700X to a 5800X3D, and I was still using a 2070S, the averages were similar. Just a tad higher on the 5800X3D.
But the immediate difference I noticed was that games ran smoother. Much smoother. The 1% lows were vastly improved.
And it's not the first time I noticed that. I remember when I switched from a Q9550 to a 2600K, the difference in smoothness was night and day.
You are right in saying that the CPU is not the only thing affecting the 1% lows, as vram and pcie bus, and other things can also play a part.
But a 14900k with DDR5 should have much greater 1% lows than a 3600 with DDR4. Even with a GPU like a 6700XT.

It's totally game dependent: CPU-heavy games will see big differences, but when the GPU is not strong enough to show them, there should be no difference. Every game can also have places that murder CPUs and others that are GPU limited.

Look at 5700XT here with Zen 1 to Zen 3:

WD-Ultra-1080p.png
ACV-Ultra-1080p.png
CP-Ultra-1080p.png
RSS-Ultra-1080p.png
HZD-Ultra-1080p.png
SotTR-Ultra-1080p.png


and average:

Avg-Ultra-1080p.png
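For what it's worth, an "average" chart like that is just the per-game results rolled into one number (a geometric mean is a common choice); when most of those games are GPU limited, the aggregate barely moves between CPUs. A sketch with made-up numbers:

```python
from math import prod

# Made-up per-game fps for two CPUs driving the same GPU-limited card; not real benchmark data.
fps = {
    "Slower CPU": [61, 72, 58, 90, 66],
    "Faster CPU": [63, 71, 60, 93, 65],
}

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

for cpu, results in fps.items():
    print(f"{cpu}: geomean {geomean(results):.1f} fps")
# When the GPU sets the frame rate in most titles, the aggregate barely moves.
```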
 

Gaiff

SBI’s Resident Gaslighter
I think we’re all guilty of oversimplifying the bottleneck.

As Bojji showed, it's unlikely to be a CPU issue but there is definitely something else at play. It's not just a simple problem of high resolution = GPU bottleneck and vice versa. It's a whole system working together and there are instances, such as in Hogwarts Legacy before the patches, where even lowering the resolution to 1080p on a 4090 for some reason led to lower fps than on a 7900 XTX despite both of them using the exact same configuration.

It could be a myriad of things; memory latency, PCIe getting bogged down, a streaming issue, driver overhead, or perhaps something causing a CPU stall. Who knows?

Whatever the case, I think this shows that multiplatform games on consoles are nowhere near as optimized as we were led to believe (despite the insistence that the PS5 is the lead platform). That the PS5 starts losing ground at less than 60fps is very puzzling when in many of those games, even cheap-ass CPUs have no trouble whatsoever at those performance points.

As I said before, my only real point of contention is the Hitman benchmark that could be a CPU bottleneck but I browsed beyond3d and the guys there don’t seem to think this is the case there either.

Richard could have used a 4800S but no one in the real world uses that desktop kit, and even assuming it caused the 6700 to drop in performance, then it wouldn’t be any more useful as a data point than the current ones since this is a GPU test and shifting the bottleneck to a different component would be just as pointless. It still wouldn’t be representative of the 6700’s performance which is what Rich aimed to do.

Still not a big fan of the souped-up config he has because this is just as unrealistic. Nobody pairs a 13900K with a 6700. Ultimately, I think to avoid controversy and display a proper methodology, Rich should have used a CPU people are likely to actually pair with the 6700 such as a 3600, 3700X, or 12400F. Now, I know that on PC we use top-tier CPUs and components to isolate the bottleneck to the GPU in GPU tests, but this is a console. It's not possible to replicate the exact same configuration devoid of potential bottlenecks. Doesn't mean you should force artificial bottlenecks either by throwing in a completely unrealistic desktop kit that isn't even available for purchase in most of the world for customers.

At this point, I don’t think we can make the results fair. I do think we can make them realistic though.
 

hinch7

Member
I think we’re all guilty of oversimplifying the bottleneck.

As Bojji showed, it's unlikely to be a CPU issue but there is definitely something else at play. It's not just a simple problem of high resolution = GPU bottleneck and vice versa. It's a whole system working together and there are instances, such as in Hogwarts Legacy before the patches, where even lowering the resolution to 1080p on a 4090 for some reason led to lower fps than on a 7900 XTX despite both of them using the exact same configuration.

It could be a myriad of things; memory latency, PCIe getting bogged down, a streaming issue, driver overhead, or perhaps something causing a CPU stall. Who knows?

Whatever the case, I think this shows that multiplatform games on consoles are nowhere near as optimized as we were led to believe (despite the insistence that the PS5 is the lead platform). That the PS5 starts losing ground at less than 60fps is very puzzling when in many of those games, even cheap-ass CPUs have no trouble whatsoever at those performance points.

As I said before, my only real point of contention is the Hitman benchmark that could be a CPU bottleneck but I browsed beyond3d and the guys there don’t seem to think this is the case there either.

Richard could have used a 4800S but no one in the real world uses that desktop kit, and even assuming it caused the 6700 to drop in performance, then it wouldn’t be any more useful as a data point than the current ones since this is a GPU test and shifting the bottleneck to a different component would be just as pointless. It still wouldn’t be representative of the 6700’s performance which is what Rich aimed to do.

Still not a big fan of the souped-up config he has because this is just as unrealistic. Nobody pairs a 13900K with a 6700. Ultimately, I think to avoid controversy and display a proper methodology, Rich should have used a CPU people are likely to actually pair with the 6700 such as a 3600, 3700X, or 12400F. Now, I know that on PC we use top-tier CPUs and components to isolate the bottleneck to the GPU in GPU tests, but this is a console. It's not possible to replicate the exact same configuration devoid of potential bottlenecks. Doesn't mean you should force artificial bottlenecks either by throwing in a completely unrealistic desktop kit that isn't even available for purchase in most of the world for customers.
In any case both GPUs - the 6700 and the PS5 - are quite similar in performance, as the specs suggest and the results show. The only real outliers are TLOU, which is a poor port, and RT in CP2077. That can be down to bespoke hardware and software solutions, optimisations, APIs, VRAM, etc.

Since most tests were heavily GPU bound, changing the CPU wouldn't make a big difference in the averages.
 

Kenpachii

Member
To see what a GPU can theoretically push at maximum, you put the best CPU together with it.
To see what an equally specced PC does versus a PS5, you pick equal parts, but this is going to be nonsense anyway, as you can't find a cheap 16GB GPU around PS5 specs and you can't find a PC that runs without system RAM. Let alone find a game that runs at the exact same settings as its PC counterpart. Most games use lower-than-low settings on the PS5.

So the whole idea of building a PC with the same hardware as a PS5 is kind of pointless to start with, and nobody on PC should aim for PS5-spec hardware anyway, because PC has different demands.

For example, I won't use a PC without 2 monitors, so having more CPU power than games require is a big thing for me. I play a ton of RTS games, so spending more on the CPU is more interesting anyway, especially as GPUs can be replaced more easily down the road.

PC has tech such as frame generation, which doubles GPU and CPU performance, and DLSS, which increases image quality without having to deal with the upscaling problems consoles have that lower quality considerably.

Also, resolution is far different on PC; nobody plays a 4K 30 fps game on a PC over a lower resolution at 60+ fps.

Nobody with a 4060, as an example, will play Cyberpunk at 1080p on medium, console-like settings.

Also, nobody buys an AMD GPU over an Nvidia one when they are miles behind on feature sets and frame-gen performance in games, or if they care about the image quality and performance of DLSS. You've got to be a complete idiot to do so.

If your budget is 500 bucks, buy a console; if your budget is 1000 bucks, start looking at PCs.
 

IDWhite

Member
In any case both GPUs - the 6700 and the PS5 - are quite similar in performance, as the specs suggest and the results show. The only real outliers are TLOU, which is a poor port, and RT in CP2077. That can be down to bespoke hardware and software solutions, APIs, VRAM, etc.

TLOU Part I isn't an outlier; it's what happens when you have highly optimized code on console. On PC you can't reach that performance efficiency, even with DX12 Ultimate, when low-level code comes into play.
 

Bojji

Member
TLOU Part I isn't an outlier; it's what happens when you have highly optimized code on console. On PC you can't reach that performance efficiency, even with DX12 Ultimate, when low-level code comes into play.

When 1 out of 10 games (just an example, not related to the posted tests) shows that X =/= Y while the other 9 show that X = Y, which is more probable?:

- Only one game out of 10 is optimized on X
- One game out of 10 is not optimized on Y
 

IDWhite

Member
When 1 out of 10 games (just an example, not related to the posted tests) shows that X =/= Y while the other 9 show that X = Y, which is more probable?:

- Only one game out of 10 is optimized on X
- One game out of 10 is not optimized on Y

We aren't talking about probability. It's what happens in game development right now.

The reality is that only a small fraction of devs take the time to write efficient code for closed hardware (consoles). And most of them are first-party studios.
 

Bojji

Member
We aren't talking about probability. It's what happens in game development right now.

The reality is that only a small fraction of devs take the time to write efficient code for closed hardware (consoles). And most of them are first-party studios.

You can also say that, based on the number of issues this game had since day one on PC, they really didn't put much love into optimizing it for PC. There is no good reason for a console game to perform much worse on equivalent PC hardware with Vulkan and DX12, outside of maybe heavy use of the decompression hardware inside the PS5 (but that level of constant streaming is rarely needed).

It was ND's first game on PC, so maybe they just fucked up...
 

peish

Member
  • System: i9-13900K+32GB DDR5 6000 MT/s

This is nonsense

The PS5 uses a crappy Zen 2 CPU; Zen 3, with its improved L3 cache and turbo boost, crushed Zen 2 in gaming.
 

yamaci17

Member
When 1 out of 10 games (just an example, not related to the posted tests) shows that X =/= Y while the other 9 show that X = Y, which is more probable?:

- Only one game out of 10 is optimized on X
- One game out of 10 is not optimized on Y
last of us part 1 on pc uses crazy amounts of shared vram, even if you have plenty of vram free. this causes huge slowdowns and stalls on the GPU as a result. it is a broken game with no proper texture streaming that instead relies on shared vram. but the funny thing is, even if you have enough free vram, the game will still heavily rely on shared vram, which will still cause performance stalls. it has nothing to do with console optimization;

even at low settings at 1080p, the game uses extreme amounts of shared vram, EVEN if you have plentiful free vram. which means this idiotic design choice affects all GPUs ranging from 12 GB to 24 GB as well. notice how the game has free DXGI budget to use but instead decides to use A LOT of shared vram, which is uncalled for.


KX31Yrc.png


it is a design failure that affects all GPUs right now. of course DF is not aware of it (but they should be). technically it should not tap into shared vram when there's free vram to work with. technically no game should ever hit shared vram anyway; they should intelligently stream textures like a modern engine should. but their engine is hard coded to offload a huge amount of data to shared vram no matter what, which causes stalls, because this is NOT how you want your games to run on PC

a proper game with a proper texture streamer, like hogwarts or avatar or some other actual modern engine, won't have weird issues like this

cyberpunk in comparison:

see how the game uses all its available DXGI budget as it should, and just uses a small amount of shared memory (for mundane stuff, unlike in last of us)
oIpzYk4.png


YI9lmGP.png


even the stupid hogwarts legacy port is better than tlou in vram management

ZmDZi0r.png


I've literally seen no other game that uses more than 1 GB of shared memory, let alone 4 GB of shared memory. it just makes the GPU stall and wait for shared VRAM calls all the time, which explains the "outlier" performance difference.
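a crude model of why spilling into shared memory hurts so much; the bandwidth figures below are assumed round numbers, not measurements, and real behaviour depends on access patterns, but the shape is the point:

```python
# Crude effective-bandwidth model for a working set partly spilled to system RAM.
# Assumed round numbers: ~320 GB/s local VRAM (6700-class), ~16 GB/s over PCIe.
VRAM_BW, PCIE_BW = 320.0, 16.0  # GB/s

def effective_bw(frac_spilled):
    # time to touch 1 GB of working set, weighted by where it lives
    t = (1 - frac_spilled) / VRAM_BW + frac_spilled / PCIE_BW
    return 1 / t

for spill in (0.0, 0.1, 0.3, 0.5):
    print(f"{spill:.0%} spilled -> ~{effective_bw(spill):.0f} GB/s effective")
# Even a modest spill drags effective bandwidth down hard, which shows up as GPU stalls.
```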

also in this example, notice how most games can only use 6.8-7 GB of DXGI budget on an 8 GB GPU. In most cases games will only be able to utilize around 87-90% of your total VRAM as DXGI budget. For a 10 GB GPU, that becomes 8.7-9 GB of usable DXGI budget, which is still far below what the PS5 can offer to games, ranging from 10 GB up to 13.5 GB (out of 13.5 GB).

console CPU-bound RAM usage is almost trivial compared to PC in most cases.

For example, horizon forbidden west runs the same physics and simulations on PS4 and PS5. On PS4, the game most likely uses 1 GB or 1.5 GB of CPU-bound data. Considering both versions function the same, it is most likely the game is also using 1-1.5 GB of CPU-bound data on PS5 as well. That gives horizon forbidden west 12 or 12.5 GB of GPU-bound memory to work with, pure memory budget that is not interrupted by anything else. So in such cases even a 12 GB desktop GPU can run into VRAM-bound situations, because even 12 GB GPUs will often have only 10.4-10.7 GB of usable DXGI budget for games. it is how PS5 mainly offers much higher resolution and better texture consistency over PS4 in HFW: it has insane amounts of GPU memory available to it on PS5. This game will have issues even on 12 GB GPUs when it launches on PC at 4K/high quality textures, and remember my words when it happens.
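spelling out the budget math from above (the ~88% usable figure and the PS5 split are my rough estimates, not hard numbers):

```python
# Usable DXGI budget vs. what the PS5 can hand to a game, using the estimates
# from the post above (not measured values).
USABLE_FRACTION = 0.88   # ~87-90% of physical VRAM typically shows up as DXGI budget
PS5_GAME_MEMORY = 13.5   # GB available to a game on PS5
PS5_CPU_SIDE    = 1.5    # assumed CPU-side data, the rest is free for GPU work

for vram_gb in (8, 10, 12, 16):
    print(f"{vram_gb} GB card -> ~{vram_gb * USABLE_FRACTION:.1f} GB usable DXGI budget")

print(f"PS5 -> up to ~{PS5_GAME_MEMORY - PS5_CPU_SIDE:.1f} GB for GPU-side data")
# ~7.0 / 8.8 / 10.6 / 14.1 GB -- only the 16 GB card clearly covers a ~12 GB case.
```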

it is why the "this gpu has 10 gb, ps5 usually allocates 10 gb of gpu memory to games so it is fair" argument is not what it seems. it is not fair, and it can't be. it is a PC spec issue (just like how the PS5's CPU is a PS5 spec issue). Comparisons made this way will eventually land in the same place as the 2 GB hd 7870 vs. PS4 comparisons, where the 7870 buckled DUE TO the VRAM buffer not being enough for 2018+ games (not a chip limitation, but a huge vram limitation). The same will happen, at a smaller intensity, for all 8-10 GB GPUs. Maybe a 12 GB GPU can get away from being limited by this most of the time, as long as games use around 10 GB of GPU memory on PS5. But if a game uses 11-12 GB of GPU memory on PS5 like HFW (as i theorized), even the 12 GB GPU will have issues, or weird performance scaling compared to console.

so don't look into it too much. in this comparison, the ps5 is gimped by its cpu and the 6700 is gimped by its VRAM, bus and some other factors. you literally can't have a 1 to 1 comparison between a 6700 and a PS5. Ideally you would ensure the GPU has a free 13.5 GB of usable DXGI memory budget, because you can never know how much the game uses for GPU-bound operations on PS5. you can guess and say it is 10 GB or 11 GB, but it will vary wildly from game to game. Which is why trying to match everything at some point becomes pointless. Ideally I'd compare the 4060 Ti 16 GB to the PS5 throughout the whole gen to ensure no VRAM-related stuff gets in the way.
 

SlimySnake

Flashless at the Golden Globes
lol can we at least agree that the PS5 is CPU bottlenecked in this scenario?

3uyAvGS.jpg


Leave it to Richard to pair a $550 card with a $600 CPU. I'm sure gamers everywhere are running this setup.
 

IDWhite

Member
You can also say that, based on the number of issues this game had since day one on PC, they really didn't put much love into optimizing it for PC. There is no good reason for a console game to perform much worse on equivalent PC hardware with Vulkan and DX12, outside of maybe heavy use of the decompression hardware inside the PS5 (but that level of constant streaming is rarely needed).

It was ND's first game on PC, so maybe they just fucked up...

Again, Vulkan and Dx12 aren't comparable to PS APIs.

The same goes for hardware architecture. It isn't only the GPU and CPU; you have to take the whole system into account. And the PS5 has unique hardware features for data flow through the system that are impossible to emulate with the same performance on "equivalent" PC hardware, just like the performance of full RT hardware acceleration on PC isn't possible to emulate on consoles.

What happens is that most devs only make a general-purpose engine with a common feature set on both consoles and PC, and this is the reason why most games on consoles perform close to PC parts under the same conditions.
 

yamaci17

Member
lol can we at least agree that the PS5 is CPU bottlenecked in this scenario?

3uyAvGS.jpg


Leave it to Richard to pair a $550 card with a $600 CPU. I'm sure gamers everywhere are running this setup.
let's agree; but why should I care? a cheap i5 13400f will have similar performance in those scenes. you're purposefully avoiding the points I made, while trying to portray things as extreme by constantly making remarks about the "600 bucks cpu"

the CPU you people keep obsessing over won't make any meaningful difference over something like a 13400f/14400f, which is much, much cheaper.

this 200 bucks cpu roflstomps this game



and it's the minimum cpu you should pair a 4070-and-above GPU with. and that's the bottom line. i also wish they had done the test with a 13400f myself, BUT JUST SO that you wouldn't be able to keep lamenting about the mythical "600+ bucks CPU" that some of you try to portray as the only way to get these GPUs working at high framerates

ps5 is bottlenecked or not, a 200 bucks run-of-the-mill midrange i5 CPU is able to keep up with a GPU of the caliber of a 7800xt/4070 in this game, which is a pretty cheap CPU by itself. 13900k or 13400f won't matter for a 4070/7800xt class of GPU. 10400f to 13400f will matter, but 13400f to 13900k won't really matter here. it is all about IPC/single thread performance, which new generation CPUs have plenty of.

why should I be mad about DF using a 13900k, a 600 bucks CPU, when I can get 80-90% of the same performance for 200 bucks with an i5????
sure they could do it but it would be a chore. Rich deep down knows that these tests would have almost the same outcome were he to use a 13400f. so why should he even bother, lmao. anyone who pairs a 4070 or above with a CPU worse than a 13400f is doing themselves a disservice anyway. that 13400f will also easily destroy the much more expensive 12900k or 11900k. it is all about single thread performance and it is really easy to access, unlike flagship CPUs that cost a lot.

13900k's price doesn't even make sense.
 

Gaiff

SBI’s Resident Gaslighter
lol can we at least agree that the PS5 is CPU bottlenecked in this scenario?

3uyAvGS.jpg


Leave it to Richard to pair a $550 card with a $600 CPU. I'm sure gamers everywhere are running this setup.
Possibly but that'd be pathetic. Look at how it runs with old-ass CPUs and slow memory. Doesn't seem particularly CPU-intensive. On a 4090 by the way.

D6JMqGm.png
 

FoxMcChief

Gold Member
What's Richard Leadbetter's obsession with these GPU to console comparisons? He keeps doing them and they're completely useless.
I was just thinking that it would be a fun job to do tests that mean absolutely nothing to absolutely nobody.
 

SlimySnake

Flashless at the Golden Globes
it is all about IPC/single thread performance, which new generation CPUs have plenty of.
Yes, I believe I linked the tweet of that Avatar tech director who said the same thing. Now tell me if comparing a 6.0 GHz CPU with a 3.5 GHz console CPU would not make a difference even at 50 fps instead of 100.
 