
Digital Foundry: Deathloop PC vs PS5, Optimised Settings, Performance Testing + More

martino

Member
I do not assume, see my post above. I am using 2 specs that are popular within the Steam survey and that I have access to. The 5600XT was a machine that I built after the 2070; I currently have another build that I will have ready to use soon. Should put a smile on some people's faces, I am sure.
No, with your available hardware you're not opting for the more representative combination if we go by the Steam survey.
I know it's so hard to switch GPUs... but can you make the effort to pair the 2070 with the 3600 next time?
 

Kenpachii

Member
NX Gamer has recently uploaded a video which includes a comparison between the PS5 and his overclocked RTX 2070.

At approximately the 7:35 minute mark he begins a statement where he says that the PS5 performs approximately 30% better than the 2070 at equivalent quality settings when the character is standing still, and between 28-42% better when the character is moving. At approximately the 10:53 minute mark he begins a comparison between the 2070 running the equivalent ray traced settings to the PS5 and concludes that the PS5 is outperforming the 2070 by up to 20% in its ray tracing mode.

This is obviously a different conclusion to Alex's video. I know there's been debate on this forum regarding the possibility of NX Gamer's cpu being a bottleneck to his RTX 2070. But would this really be the case for the RTX mode where both machines are dropping below 30fps? That seems to be a GPU bottleneck. This seems to provide some strong evidence that the PS5 is more powerful than an overclocked RTX 2070.

NX Gamer's video is here:



NX Gamer's PC comparisons are useless. Dude has no clue what he's doing. Stop linking his PC videos so we can keep our sanity.

At approximately the 10:53 minute mark he begins a comparison between the 2070 running the equivalent ray traced settings to the PS5 and concludes that the PS5 is outperforming the 2070

  • PS5 RT is lower than the lowest PC setting



I ran the demo myself, you don't have to explain that to me. The Ancient demo scene wasn't more impressive than the PS5 demo, and my fps was higher in editor mode. You could mess with the environment, which is where the RAM usage skyrockets.

When I ran the playable demo, the fps was a bit lower since the scene's geometry and texture detail changed, VFX were added on top of it, and you had a playable character.

I'm explaining the reality of things. What you find more impressive is great for you, but it isn't relevant when you compare the PC and PS5 versions.



  • at launch, stutter comes from botched mouse movement
  • because of this, stutter is apparent with VRR
  • lock your framerate
  • at first launch there's a consistent (actual) stutter; any new launch removes it
  • beta patch "juliashotme" fixes the mouse stutter, but does not work on non-60/120fps refreshes yet
  • to maintain a target framerate you need a lot of GPU headroom, so dynamic resolution is recommended
  • performance dynamic res aggressively scales the image to maintain framerate (see the sketch after this list)
  • optimized settings are essentially PS5 settings
  • PS5 uses balanced ambient occlusion
  • PS5 model detail is at high
  • PS5 water detail uses the very high setting
  • PS5 uses medium motion blur
  • shadows should be high or ultra
  • very high terrain
  • ultra decals (affects bullet holes)
  • optimized settings increase performance 31% over ultra settings
  • in visual quality mode, PS5 performance sits between a 2060S and 2070S / 10% higher than a 5700
  • PS5 RT is lower than the lowest PC setting
  • a 2060S can produce a locked 30fps and better, less blurry IQ than PS5's RT mode
  • 3080 and 6800XT have the same performance, rasterized
  • RT adds 7.2ms of render time for the 3080, 11.2ms for the 6800
  • performance RTAO is the better option
  • PS5 uses high textures
  • going over budget does not cause stuttering, just low-res textures closer to the camera
  • performance scales with bandwidth
  • 4.5% performance loss for v.high vs low on a 2070S
  • v.high textures are worth it
  • Ryzen 3600/2060S can do dynamic 4K/60 (raster only)
  • 3080 can do native 4K/~70 (raster only)
  • with RT, dynamic 1440p/60
  • with RT, 3080 dynamic 4K/60
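If you're curious how that dynamic-resolution behaviour typically works under the hood, here's a minimal sketch of a scale controller (illustrative only, not Arkane's actual code; the 60fps target, step sizes, and bounds are assumptions):

```python
# Minimal sketch of a dynamic-resolution controller. Illustrative only,
# not Deathloop's actual code; target, step sizes, and bounds are assumptions.

TARGET_MS = 1000.0 / 60.0          # 16.7 ms frame budget for 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0    # render-resolution scale limits

def update_resolution_scale(scale: float, gpu_frame_ms: float) -> float:
    """Nudge the render-resolution scale so the GPU stays inside the
    frame budget with some headroom (per the notes above, you need
    GPU headroom to hold a target framerate)."""
    headroom = TARGET_MS * 0.9     # aim below the budget, not exactly at it
    if gpu_frame_ms > headroom:
        scale -= 0.05              # over budget: render fewer pixels
    elif gpu_frame_ms < headroom * 0.8:
        scale += 0.02              # plenty of spare time: sharpen the image
    return max(MIN_SCALE, min(MAX_SCALE, scale))

# Example: RT adds ~7.2 ms on a 3080 (per the notes), which can push a
# frame over budget and force the scale down for a few frames.
scale = 1.0
for frame_ms in (14.0, 15.5, 18.9, 19.5, 16.0, 13.0):
    scale = update_resolution_scale(scale, frame_ms)
    print(f"frame {frame_ms:4.1f} ms -> scale {scale:.2f}")
```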



Nice video.

However, imagine locking the framerate to 60fps on PC and focusing on resolution in a shooter. Straight up useless.
 
Last edited:

Snake29

RSI Employee of the Year
NX Gamer's PC comparisons are useless. Dude has no clue what he's doing. Stop linking his PC videos so we can keep our sanity.



  • PS5 RT is lower than the lowest PC setting





I'm explaining the reality of things. What you find more impressive is great for you, but it isn't relevant when you compare the PC and PS5 versions.




Nice video.

However, imagine locking the framerate to 60fps on PC and focusing on resolution in a shooter. Straight up useless.

I made my point in my comment. The reality is that I used the engine, so I know exactly what I'm talking about and what my output was.
 

hlm666

Member
What's with all this billions-of-triangles talk? I'm pretty confident the PS5 has never had and will never have billions of triangles in a scene (frame).
He doesn't understand that in the UE5 demo he keeps referencing, the 16 billion triangles are the models in memory, but it's only rendering like 180k at most; that's the whole point of Nanite, but he just didn't get it. You can link him videos on YouTube of guys throwing 50 billion polygons of instanced model data into memory and say it's rendering 50 billion triangles if you want, but I think you're better off just moving on.

edit: meant 180 million.
 
Last edited:

Loxus

Member
Because these are the same frustrated people in the UE5 thread who keep spamming with :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: ... but have no arguments.
What are you talking about? I didn't even have an account when the Unreal Engine 5 PS5 demo was shown.

The demo was shown in May, while I created my account in September. That's 5 months between the UE5 demo and me creating an account.

If you said spamming in PS5/XBSX thread, now that's another story. :messenger_grinning_sweat:
 

ToTTenTranz

Banned
As I have said before, many upgrade the GPU before the CPU; many will have a Zen 2700 CPU and then upgrade a 2018-built machine from an RX 580 to a 2070 or 2070 Super, etc.

Some people will have Zen 1 and Zen 1.5 CPUs, but most people at this point will have either Zen 2 or Intel Haswell-or-later processors. This means that, just like the 9th-gen consoles that most games are made for nowadays, most people are using CPUs with dual 256-bit FMAC units per core, so they all have twice the floating-point throughput per core of the Zen 1.5 CPU you're using.
The upgraded FPU on Zen 2 is the largest contributor to its higher IPC over Zen 1 and Zen 1.5, especially in videogame benchmarks.


I'm not jumping on the ridiculous bandwagon of criticizing your work because I honestly enjoy it and I thank you for that. But now that we're almost a year into the new consoles with Zen 2 CPUs I think it might be a good time to upgrade to a CPU that at least keeps feature parity with the new consoles, or you're probably going to end up CPU-limited in many scenarios.
 

Snake29

RSI Employee of the Year
What are you talking about? I didn't even have an account when the Unreal Engine 5 PS5 demo was shown.

The demo was shown in May, while I created my account in September. That's 5 months between the UE5 demo and me creating an account.

If you said spamming in PS5/XBSX thread, now that's another story. :messenger_grinning_sweat:

But you PC guys always come along and derail the topic.

I was referring to this.
 

Loxus

Member
What's with all this billions-of-triangles talk? I'm pretty confident the PS5 has never had and will never have billions of triangles in a scene (frame).
I'm tired of you PC fanboys.
You guys hate for no reason at all.
You just provide baseless things that don't help but ruin threads.

This is next-gen: see Unreal Engine 5 running on PlayStation 5
What kind of detail levels are we talking about here? The 'Lumen in the Land of Nanite' demo includes a close-up on a statue built from 33 million triangles with 8K textures. It's displayed at maximum fidelity within the scene, with no developer input required. Moving into the next room, the demo wows us with almost 500 of those same statues in place (485 to be precise), all displayed at the same maximum quality. That's 16 billion triangles in total, running smoothly in-scene. It sounds impossible, but what next-gen delivers are the tools to deliver on an age-old rendering vision that seemed unattainable - until now.

It's displayed in the scene. Triangles are how detailed you can get the meshes to be. Why do you think it's so detailed when you zoom in close to the object?

A statue with 33 million triangles is still 33 million triangles in game, same with the 8K textures. It's 8K so when zoomed in close, it will still maintain a high level of detail.

It's not any different to Horizon Zero Dawn's creatures.
Horizon: Zero Dawn’s Thunderjaw
Horizon: Zero Dawn’s Thunderjaw is made up of 550,000 polygons according to game director Mathijs De Jonge. In comparison, Killzone 3 used a maximum of 250,000 polygons for everything on the screen combined.
 
I'm tired of you PC fanboys.
You guys hate for no reason at all.
You just provide baseless things that don't help but ruin threads.

This is next-gen: see Unreal Engine 5 running on PlayStation 5
What kind of detail levels are we talking about here? The 'Lumen in the Land of Nanite' demo includes a close-up on a statue built from 33 million triangles with 8K textures. It's displayed at maximum fidelity within the scene, with no developer input required. Moving into the next room, the demo wows us with almost 500 of those same statues in place (485 to be precise), all displayed at the same maximum quality. That's 16 billion triangles in total, running smoothly in-scene. It sounds impossible, but what next-gen delivers are the tools to deliver on an age-old rendering vision that seemed unattainable - until now.

It's displayed in the scene. Triangles are how detailed you can get the meshes to be. Why do you think it's so detailed when you zoom in close to the object?

A statue with 33 million triangles is still 33 million triangles in game, same with the 8K textures. It's 8K so when zoomed in close, it will still maintain a high level of detail.

It's not any different to Horizon Zero Dawn's creatures.
Horizon: Zero Dawn’s Thunderjaw
Horizon: Zero Dawn’s Thunderjaw is made up of 550,000 polygons according to game director Mathijs De Jonge. In comparison, Killzone 3 used a maximum of 250,000 polygons for everything on the screen combined.
You are misunderstanding. The source is that many triangles. Nanite scales with resolution, so at 1440p it's in the 10-million-triangles-per-frame range. Sooooo when you back up, the source triangles stay the same, but you don't need as many to have high-quality visuals from further away. When you move in really close you get fantastic detail because it's still 10 million or whatever triangles. It's never billions. You're confusing the source having billions with Nanite intelligently scaling that down to a reasonable amount while still offering insanely good detail at any distance.

Gonna add a quick edit of: please stop attacking people unless you're 100% sure you are right, because now you look silly.
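To put rough numbers on "scales with resolution" (a back-of-the-envelope sketch; the triangles-per-pixel multiplier is an assumption for illustration, since Nanite aims for roughly pixel-sized triangles):

```python
# Rough triangles-per-frame estimate if the renderer targets a small
# constant number of triangles per pixel. The multiplier is an assumed
# illustration value, not a measured figure.

def drawn_triangles(width: int, height: int, tris_per_pixel: float = 3.0) -> float:
    return width * height * tris_per_pixel

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: ~{drawn_triangles(w, h) / 1e6:.0f}M triangles/frame")

# 1440p lands around 11M with this multiplier: the same order of magnitude
# as the ~10 million per frame mentioned above, and nowhere near the
# billions of triangles sitting in the source assets.
```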
 
Last edited:

yamaci17

Member
Some people will have Zen 1 and Zen 1.5 CPUs, but most people at this point will have either Zen 2 or Intel Haswell-or-later processors. This means that, just like the 9th-gen consoles that most games are made for nowadays, most people are using CPUs with dual 256-bit FMAC units per core, so they all have twice the floating-point throughput per core of the Zen 1.5 CPU you're using.
The upgraded FPU on Zen 2 is the largest contributor to its higher IPC over Zen 1 and Zen 1.5, especially in videogame benchmarks.


I'm not jumping on the ridiculous bandwagon of criticizing your work because I honestly enjoy it and I thank you for that. But now that we're almost a year into the new consoles with Zen 2 CPUs I think it might be a good time to upgrade to a CPU that at least keeps feature parity with the new consoles, or you're probably going to end up CPU-limited in many scenarios.

yeah, the problem is Zen/Zen+ particularly.

even an 8th-gen i5 (4 years old) will easily wreck the 2700X.


even a 4-core/4-thread i3 tends to stomp the gimmick Zen/Zen+ chips. The 3300X flat-out leaves the 2700X in the dust. Literal dust. There are up to 60% gains in some scenarios.

i remember my friend sidegrading from his i3 8100 to a Ryzen 1600AF only to get 40% less fps in GTA 5 and CS:GO. this is how brutal Zen/Zen+'s performance is.

he thought the extra cores would be better, but yeah... horrendous IPC with glued CPU clusters is not a good recipe for actual gaming scenarios.

i'd say even Zen 2 is not decent.



see, glued cores can only perform so well. everyone will say the difference between Zen 3 and Zen 2 is like 20-25% and not worth the upgrade,

then comes an actual CPU-bound in-game scenario where Zen 3 beats Zen 2 by 55-65%.

---

i don't bash anyone. i just don't accept the notion of "the average gamer has a worse CPU than a 2700". nah, the average gamer has a waaaay more capable CPU than a random 2700. as i said, even an 8th-gen i5 from 2017 will give a way more pleasing gaming experience than the Zen chip.
 
Last edited:

Loxus

Member
You are misunderstanding. The source is that many triangles. Nanite scales with resolution, so at 1440p it's in the 10-million-triangles-per-frame range. Sooooo when you back up, the source triangles stay the same, but you don't need as many to have high-quality visuals from further away. When you move in really close you get fantastic detail because it's still 10 million or whatever triangles. It's never billions. You're confusing the source having billions with Nanite intelligently scaling that down to a reasonable amount while still offering insanely good detail at any distance.

Gonna add a quick edit of: please stop attacking people unless you're 100% sure you are right, because now you look silly.
Bruh, Epic developers literally said 16 billion triangles when they walked into the room. What more proof do you want? It's not like the PS5 isn't a generational leap up from the PS4.

Also, in-game texture size and meshes don't change with output resolution. Have you ever modded a game? You still notice the difference between texture sizes in-game, even if you're playing at 1080p.

Put a low-polygon-count/low-texture door in-game and you notice a difference no matter the resolution.

Unreal Engine 5 also has next-gen tools. Yet you say it's not possible, even though Epic developers clearly said it's 16 billion triangles at maximum fidelity.

I bring sources, you just post random shit to make yourself feel better.
 

SlimySnake

Flashless at the Golden Globes
A Console is more than a GPU a PC is more than a GPU,
You say that and yet completely ignore the CPU bottlenecking your system by your own account, like when you mentioned switching to the 3600 fixes many frame-related issues.

If you want to compare the PS5 to your system then fine, say that you are comparing the systems... NOT the GPUs. I have watched your video twice now and you continue to make the comparison between the PS5 GPU and the Nvidia 2070. I am sorry, but that is simply inaccurate. Your comparison is only valid for the CPU/GPU combo. It is simply not an accurate comparison between the two GPUs.

The 5600XT is a 7.1 tflops card outperforming the 9.1 tflops 2070 in your tests, all because you switched to the 3600 in your tests for the 5600XT. That should've been your first clue.
 

SlimySnake

Flashless at the Golden Globes
I don't really see an issue. Deathloop is targeting 60fps on PS5; here even a 4770K quad-core CPU that is almost a decade old is able to deliver 60+fps. NXG was also targeting 60fps in his tests anyway.
That's a 1080p test. NXGamer was testing higher resolutions.

[benchmark chart image]


Do you see how far behind the 5600xt is? Even the 10.6 tflops 6600xt is behind the 2070. His test results are all wrong.
 

NXGamer

Member
You say that and yet completely ignore the CPU bottlenecking your system by your own account, like when you mentioned switching to the 3600 fixes many frame-related issues.

If you want to compare the PS5 to your system then fine, say that you are comparing the systems... NOT the GPUs. I have watched your video twice now and you continue to make the comparison between the PS5 GPU and the Nvidia 2070. I am sorry, but that is simply inaccurate. Your comparison is only valid for the CPU/GPU combo. It is simply not an accurate comparison between the two GPUs.

The 5600XT is a 7.1 tflops card outperforming the 9.1 tflops 2070 in your tests, all because you switched to the 3600 in your tests for the 5600XT. That should've been your first clue.
I state "multiple times" in the video about the CPU and even highlight the single core impact within the Video. I even state that the game runs better on AMD GPU than Nvidia GPU and the scaling does not work as often on the RTX2070.

Also, if you watched the video you would see I have a fast RX card, which is over 8 TF, not 7, as it's locked at 1780 MHz and has 14 Gbps RAM, so 336 GB/s, not 288.

Watch the video before saying things that are both inaccurate and fully explained within.

So many hurty feels in this thread.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
I state "multiple times" in the video about the CPU and even highlight the single core impact within the Video. I even state that the game runs better on AMD GPU than Nvidia GPU and the scaling does not work as often on the RTX2070.

Also, if you watched the video you would see I have a fast RTX card, which is over 8 TF, not 7, as it's locked at 1780 MHz and has 14 Gbps RAM, so 336 GB/s, not 224.

Watch the video before saying things that are both inaccurate and fully explained within.

So many hurty feels in this thread.
I told you I watched your video multiple times.

If anything, I think you are mistaken about your own facts. You say you have an RTX card that is over 8, not 7, tflops. Well, every RTX 2070 card is over 9 tflops at average in-game clocks. You also state the game performs worse on Nvidia cards, which isn't true if you look at the benchmarks above. Go plug the 2070 into your 3600 system and see the results for yourself.

And I do know that you mentioned the 3600, I said so in my post, and pointed out why you were getting better results with the 3600.

The problem I have with your tests is that you compare the GPUs, you can’t do that when you are using a cpu that is bottlenecking your 2070 to the point where it is performing worse than a gpu that has 2 fewer tflops. You should compare your systems, not say the 2070 is worse than the ps5 GPU. You should say that your system is performing worse than the ps5 console.

Go back and watch your video and you will see how many times you compare the gpu instead of the system.

Put the 5600XT in your 2700 system and the 2070 in your 3600 system and you will see why your results are so off. You are comparing two GPUs running on different CPUs and drawing conclusions. It's simply inaccurate.
 
Last edited:

NXGamer

Member
I told you I watched your video multiple times.

If anything, I think you are mistaken about your own facts. You say you have an RTX card that is over 8, not 7, tflops. Well, every RTX 2070 card is over 9 tflops at average in-game clocks. You also state the game performs worse on Nvidia cards, which isn't true if you look at the benchmarks above. Go plug the 2070 into your 3600 system and see the results for yourself.

And I do know that you mentioned the 3600, I said so in my post, and pointed out why you were getting better results with the 3600.

The problem I have with your tests is that you compare the GPUs, you can’t do that when you are using a cpu that is bottlenecking your 2070 to the point where it is performing worse than a gpu that has 2 fewer tflops. You should compare your systems, not say the 2070 is worse than the ps5 GPU. You should say that your system is performing worse than the ps5 console.

Go back and watch your video and you will see how many times you compare the gpu instead of the system.

Put the 5600XT in your 2700 system and the 2070 in your 3600 system and you will see why your results are so off. You are comparing two GPUs running on different CPUs and drawing conclusions. It's simply inaccurate.
"RX card": that was a typo by me. And no, the RTX 2070 is not 9 TF as standard; even a 2070S is only JUST over 9 TF. A stock 2070 is 7.5 TF and a stock RX 5600 XT is 7.

I state the full specs at the start of the video.

I call out being CPU bound (at points, not always).

I SHOW you the results.

You bring a graph; answer me this: which part of that graph reflects which scene in the game compared to my video?
 
So it's ok to attack others when you're 100% sure you are right.

I like you.
You do you. My comment was only about looking silly.
Bruh, Epic developers literally said 16 billion triangles when they walked into the room. What more proof do you want? It's not like the PS5 isn't a generational leap up from the PS4.

Also, in-game texture size and meshes don't change with output resolution. Have you ever modded a game? You still notice the difference between texture sizes in-game, even if you're playing at 1080p.

Put a low-polygon-count/low-texture door in-game and you notice a difference no matter the resolution.

Unreal Engine 5 also has next-gen tools. Yet you say it's not possible, even though Epic developers clearly said it's 16 billion triangles at maximum fidelity.

I bring sources, you just post random shit to make yourself feel better.
See, you still don't understand. I'm not sure you watched that video I told you to watch. They explain it very well. The PS5 has not rendered and probably will not ever render billions of triangles in a frame.
 

SlimySnake

Flashless at the Golden Globes
"RX card": that was a typo by me. And no, the RTX 2070 is not 9 TF as standard; even a 2070S is only JUST over 9 TF. A stock 2070 is 7.5 TF and a stock RX 5600 XT is 7.

I state the full specs at the start of the video.

I call out being CPU bound (at points, not always).

I SHOW you the results.

You bring a graph; answer me this: which part of that graph reflects which scene in the game compared to my video?
Nah, you are going by Nvidia specs, which do not list in-game clocks. In-game clocks for 2070 cards are around 1950 MHz. You can look at your own video and it will show it hovering around 1950 MHz, going up to 1980 MHz at one point.

[GPU clock-speed screenshot]

At 1950 MHz, that's an 8.985 tflops card.

2304 shader processors * 2 instructions per clock * 1.95 GHz = 8.985 tflops.

The Founders Edition can hit up to 1.935 GHz. That comes out to 8.91 tflops.


The 2070 Super Founders Edition hits 1.935 GHz. It has 2560 shader cores. That's 9.9 tflops.
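For clarity, both of us are computing the same thing: FP32 tflops = shader cores x 2 ops per clock (an FMA counts as two) x clock speed. A quick sketch with the clocks quoted in this exchange (the 1620 MHz figure is Nvidia's published boost spec for the reference 2070):

```python
# FP32 TFLOPS = shader cores * 2 ops per clock (FMA = 2 ops) * clock (GHz) / 1000
def tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

print(f"2070  @ 1.950 GHz: {tflops(2304, 1.950):.2f} TF")  # ~8.99, the figure above
print(f"2070  @ 2.010 GHz: {tflops(2304, 2.010):.2f} TF")  # ~9.26, NXGamer's figure
print(f"2070  @ 1.620 GHz: {tflops(2304, 1.620):.2f} TF")  # ~7.46, the "stock 7.5 TF" ballpark
print(f"2070S @ 1.935 GHz: {tflops(2560, 1.935):.2f} TF")  # ~9.91
```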


My graph shows the CPUs normalized. You need to do the same. Take your 2070 out of your 2700 system and plug it into the 3600 system. You will see it perform better than the 5600XT.

And the issue I have with your video is again your insistence on comparing the 2070 GPU with the PS5 GPU. Compare the PS5 system with YOUR 2700/2070 system and we are all good. The verbiage is the issue here.
 

Rikkori

Member
You can't look at PC as if only one build exists like in consoles. That's the whole bloody point - with a PC you can mix & match and upgrade different parts at different times, so bottlenecks are inevitable. If you only look at PC from the standpoint of the build with the highest-end components currently available, then you miss the point of the ecosystem entirely.

The people high on PCMR should lay off of it; they develop really bizarre insecurities and feel the need to compare against consoles all the time. And I say that as a PC-only guy.
 
You can't look at PC as if only one build exists like in consoles. That's the whole bloody point - with a PC you can mix & match and upgrade different parts at different times, so bottlenecks are inevitable. If you only look at PC from the standpoint of the build with the highest-end components currently available, then you miss the point of the ecosystem entirely.

The people high on PCMR should lay off of it; they develop really bizarre insecurities and feel the need to compare against consoles all the time. And I say that as a PC-only guy.
I think it's more about certain groups saying incorrect things.
 

Loxus

Member
You do you. My comment was only about looking silly.

See you still don't understand. I'm not sure you watched that video I told you to watch. They explain it very well. The ps5 has not and probably will not render billions of triangles in a frame ever.
Straight from the Unreal Engine 5 PS5 demo about the statues.

"Remember we mentioned high poly assets?
This statue was imported directly from ZBrush, and is more than 33 million triangles.
No baking of normal maps, no authored LODs.
And we can do more than a single statue... there are nearly 500 of that exact statue at the same detail level placed in this room.
For a total of over 16 billion triangles from statues alone."
"Over this entire demo, there are hundreds of billions of triangles. So with Nanite, you have limitless geometry..."

How is all that possible?
Epic worked closely with Sony in optimizing Unreal Engine 5 for the PlayStation 5, collaborating with Sony on the console's storage architecture.

We know the PS5 can pull assets directly from RAM, utilizing the RAM to its full potential.

I don't know why you can't accept it.
 
Last edited:

Guilty_AI

Member
Straight from the Unreal Engine 5 PS5 demo about the statues.

"Remember we mentioned high poly assets?
This statue was imported directly from ZBrush, and is more than 33 million triangles.
No baking of normal maps, no authored LODs.
And we can do more than a single statue... there are nearly 500 of that exact statue at the same detail level placed in this room.
For a total of over 16 billion triangles from statues alone."
"Over this entire demo, there are hundreds of billions of triangles. So with Nanite, you have limitless geometry..."

How is all that possible?
Epic worked closely with Sony in optimizing Unreal Engine 5 for the PlayStation 5, collaborating with Sony on the console's storage architecture.

We know the PS5 can pull assets directly from RAM, utilizing the RAM to its full potential.

I don't know why you can't accept it.
i think you're just playing the fool at this point.

 
Straight from the Unreal Engine 5 PS5 demo about the statues.

"Remember we mentioned high poly assets?
This statue was imported directly from ZBrush, and is more than 33 million triangles.
No baking of normal maps, no authored LODs.
And we can do more than a single statue... there are nearly 500 of that exact statue at the same detail level placed in this room.
For a total of over 16 billion triangles from statues alone."
"Over this entire demo, there are hundreds of billions of triangles. So with Nanite, you have limitless geometry..."

How is all that possible?
Epic worked closely with Sony in optimizing Unreal Engine 5 for the PlayStation 5, collaborating with Sony on the console's storage architecture.

We know the PS5 can pull assets directly from RAM, utilizing the RAM to its full potential.

I don't know why you can't accept it.
Read this post of mine as if it's true, watch the first 10 minutes of that video I posted for you earlier as if what I'm saying is true, and you will see where your mistake is coming from.
You are misunderstanding. The source is that many triangles. Nanite scales with resolution, so at 1440p it's in the 10-million-triangles-per-frame range. Sooooo when you back up, the source triangles stay the same, but you don't need as many to have high-quality visuals from further away. When you move in really close you get fantastic detail because it's still 10 million or whatever triangles. It's never billions. You're confusing the source having billions with Nanite intelligently scaling that down to a reasonable amount while still offering insanely good detail at any distance.

Gonna add a quick edit of: please stop attacking people unless you're 100% sure you are right, because now you look silly.
 
Last edited:

NXGamer

Member
Nah, you are going by Nvidia specs, which do not list in-game clocks. In-game clocks for 2070 cards are around 1950 MHz. You can look at your own video and it will show it hovering around 1950 MHz, going up to 1980 MHz at one point.

[GPU clock-speed screenshot]

At 1950 MHz, that's an 8.985 tflops card.

2304 shader processors * 2 instructions per clock * 1.95 GHz = 8.985 tflops.

The Founders Edition can hit up to 1.935 GHz. That comes out to 8.91 tflops.


The 2070 Super Founders Edition hits 1.935 GHz. It has 2560 shader cores. That's 9.9 tflops.


My graph shows the CPUs normalized. You need to do the same. Take your 2070 out of your 2700 system and plug it into the 3600 system. You will see it perform better than the 5600XT.

And the issue I have with your video is again your insistence on comparing the 2070 GPU with the PS5 GPU. Compare the PS5 system with YOUR 2700/2070 system and we are all good. The verbiage is the issue here.
Stock RTX is not hitting 1950; many cards do not get over 1880, and in my video you can also see it hitting 2010 MHz, which is over 9 TF (not that it even matters that much).

2304 * 2 * 2.01 GHz = 9.26 TF, as I state.

The video is accurate and informative; I clearly call out, show, and demonstrate the specs, config, and variations.

Again, where in your graph is my tested section? Yours is also max details and may not even be using resolution scaling; you are missing so much relevant detail in all your info and examples.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Stock RTX is not hitting 1950; many cards do not get over 1880, and in my video you can also see it hitting 2010 MHz, which is over 9 TF (not that it even matters that much).

2304 * 2 * 2.01 GHz = 9.26 TF, as I state.

The video is accurate and informative; I clearly call out, show, and demonstrate the specs, config, and variations.

Again, where in your graph is my tested section? Yours is also max details and may not even be using resolution scaling; you are missing so much relevant detail in all your info and examples.
Did you not click on my links? The Founders Edition cards hit a max of 1935 MHz.

Besides, this is the sentence you took an issue with:

The 5600XT is a 7.1 tflops card outperforming the 9.1 tflops 2070 in your tests, all because you switched to the 3600 in your tests for the 5600XT. That should've been your first clue.
Your card in that clip was around 1950 MHz. That's around 9 tflops. So we agree on this. My question is: why do you think it's beating your 5600XT in like-for-like scenarios? Isn't it because you tested your 5600XT with a more powerful CPU?

As for my graph, it should give you pause because yes, it IS using max details and NO resolution scaling, which means the settings are higher and the GPU load is bigger, and yet the 2070 is outperforming the 5600XT by a massive margin, unlike your test results.

There is absolutely no need to argue. You can test this in five seconds by replacing the GPU in your two PCs.
 
Last edited:

NXGamer

Member
Did you not click on my links? The Founders Edition cards hit a max of 1935 MHz.

Besides, this is the sentence you took an issue with:


Your card in that clip was around 1950 MHz. That's around 9 tflops. So we agree on this. My question is: why do you think it's beating your 5600XT in like-for-like scenarios? Isn't it because you tested your 5600XT with a more powerful CPU?

As for my graph, it should give you pause because yes, it IS using max details and NO resolution scaling, which means the settings are higher and the GPU load is bigger, and yet the 2070 is outperforming the 5600XT by a massive margin, unlike your test results.

There is absolutely no need to argue. You can test this in five seconds by replacing the GPU in your two PCs.
You are not getting this at all.

At 4K locked with no scaling and higher effects (such as textures), as in your graph, the 5600XT becomes far more VRAM-bound, both from footprint and bandwidth, owing to 2GB less VRAM and roughly 100GB/s less throughput, not to mention then being more reliant on PCIe and DDR4 speeds. Do you not understand this, and how a game's settings scale beyond the ideal optimum, you know, what the consoles tend to use and what I tested in my video and clearly stated umpteen times?
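To put numbers on the bandwidth part, it's just per-pin memory speed times bus width; a quick sketch (the 192-bit and 256-bit bus widths are the cards' standard specs):

```python
# GDDR6 bandwidth (GB/s) = per-pin speed (Gbps) * bus width (bits) / 8 bits per byte
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(bandwidth_gbs(14, 192))  # 5600 XT with the 14 Gbps VBIOS: 336.0 GB/s
print(bandwidth_gbs(12, 192))  # 5600 XT at stock 12 Gbps:       288.0 GB/s
print(bandwidth_gbs(14, 256))  # RTX 2070 on a 256-bit bus:      448.0 GB/s
```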


If your only issue is hurt feelings whenever I mention GPU in the video, then imagine I always say "GPU, with all the caveats I mentioned elsewhere in this video as well, so API, slight changes to RT, CPU IPC, single-thread limits, also system RAM bandwidth, and maybe even some thermal throttling on the GPU... oh yeah, and also that the game and engine favour the RDNA arch due to being designed with that in mind for a good portion of its development".

Just say this in your head and we're all good; you are arguing, I am just repeating.
 
Last edited:

TrackZ

Member
NX and comps like that are pointless to me. I want to see what a real PC maxed out can do and looks like vs a PS5. Show me an i7 and 3080 ti game quality and performance results.

There’s no value in trying to match a PC spec to console. What does that tell anyone? Who cares. If you have a low spec PC just play console. It’s probably way better, simpler, and easier.

What I want to know is if PC is really so much better that it’s worth it over console.
 
Last edited:
NX and comps like that are pointless to me. I want to see what a real PC maxed out can do and looks like vs a PS5. Show me an i7 and 3080 ti game quality and performance results.

There’s no value in trying to match a PC spec to console. What does that tell anyone? Who cares. If you have a low spec PC just play console. It’s probably way better, simpler, and easier.

What I want to know is if PC is really so much better that it’s worth it over console.
VFX is probably the last reason for most PC gamers.
 

Loxus

Member
Only proves my point.
Nothing contradicts what I've said about the statues being 33 million and 16 billion. The statue was 33 million triangles before and after being imported into the PS5 version of UE5.

Benefits of Nanite​

  • Multiple orders of magnitude increase in geometry complexity, higher triangle and object counts than has been possible before in real-time
  • Frame budgets are no longer constrained by polycounts, draw calls, and mesh memory usage
  • Now possible to directly import film-quality source art, such as ZBrush sculpts and photogrammetry scans
  • Use high-poly detailing rather than baking detail into normal map textures
  • Level of Detail (LOD) is automatically handled and no longer requires manual setup for individual meshes' LODs
  • Loss of quality is rare or non-existent, especially with LOD transitions

I don't know why you guys don't believe the Epic devs. If there were varying levels of detail based on the camera view, the devs would have said between 10 million and 100 million for the scene, not flat-out 16 billion triangles in the room.
Then again, you guys didn't believe Tim Sweeney either when he talked about the PS5 SSD, but now you're all happy about DirectStorage.
 

Guilty_AI

Member
I don't know why you guys don't believe the Epic devs. If there were varying levels of detail based on the camera view, the devs would have said between 10 million and 100 million for the scene, not flat-out 16 billion triangles in the room.
Now read beyond the beginning, Mr. Smartpants:

How does Nanite work?​


Nanite integrates as seamlessly as possible into existing engine workflows, while using a novel approach to storing and rendering mesh data.

  • During import: meshes are analyzed and broken down into hierarchical clusters of triangle groups.
  • During rendering: clusters are swapped on the fly at varying levels of detail based on the camera view, and connect perfectly without cracks to neighboring clusters within the same object. Data is streamed in on demand so that only visible detail needs to reside in memory. Nanite runs in its own rendering pass that completely bypasses traditional draw calls. Visualization modes can be used to inspect the Nanite pipeline.
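A toy sketch of that cluster-swap idea, if it helps (illustrative only; real Nanite uses a hierarchical cluster DAG and a proper screen-space error metric, not a flat LOD list):

```python
# Toy version of "clusters are swapped at varying levels of detail based
# on the camera view": pick the coarsest LOD whose projected error stays
# under one pixel. Illustrative only; real Nanite works on a hierarchical
# cluster DAG, not a flat per-mesh LOD list.

def pick_lod(distance: float, base_error: float, num_lods: int,
             pixels_per_unit: float) -> int:
    """Each LOD halves the triangle count and roughly doubles geometric
    error; choose the coarsest level whose on-screen error is < 1 px."""
    lod = 0
    for candidate in range(num_lods):
        error_world = base_error * (2 ** candidate)
        error_pixels = error_world * pixels_per_unit / max(distance, 1e-6)
        if error_pixels < 1.0:
            lod = candidate   # still imperceptible, keep coarsening
        else:
            break
    return lod

# A statue far from the camera renders from a much coarser cluster set, so
# on-screen triangles stay bounded no matter how dense the source asset is.
for d in (1, 10, 100, 1000):
    print(f"distance {d:4}: LOD {pick_lod(d, 0.001, 12, 2000)}")
```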
 
Last edited:

Loxus

Member
Read this post of mine like it's true, watch that video I posted for you earlier and watch the first 10 minutes as if what I'm saying is true and you will see where your mistake is coming from.
Why would I take what someone with 3 videos on their channel said about something being done in editor mode over official statements from Epic Games in an official video they made about the PS5 version of UE5?
 

Loxus

Member
Now read beyond the beginning, Mr. Smartpants:

How does Nanite work?​


Nanite integrates as seamlessly as possible into existing engine workflows, while using a novel approach to storing and rendering mesh data.

  • During import: meshes are analyzed and broken down into hierarchical clusters of triangle groups.
  • During rendering: clusters are swapped on the fly at varying levels of detail based on the camera view, and connect perfectly without cracks to neighboring clusters within the same object. Data is streamed in on demand so that only visible detail needs to reside in memory. Nanite runs in its own rendering pass that completely bypasses traditional draw calls. Visualization modes can be used to inspect the Nanite pipeline.
That's how Nanite works at draw distances, hence why you don't need LODs.

I don't know why you guys just don't get it through your heads. The 16 billion triangles figure came from the developers when talking about the nearly 500 statues in the room. They never said anything about the statues losing detail. But you guys are so ignorant, it's unbelievable.
 
Why would I take what someone with 3 videos on their channel said about something being done in editor mode over official statements from Epic Games in an official video they made about the PS5 version of UE5?
The person talking in the video was the senior graphics programmer at Epic. Also, this is what Tim Sweeney says about it:
"I suppose the secret is that what Nanite aims to do is render effectively one triangle for pixel, so once you get down to that level of detail, the sort of ongoing changes in LOD are imperceptible," answers Nick Penwarden. "That's the idea of Render Everything Your Eye Sees," adds Tim Sweeney. "Render so much that if we rendered more you couldn't tell the difference, then as the amount of detail we're rendering changes, you shouldn't be able to perceive difference."

I'll let you do the math
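Here's that math at the demo's 1440p output (straight arithmetic, nothing more):

```python
# "One triangle per pixel" at the demo's 1440p output resolution:
pixels = 2560 * 1440
print(f"{pixels:,} pixels -> roughly {pixels / 1e6:.1f}M on-screen triangles")

# versus the 16 billion triangles of *source* geometry in the statue room:
source_triangles = 16_000_000_000
print(f"fraction of source actually drawn: {pixels / source_triangles:.4%}")
```

A few million drawn triangles against billions of source triangles; that gap is the entire disagreement here.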
 

Shmunter

Member
As soon as a console comes out on top, people take issue with the PC config. Should have used a different CPU, a different GPU, RAM, mouse, keyboard, G-Sync. Endless ways to pick holes.

Probably the best way to do these is a consistent set of categories. E.g. always compare to the most common config based on the Steam survey, which covers the majority of the PC user base, plus, if it makes sense, a current high-end rig for the enthusiasts like on NeoGAF.
 
As soon as a console comes out on top, people take issue with the PC config. Should have used a different CPU, a different GPU, RAM, mouse, keyboard, G-Sync. Endless ways to pick holes.

Probably the best way to do these is a consistent set of categories. E.g. always compare to the most common config based on the Steam survey, which covers the majority of the PC user base, plus, if it makes sense, a current high-end rig for the enthusiasts like on NeoGAF.
I say use the minimum spec that gives similar performance. That way you know whether yours is prettier or not.
 

hlm666

Member
That's how Nanite works at draw distances, hence why you don't need LODs.

I don't know why you guys just don't get it through your heads. The 16 billion triangles figure came from the developers when talking about the nearly 500 statues in the room. They never said anything about the statues losing detail. But you guys are so ignorant, it's unbelievable.
Hey brah, looks like your 16 billion polygon demo is old news; we're already over 100 billion now.

 

Zuzu

Member
It really does not; this is a graph of what scene, how long, what happened?

I can test scenes on a 5600XT with an unlocked frame-rate that show averages above 70fps. People using these as some basis are just looking for confirmation bias; these are only good for comparing GPU to GPU or CPU to CPU so long as they are all identical shots. This does not reflect anything more than the individual site's tests, and with no pictures we can only guess what that was.
Ok, cheers, I see what you're saying. Your analysis is fine, so I take back that comment I made. You're comparing system to system rather than specifically the 2070 GPU to the PS5 GPU. The 2070 within that specific system is performing worse than a PS5.
 

Darius87

Member
NX and comps like that are pointless to me. I want to see what a real PC maxed out can do and looks like vs a PS5. Show me an i7 and 3080 ti game quality and performance results.

There’s no value in trying to match a PC spec to console. What does that tell anyone? Who cares. If you have a low spec PC just play console. It’s probably way better, simpler, and easier.

What I want to know is if PC is really so much better that it’s worth it over console.
there's no point comparing a $2K system vs $500 consoles. It's the stupidest thing to do; a $2K PC will beat consoles every time by 2x or more FPS at the same or better settings. That answers your question, so what more is there to see?
By comparing roughly same-spec systems you can see the advantages/disadvantages of the manufacturers' chip architectures, or their drivers' pros/cons, or how the PC OS impacts game performance, etc. An even better idea would be to try to find a PC setup that matches the performance of a console at the same settings; that's what I always expect DF to do, but they overshot by a mile with their PC setups.
 

Elog

Member
The PS5 is doing great and punching way above its weight - just like Cerny said it would, and seemingly for the reasons he stated as well.

A lot of weird posts from the unholy alliance between Xbox lovers and PC Master Race fanatics. Just be happy that the PS5 is doing so well - or is that not an option?
 
Last edited:

Loxus

Member
so you DO understand there are no 16 billion triangles being rendered, right?
The somebody taking in the video was the senior graphics programmer at epic. Also this is what Tim Sweeny says about it
"I suppose the secret is that what Nanite aims to do is render effectively one triangle for pixel, so once you get down to that level of detail, the sort of ongoing changes in LOD are imperceptible," answers Nick Penwarden. "That's the idea of Render Everything Your Eye Sees," adds Tim Sweeney. "Render so much that if we rendered more you couldn't tell the difference, then as the amount of detail we're rendering changes, you shouldn't be able to perceive difference."

I'll let you do the math
[viewing frustum diagram]

Clearly you guys don't know what I'm talking about.
The arrow at the viewing frustum is where I'm talking about. It's where the statues are that add up to 16 billion triangles.

The arrow at the near view plane is the screen, which is what you guys are talking about. You think I don't know how Nanite works?

In the UE5 PS5 demo the dev said,
"There are over a billion triangles of source geometry in each frame, that Nanite crunches down losslessly to around 20 million drawn triangles."
Source geometry = (3D model)
Drawn triangles = (what's displayed on screen)

3D rendering is the 3D computer graphics process of converting 3D models into 2D images to display on a screen.

I was telling you guys from the very beginning that I'm talking about the statues (3D models), but you guys kept making it look like I don't know what I'm talking about.
The statue was imported directly from ZBrush and is more than 33 million triangles. Directly, meaning it wasn't edited to reduce the triangle count.

The reason for so many triangles and 8K textures is that when you get close up to an object, it will maintain full detail and fidelity when rendered, even though the output resolution is only 1440p.

Watch this video, cause clearly you guys know little about videogames.


Back to my main point about not seeing the benefits of fast-and-narrow. The fast-and-narrow design may be there to help handle billions of small triangles, which Deathloop doesn't have. So we'll have to wait for a game with that many triangles on both PC and consoles to judge Cerny.
 
[viewing frustum diagram]

Clearly you guys don't know what I'm talking about.
The arrow at the viewing frustum is where I'm talking about. It's where the statues are that add up to 16 billion triangles.

The arrow at the near view plane is the screen, which is what you guys are talking about. You think I don't know how Nanite works?

In the UE5 PS5 demo the dev said,
"There are over a billion triangles of source geometry in each frame, that Nanite crunches down losslessly to around 20 million drawn triangles."
Source geometry = (3D model)
Drawn triangles = (what's displayed on screen)

3D rendering is the 3D computer graphics process of converting 3D models into 2D images to display on a screen.

I was telling you guys from the very beginning that I'm talking about the statues (3D models), but you guys kept making it look like I don't know what I'm talking about.
The statue was imported directly from ZBrush and is more than 33 million triangles. Directly, meaning it wasn't edited to reduce the triangle count.

The reason for so many triangles and 8K textures is that when you get close up to an object, it will maintain full detail and fidelity when rendered, even though the output resolution is only 1440p.

Watch this video, cause clearly you guys know little about videogames.


Back to my main point about not seeing the benefits of fast-and-narrow. The fast-and-narrow design may be there to help handle billions of small triangles, which Deathloop doesn't have. So we'll have to wait for a game with that many triangles on both PC and consoles to judge Cerny.

Ah, so you were just playing the fool. You do realize that an RX 480 has no issue with 10,000,000,000 triangles of source geometry with Nanite, right? There are videos of 3090s doing 150 billion. It's literally the point of Nanite.
Peace
 
there's no point comparing a $2K system vs $500 consoles. It's the stupidest thing to do; a $2K PC will beat consoles every time by 2x or more FPS at the same or better settings. That answers your question, so what more is there to see?
By comparing roughly same-spec systems you can see the advantages/disadvantages of the manufacturers' chip architectures, or their drivers' pros/cons, or how the PC OS impacts game performance, etc. An even better idea would be to try to find a PC setup that matches the performance of a console at the same settings; that's what I always expect DF to do, but they overshot by a mile with their PC setups.
Alex used an R5 3600 and a 2060 Super. If that's overshot by a mile, I'm not sure what to say.
 

Guilty_AI

Member
You think I don't know how Nanite works?
I don't think you don't know; I'm sure you don't know, because:

The arrow at the viewing frustum is where I'm talking about. It's where the statues are that add up to 16 billion triangles.
Wrong. What's there are lower-poly models that have been reduced by Nanite. That's what Nanite does, automatic LOD; that's the whole point of it. That happens before the construction of the 2D image that's displayed on the screen. It's what allows a GTX 980 Ti to display 3.2 trillion triangles' worth of data on the screen with no apparent loss of detail, and it has nothing to do with how the PS5 handles graphics or any Cerny talks.



So why doesn't Deathloop have billions of triangles? Because it doesn't use Nanite. That's all.
If it did use similar tech, it would be able to have billions of polygons on all platforms.
 
Last edited:

Arioco

Member
You can disagree with it, but in comparison to the Xbox Series consoles we are not seeing the performance of the PS5 differ more than the specs would suggest. You would think, based on what Cerny said regarding the PS5, that it would punch above its weight against the Series X, but it doesn't.
The Series X and Series S have shown that performance scales with the Tflops and memory bandwidth regardless of the clock speed.


OK, let's have a look at the latest comparison we have (the Xbox tests were made today).





So same resolution in all modes, but better performance on PS5 despite having 2 fewer Tflops, way less external bandwidth (internal bandwidth is higher on PS5 thanks to higher clocks, of course), and despite Series X/S supporting hardware VRS. And by the way, this comparison doesn't take into account the latest firmware update for PS5, which is said to improve frame rate by 1 to 3%.

So if PS5 is not punching above its weight, how do you explain that? Oh, yes, THE TOOLZ. 🙄

PS: It's funny you say that because DF has been saying literally from the very first comparison they made (DMC V SE) that the PS5 was "punching above its weight", which they repeated several times throughout the analysis. 🤷‍♂️
 

Loxus

Member
Ah so you were just playing the fool. You do realize that a rx480 has no issue with 10,000,000,000 triangles of source geometry with nanite right? There's videos of 3090's doing 150 billion. It's literally the point of nanite.
Peace
I don't think you don't know; I'm sure you don't know, because:


Wrong. What's there are lower-poly models that have been reduced by Nanite. That's what Nanite does, automatic LOD; that's the whole point of it. That happens before the construction of the 2D image that's displayed on the screen. It's what allows a GTX 980 Ti to display 3.2 trillion triangles' worth of data on the screen with no apparent loss of detail, and it has nothing to do with how the PS5 handles graphics or any Cerny talks.



So why doesn't Deathloop have billions of triangles? Because it doesn't use Nanite. That's all.
If it did use similar tech, it would be able to have billions of polygons on all platforms.

Imagine comparing editor mode to gameplay. Y'all are reaching too much.

Most people compare performance when the games are running during gameplay, not when making games.
But y'all are trying so hard to prove me wrong, it doesn't even make sense anymore.
 