
[VGTECH] Crash 4 PS5 / XSX / XSS

DenchDeckard

Moderated wildly
They didn't really push anything; they just unlocked the resolution to target 4K60 and called it a day. It's a lazy port, after all. Neither console can really handle that resolution in cutscenes, but the devs didn't want to put in the work to get the game to a solid 60 fps all of the time.

I guess both are using DRS, but it's not very well implemented given the cutscene issues. Deffo seems to be a quick port-and-done job.
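For anyone unfamiliar with DRS, here is a minimal sketch of how a typical dynamic-resolution heuristic works. This is not Crash 4's implementation; the structure, names and thresholds are assumptions purely to illustrate the resolution-for-frame-time trade being described above.

```cpp
// Minimal sketch of a typical dynamic-resolution-scaling (DRS) heuristic.
// NOT Crash 4's code; names and thresholds are made up for illustration.
#include <algorithm>

struct DrsState {
    float scale    = 1.0f;   // fraction of the 4K target resolution (per axis)
    float minScale = 0.6f;   // floor, roughly 1440p-class out of 2160p
    float targetMs = 16.6f;  // frame budget for 60 fps
};

// Called once per frame with the measured GPU frame time.
float UpdateResolutionScale(DrsState& s, float gpuFrameMs) {
    if (gpuFrameMs > s.targetMs) {
        // Over budget: drop resolution quickly to protect the frame rate.
        s.scale -= 0.05f * (gpuFrameMs / s.targetMs - 1.0f);
    } else if (gpuFrameMs < 0.9f * s.targetMs) {
        // Comfortably under budget: creep back up toward native slowly.
        s.scale += 0.01f;
    }
    s.scale = std::clamp(s.scale, s.minScale, 1.0f);
    return s.scale; // render resolution = native * scale
}
```

The design point is that a controller like this reacts a frame or two late; a cutscene that blows past the budget faster than a loosely tuned controller can respond would produce exactly the kind of drops described above.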
 
It still was built off a game that was possible on really old hardware. So while it did look next-gen, it wasn't built to be only possible on the PS5.

If that makes any sense.
We are talking about density of assets in a game with big open levels. There are no limits to how many polygons and textures you can use, even in a small area. But DS has quite big areas.
 

MonarchJT

Banned
I was laughing at the E3 build-up, yes, because we hear it every year from all console manufacturers.
Given how much you get into technicalities in certain threads, there are two possibilities: either you don't understand what you write, or you know very well how things are and how fundamental the "tools" are, and your mockery comes only from your well-known bias towards Sony. Still, apart from a few titles like Hivebusters and Gears Tactics, no game uses the core parts of the Xbox hardware. We know that VRS alone has given up to a 14% performance boost in FPS; do you think it will be hard to recover the 0.5% advantage (half of which is in cutscenes) that we see for the PS5 in this game? lol
We also know that VRS is applied at the end of the rendering pipeline (on the Xbox). The bulk of the work will be done in the mesh shaders via culling, and it will all be lightened further by SFS, all features that are practically unused by any of these games.
You will understand for yourself why I am saving posts like yours under the label "crows pre-orders".
 
Last edited:

MonarchJT

Banned
I'm talking about memory bandwidth too. X1X had 50% more of it when compared to Pro, which is huge.

What's the bandwidth difference like with PS5/XSX? 25% at best. The 6GB segment is actually slower than PS5 by -25%, not all of it is faster like it was last-gen. One X had a clear advantage over Pro from day 1 in hardware and it showed.

PS5 has some advantages over Series X in some aspects and it shows in many games. Similarly, Series X has some advantages over PS5 in some aspects and it shows in those other games. Like my first post in this thread said, "different games will favor different parts of the GPU".
Objectively, the point is that, as Cerny said, it is easier to handle 36 CUs with higher clocks. Nobody can expect old games to use RDNA 2's peculiarities, as I said, and it is easier to manage the work on fewer CUs (36) than on 52. There are games like this one where the clock is advantageous, but in lots of other games we have seen the Xbox take a resolution advantage that, if brought down to the PS5's level, would have given the console a clear FPS advantage. Unfortunately, to judge the consoles we have to wait for UE5 and the new versions of the engines that will exploit the hardware; judging the performance of the consoles with these games is misleading.
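As a sketch of the arithmetic behind the "resolution advantage vs FPS advantage" argument above, under the simplifying assumption that GPU frame time scales roughly linearly with pixel count (the resolutions below are hypothetical, not measured values from any game):

```cpp
// Rough illustration: converting a resolution gap into an implied frame-rate
// gap, assuming GPU cost scales ~linearly with pixel count. Hypothetical numbers.
#include <cstdio>

int main() {
    const double xsxPixels = 3840.0 * 2160.0;  // e.g. native 4K
    const double ps5Pixels = 3456.0 * 1944.0;  // e.g. ~90% scale per axis
    const double fpsBoth   = 60.0;             // both holding 60 fps

    // If the higher-resolution version dropped to the lower pixel count,
    // the freed-up GPU time could (very roughly) appear as extra frame rate.
    double impliedFps = fpsBoth * (xsxPixels / ps5Pixels);
    std::printf("Pixel ratio: %.2fx -> implied ~%.0f fps at matched resolution\n",
                xsxPixels / ps5Pixels, impliedFps);
    return 0;
}
```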
 
Last edited:

Heisenberg007

Gold Journalism
The evidence and data is in the performance videos. PS5 can't sustain half of 4K, averages at half of 1800p and drops to half of 1440p at the lowest. Do you think Avengers devs were too lazy to change the resolution, while they changed basically everything else?
If the resolution really is the same, it is clear that the devs didn't change the resolution settings at all to lighten the workload and to avoid more detailed optimization.

Now, do you think the Crash devs were too lazy to change the shadow settings on XSX when they changed a lot of other things?
 
If the resolution really is the same, it is clear that the devs didn't change the resolution settings at all to lighten the workload and to avoid more detailed optimization.

Now, do you think the Crash devs were too lazy to change the shadow settings on XSX when they changed a lot of other things?
Feel free to list the "a lot of other things" Crash devs changed.

Bored Daily Show GIF by CTV Comedy Channel
 
If the resolution really is the same, it is clear that the devs didn't change the resolution settings at all to lighten the workload and to avoid more detailed optimization.

Now, do you think the Crash devs were too lazy to change the shadow settings on XSX when they changed a lot of other things?
Assuming both yours and his are theories. His makes more sense from a logical workload standpoint.
 

Methos#1975

Member
If the resolution really is the same, it is clear that the devs didn't change the resolution settings at all to lighten the workload and to avoid more detailed optimization.

Now, do you think the Crash devs were too lazy to change the shadow settings on XSX when they changed a lot of other things?
I don't think they changed anything at all outside resolution; they really just phoned this one in, unlike the work put into Avengers.
 

Heisenberg007

Gold Journalism
Assuming both yours and his are theories. His makes more sense from a logical workload standpoint.
I'm completely okay believing this theory that the devs half-assed it. My point is that then we should be consistent.

I'm okay believing that the Crash devs simply didn't change the higher shadow settings even though XSX could easily render them. But then the same principle should apply to Avengers as well with the resolution settings (both are next-gen ports, not running in BC).

My disagreement is that he is saying the Crash devs used old-gen shadow settings on XSX because they ran out of time, while also saying the Avengers devs used old-gen resolution settings on PS5 because that's all the console could manage, and that the Avengers devs didn't run out of time.

Why the inconsistency?
 

geordiemp

Member
Given how much you get into technicalities in certain threads, there are two possibilities: either you don't understand what you write, or you know very well how things are and how fundamental the "tools" are, and your mockery comes only from your well-known bias towards Sony. Still, apart from a few titles like Hivebusters and Gears Tactics, no game uses the core parts of the Xbox hardware. We know that VRS alone has given up to a 14% performance boost in FPS; do you think it will be hard to recover the 0.5% advantage (half of which is in cutscenes) that we see for the PS5 in this game? lol
We also know that VRS is applied at the end of the rendering pipeline (on the Xbox). The bulk of the work will be done in the mesh shaders via culling, and it will all be lightened further by SFS, all features that are practically unused by any of these games.
You will understand for yourself why I am saving posts like yours under the label "crows pre-orders".

Less of the ad hominem; can't you discuss anything without resorting to warrior mode?

VRS will be dead with UE5 and 1 triangle per pixel, but it's still good for racing games and VR.

VRS is also more efficient in software (see Call of Duty). So whatever.

Cerny talked about performance when triangles are SMALL; look it up. Do I need to explain why?

SFS is just blending in the late mip in hardware, and all consoles have PRT+, so we have no idea what your point is.

As I said, you believe whatever floats your boat.
 
Last edited:
  • Upgraded resolution
  • 60 FPS
  • Implemented DualSense features
  • Implemented 3D Audio
  • Improved loading speed
This is pretty much all the things that Avengers also included in the next-gen version, apart from some other graphical tweaks in the quality mode.
So they unlocked the resolution to 4K, 60 FPS isn't new, and all the other features are either PS5 only or are there by default (loading speed). Totally comparable to Avengers 😉 Just accept the fact that it's a lazy port, at least for Xbox.
 
I'm talking about memory bandwidth too. X1X had 50% more of it when compared to Pro, which is huge.

What's the bandwidth difference like with PS5/XSX? 25% at best. The 6GB segment is actually slower than PS5 by -25%, not all of it is faster like it was last-gen. One X had a clear advantage over Pro from day 1 in hardware and it showed.

PS5 has some advantages over Series X in some aspects and it shows in many games. Similarly, Series X has some advantages over PS5 in some aspects and it shows in those other games. Like my first post in this thread said, "different games will favor different parts of the GPU".

I 100000% agree 😉
And never forget: wider vs deeper, each topology has its advantages and drawbacks over the other one.
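To put numbers on the bandwidth split mentioned in the quote above, here is a quick sanity check using the publicly stated specs (XSX: 10 GB at 560 GB/s plus 6 GB at 336 GB/s; PS5: 16 GB at 448 GB/s). It is only the paper arithmetic behind the quoted percentages, not a measurement of real-world throughput:

```cpp
// Sanity check of the quoted bandwidth percentages using published specs.
// Paper arithmetic only; real throughput depends on access patterns.
#include <cstdio>

int main() {
    const double ps5     = 448.0;  // GB/s, unified 16 GB pool
    const double xsxFast = 560.0;  // GB/s, 10 GB "GPU optimal" segment
    const double xsxSlow = 336.0;  // GB/s, 6 GB standard segment

    std::printf("XSX fast pool vs PS5: %+.0f%%\n", (xsxFast / ps5 - 1.0) * 100.0); // +25%
    std::printf("XSX slow pool vs PS5: %+.0f%%\n", (xsxSlow / ps5 - 1.0) * 100.0); // -25%
    return 0;
}
```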
 
Last edited:

DenchDeckard

Moderated wildly
  • Upgraded resolution
  • 60 FPS
  • Implemented DualSense features
  • Implemented 3D Audio
  • Improved loading speed
This is pretty much all the things that Avengers also included in the next-gen version, apart from some other graphical tweaks in the quality mode.
To be completely fair, on Avengers they have updated the entire water modelling, improved the textures, the physics and more. It reduced load times from over a minute to 4 seconds. It's more on the level of Spider-Man: Miles Morales than this quick up-res port.

Just to add, all games should strive for a consistent locked framerate in gameplay sections, and PS5 has the advantage in this game, so this one is deffo in favour of PS5. Better shadows and a more consistent framerate, and if there is a resolution difference I sure as hell can't see it, so if I were to play Crash 4 I would want to play it on PS5.
 
Last edited:

Elog

Member
That was before RDNA2 and Tier 2 VRS; there was a massive jump, as seen in Gears Tactics, which was moved from Tier 1 to Tier 2.
Can you tell me what the key differences are between Tier 1 and Tier 2 VRS? Hint: It is functions - which ones?
 
Last edited:

MonarchJT

Banned
Can you tell me what the key differences are between Tier 1 and Tier 2 VRS? Hint: It is functions - which ones?
Granularity, and this leads to an increase in performance. Tier 1 can't interact directly with the screen-space image, so its use was widely snubbed because it involved a significant visual loss, which does not happen in Tier 2.

 
Last edited:

Mr Moose

Member
Tier 1

  • Shading rate can only be specified on a per-draw basis; nothing more granular than that.
  • Shading rate applies uniformly to what is drawn, independently of where it lies within the render target.
  • Use of 1x2, programmable sample positions, or conservative rasterization may cause fall-back into fine shading.

Tier 2

  • Shading rate can be specified on a per-draw basis, as in Tier 1. It can also be specified by a combination of the per-draw basis and of:
    • a semantic from the per-provoking vertex, and
    • a screen-space image.
  • Shading rates from the three sources are combined using a set of combiners.
  • Screen-space image tile size is 16x16 or smaller.
  • The shading rate requested by the app is guaranteed to be delivered exactly (for precision of temporal and other reconstruction filters).
  • The SV_ShadingRate PS input is supported.
  • The per-provoking-vertex rate, also referred to here as a per-primitive rate, is valid when one viewport is used and SV_ViewportIndex is not written to.
  • The per-provoking-vertex rate can be used with more than one viewport if the SupportsPerVertexShadingRateWithMultipleViewports cap is marked true. Additionally, in that case, it can be used when SV_ViewportIndex is written to.
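To make the Tier 2 additions above concrete, here is a minimal sketch of how D3D12 exposes them: check the reported tier, set a per-draw base rate with combiners, then bind a screen-space shading-rate image. It's a hedged illustration only; resource creation, barriers and error handling are omitted, and cmdList5/rateImage are assumed to already exist.

```cpp
// Sketch of D3D12 Tier 2 VRS: a per-draw base rate combined with the
// per-primitive rate and a screen-space shading-rate image.
#include <d3d12.h>

void ApplyVrsTier2(ID3D12Device* device,
                   ID3D12GraphicsCommandList5* cmdList5,
                   ID3D12Resource* rateImage)
{
    // 1. Confirm the hardware actually reports Tier 2.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    if (opts6.VariableShadingRateTier < D3D12_VARIABLE_SHADING_RATE_TIER_2)
        return; // fall back to full-rate shading

    // 2. Per-draw base rate plus two combiners: take the coarsest (MAX) of the
    //    base and per-primitive rates, then combine with the rate image.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_MAX,  // base rate vs per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX   // result vs screen-space image
    };
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);

    // 3. Bind the screen-space shading-rate image (16x16 tiles or smaller).
    cmdList5->RSSetShadingRateImage(rateImage);
}
```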
 

Elog

Member
I've posted the article before, go read it; the point is that it gives better image quality than Tier 1, which was very obvious in Gears Tactics. Also, it's possible for Xbox to use both solutions, hardware and software.
You are not really answering the key question.

There are two key areas of added functionality:

- Per-primitive shading rate (i.e. you can choose how much shading work is to be done on a per-primitive basis, and hence you can prioritize your primitives according to whatever function you program in your software)
- Per-screenspace-image shading rate (i.e. you can choose how much shading work is done based on where on the rasterized image a geometry lies, e.g. foveated rendering)

Both of these functionalities exist through the PS5 GE as well but they are not implemented under the VRS umbrella (that is a pre-specified industry standard). There is no difference in functional capability between e.g. a XSX and a PS5 here.

Is one performing better than the other? Who knows. Just do not state that one machine can do something that the other cannot since that is simply untrue.
 
Last edited:

geordiemp

Member
Granularity, and this leads to an increase in performance. Tier 1 can't interact directly with the screen-space image, so its use was widely snubbed because it involved a significant visual loss, which does not happen in Tier 2.


And as I said, the move with Nanite is to 1 triangle per pixel.

So Nanite will not use VRS anyway, software or hardware, Tier 1 or Tier 2.

VRS is a feature which will be used less and less, but if you want to put your faith in covering half the screen in larger same-rate shading blocks to save a little performance when XSX is struggling, feel free; you know Nanite is coming and VRS is not coming to that party.
 

Riky

$MSFT
You are not really answering the key question.

There are two key areas of added functionality:

- Per-primitive shading rate (i.e. you can choose how much shading work is to be done on a per-primitive basis, and hence you can prioritize your primitives according to whatever function you program in your software)
- Per-screenspace-image shading rate (i.e. you can choose how much shading work is done based on where on the rasterized image a geometry lies, e.g. foveated rendering)

Both of these functionalities exist through the PS5 GE as well but they are not implemented under the VRS umbrella (that is a pre-specified industry standard). There is no difference in functional capability between e.g. a XSX and a PS5 here.

Is one performing better than the other? Who knows. Just do not state that one machine can do something that the other cannot since that is simply untrue.

It's untrue to say I said that when I didn't.
I was responding to the old Call Of Duty MW slide that has been superseded, maybe read posts in context?
Since we haven't seen or heard anything about hardware VRS on PS5 we will wait to see if it ever supports it.
 

geordiemp

Member
You are not really answering the key question.

There are two key areas of added functionality:

- Per-primitive shading rate (i.e. you can choose how much shading work is to be done on a per-primitive basis, and hence you can prioritize your primitives according to whatever function you program in your software)
- Per-screenspace-image shading rate (i.e. you can choose how much shading work is done based on where on the rasterized image a geometry lies, e.g. foveated rendering)

Both of these functionalities exist through the PS5 GE as well but they are not implemented under the VRS umbrella (that is a pre-specified industry standard). There is no difference in functional capability between e.g. a XSX and a PS5 here.

Is one performing better than the other? Who knows. Just do not state that one machine can do something that the other cannot since that is simply untrue.

At least you got a snigger from the village idiot; join the club.

He puts the snigger emote on most people's posts and doesn't realise everyone is laughing at this behaviour.

Yes, the Sony VRS/foveated-rendering method is well documented.
 
Last edited:

Elog

Member
It's untrue to say I said that when I didn't.
I was responding to the old Call Of Duty MW slide that has been superseded, maybe read posts in context?
Since we haven't seen or heard anything about hardware VRS on PS5 we will wait to see if it ever supports it.
PS5 does not support the pre-specified industry standard VRS - Sony implemented the key functionalities through their GE (unclear what is architecture changes and what is API changes - and we will most likely never know).

Same destination - different roads. Performance difference? Who knows - time will tell.
 

Riky

$MSFT
PS5 does not support the pre-specified industry standard VRS - Sony implemented the key functionalities through their GE (unclear what is architecture changes and what is API changes - and we will most likely never know).

Same destination - different roads. Performance difference? Who knows - time will tell.

I didn't mention PS5.
I was talking about the software VRS vs hardware one-liner from the old slide; it's older than RDNA2 and Tier 2 VRS. That's all.
 

ethomaz

Banned
What? Avengers changed basically everything but resolution, how can it be a lazy port? They stuck with the resolution because PS5 can't push higher pixels. Meanwhile in Crash 4, they didn't even increase shadows from the One S version. The definition of a lazy port.
Avengers is using the exact same PS4 Pro resolution.

If you count a small detail left unchanged as being lazy, then the unchanged resolution is lazy too.
 
Last edited:

MonarchJT

Banned
And as I said, the move with Nanite is to 1 triangle per pixel.

So Nanite will not use VRS anyway, software or hardware, Tier 1 or Tier 2.

VRS is a feature which will be used less and less, but if you want to put your faith in covering half the screen in larger same-rate shading blocks to save a little performance when XSX is struggling, feel free; you know Nanite is coming and VRS is not coming to that party.
First of all, not all developers use UE, and it is strange that you mention it so often, especially when none of the top Sony first-party teams are using it, nor are there rumors that they will. Surely not Guerrilla, Naughty Dog or SSM, studios which have invested years and so many resources in creating their own engines. You probably expect miracles from the only team that uses UE worthily, Bend?
And again, VRS, as you know, is applied in the RDNA2 architecture at the end of the pipeline; the bulk of the work will be done by the mesh shaders with the help of SF and SFS. Do you know of any games that use mesh shaders or SF?
 
Last edited:

MonarchJT

Banned
PS5 does not support the pre-specified industry standard VRS - Sony implemented the key functionalities through their GE (unclear what is architecture changes and what is API changes - and we will most likely never know).

Same destination - different roads. Performance difference? Who knows - time will tell.
Can I get an official statement about any form of VRS (Tier 1, Tier 2 or custom) in PS5?
From the amount of things that are supposedly hidden inside the GE, at this point I wouldn't be shocked if it also made coffee.
 
Last edited:

MonarchJT

Banned
Sony Bend crying that they aren't a top first party team.
Well, it took them 7 years to produce Days Gone, which (with an MC of 71) is definitely not an acclaimed multi-platform title, and before that, in 2012, Uncharted: Fight for Fortune (MC 67). Let's say that other teams at Sony have had better luck.
 
Last edited:

Md Ray

Member
the point is that, as Cerny said, it is easier to handle 36 CUs with higher clocks.
He didn't say that. While higher clock speeds have their own benefits, what he said was, "it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". His reasoning behind that was: "...when triangles are small it's much harder to fill all those CUs with useful work". "Triangles are small"... Does this ring a bell? UE5's Nanite tech is just that: a micro-polygon renderer that can render huge amounts of triangles very fast while bringing the size of those triangles down to the size of pixels, according to Epic, which results in increased geometric detail. This is perhaps where the industry is moving, starting from UE5.

After all, it's well-known at this point that Cerny is someone who doesn't shy away from speaking to devs be it first-party or third-party devs. He might have information as to where the industry is headed in the next few years, what most teams are planning to do with their engines, what rendering systems or features they're going to develop/utilize, etc. in order to push next-gen games. And let's not forget Cerny himself is a dev and has over 30 years of experience (which I'll admit is longer than my whole existence on this planet).

But I digress. His point was that it's easier to fully use a GPU with 36 CUs than a GPU with a larger number of CUs, which is true, and we can see this in the RDNA 2 CU-scaling benchmark performed by ComputerBase. They tested 40 CUs, 60 CUs, 72 CUs, and 80 CUs at an identical 2000 MHz frequency. 40 CUs to 60 CUs is a 50% increase in physical core count, therefore a 50% increase in TFLOPs (10.24 TF to 15.36 TF). At 2 GHz, the 60 CU part, at best (in 4K), gained an avg. of just 36% more perf over the 40 CU part (remember the 60 CU part has a 33% bandwidth advantage on top). When you reduce the resolution, the gains for the bigger GPU get even smaller.

Now factor in an increase to the clock speed for the smaller GPU, and that 36% perf gain (best-case scenario) gets even smaller.
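As a worked check of the paper math above (compute throughput only; bandwidth, front-end and cache differences are ignored, and the standard RDNA 2 FP32 formula of CUs x 64 lanes x 2 ops x clock is assumed):

```cpp
// Worked check of the CU-scaling arithmetic above.
// FP32 TFLOPs (RDNA 2) = CUs * 64 lanes * 2 ops (FMA) * clock.
#include <cstdio>

double Tflops(int cus, double ghz) { return cus * 64 * 2 * ghz / 1000.0; }

int main() {
    std::printf("40 CU @ 2.0 GHz: %.2f TF\n", Tflops(40, 2.0)); // 10.24 TF
    std::printf("60 CU @ 2.0 GHz: %.2f TF\n", Tflops(60, 2.0)); // 15.36 TF

    // Paper math says +50% throughput, but the cited 4K benchmark measured
    // only ~+36% average frame rate, i.e. the wider part is used less fully.
    std::printf("Paper gain: %.0f%%, measured gain: ~36%%\n",
                (Tflops(60, 2.0) / Tflops(40, 2.0) - 1.0) * 100.0);
    return 0;
}
```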

Case in point, here's more evidence:
7FtU3ol.png

Unfortunately, to judge the consoles we have to wait for UE5 and the new versions of the engines that will exploit the hardware; judging the performance of the consoles with these games is misleading.
Godfall's result disagrees with you. It doesn't exist on last-gen consoles.

I expect UE5 or other engines using micro polygons for their games/scenes will scale better on PS5. In the end, the perf difference will come down to being close for both consoles. You and others like you hoping and dreaming for a PS4 Pro vs X1X level of difference between PS5 and XSX are in for a rude awakening, unfortunately. It's just that. A dream.
 
Last edited:

MonarchJT

Banned
He didn't say that. While higher clock speeds have their own benefits, what he said was, "it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". [...]
This does not seem to me to be the right thread to talk about the technical characteristics of the consoles; we have already (all) digressed enough. There are differences between the two architectures, and differences in performance that will come to light in the coming years. Nobody hopes for a difference as big as X1X vs PS4 Pro; it would be embarrassing for Sony, especially since they are the same price. But many are waiting to see the hardware features of the consoles used in full. MS has focused a lot on a more evolved GPU, while Sony focused on the I/O. Personally I am sure that the differences will be very close to those we read on paper. And if today people can cheer and brag about a 0.30% perf advantage during gameplay, I'm sure there will be laughter in the future if the difference ends up in the double digits.
 
Last edited:

Heisenberg007

Gold Journalism
He didn't say that. While higher clock speeds have their own benefits, what he said was, "it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". [...]
Quality post!
 

DenchDeckard

Moderated wildly
He didn't say that. While higher clock speeds have their own benefits, what he said was, "it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". [...]

This is really interesting! Thanks for posting. Great piece of evidence of how certain games will perform on a wider vs narrower architecture.

Are there benchmarks for 4K? It would be interesting to see how the 6700 XT performs vs the 6800 at higher resolutions. I believe that's where we may see some benefits from wider-and-slower over faster-and-narrower?
 

Heisenberg007

Gold Journalism
Poor upgrade on ps5 = bad port

Poor upgrade on XsX = bad console
On the contrary. According to Bernd Lauert, it's only a lazy port if it doesn't run well on XSX, and a poor console if it's PS5 that doesn't run well.

Crash: Lazy port, only because the shadow settings on XSX are the same as old-gen. Not representative of console power.
Avengers: Perfect port, although the PS5 version uses the same resolution settings as PS4 Pro. Best representation of console power.
 

Lysandros

Member
He didn't say that. While higher clock speeds have their own benefits, what he said was, "it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". [...]
Very good post. 👍 Isn't RX 6800 a 3 Shader Engine/10 CUs per shader array design? In that case I would expect XSX to scale even worse with its 2 SE/12-14 CUs per SA setup compared to PS5.
 
On the contrary. According to Bernd Lauert, it's only a lazy port if it doesn't run well on XSX, and a poor console if it's PS5 that doesn't run well.

Crash: Lazy port, only because the shadow settings on XSX are the same as old-gen. Not representative of console power.
Avengers: Perfect port, although the PS5 version uses the same resolution settings as PS4 Pro. Best representation of console power.
It's not a double standard when both ports vastly differ in the amount of work that went into them.
 

geordiemp

Member
First of all, not all developers use UE, and it is strange that you mention it so often, especially when none of the top Sony first-party teams are using it, nor are there rumors that they will. Surely not Guerrilla, Naughty Dog or SSM, studios which have invested years and so many resources in creating their own engines. You probably expect miracles from the only team that uses UE worthily, Bend?
And again, VRS, as you know, is applied in the RDNA2 architecture at the end of the pipeline; the bulk of the work will be done by the mesh shaders with the help of SF and SFS. Do you know of any games that use mesh shaders or SF?

If you care to look at the order of discussion, it was others who brought up UE5; I just responded to the UE5 posts.

Sony bought Epic shares; you think they won't utilise the 1-triangle-per-pixel streaming and voxel-based GI they have already worked on? Logical.

There, your mystery is solved.
 
Last edited:

MonarchJT

Banned
Quality post!
Not really, for different reasons:

1) For the tenth time, Sony's first-party studios prefer their own engines and mostly don't want to use UE5. It only served Sony to promote the PS5 thanks to the demo (which, as the engineer confirmed, could run on a laptop with a 2080 and a Samsung 870 SSD). Also, Sony bought shares of Epic for $250m during that period.

2) There is no question that, with the same number of CUs, the higher the frequency the higher the performance. But everyone knows that the benefits of parallelization are greater and scale better than GHz; not for nothing do the biggest GPU producers do little else but add more CUs, and the same could be said of CPU cores. I don't want to reiterate the theories that circulated at the launch of the consoles about why the PS5 has so few CUs, theories that go against all the recent evolution in the GPU world. In a couple of years we'll know if it was Cerny magic or some poor cheap sauce.

3) That game was developed to take advantage of the PS5; you should post other benchmarks to be taken seriously, and if you look a bit you will find that it doesn't go just as you say.
 

DenchDeckard

Moderated wildly
Very good post. 👍 Isn't RX 6800 a 3 Shader Engine/10 CUs per shader array design? In that case I would expect XSX to scale even worse with its 2 SE/12-14 CUs per SA setup compared to PS5.

I actually just looked at a load of 6700 XT reviews and the card gets trounced by the 6800 in nearly all games at 1440p and above, so I think it must just be a standout result for Godfall at that resolution. I was trying to locate the benchmark on the website the image was shared from, but I can't seem to find it; if anyone could share it, that would be great. I'm really interested in how the engines scale per GPU at different resolutions. The 6700 XT is deffo (as advertised by AMD) a 1440p card, and the 6800 does a great job of outperforming it at higher resolutions and in ray tracing.
 
Last edited:

MonarchJT

Banned
If you care to look at the order of discussion, it was others who brought up UE5; I just responded to the UE5 posts.

Sony bought Epic shares; you think they won't utilise the 1-triangle-per-pixel streaming they have already worked on? Logical.

There, your mystery is solved.
Sony will never tell Naughty Dog (or Guerrilla with the Decima engine, etc.) to give up an engine they have spent years improving, especially its animation handling, just to move to UE. It could only happen if Sony is preparing to release everything on PC.
 

DJ12

Member
Still, it's seriously embarrassing

Ps5 29250 (99.09%) XsX 28946 (98.57%)

0.52% of advantage, and most of this nothingness is during the cutscenes and not the gameplay. lol
The only embarrassing thing is the constant "it's nothing" when PS5 beats the tower of power at anything. It's certainly not "nothing"; it's pretty significant that not only is the PS5 making up a 2 TF / 16 CU deficit, it's actually performing better.

This particular game does have stuttering though, which doesn't manifest itself as a framerate drop and which is far more annoying than dropped frames.

If everything else was identical, which it isn't by the way, that would be enough to call the win for PS5.
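On the stutter point: an average frame-rate figure can hide exactly this kind of hitching, which is why frame-time spikes matter more than the headline number. A small illustration with made-up frame times (not captured Crash 4 data):

```cpp
// Illustration of why stutter can hide behind a healthy-looking average FPS.
// Frame times below are invented, not captured from any game.
#include <cstdio>
#include <vector>
#include <algorithm>

int main() {
    // 119 smooth frames at ~16.7 ms plus one 100 ms hitch over ~2 seconds.
    std::vector<double> frameMs(119, 16.7);
    frameMs.push_back(100.0);

    double total = 0.0;
    for (double ms : frameMs) total += ms;

    double avgFps  = 1000.0 * frameMs.size() / total;                    // still ~57 fps
    double worstMs = *std::max_element(frameMs.begin(), frameMs.end());  // 100 ms hitch

    std::printf("Average FPS: %.1f\n", avgFps);
    std::printf("Worst frame: %.1f ms (a very visible hitch)\n", worstMs);
    return 0;
}
```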
 

MonarchJT

Banned
I actually just looked at a load of 6700 XT reviews and the card gets trounced by the 6800 in nearly all games at 1440p and above, so I think it must just be a standout result for Godfall at that resolution. I was trying to locate the benchmark on the website the image was shared from, but I can't seem to find it; if anyone could share it, that would be great. I'm really interested in how the engines scale per GPU at different resolutions. The 6700 XT is deffo (as advertised by AMD) a 1440p card, and the 6800 does a great job of outperforming it at higher resolutions and in ray tracing.
Exactly, Godfall is a game developed for the PS5 launch.
 