
[IGNxGamer] Matrix Awakens, Hellblade and the Power of Unreal Engine 5 - Performance Preview


Panajev2001a

GAF's Pleasant Genius
It's probably worth considering that some or all of the innovative techniques used by UE5 will end up in other proprietary engines.

Obviously Epic are well ahead of everyone else because they have massive dev resources, not to mention partnerships with hardware vendors, but at the end of the day they are a software company offering a package for clients to use - so they can't really hide their innovations.

To some extent it's almost unavoidable, given the competitive nature of the space. Although we've not seen the same sort of upgrade elsewhere yet, we're still very much in the cross-gen period, so most games will have been architected with 2013 tech and its limitations in mind.
One has to hope for first-party non-cross-gen engines :).
 

Darsxx82

Member
Again, this would point to good system architecture choices by Cerny's team.

The system is easy to develop for (despite the TFLOPS disparity and memory bandwidth delta, overall sustained performance is better than some expected), the devkit was available to third parties early on, and UE5 did not require Sony's ICE team or an equivalent to optimise the PS5 version (with one of the MS teams most experienced in UE5 we have essentially parity between XSX and PS5, with the PS5 still able to pull ahead in some cases), etc…

The SoC is smaller, yields seem to be good even at the high clock speed, and the system is already not selling at a loss (it stopped doing so relatively early on).

This is not to talk down the XSX, which is a well-designed system too (and yes, less ugly and less noisy), but to give props where they are due.
Achieving a port that offers the same experience as the base PS5 version, without even counting the time spent almost designing a specific XSS version, is the most TC could ask for.
No matter how much XSX experience TC has, nothing compares to being the creators of the engine and building the demo on PS5.

Put simply, experience with a given piece of hardware counts, and even more so with an engine still in development that is testing new technologies on console.
Put simply, if it had been the XSX that had "enjoyed" this advantage, surely the small Epic team would not have needed TC, and it would have been the PS5 version that needed external help for proper optimisation.

Between consoles so evenly matched in power and performance, things like extra experience with the hardware, or simply being the base platform, make a difference.
 

Urban

Member
Man, I just can't wait for a next-gen-console-only UE5 game. But I can't see myself running UE5 at full HD and max settings on my PC.

edit: boy, does the Series S version look ugly xD
 
Last edited:

M1chl

Currently Gif and Meme Champion
yeah-baby-oh.gif

Channel 9 Reaction GIF by The Block

Tools again.
More like this:

Foldable-Empty-Iron-Case-with-Cheaper-Price.jpg
 

Panajev2001a

GAF's Pleasant Genius
Achieving a port that offers the same experience as the base PS5 version, without even counting the time spent almost designing a specific XSS version, is the most TC could ask for.
No matter how much XSX experience TC has, nothing compares to being the creators of the engine and building the demo on PS5.

Put simply, experience with a given piece of hardware counts, and even more so with an engine still in development that is testing new technologies on console.
Put simply, if it had been the XSX that had "enjoyed" this advantage, surely the small Epic team would not have needed TC, and it would have been the PS5 version that needed external help for proper optimisation.

Between consoles so evenly matched in power and performance, things like extra experience with the hardware, or simply being the base platform, make a difference.

Sure, TC has not only a lot of (critical) XSX and XSS experience but also a lot of UE code expertise (UE4/5 source is available, much more openly than before), hence why they were called in to help. Fact is, we are comparing Epic plus one of the most technically proficient first-party MS devs (HW and UE experience) getting the demo to run well on XSX/XSS versus Epic alone on PS5.

Ensuring the PS5 is not too expensive to make and maximising yields, ensuring it is performant but also easy to code for, having stable dev kits sent out early on, etc… this is what system architecture is about.
 

Fafalada

Fafracer forever
Sure there has to be gameplay balance but there must be a better way.
It's not that there aren't other ways - it's just that no one is seriously investing in them (or rather, the proportional investment into prettier static worlds is so much higher that it doesn't even register).
Not a recent development either - it's been ongoing for decades now - and until someone figures out a way to market other research domains, it's not likely to change.

Obviously Epic are well ahead of everyone else because they have massive dev resources
Unity has massive dev resources also (to my knowledge they've been outspending Epic for at least half a decade now), so it's not just about that.
But also worth noting that Epic's marketing machine is firing on all cylinders - and that's always been their biggest strength over direct competition. And direct comparisons to proprietary tech were never particularly useful since those are virtually never marketed 'as-is/through demos' - but almost exclusively through shipping products that are 2-3 years behind the tech (that goes for those built with UE as well).
 
Last edited:

Fafalada

Fafracer forever
That it already starts to perform this well this early in the generation is a good sign that Sony were right - and it means the hardware specifications of the PC platform need updating, since the I/O reference designs are archaic and poor. The PC platform is currently quite poor at moving information between the devices connected together on a motherboard - both the hardware specifications and the Windows software hold it back.
Interestingly this isn't necessarily a new development.

The PS2 generation was defined by consoles that were just really good at moving data around - the PS2 with a system-wide DMA architecture that had an unprecedented number of channels and bandwidth for a box of its size, and massively outsized bandwidth where it mattered (rasterization, vector processing); the Xbox with a giant (for its time) high-speed unified pool; the GCN with ultra-low-latency memory the likes of which we've never seen before (or since), plus large embedded GPU buffers.
It was only the storage limitations that prevented them from stretching further (if all three of those consoles had had a standard HDD, that gen would have had more runway than it did - barring the handful of Xbox titles that actually took advantage of its HDD).

PS4/XB1 also arguably held up as well as they did primarily because of their (again, for the time) oversized unified memory pools - in fact it's notable that both aged far better than their predecessors did over a similar time-frame.
 
lol yes. Wasn't.
Corrected.



Yep, Alex led the charge on that. What was shocking was that even after Cerny confirmed the ray tracing was hardware accelerated in the second Wired article, Alex went on and on about how it would be just shadows and ambient occlusion, because they're cheaper and ray-traced reflections and GI are too expensive for PS5. Like, WTF dude, just stfu and take the L. He continued to repeat that nonsense up until Road to PS5, and even after Ratchet was revealed he said, IIRC, that the reflections weren't ray traced. So much FUD was spread around the PS5 Wired articles and Road to PS5 that it was hard not to be influenced by some of this nonsense when respectable outlets like DF were the ones spreading it.

Timestamped:


It's amazing Alex and DF are still taken seriously. They've been spreading this type of FUD for a long time. They definitely have a bias, and it's embarrassing.
 

Sosokrates

Report me if I continue to console war
The thing about attributing the PS5's UE5 demo performance advantage to hardware: the PS5 has a sustained 5-10 fps advantage at times, and that's a 25%-50% performance advantage. If the advantage were down to hardware, we would see it more consistently. Instead the performance differences are all over the place - some games favour resolution and some favour fps, regardless of the platform - which suggests a software/optimisation issue.
Last gen, performance differences could be identified easily. The vast majority of games performed better on the PS4 because of its more powerful GPU and RAM setup, but occasionally the X1 would have better FPS because of its slight CPU advantage.
 

Pedro Motta

Member
The thing about attributing the PS5's UE5 demo performance advantage to hardware: the PS5 has a sustained 5-10 fps advantage at times, and that's a 25%-50% performance advantage. If the advantage were down to hardware, we would see it more consistently. Instead the performance differences are all over the place - some games favour resolution and some favour fps, regardless of the platform - which suggests a software/optimisation issue.
Last gen, performance differences could be identified easily. The vast majority of games performed better on the PS4 because of its more powerful GPU and RAM setup, but occasionally the X1 would have better FPS because of its slight CPU advantage.
"Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs - when triangles are small, it's much harder to fill all those CUs with useful work."

Mark Cerny - Road to PS5
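Cerny's claim can be sanity-checked with a toy model. The numbers below are illustrative assumptions (RDNA-style 64-wide wavefronts, each tiny triangle pessimistically shaded in its own wavefronts), not measured hardware behaviour, but they show why pixel-sized triangles make it hard to keep a wide GPU busy:

```python
# Toy model: fraction of SIMD lanes doing useful pixel-shading work when
# every triangle is packed into its own wavefronts (a pessimistic case).
# WAVE_WIDTH and the packing assumption are illustrative, not measured.

WAVE_WIDTH = 64  # lanes per wavefront (RDNA wave64)

def lane_utilization(pixels_per_triangle: int) -> float:
    waves_needed = -(-pixels_per_triangle // WAVE_WIDTH)  # ceiling division
    return pixels_per_triangle / (waves_needed * WAVE_WIDTH)

for pixels in (512, 64, 16, 4, 1):
    print(f"{pixels:4d} px/triangle -> {lane_utilization(pixels):6.1%} of lanes busy")
```

With Nanite pushing triangles toward pixel size, per-wavefront utilisation collapses, and the more CUs there are to feed, the more of the chip that waste applies to - which is the intuition behind the quote.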
 

Fredrik

Member
I'm thinking chip shortages would be a deciding factor but you never know.
Maybe, maybe not. I have some leftfield insight from working in electronics manufacturing - different industry, but things are bad no matter where you look. Right now I'd say it's no harder to build something new than to build something old. I've seen several redesigns happen just in the last year, because anything from Ethernet chips to DC-DC converters to relays to CPUs can suddenly become virtually impossible to find. On a new product you can dodge some big supply traps right away. On an old product you're kinda stuck and just have to find someone selling the components, which isn't easy when everybody is trying to secure their own production.
 
Last edited:

Vognerful

Member
Certainly best bang for buck around at the moment.. if only there were more of them.
like, yeah...
I remember when I used to bring up RTX3070 & 3080 being 500 & 700 USD at MSRP and people would keep saying "lol, no you can't find them at that price."
 

Vognerful

Member
"Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs - when triangles are small, it's much harder to fill all those CUs with useful work."

Mark Cerny - Road to PS5
Do we really have any sources or studies on this other than what Cerny said?
 

Elog

Member
Do we really have any sources or studies on this other than what Cerny said?
It is a fundamental principle in computational science: even assuming each work unit can do exactly the same amount of work, each additional unit adds less practical output than the last when you parallelise a task. Some theoretical information can be found here - interestingly, I/O is one of the aspects that determines just how steep the loss in productivity is when adding work units.
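The principle here is Amdahl's law. A minimal sketch, using a hypothetical 5% serial (e.g. I/O-bound) fraction - the exact fraction is an assumption, but the shape of the curve is the point:

```python
def speedup(units: int, serial_fraction: float) -> float:
    """Amdahl's law: best-case speedup over one work unit, given the
    fraction of the task that cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

# With 5% of the work serial, 44% more units (36 -> 52) buys roughly
# 12% more speed; the returns keep shrinking from there.
for n in (36, 52, 80):
    print(f"{n} units -> {speedup(n, 0.05):.2f}x")
```

The steeper the serial fraction, the flatter the curve - which is why fast I/O matters for how well extra compute units pay off.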


Edit: Was beaten to it :)
 
Last edited:

Bogroll

Likes moldy games
Consoles are still semi-custom chips running customised OSes and libraries that do not have to maintain the same abstracted compatibility with the huge variety of CPU and GPU configurations (and their drivers/microcode) that PCs do. Some of the features that arrive on PC were financed as semi-custom changes; some are exposed to the console APIs sooner or better…

XSX titles can use DirectStorage today; on PC it is still a distant beta, for example.
Hence why I said they're just basically PCs.
 

adamsapple

Or is it just one of Phil's balls in my throat?
No - i hoped it would take him to the next world/ realm/ afterlife and him leaving DF that way..

Fucking hell, you're a jackass ..
Man, I just can't wait for a next-gen-console-only UE5 game. But I can't see myself running UE5 at full HD and max settings on my PC.

edit: boy, does the Series S version look ugly xD

Well, unless there are any last-minute delays, we're gonna get our first UE5-only game in April with STALKER 2.

Which is said to be using some form of Nanite and Lumen, whether it's as advanced or optimized as what we saw in the Matrix demo is not known.

We already know STALKER 2 will support 4K/60/RT on Series consoles.
 
Last edited:

Kenpachii

Member
Anyway nice video nxgamer, enjoyed it.


Same RDNA 2.0 architecture, but not the same overall architecture. The PS5's I/O could be helping the PS5 GPU overperform its TFLOPS here.

Also, the RDNA 2.0 architecture relies on really high clocks, up to 2.7 GHz, to hit its performance targets. The PS5 is at 2.23 GHz while the XSX is even lower at 1.825 GHz. It's possible that the lower clocks are holding back the 52-CU XSX GPU.

Lastly, the XSX uses an RDNA 2.0 chip that adds more CUs to a 2-shader-array system, which is probably causing some kind of bottleneck where the CUs aren't being effectively utilised. The 13 TFLOPS AMD 6700 XT does not use 52 CUs; it tops out at 40 CUs and pushes the clocks up to 2.4 GHz to hit its TFLOPS target. It seems even AMD knew that was the best way to get performance out of that particular CU configuration. This is also something Cerny hinted at in his Road to PS5 talk - something we initially dismissed as damage control.

Another potential difference between the XSX and RDNA 2.0 PC cards is that the XSX lacks the Infinity Cache that is part of the GPU die on those cards. The 6700 XT is a 337 mm² GPU, while the entire XSX APU, CPU and I/O included, comes in around 360 mm². So how can a 40-CU GPU be almost as big as the entire 52-CU XSX APU? The Infinity Cache must take up a lot of that space, and may be why the XSX isn't performing up to its TFLOPS potential.

People are still pushing this 52 CU bottleneck idiocy?

There is no indication that the I/O in the PS5 does better than the I/O in the Xbox Series X in the Matrix demo. That's just tinfoil-hat nonsense. You can't diagnose the hardware in those boxes, and without information from Tim himself, or a demonstration that goes into deeper detail from Cerny or his Xbox counterpart, we don't know what is going on. It's funny how they always go silent when real data is asked for.

You keep saying RDNA 2 likes high clocks and that the CUs are otherwise bottlenecked, but AMD doesn't agree with you.

Here are their PC GPUs.

By your logic the 6900 XT should pack 36 CUs at best. Clearly AMD doesn't agree with you there, because they go as high as 80.

6600 XT: 32 CUs at 2600 MHz
6700 XT: 40 CUs at 2600 MHz
6800: 60 CUs at 2100 MHz
6800 XT: 72 CUs at 2250 MHz
6900 XT: 80 CUs at 2250 MHz

XSX sits at 52 CUs at 1825 MHz
PS5 sits at 36 CUs at 2230 MHz
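For reference, the headline TFLOPS figures quoted for these chips fall straight out of CU count and clock: 64 FP32 ALUs per CU, times 2 ops per clock from fused multiply-add. A quick sanity check:

```python
def fp32_tflops(cus: int, clock_mhz: int) -> float:
    # CUs x 64 FP32 lanes x 2 ops/clock (FMA) x clock, scaled to TFLOPS
    return cus * 64 * 2 * clock_mhz / 1e6

print(f"XSX: {fp32_tflops(52, 1825):.2f} TF")  # ~12.15 TF
print(f"PS5: {fp32_tflops(36, 2230):.2f} TF")  # ~10.28 TF
```

Note this is peak theoretical throughput only; the whole argument in this thread is about how much of it each configuration can actually keep busy.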

It gets even more fun

5c6be703ed01dfe9340eea6e6fa1fa42.png


Let's not pretend 52 CUs are hard to feed at 1825 MHz.

The 6700 XT not having 52 CUs doesn't mean anything; AMD builds cards to compete with Nvidia and picks targets for that. The 6700 XT will mostly be used as a 1080p/1440p card without RT, so it's probably better to push higher clocks with fewer CUs, which favours the card in benchmarks against Nvidia (see Hardware Unboxed). Both the PS5 and the Xbox are built for high resolutions and some form of RT, which will favour higher CU counts over fewer, faster ones - unless your software is still built around last-gen solutions.

This is why you see barely any difference between a 3080 and a 6800 XT in some titles, while in other titles, like Control or Cyberpunk, Radeon falls off pretty hard.

Now, at the end of the day, what does the Xbox Series X GPU's ~18% advantage bring? Minor advancements, if devs even bother with it, because there is no unlocked framerate on consoles: anything not hitting 60 will be locked to 30. The only difference you will see is where performance tanks under 30 - and devs should never allow that to happen anyway; if it does, they failed at optimising for those boxes.


The thing about attributing the PS5's UE5 demo performance advantage to hardware: the PS5 has a sustained 5-10 fps advantage at times, and that's a 25%-50% performance advantage. If the advantage were down to hardware, we would see it more consistently. Instead the performance differences are all over the place - some games favour resolution and some favour fps, regardless of the platform - which suggests a software/optimisation issue.
Last gen, performance differences could be identified easily. The vast majority of games performed better on the PS4 because of its more powerful GPU and RAM setup, but occasionally the X1 would have better FPS because of its slight CPU advantage.

Epic, the maker of the engine and the demo, built it for the PS5; the port was done by a dev group from Microsoft. It's safe to assume the demo plays to the strengths of the PS5 over the Xbox Series X, because it was straight-up designed for it. And optimising a tech demo, on an engine whose features aren't finished yet, with limited time, is probably not going to yield great results.

To make it easier to understand, a good example:

A 6800 XT can perform like a 3080 (Borderlands 3), it can outperform a 3080 (AC Valhalla), and it can be far worse than a 3080 (Control). So which is better, a 6800 XT or a 3080?

It depends on what the game was made for, what optimisation took place, what solutions it pushes, and what targets were set.

The Matrix demo:

Low resolution (low CU usage), built for PS5 (performance targets that mean staying within a 36-CU budget).

What happens when a game only keeps 36 CUs' worth of work in flight and one box has higher clocks? You get more performance on the higher-clocked box.

But what happens when you get more performance and a 30 fps lock is there anyway, so you can't really see it? The performance differences disappear. Luckily the demo didn't stay at 30 fps, did it? It dropped way below 30. Isn't that interesting, when with shipped games you almost always see a steady 30 fps lock? I wonder why that is. How could a company like Epic not maintain a stable 30 fps? Why such a low resolution? Wasn't it built specifically for the PS5? So why not just pick 4K and a rock-solid 30 fps, with uncapped framerates hovering around ~45?

What happens if you instead target the demo at 4K and a 52-CU workload at 30 fps on an Xbox Series X? Would the PS5 perform worse? No, because the fps lock would hold a stable 30 on both, and the 50 vs 35 fps difference (random numbers) wouldn't be visible. See how that works?
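The cap-masking argument above is easy to illustrate; the frame-rate numbers here are made up purely for the sketch:

```python
# A 30 fps cap hides any headroom above 30, so two boxes only look
# different in the frames that dip below the cap.

def displayed(uncapped_fps, cap=30.0):
    return [min(f, cap) for f in uncapped_fps]

box_a = [50, 41, 33, 26]  # hypothetical uncapped frame rates
box_b = [45, 38, 31, 27]

print(displayed(box_a))  # [30, 30, 30, 26]
print(displayed(box_b))  # [30, 30, 30, 27]
```

A large uncapped gap (50 vs 45) collapses to nothing on screen, and only the dips remain visible - which is why an uncapped or dropping demo exposes differences that a shipped, capped game would hide.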

But this is just the GPU; tons of other factors, down to simple optimisation issues or time constraints, could create differences. It's not always just hardware.

The Matrix demo was made to show off Epic's engine and to show it can run on current-gen consoles. It does nothing more than that. Performance comparisons between the two boxes aren't very useful in a tech demo like this; it's interesting for science, but that's about it. This is also why DF didn't burn their hands on comparisons.

"Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs - when triangles are small, it's much harder to fill all those CUs with useful work."

Mark Cerny - Road to PS5

It's so hard that, besides a select few, practically all RDNA 2 and RDNA 3 cards have more than 36 CUs. I guess AMD, who made the PS5's GPU architecture, has no clue what they're doing.
 
Last edited:
Hopefully they were just joking, but it's in bad taste.

He's banned now, so thankfully we won't have any more similar insightful follow ups from that user.
Personally I wouldn't even joke like that. I know several people who have died from it. And even if he never knew anyone who died from it, he knows at least someone who has had it. That's why I never wish anything negative on anyone, because karma is a filthy bitch.


Anyways, bring on the UE5 games, can't wait for stalker 2! One of my favorite series from so many years ago!
 
Anyway nice video nxgamer, enjoyed it.

People are still pushing this 52 cu bottleneck idiocy?

[snip - full post quoted above]
Get out of here with your facts and logic, I'd much rather read opinions and theories from fanboys.
 
If using correct math makes me crazy, so be it.
No
This team of 20 people made all three of the UE5 tech demos.
Instead of using some common sense, you went right to "no one at Epic ever touched a Series dev kit". Neither I nor anyone else said that. I said this team has had twice the time working on PS5, because they have.
Not only did you jump straight to crazy, your crazy derailed the next few pages of this thread, because it was all about something no one said.
 

Dodkrake

Banned
People are still pushing this 52 cu bottleneck idiocy?

It's more related to more CU's per shader array, and more CU's per shader engine.

There is no indication the I/O in the PS5 does better then i/o in the xbox series X in the matrix demo.

Actually, there is: the Xbox Series X underperforms when the camera is moving fast (meaning when more streaming is occurring).

Now at the end of the day what does 18% or whatever advantage xbox series X gpu deliver bring? minor advancements if devs are even bothered by it. because there is no unlocked framerate on consoles so anything not 60 will be locked to 30. The only differences that u will see is what performance tanks under the 30. But devs should never allow that to happen anyways because they failed at optimized in for those boxes.

Oh the good old excuse of optimization

It's so hard that practically all besides a select few RDNA2 cards and RDNA3 cards have more cu's then 36's. Guess AMD that made the PS5 GPU architecture has no clue what they are doing.

The PS5 APU has 40 CUs, 4 of which are disabled for yields. The Xbox Series X has 56 CUs, with 4 disabled for yields. Do you know what else had 56 CUs? The Radeon RX Vega 56.

With your logic the 6900xt should just pack 36 cu's at best. Clearly AMD doesn't agree with you there, because they go as high as 80.

Nice strawman. We're comparing 40 vs 48, for example, with the former at lower clocks and the latter at higher clocks. You're comparing a die that's twice the size. Also, 80 CUs works out to 10 CUs per shader array / 20 CUs per shader engine (same as the PS5, and different from the Series X).

Both ps5 and xbox are built for high resolutions and some form of RT which will favor higher cu counts over lower faster ones. unless your software is still built around last gen solutions.

RT rays will favor more CUs, RT bounces will favor higher clocks. Matrix demo has better ray tracing on the PS5 overall.
 

Sosokrates

Report me if I continue to console war
"Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs - when triangles are small, it's much harder to fill all those CUs with useful work."

Mark Cerny - Road to PS5

I don't see the relevance this has to my post. I mean, Cerny wasn't saying this when the Xbox One had fewer CUs 😆
 

Fredrik

Member
Personally I wouldn't even joke like that. I know several people that have died from it. And even if he never knew anyone who died from it, he knows at least someone who has had it before. That's why I never wish anything negative on no one, cause karma is a filthy bitch.


Anyways, bring on the UE5 games, can't wait for stalker 2! One of my favorite series from so many years ago!
You’re a good person!
I too know people who have died. I know people who have lost their sense of taste, I know one who got a blood clot - it could have ended badly, but it was found in time - and I know a family whose 13-year-old daughter needs to be carried up the stairs because she has no energy at all.
And I know those who barely even noticed they had it.
People should just be thankful if their body can handle it; you never know how your parents, kids, wife, or best friend will handle it.
 

Panajev2001a

GAF's Pleasant Genius
Do we really have any sources or studies on this other than what Cerny said?
Sources? Good ones are what Fafalada quoted above and the words of the designers behind the Xbox One/XSX/XSS:
Goossen also reveals that the Xbox One silicon actually contains additional compute units - as we previously speculated. The presence of that redundant hardware (two CUs are disabled on retail consoles) allowed Microsoft to judge the importance of compute power versus clock-speed:

"Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing, but we could go and do the experiment - if we were actually at 14 CUs what kind of performance benefit would we get versus 12? And if we raised the GPU clock what sort of performance advantage would we get? And we actually saw on the launch titles - we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did."


You may remember Goossen as the tech dude in the XSX tech interviews ( XSX|S system architect ).
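The numbers behind Goossen's experiment make the point concrete. Using the actual Xbox One GPU clocks (800 MHz base, raised to 853 MHz at launch), the CU route offered more theoretical compute than the clock route, yet the smaller clock bump won in measured titles - extra CUs didn't scale linearly:

```python
base_cus, test_cus = 12, 14
base_clock, raised_clock = 800, 853  # Xbox One GPU clocks, MHz

cu_gain = test_cus / base_cus - 1           # ~+16.7% theoretical compute
clock_gain = raised_clock / base_clock - 1  # ~+6.6%, applied to the whole GPU
print(f"14 CUs:  +{cu_gain:.1%} theoretical compute")
print(f"853 MHz: +{clock_gain:.1%} across the whole chip")
```

A 6.6% clock raise beating a 16.7% CU increase on launch titles is the same wide-vs-fast trade-off Cerny cites for the PS5's 36 CUs at 2.23 GHz.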
 
Last edited:

assurdum

Banned
Anyway nice video nxgamer, enjoyed it.




People are still pushing this 52 cu bottleneck idiocy?

There is no indication the I/O in the PS5 does better then i/o in the xbox series X in the matrix demo. That's just tinfoil head nonsense. U can't diagnose the hardware on those boxes and without information from tim himself or a demonstration that goes into deeper detail from cerny or the xbox guy we don't know what is going on. Its funny how they always are silent when real data is asked for.

U keep talking about RDNA2 likes high clocks otherwise CU's are bottlenecked, but AMD doesn't agree with you

Here are there PC GPU's.

With your logic the 6900xt should just pack 36 cu's at best. Clearly AMD doesn't agree with you there, because they go as high as 80.

6600xt has 32 cu's and 2600 mhz
6700xt has 40 cu's and 2600 mhz
6800 has 60 cu's and 2100 mhz
6800xt has 72 cu's and 2250 mhz
6900xt has 80 cu's and 2250 mhz

Xbox sits at 52 cu's at 1.825 mhz
PS5 sits at 36 cu's at 2.230 mhz

It gets even more fun

5c6be703ed01dfe9340eea6e6fa1fa42.png


Let's not pretend 52cu's are hard to feed at 1825 mhz.

6700xt not using 52 cu's doesn't mean anything, AMD makes cards to compete with nvidia and focuses on targets for that solution. the 6700xt is mostly going to be used as a 1080p/1440p card without RT, its probably better to push higher clocks lower cu's as a result which favors the card in benchmarks against nvidia ( hardware unboxed ) Both ps5 and xbox are built for high resolutions and some form of RT which will favor higher cu counts over lower faster ones. unless your software is still built around last gen solutions.

This is why u see barely a difference with a 3080 over a 6800xt in some titles, but in other titles like control / cyberpunk radeon falls off pretty hard.

Now at the end of the day what does 18% or whatever advantage xbox series X gpu deliver bring? minor advancements if devs are even bothered by it. because there is no unlocked framerate on consoles so anything not 60 will be locked to 30. The only differences that u will see is what performance tanks under the 30. But devs should never allow that to happen anyways because they failed at optimized in for those boxes.




Epic, the maker of the engine and the demo, built it for the PS5; the port was done by a dev group from Microsoft. It's safe to assume it plays to the strengths of the PS5 over the Xbox Series X because it was designed for it in the first place. And optimizing a tech demo, on an engine whose features aren't finished, with limited time, probably isn't going to yield great results.

To make it easier to understand, here's a good example:

The 6800 XT can perform like a 3080 (Borderlands 3), it can outperform a 3080 (AC Valhalla), and it can be far worse than a 3080 (Control). So which is better, a 6800 XT or a 3080?

It depends on what the game was made for, what optimization took place, what solutions it pushes, and what targets were set.

Matrix demo:

Low resolution (low CU usage), built for PS5 (performance targets, which means staying within a 36 CU solution).

What happens when a game only ever uses 36 CUs' worth of work and one box has higher clocks? You get more performance on the higher-clocked box.

But what happens when you get more performance yet a 30 fps lock is there anyway, so you can't really see it? The performance differences disappear. But luckily the demo didn't stay at 30 fps, did it? It dropped way below 30. Isn't that interesting, when with games in general you always see steady 30 fps locks? I wonder why that is. How could a company like Epic not hold a stable 30 fps? Why such a low resolution? Wasn't it built specifically for the PS5? So why not just choose 4K as the resolution and a rock-solid 30 fps, instead of framerates running wild at ~45?

Now what happens if you target the demo at 4K with a 52 CU solution at 30 fps on an Xbox Series X? Would the PS5 perform worse? No, because the fps lock would hold a stable 30 on both, and the 50 vs. 35 fps (random numbers) gap wouldn't be visible. See how that works?
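That cap argument can be sketched in a few lines (a hypothetical illustration; the 50 and 35 fps figures are the post's own random numbers, not measurements):

```python
def presented_fps(uncapped_fps, cap=30):
    # With a hard framerate cap, the displayed rate is the uncapped
    # rate clamped to the cap; any headroom above it is invisible.
    return min(uncapped_fps, cap)

print(presented_fps(50))  # 30: stronger box, capped
print(presented_fps(35))  # 30: weaker box, also capped, so the gap is hidden
print(presented_fps(24))  # 24: only below the cap does a difference show
```

This is why comparisons only become visible in the stretches where a box fails to hold the lock.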

But this is just the GPU; tons of other things, from simple optimization issues to a lack of time, could create problems. It's not always just hardware.

The Matrix demo was made to show off Epic's engine and to show it can run on current-gen consoles. It does nothing more than that. Performance comparisons between the two boxes aren't very useful in a tech demo like this; it's interesting for science, but that's about it. This is also why DF didn't burn their hands on comparisons.



It's so hard that practically all but a select few RDNA2 and RDNA3 cards have more than 36 CUs. I guess AMD, who made the PS5's GPU architecture, has no clue what they are doing.
Here's the thing: no one has said having more CUs is a bad thing, but still, whenever someone tries to point out that fewer, faster CUs are easier to keep fed, the same console-war nonsense comes out, because no one has argued for a second that fewer CUs give superior hardware, only that they bring other advantages.
 
Last edited:

Sosokrates

Report me if I continue to console war
You do know the PS5 GPU is clocked faster than the XSX one, right? Where CU count isn't the deciding factor, I wouldn't call it inferior.

First time it's really mattered.
It's a shame Cerny did not show some demo or game demonstrating the benefit of a higher clock speed versus higher CU counts and higher TFLOPS.
 

assurdum

Banned
First time it's really mattered.
It's a shame Cerny did not show some demo or game demonstrating the benefit of a higher clock speed versus higher CU counts and higher TFLOPS.
It's a shame neither did MS. And again, no one here downplayed the utility of higher CU counts; I just pointed out that a faster GPU also has its own advantages. Is that so tough to grasp?
 
Last edited:

Sosokrates

Report me if I continue to console war
Eeeh, no. Not directly anyway. The emphasis on the TFLOPS number and "sustained" clocks is a bit closer to the comparison angle than Cerny ever got with the PS4 GPU; still, this is an aside to the aside to the aside :p.

They said this during tech interviews; both companies talk about specs and "world's most powerful console."
I don't really see your point here.
 

GymWolf

Member
I think it has definitely surpassed my expectations, but again, I wouldn't call it revolutionary just yet.

I suppose the flying section of the UE5 demo was as close to revolutionary as it gets, but I expected that stuff in games by now based on what he said. That's why I keep harping on the flying in Horizon. I want to see Aloy fly around the map at those speeds with that asset quality.


When standing around in the Matrix demo, both consoles are fine. Even when driving around in a slow car, they are equal. It's when they start flying at high speeds that the XSX starts to lag behind. That's why I bring up those two screenshots: when the load is high, the XSX drops to PS5 levels, which is something a better GPU simply should never do.

Those comparisons Alex took are from photo mode, so looking at the graph isn't exactly accurate because the load doesn't change. Once he goes into photo mode, the game effectively pauses and only renders what's on screen. The XSX version of Control had several framerate issues back when it first launched anyway; the PS5 was performing better in-game, which is why that comparison was so surprising.
Not gonna happen. (talking about the asset quality, not the flying itself)

Isn't zero pop-in, with the highest texture detail no matter the distance, a feature of UE5?

Can the Decima engine do something like that?
 

Panajev2001a

GAF's Pleasant Genius
First time its really mattered.
Its a shame cerny did not show some demo or game demonstrating the benefit of a higher clockspeed vs higher cus and higher Tflops.
Again, that's not how he operates, or how he approached that part-GDC, part-general-audience tech talk. He doesn't boast, trash talk, or try to blow your mind with demos. You act as if he just appeared on the scene :p.
 

Sosokrates

Report me if I continue to console war
Again, that's not how he operates, or how he approached that part-GDC, part-general-audience tech talk. He doesn't boast, trash talk, or try to blow your mind with demos. You act as if he just appeared on the scene :p.

It would have been nice to see the kind of improvement the high clock speed gives.

It's frustrating being in the dark about why visual differences happen, regardless of whether it's the PS5 or the XSX performing better.

You can take two PC GPUs on the same architecture with the same memory bandwidth, one with more CUs and a lower clock and the other with fewer CUs but higher clocks; if they have the same TFLOPS, they perform practically the same, and if one has more TFLOPS, it performs better even at a lower clock speed.
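As a rough arithmetic check of that claim (a sketch comparing only paper FP32 throughput; real performance also depends on bandwidth, caches, and front-end limits, and the 44 CU config here is hypothetical), two RDNA2-style configs with different CU/clock splits can land on essentially the same TFLOPS:

```python
def fp32_tflops(cus, clock_mhz):
    # RDNA2: 64 shader ALUs per CU, 2 FLOPs per clock (fused multiply-add).
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

narrow_fast = fp32_tflops(36, 2230)  # fewer CUs, higher clock
wide_slow = fp32_tflops(44, 1825)    # more CUs, lower clock (hypothetical)
# Both land at ~10.28 TFLOPS despite very different CU/clock splits.
assert abs(narrow_fast - wide_slow) < 0.01
```

By this paper metric the two are interchangeable; the thread's disagreement is over which split is easier to keep fully utilized in practice.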
 

Fafalada

Fafracer forever
First time its really mattered.
It always mattered. Had the original Xbox GPU been clocked at 150 MHz like the PS2's (or lower), the competition for best-looking games that gen would have been firmly between the GCN and the PS2.
Or if you prefer - Xbox was narrow and fast that gen, PS2 was wide and slow(er).

So the same as the PS5's GPU...
Well, no: everything in the X1 GPU was about 2-3x narrower and/or slower,
so the deltas were both much larger and never in favor of the X1 (even accounting for the clock-speed differential).
 
Last edited:
Status
Not open for further replies.
Top Bottom