
GhostRunner PS5/XSX/XSS DF Performance Analysis

Loxus

Member
They are the same architecture in everything but name. RDNA2 is not much different from RDNA1; if you don't do any ray tracing-related tasks they should perform basically the same, only RDNA2 is more power efficient.
They literally could have named it RDNA+ (like they did with Zen+) and no one would have questioned it, but it's better PR to add that 2.
There are still different generations though.
You can't compare cards from different vendors, architectures or generations without getting inaccurate information.
 

01011001

Banned
There are still different generations though.
You can't compare cards from different vendors, architectures or generations without getting inaccurate information.

The differences are minimal, and yes, you can totally compare them. You can check whether similarly specced cards perform the same, and in the case of RDNA 1 and 2 they do perform very much the same. The differences you see come up are mostly due to the RAM configuration being different, not due to the actual architecture of the GPU on the board.
 

FrankWza

Member
You're still repeating the same BS over & over
[Reaction GIF: Saul Goodman defense, Better Call Saul]
 

Md Ray

Member
Second example makes sense, just not a workstation vs gaming GPU on the other hand. Horrible example. That was my whole point of even commenting. Didn't even know it was you who brought up this weird comparison. You can't compare a single-slot GPU made for workstations to a full-fledged gaming GPU. I wouldn't compare either one. I could compare AMD to Nvidia, but wouldn't compare workstation GPUs, which were made for a completely different purpose, to gaming GPUs, even if you can apply a gaming profile. Doesn't mean it's a gaming GPU meant for gaming.
Why not? It's fully enabled GA104. Literally the same silicon as 3070 Ti.
 

onQ123

Member
The BS is thinking that it's some sort of PS5 technical wizardry at work over simple optimizations.

It's not technical wizardry. The PS5 & Xbox Series X just had different choices made when being designed, and this game simply favors the PS5's design more.

PS5 having the ROPs & internal bandwidth advantage is a fact. Why do you ignore that & keep yelling that it's because the PS5 received more optimization?
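For reference, here's a rough sketch of the fillrate side of that claim, assuming the commonly reported figure of 64 ROPs on both GPUs (not an official spec-sheet number) and each console's stated GPU clock:

```python
# Back-of-the-envelope pixel fillrate: ROPs * core clock (GHz) = Gpixels/s.
# The 64-ROP counts are commonly reported figures, not official spec-sheet values.
def fillrate_gpix_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

ps5 = fillrate_gpix_s(64, 2.23)   # PS5 at its maximum boost clock -> ~142.7 Gpix/s
xsx = fillrate_gpix_s(64, 1.825)  # Series X at its fixed clock    -> ~116.8 Gpix/s
print(f"PS5 {ps5:.1f} vs XSX {xsx:.1f} Gpix/s ({(ps5 / xsx - 1) * 100:.0f}% higher on PS5)")
```

Whether that paper advantage explains any particular game's results is, of course, exactly what's being argued here.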
 
Why not? It's fully enabled GA104. Literally the same silicon as 3070 Ti.
Did you miss the part about it not only being a workstation GPU, but also being a single-slot GPU? Can you imagine the lack of cooling compared to a gaming GPU? And why it's NOT a gaming GPU? Do you not see why this is a bad comparison yet? No one uses an A4000 for gaming, for the millionth time. If they do, they are obviously doing it wrong, and probably going off bad information, like your comparison for instance.

It's like buying a Quadro for gaming, and it's sad you are doubling down on your bad comparison. Come on bro, you're better than that.
 

DenchDeckard

Moderated wildly
It's not technical wizardry. The PS5 & Xbox Series X just had different choices made when being designed, and this game simply favors the PS5's design more.

PS5 having the ROPs & internal bandwidth advantage is a fact. Why do you ignore that & keep yelling that it's because the PS5 received more optimization?
You talk this stuff like it’s fact and you have no evidence to back that up.

I'm genuinely thinking you must be high or something lol.
 

Md Ray

Member
Did you miss the part about it not only being a workstation GPU, but also being a single-slot GPU? Can you imagine the lack of cooling compared to a gaming GPU? And why it's NOT a gaming GPU? Do you not see why this is a bad comparison yet? No one uses an A4000 for gaming, for the millionth time. If they do, they are obviously doing it wrong, and probably going off bad information, like your comparison for instance.

It's like buying a Quadro for gaming, and it's sad you are doubling down on your bad comparison. Come on bro, you're better than that.
Did you miss the part about it being literally the same architecture, in fact, the same silicon as the 3070 Ti? Don't you know NV uses the exact same architecture for both gaming and workstation cards these days? I'd agree with you if it were AMD's workstation vs gaming GPU, because their WS and gaming GPUs use two very different architectures. AMD's WS cards are based on CDNA, which is very compute-heavy like GCN was, while RDNA, as we all know, is a totally different beast: graphics-heavy, designed for gaming workloads. I wouldn't make a narrow vs wide config comparison using RDNA and CDNA architectures.

The A4000 isn't mainly used for gaming, sure. That's why it has more memory: 16GB of it, with ECC. Do you mind telling me if there's any architectural difference that sets it apart from the 30 series and makes my narrow vs wide config comparison invalid and "bad"? What does it being a single slot have to do with its architecture? It's a single-slot card because it doesn't run at super high clock speeds, therefore it doesn't need a bigger cooling solution. GA104 is GA104 whether it's on a board labeled "Workstation GPU" or "Gaming GPU". They have the same capabilities, the same 6144 CUDA cores, the same number of ROPs and TMUs, same everything, except one is heavily reduced in frequency. That makes it a perfect candidate for comparison against a GPU with a higher frequency and fewer "CUs" but a similar TFLOP number: it lets us see if gaming perf is noticeably different, as Cerny claimed. Well, turns out he was right... And it seems you're not liking this outcome for some reason.
 
Did you miss the part about it being literally the same architecture, in fact, the same silicon as the 3070 Ti? Don't you know NV uses the exact same architecture for both gaming and workstation cards these days? I'd agree with you if it were AMD's workstation vs gaming GPU, because their WS and gaming GPUs use two very different architectures. AMD's WS cards are based on CDNA, which is very compute-heavy like GCN was, while RDNA, as we all know, is a totally different beast: graphics-heavy, designed for gaming workloads. I wouldn't make a narrow vs wide config comparison using RDNA and CDNA architectures.

The A4000 isn't mainly used for gaming, sure. That's why it has more memory: 16GB of it, with ECC. Do you mind telling me if there's any architectural difference that sets it apart from the 30 series and makes my narrow vs wide config comparison invalid and "bad"? What does it being a single slot have to do with its architecture? It's a single-slot card because it doesn't run at super high clock speeds, therefore it doesn't need a bigger cooling solution. GA104 is GA104 whether it's on a board labeled "Workstation GPU" or "Gaming GPU". They have the same capabilities, the same 6144 CUDA cores, the same number of ROPs and TMUs, same everything, except one is heavily reduced in frequency. That makes it a perfect candidate for comparison against a GPU with a higher frequency and fewer "CUs" but a similar TFLOP number: it lets us see if gaming perf is noticeably different, as Cerny claimed. Well, turns out he was right... And it seems you're not liking this outcome for some reason.
I agree that the A4000 is very comparable to its gaming equivalents, but one would still have to check for any eventual ECC performance penalties to make sure it's a 1:1 comparison.
 

Zathalus

Member
False.

Here's a reference RTX 3060 Ti w/ 38 SM and higher clock speed beating 48 SM lower clocked RTX A4000. Both have similar TFLOPS (~18TF) and exactly the same GDDR6-14000 memory, so 448 GB/s bandwidth.

OC'd A4000 gets an additional +10% on core clock and +14% on memory (512GB/s). A4000 has the same Ampere architecture and in terms of its silicon and size, it's basically a 3070 Ti (48 SM) paired with 16GB VRAM. You're essentially seeing an RTX 3070 Ti silicon getting beat by GPUs (3060 Ti, 3070) with fewer SMs running at higher clk speeds.

[Image: RSS_4K.png]


52fps vs 57fps. One is much closer to 60fps than the other. So I'd say that's a "noticeably different" performance.

[Image: CP2077_1440p.png]


Of course, not every game sees the same advantage as it depends from game to game, engine to engine like we've been seeing with XSX and PS5. Here both stock 3060 Ti and A4000 are like-for-like as their TFLOPS would suggest:

[Image: WDL_1440p.png]


There are 3060 Tis with even bigger coolers hitting even higher clock speeds like MSI's 3 slot Gaming X Trio which can match and sometimes even exceed RTX 3070 Founders Edition. IIRC, @Werewolfgrandma probably has the Gaming X Trio model, correct me if I'm wrong. His 3060 Ti pretty much matches my 3070 FE at stock in Death Stranding.

So as you can see it's not speculation, you just choose to ignore facts.
This is all based on a false assumption: that the stock A4000 and the 3060 Ti have the same TFLOPs.

The stock A4000 runs at a clock speed of 1425 MHz, which means it is 17.5 TFLOPs. A 3060 Ti, on the other hand, averages around 1980 MHz or so, which is 19.26 TFLOPs. So at stock speeds the 3060 Ti should be around 10% faster. The overclocked A4000 is 19.3 TFLOPs, so in that scenario they should be almost identical, with a slight advantage to the A4000 due to the higher memory speed. Looking at the benchmark numbers, it appears the wider GPU is actually the faster one, despite the OC A4000 and the stock 3060 Ti having almost exactly the same TFLOP count:

[Image: 1440p.png]


So no, your evidence of the higher clocked GPU having any noticeable advantage is anything but.
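For what it's worth, those TFLOP figures line up with the usual paper-spec formula; a quick sanity check, using the published core counts and the clocks quoted above:

```python
# Sanity check of the quoted numbers: FP32 TFLOPS = 2 ops (FMA) * CUDA cores * clock (GHz) / 1000.
# 6144 cores (A4000, full GA104) and 4864 cores (3060 Ti) are the published configurations.
a4000_stock = 2 * 6144 * 1.425 / 1000  # ~17.5 TF at the 1425 MHz stock clock
rtx_3060ti  = 2 * 4864 * 1.980 / 1000  # ~19.3 TF at the quoted ~1980 MHz average boost
print(round(a4000_stock, 2), round(rtx_3060ti, 2), round(rtx_3060ti / a4000_stock, 2))  # 17.51 19.26 1.1
```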
 
Did you miss the part about it being literally the same architecture, in fact, the same silicon as the 3070 Ti? Don't you know NV uses the exact same architecture for both gaming and workstation cards these days? I'd agree with you if it were AMD's workstation vs gaming GPU, because their WS and gaming GPUs use two very different architectures. AMD's WS cards are based on CDNA, which is very compute-heavy like GCN was, while RDNA, as we all know, is a totally different beast: graphics-heavy, designed for gaming workloads. I wouldn't make a narrow vs wide config comparison using RDNA and CDNA architectures.

The A4000 isn't mainly used for gaming, sure. That's why it has more memory: 16GB of it, with ECC. Do you mind telling me if there's any architectural difference that sets it apart from the 30 series and makes my narrow vs wide config comparison invalid and "bad"? What does it being a single slot have to do with its architecture? It's a single-slot card because it doesn't run at super high clock speeds, therefore it doesn't need a bigger cooling solution. GA104 is GA104 whether it's on a board labeled "Workstation GPU" or "Gaming GPU". They have the same capabilities, the same 6144 CUDA cores, the same number of ROPs and TMUs, same everything, except one is heavily reduced in frequency. That makes it a perfect candidate for comparison against a GPU with a higher frequency and fewer "CUs" but a similar TFLOP number: it lets us see if gaming perf is noticeably different, as Cerny claimed. Well, turns out he was right... And it seems you're not liking this outcome for some reason.
You kinda answered it yourself. It's a single slot GPU. It's using ECC memory. It's clocked much lower than a gaming GPU. You literally just said all of this in your post. You just answered why it's not a good comparison. People aren't going to go out and buy this over a 3060 ti or a 3070. And the previous reasons are why a 3060 ti can beat it at times. Because of the ECC, slower clocks, and it not having sufficient cooling, which is why it won't keep boost clocks as high. All of the reasons to NOT use the a4000 as a gaming GPU.
 

DenchDeckard

Moderated wildly
Because it is facts.

What more evidence do you want?
The hardware differences have been discussed multiple times.

I'm not talking about the hardware difference, I meant the first sentence. I should have been more clear.

This part:

"It's not technical wizardry PS5 & Xbox Series X just had different choices made when being designed & in this game it just favors PS5 design more."

This is not factual at all, and from what I can see pulled out of thin air....
 

Md Ray

Member
You kinda answered it yourself. It's a single slot GPU. It's using ECC memory. It's clocked much lower than a gaming GPU. You literally just said all of this in your post. You just answered why it's not a good comparison. People aren't going to go out and buy this over a 3060 ti or a 3070. And the previous reasons are why a 3060 ti can beat it at times. Because of the ECC, slower clocks, and it not having sufficient cooling, which is why it won't keep boost clocks as high. All of the reasons to NOT use the a4000 as a gaming GPU.
No one is telling anyone to go buy this over a GeForce. Do you have any evidence that ECC GDDR6 at the same mem speed vs non-ECC lowers performance?

Btw, someone in the comments just said that the A4000 was much cheaper than the 3070 in their region. It's a nice alternative; I'd go for it too if I was looking for ballpark 3060 Ti-level perf and couldn't find one. 🤷🏻‍♂️
 

onQ123

Member
I'm not talking about the hardware difference, I meant the first sentence. I should have been more clear.

This part:

"It's not technical wizardry PS5 & Xbox Series X just had different choices made when being designed & in this game it just favors PS5 design more."

This is not factual at all, and from what I can see pulled out of thin air....
If it didn't favor PS5 design how did we even arrive here?
 

DenchDeckard

Moderated wildly
If it didn't favor PS5 design how did we even arrive here?

Because maybe, just maybe, an indie game made by 30 people, whose console versions were outsourced to a separate studio, getting a free next-gen update a year after the console versions launched, is absolutely no indication of anything if we do not know for a fact how much time was spent on the patch or even if the same studios did it.

We literally know nothing about this port, yet here we are jumping to conclusions. There's no evidence of anything here.

Are you saying the PS5 design favors old Unreal Engine 4 games made by 30 people that are basically a corridor with hardly any polygons and enemies? It doesn't seem like a good game to favour the PS5's design imo.
So the PS5 favours this game and a little isometric indie game called The Touryst? That doesn't seem great to me.

I'm firmly in the camp that this game's technical release on these machines amounts to pretty much nothing without input from the actual people who ported this game. To think otherwise is a bit crazy imo.

It needs some patches.
 

Loxus

Member
This is all based on a false assumption: that the stock A4000 and the 3060 Ti have the same TFLOPs.

The stock A4000 runs at a clock speed of 1425 MHz, which means it is 17.5 TFLOPs. A 3060 Ti, on the other hand, averages around 1980 MHz or so, which is 19.26 TFLOPs. So at stock speeds the 3060 Ti should be around 10% faster. The overclocked A4000 is 19.3 TFLOPs, so in that scenario they should be almost identical, with a slight advantage to the A4000 due to the higher memory speed. Looking at the benchmark numbers, it appears the wider GPU is actually the faster one, despite the OC A4000 and the stock 3060 Ti having almost exactly the same TFLOP count:

[Image: 1440p.png]


So no, your evidence of the higher clocked GPU having any noticeable advantage is anything but.
You're missing a valuable key point: the API.
It would be better to have two PS5 configurations to better see the results.
[Image: OuoS2pm.jpg]


You guys always bring PC parts into the console space.
PC parts have a lot of overhead, and games aren't fully optimized for them because developers have to get their game running on multiple configurations from different vendors, architectures and generations.

PS5 & XBSX are much more comparable because they are consoles from the same vendor, architecture and generation.

--------------------------------
There is a Colin Moriarty podcast that talked about how the GPU might be a bottleneck if it can't render data coming off the SSD fast enough.

Which is another reason why Cerny went with higher clocks over more CUs. More CUs would have lowered clocks because of thermals.

And to add to this, Mark Cerny said:
"Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs. When triangles are small, it's much harder to fill all those CUs with useful work."

We began to see pixel-sized triangles in UE5.
It's all a balancing act.
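A rough illustration of the small-triangle point Cerny is making, as a sketch of the usual quad-shading argument rather than data from any specific game:

```python
# Pixel shaders are dispatched in 2x2 quads so screen-space derivatives can be computed,
# so a triangle covering a single pixel still occupies 4 lanes (1 useful + 3 helper lanes).
# A generic illustration of the small-triangle utilization problem, not data from any title.
quad_lanes = 4
useful_pixels = 1  # a pixel-sized triangle shades one visible pixel
efficiency = useful_pixels / quad_lanes
print(f"Best-case lane efficiency on pixel-sized triangles: {efficiency:.0%}")  # 25%
```

The wider the GPU, the more of this partially filled work it has to find in parallel to stay busy, which is the gist of the quote.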
 
No one is telling anyone to go buy this over a GeForce. Do you have any evidence that ECC GDDR6 at the same mem speed vs non-ECC lowers performance?

Btw, someone in the comments just said that the A4000 was much cheaper than the 3070 in their region. It's a nice alternative; I'd go for it too if I was looking for ballpark 3060 Ti-level perf and couldn't find one. 🤷🏻‍♂️
Look up any comparison of memory speeds using ECC. ECC is always clocked lower than non-ECC memory. Look at literally any RAM kits out there; ECC will always be slower. Also, ECC RAM is usually registered, which adds a clock cycle of delay, aka latency.

So let me ask you, do YOU have any proof that ECC has any benefit to gaming, at all? Is there a reason gamers don't go for ECC, besides it costing so much more? If you are considering an a4000 as a gaming GPU, would you also recommend ECC ram over non ECC ram? Cause that is literally what you SHOULDN'T do.

I highly doubt the a4000 is cheaper than a 3070 in literally 99% of the world. A single use case doesn't make your argument true.
 

Md Ray

Member
do YOU have any proof that ECC has any benefit to gaming, at all?
Never claimed it to have any benefit in the first place.
Is there a reason gamers don't go for ECC, besides it costing so much more? If you are considering an a4000 as a gaming GPU, would you also recommend ECC ram over non ECC ram? Cause that is literally what you SHOULDN'T do.

I highly doubt the a4000 is cheaper than a 3070 in literally 99% of the world. A single use case doesn't make your argument true.
Again, no one is telling anyone to go buy this over a GeForce, that wasn't even the argument I was making.

So then are you gonna show me GDDR6 ECC vs non ECC benchmark with the same mem speed?
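(As an aside, this seems testable in principle rather than something either side has to take on faith: on boards that expose the toggle, nvidia-smi reports the ECC mode via "nvidia-smi -q -d ECC" and can switch it with "nvidia-smi -e 0" or "nvidia-smi -e 1" (it takes effect after a reboot/GPU reset), so the same A4000 could be benchmarked with ECC on and off instead of arguing from analogy to system RAM.)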
 

Loxus

Member
Because maybe, just maybe, an indie game made by 30 people, whose console versions were outsourced to a separate studio, getting a free next-gen update a year after the console versions launched, is absolutely no indication of anything if we do not know for a fact how much time was spent on the patch or even if the same studios did it.

We literally know nothing about this port, yet here we are jumping to conclusions. There's no evidence of anything here.

Are you saying the PS5 design favors old Unreal Engine 4 games made by 30 people that are basically a corridor with hardly any polygons and enemies? It doesn't seem like a good game to favour the PS5's design imo.
So the PS5 favours this game and a little isometric indie game called The Touryst? That doesn't seem great to me.

I'm firmly in the camp that this game's technical release on these machines amounts to pretty much nothing without input from the actual people who ported this game. To think otherwise is a bit crazy imo.

It needs some patches.
It's not just this game though.
We are seeing more and more games performing the same on both consoles, with the PS5 edging out the XBSX in detail quality.

If the XBSX were as powerful as many say (a 30% power advantage with a 40-50 fps delta), it shouldn't matter whether the game is fully optimized or not; it should still be outperforming the PS5.
 

onQ123

Member
Because maybe, just maybe, an indie game made by 30 people, whose console versions were outsourced to a separate studio, getting a free next-gen update a year after the console versions launched, is absolutely no indication of anything if we do not know for a fact how much time was spent on the patch or even if the same studios did it.

We literally know nothing about this port, yet here we are jumping to conclusions. There's no evidence of anything here.

Are you saying the PS5 design favors old Unreal Engine 4 games made by 30 people that are basically a corridor with hardly any polygons and enemies? It doesn't seem like a good game to favour the PS5's design imo.
So the PS5 favours this game and a little isometric indie game called The Touryst? That doesn't seem great to me.

I'm firmly in the camp that this game's technical release on these machines amounts to pretty much nothing without input from the actual people who ported this game. To think otherwise is a bit crazy imo.

It needs some patches.

So what you're saying is that these 30 indie devs love the PS5 so much that they spent all their time figuring out how to make this game run at 4K 60fps with ray tracing, but only clicked a few buttons on Xbox Series X?
 
It's not technical wizardry. The PS5 & Xbox Series X just had different choices made when being designed, and this game simply favors the PS5's design more.

PS5 having the ROPs & internal bandwidth advantage is a fact. Why do you ignore that & keep yelling that it's because the PS5 received more optimization?
Please explain why the XSX is just simply incapable of running this cross-gen title smoothly then. No optimizations will fix this title is what you appear to be arguing. Rather than improving the software, you are claiming there is just a hardware deficiency on the Xbox side? Again, Little Nightmares 2 not having ray tracing on Xbox is because the PS5 is better designed? The Medium loads faster on Xbox and has ray tracing because the Xbox was better engineered than the PS5 for that title? You don't see how that sounds silly?
 
Never claimed it to have any benefit in the first place.

Again, no one is telling anyone to go buy this over a GeForce, that wasn't even the argument I was making.

So then are you gonna show me GDDR6 ECC vs non ECC benchmark with the same mem speed?
That's for you to look up, as no gamer buys a workstation GPU, for gaming. YOU are the one who brought it up as an example, not me. YOU should have looked it up before hitting the reply button. I simply showed you multiple times why it's a bad idea and a horrible comparison. There's a million and one reasons why ECC memory isn't being used for RAM or VRAM in gaming builds. It literally makes no sense, and that's why the comparison is absurd in the first place.
 

onQ123

Member
Please explain why the XSX is just simply incapable of running this cross-gen title smoothly then. No optimizations will fix this title is what you appear to be arguing. Rather than improving the software, you are claiming there is just a hardware deficiency on the Xbox side? Again, Little Nightmares 2 not having ray tracing on Xbox is because the PS5 is better designed? The Medium loads faster on Xbox and has ray tracing because the Xbox was better engineered than the PS5 for that title? You don't see how that sounds silly?
The game does run smoothly on Xbox Series X; it's pretty much locked at 60fps at 4K.

What some of you are upset about is bonus configurations like the ray tracing & 120fps modes, one of which has never been achieved on these consoles until this game was patched.

To say that the game is unoptimized on Xbox Series X because it's not native 4K 60fps with ray tracing is crazy when no other devs have achieved this yet.
 

Md Ray

Member
That's for you to look up, as no gamer buys a workstation GPU, for gaming. YOU are the one who brought it up as an example, not me. YOU should have looked it up before hitting the reply button. I simply showed you multiple times why it's a bad idea and a horrible comparison. There's a million and one reasons why ECC memory isn't being used for RAM or VRAM in gaming builds. It literally makes no sense, and that's why the comparison is absurd in the first place.
"i simply showed you multiple times". You still haven't shown me anything. You are the one claiming ECC lowers perf, so show me the data. Show me benchmarks of GDDR6 ECC vs non-ECC running at the same mem speeds then. You need to put up or shut up.
 

DenchDeckard

Moderated wildly
So what you're saying is that these 30 indie devs love the PS5 so much that they spent all their time figuring out how to make this game run at 4K 60fps with ray tracing, but only clicked a few buttons on Xbox Series X?

I'm saying that this game literally shows nothing, because it wasn't even the 30 devs that made the game that ported it to consoles. It could be one company doing the PS5 version and another doing the Xbox Series version; we have no idea. We don't know enough to make the statement you are making.

That is the point I am making.

Imagine you say all this stuff, then in a few weeks there's a patch and the Xbox version runs the same as the PS5. What would you say then? They upgraded the hardware of the Xbox over Ethernet... upgraded the RAM via USB?
 

DenchDeckard

Moderated wildly
It's not just this game though.
We are seeing more and more games performing the same on both consoles, with the PS5 edging out the XBSX in detail quality.

If the XBSX were as powerful as many say (a 30% power advantage with a 40-50 fps delta), it shouldn't matter whether the game is fully optimized or not; it should still be outperforming the PS5.

I usually see the big game launches with higher resolution on Xbox and slightly higher FPS on PS5 if it shows an advantage, which I genuinely put down to the speed of the PS5's GPU, and the Xbox winning out with its wide GPU offering a higher average resolution.

Anyway, not arguing with you and onQ; we are all free to have our opinions. Pz
 

Riky

$MSFT
I'm saying that this game literally shows nothing, because it wasn't even the 30 devs that made the game that ported it to consoles. It could be one company doing the PS5 version and another doing the Xbox Series version; we have no idea. We don't know enough to make the statement you are making.

That is the point I am making.

Imagine you say all this stuff, then in a few weeks there's a patch and the Xbox version runs the same as the PS5. What would you say then? They upgraded the hardware of the Xbox over Ethernet... upgraded the RAM via USB?

You only have to go to the Control next-gen patch comparison thread to see all the same stupid theories over and over again from the same people. Turns out a simple firmware upgrade solved it and all the talk was utter nonsense. Rinse and repeat.
 

onQ123

Member
I'm saying that this game literally shows nothing, because it wasn't even the 30 devs that made the game that ported it to consoles. It could be one company doing the PS5 version and another doing the Xbox Series version; we have no idea. We don't know enough to make the statement you are making.

That is the point I am making.

Imagine you say all this stuff, then in a few weeks there's a patch and the Xbox version runs the same as the PS5. What would you say then? They upgraded the hardware of the Xbox over Ethernet... upgraded the RAM via USB?
I would say that they optimized it some more to achieve native 4K 60fps with Ray tracing on Xbox Series X.
 
"i simply showed you multiple times". You still haven't shown me anything. You are the one claiming ECC lowers perf, so show me the data. Show me benchmarks of GDDR6 ECC vs non-ECC running at the same mem speeds then. You need to put up or shut up.
Look up any benchmark between ECC vs non ECC. There's plenty of them all over. ECC always has lesser performance, more latency, etc. All of the negative things you DON'T want. Remember, I never made this ridiculous comparison.

Put up or shut up? Lol you mad I'm calling out your ridiculous comparison?

Google ECC vs non ECC for gaming. Click any benchmark. You can even YouTube comparisons, and you'll see the same results there. Why do you think ECC is not sold to gamers, and why just about all consumer motherboards don't support it? Why did Intel block ECC from just about all motherboards, with the exception of servers/research boards? Does any of this make sense to you yet?
 

DenchDeckard

Moderated wildly
I would say that they optimized it some more to achieve native 4K 60fps with Ray tracing on Xbox Series X.

So you are saying that the game could be more optimized on PS5 over the Xbox ;)

Or maybe even that the Xbox needs a little more work to be optimized compared to the PS5? Maybe the PS5 API is better and makes it easier to get the PS5 version up to speed.

Who the funk knows lol
 

Md Ray

Member
Look up any benchmark between ECC vs non ECC. There's plenty of them all over. ECC always has lesser performance, more latency, etc. All of the negative things you DON'T want. Remember, I never made this ridiculous comparison.

Put up or shut up? Lol you mad I'm calling out your ridiculous comparison?

Google ECC vs non ECC for gaming. Click any benchmark. You can even YouTube comparisons, and you'll see the same results there. Why do you think ECC is not sold to gamers, and why just about all consumer motherboards don't support it? Why did Intel block ECC from just about all motherboards, with the exception of servers/research boards? Does any of this make sense to you yet?
So, no data, just a bunch of assumptions then. Got it
 
"i simply showed you multiple times". You still haven't shown me anything. You are the one claiming ECC lowers perf, so show me the data. Show me benchmarks of GDDR6 ECC vs non-ECC running at the same mem speeds then. You need to put up or shut up.
ECC has a negative impact on performance; that's simply due to how it works. Benchmarks would be nice; sadly, the site you linked didn't think about ECC at all, which means they also didn't think of turning it off.

The overall advantage the 3060 Ti had over the A4000 was 4%. That's already quite low, and when you apply a potential performance penalty from ECC, it's essentially a draw. Hardly good evidence that higher clocks with fewer CUs = more performance than lower clocks with more CUs.
 
So, no data, just a bunch of assumptions then. Got it
????

Everything that I've said is factual.

But since you are too lazy, I'll do it for you.

[embedded search result links]

Not sure why that was so hard to do or understand. Click any of the results and maybe you'll learn a thing or two about why ECC isn't used for gaming (for the millionth time).
 

Sosokrates

Report me if I continue to console war
If it didn't favor PS5 design how did we even arrive here?
More optimization may just be needed for the Series versions.

The reason why this seems likely is because the Series X version performs so much worse: the Series X drops a huge 30% at times in the 120fps mode and drops 20% in the 60fps modes, and the Series S also suffers from an animation issue.

If this was just the superior hardware of the PS5, why didn't the developer lower the resolution of the Series X version?

I really wish we had a developer who would talk about whether the PS5's hardware can enable greater performance than the Series X, and in what circumstances.
 

Zathalus

Member
It's not just this game though.
We are seeing more and more games performing the same on both consoles, with the PS5 edging out the XBSX in detail quality.

If the XBSX were as powerful as many say (a 30% power advantage with a 40-50 fps delta), it shouldn't matter whether the game is fully optimized or not; it should still be outperforming the PS5.
Most games this year performed better on the XSX though. We have seen games demonstrate the 18% TFLOP difference between the consoles. I've got a full list with sources if you want me to post them?
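For reference, the ~18% figure is just the paper-spec compute gap between the two GPUs; a quick sketch using the publicly stated CU counts and clocks:

```python
# Paper-spec FP32 compute: CUs * 64 ALUs * 2 ops (FMA) * clock (GHz) / 1000 = TFLOPS.
# Uses the publicly stated configurations: Series X fixed clock, PS5 maximum boost clock.
def console_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

xsx = console_tflops(52, 1.825)  # ~12.15 TF
ps5 = console_tflops(36, 2.23)   # ~10.28 TF
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {(xsx / ps5 - 1) * 100:.0f}% gap")
```

Whether that paper gap translates into shipped games is a separate question.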
 

Md Ray

Member
ECC has a negative impact on performance; that's simply due to how it works. Benchmarks would be nice; sadly, the site you linked didn't think about ECC at all, which means they also didn't think of turning it off.

The overall advantage the 3060 Ti had over the A4000 was 4%. That's already quite low, and when you apply a potential performance penalty from ECC, it's essentially a draw. Hardly good evidence that higher clocks with fewer CUs = more performance than lower clocks with more CUs.
????

Everything that I've said is factual.

But since you are too lazy, I'll do it for you.

[embedded search result links]

Not sure why that was so hard to do or understand. Click any of the results and maybe you'll learn a thing or two about why ECC isn't used for gaming (for the millionth time).
Factual? Then show me. I just keep hearing that it lowers perf but you both refuse to show actual data/benchmarks of ECC vs non ECC with the same mem speed. Shouldn't be that hard to provide evidence, no? Telling people to go just google isn't enough after claiming one thing and then not backing it up with valid evidence. It's not a good look.

I'll wait.
 

Zathalus

Member
Factual? Then show me. I just keep hearing that it lowers perf but you both refuse to show actual data/benchmarks of ECC vs non ECC with the same mem speed. Shouldn't be that hard to provide evidence, no? Telling people to go just google isn't enough after claiming one thing and then not backing it up with valid evidence. It's not a good look.

I'll wait.

But what does it matter? The A4000 and 3060 Ti are performing exactly as you would expect them to. There is no real advantage from clock speeds on offer.
 

Md Ray

Member
Factual? Then show me. I just keep hearing that it lowers perf but you both refuse to show actual data/benchmarks of ECC vs non ECC with the same mem speed. Shouldn't be that hard to provide evidence, no? Telling people to go just google isn't enough after claiming one thing and then not backing it up with valid evidence. It's not a good look.

I'll wait.
I'll say this. My stance on GDDR6-14000 ECC vs non ECC is that it makes no diff to perf.
 
The game does run smoothly on Xbox Series X; it's pretty much locked at 60fps at 4K.

What some of you are upset about is bonus configurations like the ray tracing & 120fps modes, one of which has never been achieved on these consoles until this game was patched.

To say that the game is unoptimized on Xbox Series X because it's not native 4K 60fps with ray tracing is crazy when no other devs have achieved this yet.
I didn't hear your technical explanation of why this cross-generational title, running on hardware very similar to the PS5, is incapable of running similarly OUTSIDE of software optimization, which you have completely eliminated as a possibility. While you are at it, you can explain the technical advantages and faster loading the Xbox has over the PS5 in The Medium, again OVER software optimizations. You say it's because of the PS5's better engineering; I'd like to hear your breakdown.

It's not just this game though.
We are seeing more and more games performing the same on both consoles, with the PS5 edging out the XBSX in detail quality.

If the XBSX were as powerful as many say (a 30% power advantage with a 40-50 fps delta), it shouldn't matter whether the game is fully optimized or not; it should still be outperforming the PS5.
This sounds totally imaginary. Name one technical source that stated the XSX was 30% more 'powerful' than the PS5. Every outlet I read, from DF to IGN, stated both consoles were very similar in power, and the results speak for themselves. NO ONE claimed that the Xbox would run games 40-50 FPS faster than the PS5, especially when they have the same CPU. It is nonsense to claim that an unoptimized game would outperform an optimized one when running on similar hardware. That is the reason why this game in particular shows the difference between optimized vs unoptimized. I do hope it gets fixed but it is doubtful.
 

onQ123

Member
I didn't hear your technical explanation of why this cross-generational title, running on hardware very similar to the PS5, is incapable of running similarly OUTSIDE of software optimization, which you have completely eliminated as a possibility. While you are at it, you can explain the technical advantages and faster loading the Xbox has over the PS5 in The Medium, again OVER software optimizations. You say it's because of the PS5's better engineering; I'd like to hear your breakdown.


This sounds totally imaginary. Name one technical source that stated the XSX was 30% more 'powerful' than the PS5. Every outlet I read, from DF to IGN, stated both consoles were very similar in power, and the results speak for themselves. NO ONE claimed that the Xbox would run games 40-50 FPS faster than the PS5, especially when they have the same CPU. It is nonsense to claim that an unoptimized game would outperform an optimized one when running on similar hardware. That is the reason why this game in particular shows the difference between optimized vs unoptimized. I do hope it gets fixed but it is doubtful.
So you're just making up arguments as you go?
 
Factual? Then show me. I just keep hearing that it lowers perf but you both refuse to show actual data/benchmarks of ECC vs non ECC with the same mem speed. Shouldn't be that hard to provide evidence, no? Telling people to go just google isn't enough after claiming one thing and then not backing it up with valid evidence. It's not a good look.

I'll wait.
Did you click any of the links that I've provided? There are many for you to choose from, so you can't claim I'm cherry-picking. Also, you are the one who brought up the comparison in the first place. Don't you think you should know at least the bare minimum facts about ECC before making that absurd comparison?

Go ahead and click the links. Several of us have been saying this in the thread already. It's not a good look to debate something you aren't that knowledgeable about. Seriously, click the links and look at any of the benchmarks. Some of the articles even explain why ECC introduces more latency into the equation. Gamers always go for the lowest latency, whether that be input latency on a mouse, monitor, RAM, etc. No gamer is going to spend MORE money for MORE latency. That's ridiculous. No gamer is going to buy a workstation GPU for the sole purpose of GAMING. Is this starting to make sense yet?
 