
Unreal Engine 5 Benchmarks. 6900xt outperforms 3090 at 1440p. 28 GB RAM usage.

SlimySnake

Flashless at the Golden Globes
Eh. The engine will be optimized more and more over time and it will run on cell phones. As every unreal engine since the beginning. No need to worry.
The engine would, but the games won't. You don't see big AAA third-party games running on cell phones.
 

DeaDPo0L84

Member
and people were so excited thinking a PS5 would be running games looking like that lol.
To be fair both "next gen" consoles have to make a lot of sacrifices to run most games @ 60fps/4k and it's only going to become more of a problem as we move forward. A mid gen refresh will have to come sooner rather than later to try and keep up with technology.

With that said, I'm so glad I built my first gaming pc last year, best decision I've made in a long time when it comes to gaming.
 

REDRZA MWS

Member
To be fair both "next gen" consoles have to make a lot of sacrifices to run most games @ 60fps/4k and it's only going to become more of a problem as we move forward. A mid gen refresh will have to come sooner rather than later to try and keep up with technology.

With that said, I'm so glad I built my first gaming pc last year, best decision I've made in a long time when it comes to gaming.
That’s why I’m glad MS offers 1440p as an option for my series X on my LG GX. Still waiting for that update Sony 😡
 

dxdt

Member
No. Folks just assumed that since that's the case for MS, it must be the case for Sony. In fact, since the PS5 SSD is much faster, they don't need to allocate as much RAM to the OS as MS does. The rumor is the PS5 allocates 2GB to the OS, leaving 14GB available for games.
I'm confused about how a fast SSD can reduce the OS memory footprint unless a portion of it is used as swap, and swapping to disk during gaming is probably not desirable.
 

//DEVIL//

Member
I am a proud owner of an Asus 6900XT, water cooled. But I don't understand.
Wasn't the first demo running at 30fps at 1440p on the PS5? How on earth can a 3090 not reach 60 frames at 1440p when that card is probably three times more powerful than a PS5?
Something is off here.
 

Kenpachii

Member
I am a proud owner of an Asus 6900XT, water cooled. But I don't understand.
Wasn't the first demo running at 30fps at 1440p on the PS5? How on earth can a 3090 not reach 60 frames at 1440p when that card is probably three times more powerful than a PS5?
Something is off here.

It's another demo, not optimized yet. Just look at the frame times (ms) on that 3090 in the video the topic creator posted.
 
I am a proud owner of an Asus 6900XT, water cooled. But I don't understand.
Wasn't the first demo running at 30fps at 1440p on the PS5? How on earth can a 3090 not reach 60 frames at 1440p when that card is probably three times more powerful than a PS5?
Something is off here.
The original demo wasn't fake, but it wasn't being played either. It's easier to make a QTE run well than an interactive experience.
 

SlimySnake

Flashless at the Golden Globes
I am a proud owner of an Asus 6900XT, water cooled. But I don't understand.
Wasn't the first demo running at 30fps at 1440p on the PS5? How on earth can a 3090 not reach 60 frames at 1440p when that card is probably three times more powerful than a PS5?
Something is off here.
Because the 3090 isn't 3x more powerful than the 10 TFLOPS PS5, or any 10 TFLOPS RDNA 2 card on the market.

Nvidia increased the shader-processor count roughly 3x for the 3080, which lets them inflate the TFLOPS figure, but they took some shortcuts to get there, as explained below.

8704 shader cores × 2 FP32 operations per clock × 1.71 GHz boost clock ≈ 29.8 TFLOPS for the 3080. The 3090 has more shader cores (10,496) and hits about 35 TFLOPS.
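As a rough sketch of that arithmetic (a minimal example; the core counts and boost clocks are the commonly listed spec-sheet values, and the ×2 assumes one fused multiply-add per lane per clock):

```python
def peak_tflops(shader_cores: int, boost_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical FP32 throughput: lanes x ops/clock (one FMA = 2 FLOPs) x clock in GHz."""
    return shader_cores * ops_per_clock * boost_ghz / 1000.0

for name, cores, clock in [
    ("RTX 3080", 8704, 1.71),    # ~29.8 TFLOPS
    ("RTX 3090", 10496, 1.70),   # ~35.7 TFLOPS
    ("RX 6900 XT", 5120, 2.25),  # ~23.0 TFLOPS
    ("PS5 GPU", 2304, 2.23),     # ~10.3 TFLOPS
]:
    print(f"{name}: ~{peak_tflops(cores, clock):.1f} TFLOPS")
```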

The RTX 3000 cards are built on an architecture NVIDIA calls "Ampere," and its SM, in some ways, takes both the Pascal and the Turing approach. Ampere keeps the 64 FP32 cores as before, but the 64 other cores are now designated as "FP32 and INT32." So, half the Ampere cores are dedicated to floating-point, but the other half can perform either floating-point or integer math, just like in Pascal.

With this switch, NVIDIA is now counting each SM as containing 128 FP32 cores, rather than the 64 that Turing had. The 3070's "5,888 cuda cores" are perhaps better described as "2,944 cuda cores, and 2,944 cores that can be cuda."

As games have become more complex, developers have begun to lean more heavily on integers. An NVIDIA slide from the original 2018 RTX launch suggested that integer math, on average, made up about a quarter of in-game GPU operations.

The downside of the Turing SM is the potential for under-utilization. If, for example, a workload is 25-percent integer math, around a quarter of the GPU’s cores could be sitting around with nothing to do. That’s the thinking behind this new semi-unified core structure, and, on paper, it makes a lot of sense: You can still run integer and floating-point operations simultaneously, but when those integer cores are dormant, they can run floating-point instead.
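To make that utilization argument concrete, here is a toy model of my own (a simplification, not NVIDIA's numbers): it estimates FP32 results per clock for a Turing-style SM (64 FP-only + 64 INT-only lanes) versus an Ampere-style SM (64 FP-only + 64 flexible lanes) at a given integer share of the instruction mix.

```python
def fp32_per_clock(int_frac: float, flexible_second_half: bool) -> float:
    """Toy model of one SM with 128 lanes: half FP32-only, the other half either
    INT32-only (Turing-style) or FP32/INT32-flexible (Ampere-style).
    int_frac is the fraction of instructions that are integer math."""
    fp_frac = 1.0 - int_frac
    if flexible_second_half:
        # Ampere-style: integer work fills part of the flexible half, the remaining
        # issue slots run FP32, so FP32 gets the non-integer share of all 128 lanes.
        return 128 * fp_frac
    # Turing-style: the 64 FP32 lanes must handle all FP work; whichever half is
    # busier sets the pace, and the other half sits partly idle.
    time_per_mix = max(fp_frac, int_frac) / 64
    return fp_frac / time_per_mix

for int_frac in (0.0, 0.25, 0.5):
    print(f"{int_frac:.0%} integer -> Turing ~{fp32_per_clock(int_frac, False):.0f}, "
          f"Ampere ~{fp32_per_clock(int_frac, True):.0f} FP32 ops/clock")
```

In this toy model, at the roughly 25% integer mix from that 2018 slide, the flexible half buys about 1.5x the FP32 throughput of a Turing SM rather than 2x, which is part of why the doubled "CUDA core" count doesn't translate into doubled frame rates.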


Your 6900XT has 23 TFLOPS, way below the TFLOPS count of both the 3080 and the 3090, but it outperforms both GPUs.

The 3080 has 3x the TFLOPS of the 2080 but offers only 1.8x more performance.
 

//DEVIL//

Member
Because the 3090 isn't 3x more powerful than the 10 TFLOPS PS5, or any 10 TFLOPS RDNA 2 card on the market.

Nvidia increased the shader-processor count roughly 3x for the 3080, which lets them inflate the TFLOPS figure, but they took some shortcuts to get there, as explained below.

8704 shader cores × 2 FP32 operations per clock × 1.71 GHz boost clock ≈ 29.8 TFLOPS for the 3080. The 3090 has more shader cores (10,496) and hits about 35 TFLOPS.

Your 6900XT has 23 TFLOPS, way below the TFLOPS count of both the 3080 and the 3090, but it outperforms both GPUs.

The 3080 has 3x the TFLOPS of the 2080 but offers only 1.8x more performance.
Yes, I get that the 3090's TFLOPS are semi-fake. But again, if not 3 times, it should easily be twice as powerful. It's not like the 10 TF on the PS5 is accurate either, as I thought that was the whole thing combined with CPU performance, while the 3090 figure is just the GPU.
I guess it's because of the editor, maybe? Not optimized? Or, like other posts suggested, a different type of demo? No clue, but it doesn't add up.
 
By the way, that's why pure resolution doesn't matter anymore:

ag10XgB.png
 

Lethal01

Member
Yes, I get that the 3090's TFLOPS are semi-fake. But again, if not 3 times, it should easily be twice as powerful. It's not like the 10 TF on the PS5 is accurate either, as I thought that was the whole thing combined with CPU performance, while the 3090 figure is just the GPU.
I guess it's because of the editor, maybe? Not optimized? Or, like other posts suggested, a different type of demo? No clue, but it doesn't add up.

Guess the customizations made to the Geometry Engine are really paying off.
 

CrustyBritches

Gold Member
I am a proud owner of an Asus 6900XT, water cooled. But I don't understand.
Wasn't the first demo running at 30fps at 1440p on the PS5? How on earth can a 3090 not reach 60 frames at 1440p when that card is probably three times more powerful than a PS5?
Something is off here.
Doesn't the benchmark posted here show 6900XT at 60fps avg, while 5700XT is at 29fps? That's double.
 

Rea

Member
Ok I laughed lol.
I could be wrong, but I thought when they say the PS5 is 10 TF and the Xbox Series X is 12 TF, they count the whole processing power of the PlayStation, not just the GPU. Am I wrong?
Yup, you're wrong. 10 TF and 12 TF are the measurements of GPU compute power, respectively; they have nothing to do with CPU performance. CPU performance is another story.
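For reference, those console figures fall out of the same shader-count × clock arithmetic as the PC cards. A minimal sketch using the publicly stated CU counts and clocks (the ×2 again assumes one FMA per lane per clock):

```python
def console_gpu_tflops(compute_units: int, ghz: float) -> float:
    """GPU-only figure: CUs x 64 shader lanes x 2 ops/clock (FMA) x clock in GHz."""
    return compute_units * 64 * 2 * ghz / 1000.0

print(f"PS5 (36 CU @ 2.23 GHz):       ~{console_gpu_tflops(36, 2.23):.2f} TFLOPS")   # ~10.3
print(f"Series X (52 CU @ 1.825 GHz): ~{console_gpu_tflops(52, 1.825):.2f} TFLOPS")  # ~12.1
```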
 

Whitecrow

Banned
By the way, that's why pure resolution doesn't matter anymore:

ag10XgB.png
While I mostly agree with you, there are purists who still want the cleanest IQ possible.
I'm not an Nvidia user, so I'm still not sure how DLSS compares to native 4K (if I'm not mistaken, it can even look better), but outside the DLSS realm, native is still king.

Playing my Pro on my C9, most games have very clean IQ, but I can still see some slight blurriness and room for improvement, and I would gladly take that improvement in cases where it does not affect a stable frame rate.
 

ZywyPL

Banned
By the way, that's why pure resolution doesn't matter anymore:

ag10XgB.png

With all the different reconstruction techniques out there, most of which only blur the image and create artifacts, I'm afraid native resolution is still the only way to guarantee proper image sharpness and clarity.
 
With all the different reconstruction techniques out there, most of which only blur the image and create artifacts, I'm afraid native resolution is still the only way to guarantee proper image sharpness and clarity.
I can hardly notice the artifacting and other side effects of resolution upscaling methods, honestly.

I literally have to watch DF zoom into strands of hair across 2-3 frames to notice such things, but maybe I have poor eyesight lol
 

99Luffy

Banned
VRAM usage is interesting, but I'll wait until we see the PS5 scenes running; it looked a lot more impressive than Valley of the Ancients.
 

SlimySnake

Flashless at the Golden Globes
Yes, I get that the 3090's TFLOPS are semi-fake. But again, if not 3 times, it should easily be twice as powerful. It's not like the 10 TF on the PS5 is accurate either, as I thought that was the whole thing combined with CPU performance, while the 3090 figure is just the GPU.
I guess it's because of the editor, maybe? Not optimized? Or, like other posts suggested, a different type of demo? No clue, but it doesn't add up.
The 35 TFLOPS 3090 IS 2x more powerful than the 2080. It's the 30 TFLOPS 3080 that's only 1.8x more powerful.


4rkkHHX.png


The 6900XT is offering a linear performance increase over the roughly 10 TFLOPS 5700XT; it's doing what it should. The Ampere cards don't seem to be scaling as well in this demo. A 3080 should be at least 1.8x more powerful than the 2080, but here it's offering only 1.4x more performance.
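A quick sanity check on that scaling claim, plugging in only the rough figures quoted in this thread (the thread's numbers, not independent measurements):

```python
# Scaling check: fps ratio vs TFLOPS ratio, using approximate figures quoted in this thread.
cards = {              # name: (approx TFLOPS, approx avg fps at 1440p in this demo)
    "5700 XT": (10.0, 29),
    "6900 XT": (23.0, 60),
}
base_tf, base_fps = cards["5700 XT"]
tf, fps = cards["6900 XT"]
print(f"TFLOPS ratio: {tf / base_tf:.2f}x, fps ratio: {fps / base_fps:.2f}x")
# ~2.3x the TFLOPS for ~2.1x the fps: roughly linear RDNA -> RDNA 2 scaling,
# whereas the 2080 -> 3080 numbers cited above (~3x the TFLOPS, ~1.4x the fps here) are not.
```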
 

GymWolf

Member
Some interesting results found by a youtuber.

  • 3090 at 4k outperforms 6900xt. 10-20% better performance.
  • 6900xt at 1440p outperforms 3090. 10-15% better performance.
  • 6900xt is only 20 Tflops. 3090 is 36 tflops.
  • VRAM usage at both 1440p and 4k resolutions is around 6GB. Demo Allocates up to 7GB.
  • System RAM usage at both 1440p and 4k resolutions goes all the way up to 20+ GB. Total usage 2x more than PS5 and XSX.
  • PS5 and XSX only have 13.5 GB of total RAM available which means their I/O is doing a lot of the heavy lifting.
  • 6900 XT is roughly 50-60 fps at 1440p. 28-35 fps at native 4k.
  • 3090 is roughly 45-50 fps at 1440p. 30-38 fps at native 4k.
EDIT: More benchmarks here.

4rkkHHX.png


6900xt 1440p


3090 1440p


4k Comparison. Timestamped.



This can't be real, right?

A 20% difference at 4K? And not even against the 3090 Ti, but the vanilla model.

Was it a matter of the AMD driver not being ready?
 

SlimySnake

Flashless at the Golden Globes
This can't be real, right?

A 20% difference at 4K? And not even against the 3090 Ti, but the vanilla model.

Was it a matter of the AMD driver not being ready?
The 3090 was a $1,500 card.

The 6900XT was a $1,000 card. It was actually performing very well in comparison.
 