
Quake 2 RTX Vulkan Ray Tracing/Path Tracing Benchmarks. RTX 3080 vs. RX 6800 XT.

Leonidas

Member

CkE8Kw3pzex6T3ZyDTCyBU.png


Another great showing for RTX in Path Tracing performance.

The 6800 XT is performing worse than last-gen 2018 RTX cards in path tracing benchmarks such as Quake II RTX.
 

supernova8

Banned
Benchmarks paid for by Nvidia don't count.

Vulkan is open source and paid for by both NVIDIA and AMD (and loads of other companies).

In that sense, whatever differences there are must be hardware and nothing to do with NVIDIA rigging the fight.
I think we all know now NVIDIA has the far better ray tracing solution right now.

Considering AMD has only just started ray tracing, I would expect improvements with RDNA3, but of course NVIDIA is not going to sit around doing nothing.
 
Last edited:

VFXVeteran

Banned
Vulkan is open source and paid for by both NVIDIA and AMD (and loads of other companies).

In that sense, whatever differences there are must be hardware and nothing to do with NVIDIA rigging the fight.
I think we all know now NVIDIA has the far better ray tracing solution right now.

Considering AMD has only just started ray tracing, I would expect improvements with RDNA3, but of course NVIDIA is not going to sit around doing nothing.

I always felt that AMD didn't pursue RT tech until it was too late, and that's why they don't have dedicated cores... and thus why I believe their RT performance is lackluster. I'm willing to bet their next RT solution will be dedicated cores with some dedicated hardware for DLSS. To me, they are completely software accelerated with the pitiful performance they are putting out.
 

Deleted member 17706

Unconfirmed Member
Yeah, their solution is just worse. Maybe it will work out for them in the long run, but this first generation is not working out.

I think the 2080 Ti or equivalent is basically the bare minimum of where you want to be for RT...
 

rnlval

Member
I always felt that AMD didn't pursue RT tech until it was too late, and that's why they don't have dedicated cores... and thus why I believe their RT performance is lackluster. I'm willing to bet their next RT solution will be dedicated cores with some dedicated hardware for DLSS. To me, they are completely software accelerated with the pitiful performance they are putting out.
Before other features, AMD has to fix its GPU raster performance instead of designing yet another large DSP with a small GPU aka RX Vega 56/64 and VII.
 

dsk1210

Member
AMD's ray tracing is semi-decent if using just RT reflections, but the performance plummets the moment you try anything fully path traced.

Nvidia is well ahead at the moment in that respect, and there is no denying that; DLSS then bumps performance even further with minimal visual impact. AMD will get there eventually, but Nvidia will only improve as well.
 

rnlval

Member
AMD is slower in RT, but not as slow as these numbers show. It is pretty obvious there is Nvidia optimization for it
For an RDNA 2 CU, BVH traversal workloads are done on the shader units, while box and triangle intersection tests are done on RT hardware; hence a large-scale BVH traversal workload has a blocking effect on normal shader workloads.

When box and intersection tests are active within the CU, the texture units can be blocked. The RT payload needs to be broken down into smaller sizes to minimize the blocking effect.

RDNA 2's hardware RT is fundamentally inferior compared to NVIDIA's RTX design.

AMD traded ~100 mm² of die area for 128 MB of L3 cache instead of more robust RT hardware, yet Big Navi's PCB designs are at mid-range, 256-bit-bus BOM cost levels.
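A minimal sketch of that split, in Python for illustration only (the node layout and function names are hypothetical, not AMD's actual implementation): the ray/box slab test stands in for the work the Ray Accelerator does in fixed-function hardware, while the traversal loop is the part RDNA 2 executes as ordinary shader code, which is why long traversals compete with other shader work.

```python
def ray_box_hit(origin, inv_dir, box_min, box_max):
    """Slab test: the box-intersection step RDNA 2 accelerates in hardware.
    inv_dir uses large finite values instead of inf for axis-aligned rays."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def traverse(bvh, root, origin, inv_dir):
    """The part that runs as ordinary shader code on RDNA 2: stack management
    and node scheduling for the BVH walk."""
    hits, stack = [], [root]
    while stack:
        node = bvh[stack.pop()]
        if not ray_box_hit(origin, inv_dir, node["min"], node["max"]):
            continue
        if "prim" in node:              # leaf: hand primitive to intersection test
            hits.append(node["prim"])
        else:                           # inner node: keep walking
            stack.extend(node["children"])
    return hits

# Tiny two-leaf BVH; a ray along +x at y = z = 0.5 should reach only leaf "A".
bvh = {
    0: {"min": (0, 0, 0), "max": (4, 4, 4), "children": [1, 2]},
    1: {"min": (0, 0, 0), "max": (1, 1, 1), "prim": "A"},
    2: {"min": (3, 3, 3), "max": (4, 4, 4), "prim": "B"},
}
hits = traverse(bvh, 0, (-1.0, 0.5, 0.5), (1.0, 1e9, 1e9))
```

On NVIDIA's RTX design both halves of this loop live inside the RT core, which is the structural difference being argued here.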
 
Last edited:
I want to buy a 3080 or a 3090, but these benchmarks make me feel like it isn't even close to being worth it when performance at 4K with RT is rarely ever a solid 60 frames for the money you're paying.

Where I live, it's about 3.6k for the 3080, and 7.6k for the 3090.

Guess I'll get a PS5 and just wait to upgrade until the RTX 40 series releases.
 

Blond

Banned
Someone really needs to explain to me why a feature meant to streamline the development process for artists is being sold to gamers like it is...

Again, I’m not saying you shouldn’t be impressed with the results or anything but this is just a graphics feature that doesn’t radically change the game in a significant way.

Anyone remember PhysX in Alice: Madness Returns? The entire game felt empty and soulless without it. Ray tracing just makes everything look “different” but not better.


If something like this made tangible differences like world destruction, particle effects, fluid physics, etc., I could be on board with it, but ray tracing is just so... eh

 

CkE8Kw3pzex6T3ZyDTCyBU.png


Another great showing for RTX in Path Tracing performance.

The 6800 XT is performing worse than last-gen 2018 RTX cards in path tracing benchmarks such as Quake II RTX.
There is no true path tracing performance given here; this is ray tracing performance being tested, plain and simple. Two low-sample passes of path tracing applied to the lighting in a scene should not justify calling this test a path tracing benchmark in any meaningful way.

When path tracing is applied to all elements of a scene (diffuse maps, shadows, reflective caustics, refractive caustics, glossy, transmission, volume scattering, transparency, emission, AO) at 1024 render passes in real time, then it may be considered an actual path tracing benchmark. Attempting to label this benchmark a full path tracing benchmark, as PCGamer has, only does a disservice to gamers unaware of what this actually entails.

So PCGamer is essentially claiming this technology is as sophisticated as a scene that takes hours per frame to render on a render farm.

No.

This benchmark applies one simple pass of path tracing to the lighting scheme, which is hardly cause for celebration, or even worth mentioning, as the path tracer was not given enough time to denoise on its own.

A Path Tracing only sample

denoiser1.png


And as I expected, they in fact use only a very low sample count for path tracing, then use standard rasterization methods, or a mixed solution, to "denoise" the scene, supplanting the actual use of path tracing for rendering. Standard path tracing in practice renders the entire scene and recursively refines it, ad infinitum or until user-defined parameters are met, until the noise levels are nonexistent, without reverting to lesser, archaic rasterization methods.

Here they simply use path tracing to help pre-calculate the lighting, which is not in the same league as using a path tracing renderer for the entire scene.

When they do finally implement a legitimate path tracing renderer for gaming, games will no longer be adequately described by the label; they will have evolved into interactive real-time CGI.
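The sample-count point can be illustrated with a toy Monte Carlo estimate (purely illustrative; the integrand is a stand-in, not a real renderer). Monte Carlo error falls roughly as 1/sqrt(N), so 1024 samples per pixel is only about 32x less noisy than 1 sample per pixel, which is why realtime path tracers lean so heavily on denoisers rather than on more samples:

```python
import math
import random

def render_pixel(spp, rng):
    """Toy 'pixel': Monte Carlo estimate of a known integral
    (the integral of sqrt(u) over [0, 1] is exactly 2/3)."""
    total = sum(math.sqrt(rng.random()) for _ in range(spp))
    return total / spp

def rms_error(spp, pixels=2000, seed=0):
    """Root-mean-square error of the estimate over many independent pixels."""
    rng = random.Random(seed)
    sq = sum((render_pixel(spp, rng) - 2 / 3) ** 2 for _ in range(pixels))
    return math.sqrt(sq / pixels)

err_1 = rms_error(1)        # ~1 spp: realtime path tracing territory
err_1024 = rms_error(1024)  # ~1024 spp: offline 'render farm' territory
# err_1 / err_1024 comes out near sqrt(1024) = 32.
```

The ratio is the whole argument in miniature: throwing 1000x more samples at a frame buys only ~32x less noise, so realtime budgets of one or two samples per pixel are unavoidable, and everything else has to come from reconstruction.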
 

FireFly

Member
To me, they are completely software accelerated with the pitiful performance they are putting out.
Not this again. They accelerate intersection, but not traversal.

For comparison, a 1080 Ti was getting 17 FPS or less at 1080p, in a previous version of the benchmark.

 
Someone really needs to explain to me why a feature meant to streamline the development process for artists is being sold to gamers like it is...

Again, I’m not saying you shouldn’t be impressed with the results or anything but this is just a graphics feature that doesn’t radically change the game in a significant way.

Anyone remember PhysX in Alice: Madness Returns? The entire game felt empty and soulless without it. Ray tracing just makes everything look “different” but not better.


If something like this made tangible differences like world destruction, particle effects, fluid physics, etc., I could be on board with it, but ray tracing is just so... eh



I agree.

My biggest issue with RT is that it lets game developers be goddamn lazy with how mirrors or glass look if you don't have RT.

Take Cyberpunk 2077, for example: every mirror, glass window, or shelf, or whatever you'd find displayed in a gun store, has completely low-resolution textures where a glass pane is supposed to be, instead of, ya know, something decent-looking, if you're playing without RT, even at Ultra settings.

I worry that this will become a prominent thing.
 

Kenpachii

Member
Someone really needs to explain to me why a feature meant to streamline the development process for artists is being sold to gamers like it is...

Again, I’m not saying you shouldn’t be impressed with the results or anything but this is just a graphics feature that doesn’t radically change the game in a significant way.

Anyone remember PhysX in Alice: Madness Returns? The entire game felt empty and soulless without it. Ray tracing just makes everything look “different” but not better.


If something like this made tangible differences like world destruction, particle effects, fluid physics, etc., I could be on board with it, but ray tracing is just so... eh



Same here, I really wish physics moved forward.
 
Last edited:
I expect Nvidia would do well with ray tracing due to how many RT cores they have, but in games that don't use ray tracing the 6800 XT either keeps up with it or beats it due to the sheer speed of the card. All AMD has to do now is come up with a solution that gets great ray tracing results without having to sacrifice that sheer speed.
 
Last edited:

regawdless

Banned
The more complex the ray tracing gets, the bigger the gap, it seems.
It shows how much more advanced the 30xx cards are at ray tracing compared to AMD's current offerings.

AMD warriors, come and defend the honor of the brand! It's very important!
 
Last edited:

Md Ray

Member
I always felt that AMD didn't pursue RT tech until too late and that's why they don't have dedicated cores.. and thus, why I believe their RT performance is lackluster. I'm willing to bet their next RT solution will be dedicated cores with some dedicated hardware for DLSS. To me, they are completely software accelerated with the pitiful performance they are putting out.
Aren't Ray Accelerators dedicated cores?

There is one RA per CU, just like there is one RT core per SM on Nvidia.
 

Panajev2001a

GAF's Pleasant Genius
The more complex the ray tracing gets, the bigger the gap, it seems.
It shows how much more advanced the 30xx cards are at ray tracing compared to AMD's current offerings.

AMD warriors, come and defend the honor of the brand! It's very important!

It is complex, sure, but it is also highly dependent on tensor cores for denoising (while the AMD solution does that with a shader implementation); it is 100% ray tracing (modern games with RT are still hybrid, so they would flex the rasterisation HW more); and I do not think it is optimised for AMD HW at all (say, by using RDNA2's INT4/INT8 support; see the previous point about denoising). That makes it an ideal showcase, the biggest gap possible, for RT performance.

Great showing by nVIDIA, but it is more fanboy-war ammo than the sole factor you should base your purchase on.
 

Panajev2001a

GAF's Pleasant Genius
I guess all the other RT games where Nvidia crushes AMD are also developed by Nvidia.

No, nVIDIA has faster HW RT-wise, period, but most games are likely far more optimised for it rather than taking the time to fully exploit both approaches. NVIDIA's solution came out a lot earlier and developers have had more time with it (nVIDIA has been pushing it strongly), so it is not inconceivable :).
 
Last edited:

regawdless

Banned
It is complex, sure, but it is also highly dependent on tensor cores for denoising (while the AMD solution does that with a shader implementation); it is 100% ray tracing (modern games with RT are still hybrid, so they would flex the rasterisation HW more); and I do not think it is optimised for AMD HW at all (say, by using RDNA2's INT4/INT8 support; see the previous point about denoising). That makes it an ideal showcase, the biggest gap possible, for RT performance.

Great showing by nVIDIA, but it is more fanboy-war ammo than the sole factor you should base your purchase on.

True. But I mean the purchasing decision is pretty clear for anyone who wants to potentially use ray tracing. These cards basically go head to head, while ray tracing is the biggest differentiating factor, with Nvidia being significantly faster.

It will be closer with hybrid ray tracing approaches, though.
 
Last edited:

llien

Member
Nhranaghacon


Copypasta:

So, ray tracing, they said. Let's talk about "realism", shall we (and I don't mean Pixar stuff rendered by a freaking render farm, but NV's own tech demo)? Corners:

hCo0iv7.png


if you need to be reminded what they look like, welp, a real photo:

JxAYkuJ.png


you can read what's going on in this wonderful blog post:



Now, let's move to "full RT", shall we? Let's be generous: Quake.

it takes a 2060 about 20 seconds to generate a decent-quality frame.
So how do they render it in a fraction of a second? Meet Green RT Fakery:
1. Temporal denoiser + blur
This is based on previous-frame data. With the textures turned off, the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image was the desired final result, which took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this 'dark smear': a laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
1xgEUDU.png

2. Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
Here's an image from the same room, zoomed a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
C - the actual resolution of traced rays. Each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork: temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.

85KG1Xo.png

r4LJppH.png


So, combine those two things. First, we have a very low ray density that is used as the basis for region definitions, which are then approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery stamped out as a best guess from the very low ray coverage for that geometry region. If I were to pick a rough ballpark figure, I'd say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation.

This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it has tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred together in a buffer over the course of several hundred frames. There will be visual artifacts, as in my first point, anywhere you have new data on screen, because temporal filtering of on-screen data means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

By Crispy
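The temporal accumulation Crispy describes, a history buffer blended with each new noisy frame, can be sketched with a plain exponential moving average (illustrative only; this is a generic accumulator, not Quake II RTX's actual denoiser):

```python
import random

def temporal_accumulate(noisy_frames, alpha=0.1):
    """Blend each new noisy frame into a history buffer (EMA).
    Pixels with no history (first frame / disocclusions) start at full noise."""
    history, out = None, []
    for frame in noisy_frames:
        if history is None:
            history = list(frame)
        else:
            history = [(1 - alpha) * h + alpha * f
                       for h, f in zip(history, frame)]
        out.append(list(history))
    return out

# Simulate one static pixel whose true radiance is 0.5, sampled noisily each frame.
rng = random.Random(0)
true_value = 0.5
frames = [[true_value + rng.uniform(-0.3, 0.3)] for _ in range(200)]
accumulated = temporal_accumulate(frames)

# Raw per-frame noise vs. the converged history: the accumulated signal is far
# cleaner, but only after enough frames have been blended in.
raw_err = sum(abs(f[0] - true_value) for f in frames) / len(frames)
acc_err = sum(abs(a[0] - true_value) for a in accumulated[100:]) / 100
```

A pixel that has just come on screen has no history, so it shows the raw per-frame noise until enough frames accumulate, which is exactly the fade-in and smearing described above.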
 

llien

Member
Again, I’m not saying you shouldn’t be impressed with the results or anything but this is just a graphics feature that doesn’t radically change the game in a significant way.
Well, it does... when a game lacks light/shade/reflection effects altogether, like Minecraft.... :messenger_beaming:
 

regawdless

Banned
Nhranaghacon


Copypasta:

So, ray tracing, they said. Let's talk about "realism", shall we (and I don't mean Pixar stuff rendered by a freaking render farm, but NV's own tech demo)? Corners:

hCo0iv7.png


if you need to be reminded what they look like, welp, a real photo:

JxAYkuJ.png


you can read what's going on in this wonderful blog post:



Now, let's move to "full RT", shall we? Let's be generous: Quake.

it takes a 2060 about 20 seconds to generate a decent-quality frame.
So how do they render it in a fraction of a second? Meet Green RT Fakery:
1. Temporal denoiser + blur
This is based on previous-frame data. With the textures turned off, the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image was the desired final result, which took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this 'dark smear': a laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
1xgEUDU.png

2. Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
Here's an image from the same room, zoomed a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
C - the actual resolution of traced rays. Each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork: temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.

85KG1Xo.png

r4LJppH.png


So, combine those two things. First, we have a very low ray density that is used as the basis for region definitions, which are then approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery stamped out as a best guess from the very low ray coverage for that geometry region. If I were to pick a rough ballpark figure, I'd say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation.

This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it has tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred together in a buffer over the course of several hundred frames. There will be visual artifacts, as in my first point, anywhere you have new data on screen, because temporal filtering of on-screen data means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

By Crispy

It's ok. Nobody here judges you because of your AMD tattoos. It's a great company and is valuable even if their cards are significantly slower in raytracing performance.

Now please stop your vendetta. It's beyond ridiculous at this point. Just stop.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
True. But I mean the purchasing decision is pretty clear for anyone who wants to potentially use raytracing. These cards go head to head basically, while raytracing is the biggest differentiating factor with Nvidia being significantly faster.

It will be closer in hybrid raytracing approaches though.

I think RT performance alone is not the only differentiator (price still matters, and rasterisation performance does too), and this benchmark is more of a synthetic benchmark to prove NVIDIA's approach than an industry one (stupid on AMD's part not to spend more time optimising that scenario, as it is open source).

A lot of current and future games will be hybrid and may actually lean into aspects (geometry processing) that AMD invested a lot in (think UE5 and its Nanite and Lumen tech).
 

regawdless

Banned
I think RT performance alone is not the only differentiator (price still matters, and rasterisation performance does too), and this benchmark is more of a synthetic benchmark to prove NVIDIA's approach than an industry one (stupid on AMD's part not to spend more time optimising that scenario, as it is open source).

A lot of current and future games will be hybrid and may actually lean into aspects (geometry processing) that AMD invested a lot in (think UE5 and its Nanite and Lumen tech).

Price and rasterization performance are basically identical between the 3080 and 6800 XT, though. So I don't see another important differentiator.

It will still be interesting to see how this develops, especially because the consoles use AMD hardware and most games will target them as a base. But that was also the case last gen, so who knows. It's not like Nvidia doesn't know what's going on and won't develop their cards accordingly.
 
The more complex the ray tracing gets, the bigger the gap, it seems.
It shows how much more advanced the 30xx cards are at ray tracing compared to AMD's current offerings.

AMD warriors, come and defend the honor of the brand! It's very important!
Waiting for Ascend and llien to shit up the thread as per usual. They are way worse than the worst Nvidia fanboys. But entertaining nonetheless.

It would be cool if AMD had the same engineers from the Ryzen team working on the GPUs, as they are night-and-day different.
 