
Quake 2 RTX Vulkan Ray Tracing/Path Tracing Benchmarks. RTX 3080 vs. RX 6800 XT.

regawdless

Banned
Waiting for Ascend and LLien to shit up the thread as per usual. They are way worse than the worst Nvidia fanboys, but entertaining nonetheless.

It would be cool if AMD had the same engineers from the Ryzen team working on the GPUs, as the two are night and day different.

I don't get why they are so brand-driven instead of appreciating technological advancements, just because AMD currently does worse in raytracing. It is what it is, no need to go full-on defense force.

We all wish that AMD would be better in raytracing and are happy that they're pushing lazy Nvidia.
 
Last edited:
I don't get why they are so brand-driven instead of appreciating technological advancements, just because AMD currently does worse in raytracing. It is what it is, no need to go full-on defense force.
That's what annoys me about people who are strictly platform-biased. You can never have a decent conversation about technology in general, regardless of whether it's Nvidia or AMD. It's literally just shill, shill, hate Nvidia, and shill some more. Then the most ironic post: "Are you on Nvidia's payroll?!"
 

Rikkori

Member
Price and rasterization performance are basically identical between the 3080 and 6800 XT though, so I don't see another important differentiator.
Not quite. Depends on games & region, because I got my 6800 for 20% cheaper than the cheapest 3070 that was available. Similar issue with 6800 XT vs 3080. As for games...

WKIKJJk.jpg


And that's the best-case scenario for the 3080, because it's at 4K and we're not taking into account OC potential, for which there is none on NV's side but plenty on AMD's (PPT etc.).

So, if on-paper were reality, sure, I'd have bought a 3080 at MSRP over a 6800/XT purely for some RT shenanigans. But in the real world it doesn't break down like that, plus the simple fact is that even though Ampere is much faster in RT, it's still pretty slow. I'm playing Cyberpunk at 4K and it's gorgeous; if I want to enable RT, I have to drop to 1080p even on a 3090 - what's the fucking point? So, people are quick to jump on the bandwagon, but eh, I'd say for some nice RT performance we're waiting at least one more generation of cards.
 

longdi

Banned
Yap, the RX 6000 is $100 too expensive. The 3080 is neck and neck with the 6800 XT but thrashes it once RT and DLSS are enabled. Not sure why anyone would buy an RX 6000 except as a shortage reprieve.
 

Marlenus

Member
Did you know that the game was updated to use the new Vulkan extensions? Of course not, but you're talking like you know stuff.

Do you realise that you can get the same output in different ways, and that a game built when NV was the only RT game in town only had to optimise against one hardware methodology?

It is possible that in fully path-traced games like this AMD hardware really is that much worse, but there are three layers of software (game, API, driver) where improvements can be made, so pointing to this singular example (or AMD fans pointing to Dirt 5 / Shadowlands) as proof that RDNA2 RT sucks (or rocks) is premature.
 
Last edited:

FireFly

Member
Yes, but the rest of the GPU, not so much. If they had worked on the GPUs from the first iteration of Ryzen, their performance would be in a better place. I'd imagine even more so with RT.
Well, it takes around four years to design a GPU, so that would already put us in the Zen 1 release timeframe for development of the 6000 series. Also remember that it took Ryzen four releases to get to where they needed to be.
 

Marlenus

Member
Not quite. Depends on games & region, because I got my 6800 for 20% cheaper than the cheapest 3070 that was available. Similar issue with 6800 XT vs 3080. As for games...

WKIKJJk.jpg


And that's the best-case scenario for the 3080, because it's at 4K and we're not taking into account OC potential, for which there is none on NV's side but plenty on AMD's (PPT etc.).

So, if on-paper were reality, sure, I'd have bought a 3080 at MSRP over a 6800/XT purely for some RT shenanigans. But in the real world it doesn't break down like that, plus the simple fact is that even though Ampere is much faster in RT, it's still pretty slow. I'm playing Cyberpunk at 4K and it's gorgeous; if I want to enable RT, I have to drop to 1080p even on a 3090 - what's the fucking point? So, people are quick to jump on the bandwagon, but eh, I'd say for some nice RT performance we're waiting at least one more generation of cards.

Funnily enough, the 6900 XT beats the 3090 in those 7 games 5 to 2 as well.
 

regawdless

Banned
Not quite. Depends on games & region, because I got my 6800 for 20% cheaper than the cheapest 3070 that was available. Similar issue with 6800 XT vs 3080. As for games...

WKIKJJk.jpg


And that's the best-case scenario for the 3080, because it's at 4K and we're not taking into account OC potential, for which there is none on NV's side but plenty on AMD's (PPT etc.).

So, if on-paper were reality, sure, I'd have bought a 3080 at MSRP over a 6800/XT purely for some RT shenanigans. But in the real world it doesn't break down like that, plus the simple fact is that even though Ampere is much faster in RT, it's still pretty slow. I'm playing Cyberpunk at 4K and it's gorgeous; if I want to enable RT, I have to drop to 1080p even on a 3090 - what's the fucking point? So, people are quick to jump on the bandwagon, but eh, I'd say for some nice RT performance we're waiting at least one more generation of cards.

And there are different benchmarks that have the 3080 slightly ahead. It's very even in rasterization performance; generally they are within a couple of percent. I'm only referring to the 3080 and 6800 XT; I'm not a fan of the 3070.
Price can differ by region, that's true, and it can of course be a factor.

Regarding raytracing performance, that's just false. I'm playing Cyberpunk maxed out with raytracing at 1440p with DLSS Quality at 60 fps on my 3080 and it looks stunning. I added some ReShade AA and clarity, resulting in great image quality.

We're not there yet at 4K 60 fps with raytracing (it always depends on which raytracing effects are being used, of course) if you want to max everything out. But at 1440p, it's already great and playable at 60 fps.

Of course, you can favor a few percent more rasterization performance at best over the option to use the biggest graphics feature in a long time in modern games. Seems weird to me, though.
 

Rikkori

Member
Regarding raytracing performance, that's just false. I'm playing Cyberpunk maxed out with raytracing at 1440p with DLSS Quality at 60 fps on my 3080 and it looks stunning. I added some ReShade AA and clarity, resulting in great image quality.
There's a world of difference in terms of visual sharpness and clarity between DLSS on & native. 1440p DLSS means sub 1080p. It's whatever. It also gimps RT effects like reflections so you get to enjoy 960p RT reflections instead of 1440p. So, let's not kid ourselves about how good it is - it might still be worth using for the performance, but you're nowhere near native.



DLSS: a useful FPS boost with strengths and weaknesses
Cyberpunk 2077 uses DLSS 2.0 (test), which, in conjunction with ray tracing, should ensure playable performance. As in other recent games, Nvidia's exclusive "AI upsampling" in its current version shows advantages and disadvantages compared to the native resolution, depending on the scene.

On the plus side, DLSS 2.0 scores with fine objects, as usual. Fine details such as palm leaves are drawn cleanly with DLSS, while even at native Ultra HD resolution their ends simply disappear with classic post-processing anti-aliasing. In addition, some objects have higher image sharpness with DLSS.

On the negative side, the image tends to flicker with DLSS 2.0. It only happens occasionally, but every now and then you come across objects that native Ultra HD renders more stably. In addition, DLSS 2.0 cannot upscale ray-traced reflections, so they are simply displayed at the internal render resolution. Accordingly, the reflections are visibly blurrier than at the native rendering resolution. Since Cyberpunk 2077 does not use reflections in abundance, and they are usually diffuse, this is not as noticeable as in Watch Dogs: Legion.
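(For reference on the sub-1080p point: DLSS uses fixed render-scale presets, so the internal resolution is easy to work out. A minimal sketch, assuming the commonly documented scale factors; the helper below is purely illustrative and not any vendor API.)

```c
#include <stdio.h>

/* Illustrative only: DLSS 2.x render-scale presets as commonly documented
 * (Quality ~66.7%, Balanced ~58%, Performance 50%, Ultra Performance ~33%).
 * Internal resolution = output resolution * scale, per axis. */
typedef struct { const char *name; double scale; } DlssMode;

int main(void) {
    const int out_w = 2560, out_h = 1440;            /* 1440p output */
    const DlssMode modes[] = {
        {"Quality",           2.0 / 3.0},
        {"Balanced",          0.58},
        {"Performance",       0.50},
        {"Ultra Performance", 1.0 / 3.0},
    };
    for (size_t i = 0; i < sizeof modes / sizeof modes[0]; ++i) {
        int w = (int)(out_w * modes[i].scale + 0.5);
        int h = (int)(out_h * modes[i].scale + 0.5);
        /* Quality at 1440p lands at roughly 1707x960, i.e. below 1080p,
         * which is also the resolution the RT reflections are traced at. */
        printf("%-17s -> %dx%d internal\n", modes[i].name, w, h);
    }
    return 0;
}
```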
 

BluRayHiDef

Banned
Not quite. Depends on games & region, because I got my 6800 for 20% cheaper than the cheapest 3070 that was available. Similar issue with 6800 XT vs 3080. As for games...

WKIKJJk.jpg


And that's the best-case scenario for the 3080, because it's at 4K and we're not taking into account OC potential, for which there is none on NV's side but plenty on AMD's (PPT etc.).

So, if on-paper were reality, sure, I'd have bought a 3080 at MSRP over a 6800/XT purely for some RT shenanigans. But in the real world it doesn't break down like that, plus the simple fact is that even though Ampere is much faster in RT, it's still pretty slow. I'm playing Cyberpunk at 4K and it's gorgeous; if I want to enable RT, I have to drop to 1080p even on a 3090 - what's the fucking point? So, people are quick to jump on the bandwagon, but eh, I'd say for some nice RT performance we're waiting at least one more generation of cards.

That chart lists a small selection of games. What about Metro Exodus, Control, Tomb Raider, Cyberpunk 2077, etc.?

As for Ampere being slow in RT, that's not true. The 3080 and 3090 can meet the gold standard of 60 frames per second with RT enabled at 1080p and 1440p. At 4K, they're limited to the 50s, but that's a very playable range of frame rates.

Cyberpunk-2077-NVIDIA-GeForce-RTX-Official-PC-Performance-benchmarks-With-Ray-Tracing-DLSS-on-RTX-3090-RTX-3080-RTX-3070-RTX-3060-Ti-_1.png


Cyberpunk-2077-NVIDIA-GeForce-RTX-Official-PC-Performance-benchmarks-With-Ray-Tracing-DLSS-on-RTX-3090-RTX-3080-RTX-3070-RTX-3060-Ti-_2-Custom.png


Cyberpunk-2077-NVIDIA-GeForce-RTX-Official-PC-Performance-benchmarks-With-Ray-Tracing-DLSS-on-RTX-3090-RTX-3080-RTX-3070-RTX-3060-Ti-_3-Custom.png
 

regawdless

Banned
There's a world of difference in terms of visual sharpness and clarity between DLSS on & native. 1440p DLSS means sub 1080p. It's whatever. It also gimps RT effects like reflections so you get to enjoy 960p RT reflections instead of 1440p. So, let's not kid ourselves about how good it is - it might still be worth using for the performance, but you're nowhere near native.






Have you actually played it? Yes, there is a difference, of course. But it's actually not that big, and it's well worth it. Especially when adding some ReShade, it looks very clean and sharp. I've compared the reflection quality and the difference isn't significant in any form. In WD Legion, I decided not to use DLSS because it totally fucks up the reflection quality. In Cyberpunk you can barely tell the difference.

Later tonight I'll check it out and make some screenshots for comparison. I'm curious myself how that will turn out; my judgement was based on quickly switching back and forth during gameplay. It's less sharp than native, of course, but well worth the trade-off. Whether we're "there" is a subjective assessment. I agree that if you want to play native with RT, you often have to turn down some settings.

When you only judge the end result, it's crazy impressive.
 
Last edited:

BluRayHiDef

Banned
There's a world of difference in terms of visual sharpness and clarity between DLSS on & native. 1440p DLSS means sub 1080p. It's whatever. It also gimps RT effects like reflections so you get to enjoy 960p RT reflections instead of 1440p. So, let's not kid ourselves about how good it is - it might still be worth using for the performance, but you're nowhere near native.





It still puts RDNA2 to shame. Also, honestly, unless someone is overanalyzing each frame, they're not going to notice.
 
Do you realise that you can get the same output in different ways, and that a game built when NV was the only RT game in town only had to optimise against one hardware methodology?

It is possible that in fully path-traced games like this AMD hardware really is that much worse, but there are three layers of software (game, API, driver) where improvements can be made, so pointing to this singular example (or AMD fans pointing to Dirt 5 / Shadowlands) as proof that RDNA2 RT sucks (or rocks) is premature.
AMD is much slower than NVIDIA in RT in every game - and the more a game uses RT, the bigger the difference in favor of NVIDIA. Even AMD's ray-tracing demos are faster on RTX. And this is all very understandable. I'm not here to bash AMD in any way, but fanboys are overestimating AMD and then crying all of a sudden.
 

spyshagg

Should not be allowed to breed
Yap, the RX 6000 is $100 too expensive. The 3080 is neck and neck with the 6800 XT but thrashes it once RT and DLSS are enabled. Not sure why anyone would buy an RX 6000 except as a shortage reprieve.

The same people who upgrade every 3~5 years are the people who would buy the RX 6800. None of today's cards will handle RT well enough in 3 years, but at least the RX 6800 won't be choked by VRAM limits by then.

It's either the 3080 Ti or the RX 6800 series; no other card should be bought for longevity. RT will be in an entirely different ballpark 3 years from now. By that point, if the 3080 does 20 fps and the RX 6800 does 5 fps, neither of them is useful and you are just discussing numbers on a graph.
 
Last edited:

llien

Member
slower than NVIDIA in RT in every game

Sure, John.

NheDaPV.png



7IlhY2r.png




the more a game uses RT, the bigger the difference
It's been 2 years since "RT greatness", and the main feature of RT, in the handful of games that have checkboxes for it, is that it crashes your fps.

RDNA2 also generally performs better than Ampere in newer games:

SJSNyER.png

5UIpNbQ.png

 
Last edited:

regawdless

Banned
The same people who upgrade every 3~5 years are the people who would buy the RX 6800. None of today's cards will handle RT well enough in 3 years, but at least the RX 6800 won't be choked by VRAM limits by then.

It's either the 3080 Ti or the RX 6800 series; no other card should be bought for longevity. RT will be in an entirely different ballpark 3 years from now. By that point, if the 3080 does 20 fps and the RX 6800 does 5 fps, neither of them is useful and you are just discussing numbers on a graph.

I don't think it'll be that easy and clear. The advancement will slow down a bit because there will be more toned down hybrid RT approaches, which will be aimed at the consoles. And I believe current cards will handle that pretty well.

At least that's my guess, we don't know of course.
 

regawdless

Banned
Sure, John.

NheDaPV.png



7IlhY2r.png





It's been 2 years since "RT greatness", and the main feature of RT, in the handful of games that have checkboxes for it, is that it crashes your fps.

RDNA2 also generally performs better than Ampere in newer games:

SJSNyER.png

5UIpNbQ.png


You are truly something special. Like, wow. I've corrected you three times already, and you've been warned by the mods for spreading lies about the WD Legion benchmarks. The raytracing is bugged on AMD cards and is extremely low quality, so it's not comparable at all.

There's a patch that was released six days ago that should fix it. But the article you're citing is older.

You are so incredibly desperate, your dedication is kinda impressive tbh.
 
It is clear from what we have seen of performance in most games that Nvidia has an overall more performant ray tracing solution than AMD this generation. When you move from hybrid rendering to full path tracing, the gap widens again, as this is where Ampere really stretches its legs; we see this even in comparisons of Turing vs. Ampere with ray tracing, where the biggest gains are in path-traced games. We also see this with Minecraft, which as of now is the only other real example of a path-traced title we have.

I'm actually surprised that the Quake performance is as high as it is on RDNA2 compared to how these cards perform in Minecraft. However, it should also be noted that while Nvidia does have the better RT solution/hardware, and they really pull far ahead in path-traced titles, Quake II RTX is a title where the RT implementation was coded by Nvidia and obviously optimised for their hardware.

In addition to that, Vulkan did not have an RT extension/capability in the API when Quake II RTX was released, so Nvidia wrote their own proprietary RT extension for Vulkan. This is what was used in Quake II and Wolfenstein, I believe.

This RT extension was used as the basis for the official Vulkan RT capability that is now part of the API. The current head of Vulkan development at the Khronos Group is an Nvidia principal graphics engineer, I believe; I think he did some interviews about Vulkan with RedGamingTech where he mentioned this. In the same way that DXR 1.0 was co-developed closely between MS and Nvidia, the Vulkan RT capability was mostly developed by Nvidia and is clearly optimized for their technology.

So... what is the point of mentioning all of this? Does it mean that AMD is secretly better than Nvidia at Vulkan RT? Of course not. Nvidia has both the general RT lead in hybrid rendering and a massive lead in path-tracing scenarios, but it's worth pointing out that both the API and the title here were developed and optimized by Nvidia for their RT solution. Meaning that in a path-traced title optimized for AMD, the cards would perform some unknown percentage better than they do now, but would still be behind Nvidia.

Something I'm not super clear on (I haven't checked the GitHub repo to confirm): was Quake II RTX recoded to replace the Nvidia proprietary RT extension function calls with the open standard extension instead? Or does the open version have some fallback or conversion for when it reads the NV extension code? Not something that would have a huge impact on this discussion, but I'm wondering more out of my own curiosity.
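For reference, here's a rough sketch of how a Vulkan renderer can probe for the standardized KHR ray tracing extensions and fall back to the older proprietary NV one. The extension name macros are the real Vulkan identifiers; the function, the enum, and the selection logic are made up for illustration, and I haven't checked whether the Quake II RTX repo actually does it this way.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

typedef enum { RT_NONE, RT_KHR, RT_NV } RtPath;

static bool has_ext(const VkExtensionProperties *exts, uint32_t n, const char *name) {
    for (uint32_t i = 0; i < n; ++i)
        if (strcmp(exts[i].extensionName, name) == 0)
            return true;
    return false;
}

/* Probe a physical device and decide which ray tracing path to take.
 * Purely illustrative - not taken from the Quake II RTX source. */
RtPath pick_rt_path(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, NULL);
    VkExtensionProperties *exts = calloc(count, sizeof *exts);
    if (!exts) return RT_NONE;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, exts);

    RtPath path = RT_NONE;
    /* Cross-vendor path: the standardized KHR extensions (run on RDNA2 and RTX alike). */
    if (has_ext(exts, count, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME) &&
        has_ext(exts, count, VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME))
        path = RT_KHR;
    /* Fallback: the proprietary extension Quake II RTX originally shipped against. */
    else if (has_ext(exts, count, VK_NV_RAY_TRACING_EXTENSION_NAME))
        path = RT_NV;

    free(exts);
    return path;
}
```

Whichever path is picked would then be requested via VkDeviceCreateInfo::ppEnabledExtensionNames when the logical device is created.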

Anyway, regarding RT performance: right now 99% of titles are, and for the immediate future will continue to be, hybrid rendering scenarios, as the performance cost is too high to use path tracing in titles with modern graphics. While Nvidia has a definite lead in RT performance in general, the gap is not as wide in most real-world scenarios as Quake II or Minecraft would imply, given the different rendering workloads at play.
 

Marlenus

Member
I don't think it'll be that easy and clear. The advancement will slow down a bit because there will be more toned down hybrid RT approaches, which will be aimed at the consoles. And I believe current cards will handle that pretty well.

At least that's my guess, we don't know of course.

Cyberpunk @ 1080p with RT on uses 7.5 GB of VRAM. That is getting awfully uncomfortable for the 3070.
 

regawdless

Banned
Cyberpunk @ 1080p with RT on uses 7.5 GB of VRAM. That is getting awfully uncomfortable for the 3070.

True. I think we'll see more hybrid RT approaches that go easier on GPUs though, driven by optimizations for the consoles. Could be wrong, of course.
 
I always felt that AMD didn't pursue RT tech until it was too late, and that's why they don't have dedicated cores... and thus why I believe their RT performance is lackluster. I'm willing to bet their next RT solution will have dedicated cores, plus some dedicated hardware for a DLSS equivalent. To me, they are completely software accelerated, given the pitiful performance they are putting out.

That's not it.

AMD architectures are dictated by console makers' demands - they want a universal solution without additional silicon that would do nothing in traditional rasterization.

On the other hand, Nvidia found a brilliant way to utilise the tech they need for their professional users for RT and DLSS in gaming scenarios.
 

Marlenus

Member
AMD is much slower than NVIDIA in RT in every game - and the more a game uses RT, the bigger the difference in favor of NVIDIA. Even AMD's ray-tracing demos are faster on RTX. And this is all very understandable. I'm not here to bash AMD in any way, but fanboys are overestimating AMD and then crying all of a sudden.

On top of what llien said, Shadowlands is also faster, by a huge margin, on AMD than on NV hardware.



So like I say, software matters, because here the 3090 is behind the 2080 Ti, so pointing to a small sample of games and making definitive claims about anything is a hiding to nothing.

Edit: at 4K it evens up, but those 1% lows on AMD hardware are so much nicer.
 
Last edited:
Vulkan RT aside, the best addition for RTX users in the update is that the temporal upsampling option now offers a cleaner & sharper image when using DSR :messenger_ok:
 

Marlenus

Member
True. I think we'll see more hybrid RT approaches that go easier on GPUs though, driven by optimizations for the consoles. Could be wrong, of course.

I expect consoles will use RT more selectively.

I am having trouble pinning down how much of the general RT delta between AMD and NV is purely hardware and how much is software since we are in the infancy of RT.
 
On top of what llien said, Shadowlands is also faster, by a huge margin, on AMD than on NV hardware.



So like I say, software matters, because here the 3090 is behind the 2080 Ti, so pointing to a small sample of games and making definitive claims about anything is a hiding to nothing.

Edit: at 4K it evens up, but those 1% lows on AMD hardware are so much nicer.


As I said above, AMD does RT shadows well, but that's also the only thing it's good at. Everything else tanks the performance massively.
 

Marlenus

Member
That chart lists a small selection of games. What about Metro Exodus, Control, Tomb Raider, Cyberpunk 2077, etc.?

Metro, Control, and Tomb Raider are in the main ComputerBase benchmark suite. Cyberpunk is too new, so it wasn't included yet.

This seven-game test is an addendum to their main suite, to show how newer games handle the new hardware. You could go read their content, though.

As I said above, AMD does RT shadows well, but that's also the only thing it's good at. Everything else tanks the performance massively.

Now show that the performance drop-off is due to the hardware and not to something in the software stack.
 

Ascend

Member
I always felt that AMD didn't pursue RT tech until it was too late, and that's why they don't have dedicated cores... and thus why I believe their RT performance is lackluster. I'm willing to bet their next RT solution will have dedicated cores, plus some dedicated hardware for a DLSS equivalent. To me, they are completely software accelerated, given the pitiful performance they are putting out.
 

VFXVeteran

Banned
Not this again. They accelerate intersection, but not traversal.

For comparison, a 1080 Ti was getting 17 FPS or less at 1080p, in a previous version of the benchmark.


So it's not the full monty like it's supposed to be. Which is my point.
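Since this distinction comes up every time: here's a rough C sketch of a BVH hit query to make "intersection vs. traversal" concrete. All struct layouts and names are hypothetical; the point is only which parts map to hardware. As I understand it, on RDNA2 the two intersect_*() tests are what the Ray Accelerators handle, while the traversal loop at the bottom still runs as ordinary shader code; Turing/Ampere RT cores walk the loop in hardware as well.

```c
#include <float.h>
#include <math.h>
#include <stdbool.h>

/* Hypothetical data layout, purely for illustration. */
typedef struct { float origin[3], dir[3]; } Ray;
typedef struct { float v0[3], v1[3], v2[3]; } Tri;
typedef struct {
    float bmin[3], bmax[3];
    int left, right;            /* child node indices (inner node)          */
    int first_tri, tri_count;   /* triangle range (leaf when tri_count > 0) */
} BvhNode;
typedef struct { float t; int tri; } Hit;

static float dot3(const float a[3], const float b[3]) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static void  sub3(const float a[3], const float b[3], float o[3]) { o[0]=a[0]-b[0]; o[1]=a[1]-b[1]; o[2]=a[2]-b[2]; }
static void  cross3(const float a[3], const float b[3], float o[3]) {
    o[0]=a[1]*b[2]-a[2]*b[1]; o[1]=a[2]*b[0]-a[0]*b[2]; o[2]=a[0]*b[1]-a[1]*b[0];
}

/* Ray/box slab test - one of the operations the RT hardware accelerates.
 * (Assumes non-zero direction components, for brevity.) */
static bool intersect_box(const Ray *r, const BvhNode *n, float *tnear) {
    float t0 = 0.0f, t1 = FLT_MAX;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r->dir[a];
        float ta = (n->bmin[a] - r->origin[a]) * inv;
        float tb = (n->bmax[a] - r->origin[a]) * inv;
        if (ta > tb) { float tmp = ta; ta = tb; tb = tmp; }
        if (ta > t0) t0 = ta;
        if (tb < t1) t1 = tb;
        if (t0 > t1) return false;
    }
    *tnear = t0;
    return true;
}

/* Ray/triangle test (Moller-Trumbore) - the other hardware-accelerated operation. */
static bool intersect_tri(const Ray *r, const Tri *tri, float *t) {
    float e1[3], e2[3], p[3], s[3], q[3];
    sub3(tri->v1, tri->v0, e1);  sub3(tri->v2, tri->v0, e2);
    cross3(r->dir, e2, p);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return false;
    float inv = 1.0f / det;
    sub3(r->origin, tri->v0, s);
    float u = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    cross3(s, e1, q);
    float v = dot3(r->dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot3(e2, q) * inv;
    return *t > 0.0f;
}

/* The traversal loop itself: stack management, scheduling, child ordering.
 * This is the part that stays in shader code on RDNA2. */
Hit trace(const Ray *ray, const BvhNode *nodes, const Tri *tris) {
    Hit best = { FLT_MAX, -1 };
    int stack[64], sp = 0;
    stack[sp++] = 0;                           /* start at the root node */
    while (sp > 0) {
        const BvhNode *node = &nodes[stack[--sp]];
        float t;
        if (!intersect_box(ray, node, &t) || t > best.t) continue;
        if (node->tri_count > 0) {             /* leaf: test its triangles */
            for (int i = 0; i < node->tri_count; ++i)
                if (intersect_tri(ray, &tris[node->first_tri + i], &t) && t < best.t)
                    best = (Hit){ t, node->first_tri + i };
        } else {                               /* inner node: push both children */
            stack[sp++] = node->left;
            stack[sp++] = node->right;
        }
    }
    return best;
}
```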
 

VFXVeteran

Banned
Nhranaghacon


Copypasta:

Soo, ray tracing they said. Let's talk about "realism", shall we (and I don't mean Pixar stuff rendered by a freaking render farm, but NV's own tech demo)? Corners:

hCo0iv7.png


If you need to be reminded what they look like, welp, a real photo:

JxAYkuJ.png


you can read what's going on in this wonderful blog post:



Now, let's move on to "full RT", shall we? Let's be generous: Quake.

It takes a 2060 about 20 seconds to generate a decent-quality frame.
So how do they render it in a fraction of a second? Meet Green RT Fakery:
1) Temporal denoiser + blur
This is based on previous-frame data. Here the textures are turned off, so the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image is the desired final result, which took 3-5 seconds to "fade in" as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this dark-smeared, laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it generates in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
1xgEUDU.png

2) Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
Here's an image from the same room, zoomed a lot, and the part of the image I took it from for reference:
A - rendered at 1/4 resolution
B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
C - the actual resolution of traced rays - each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork/temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly-lit corner of the room you can clearly see the repeated tiling of these "best guess" dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.

85KG1Xo.png

r4LJppH.png


So, combine those two things together. First, we have a very low ray density that is used as the basis for region definitions, which can then be approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region. If I were going to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it's tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred together in a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because temporal filtering of on-screen data only means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

By Crispy

Yea, yea. Of course the GPUs are far, far away from having the power for real RT. But they gotta start somewhere. At least they aren't using rasterization techniques in its place, so that's something. RT on real-time hardware absolutely requires denoising to be viable. But even offline rendering uses denoising now. We don't need to fire so many rays in interior shots to try to get the fidelity we need for bounced light when the only light source coming in is from outside. The frame goes WAY beyond budget in that case, so yea, film needs it too.
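For anyone curious, the temporal accumulation Crispy describes in point 1 boils down to an exponentially weighted blend of each new noisy frame into a history buffer. A minimal per-pixel sketch; the blend factor, buffer layout, and history_valid flag are illustrative, not taken from the actual Q2RTX denoiser:

```c
#include <stddef.h>

/* Minimal sketch of temporal accumulation: blend the current noisy ray-traced
 * frame into a persistent history buffer. A small alpha gives a clean but
 * laggy image (the seconds-long "fade-in" described above); resetting the
 * history where reprojection fails is what causes the noisy smear on camera
 * movement. Constants and layout are illustrative. */
typedef struct { float r, g, b; } Color;

void accumulate(Color *history, const Color *current,
                const unsigned char *history_valid, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; ++i) {
        /* If the pixel just appeared on screen (reprojection failed), restart
         * from the raw noisy sample; otherwise blend ~5% of the new frame in. */
        const float alpha = history_valid[i] ? 0.05f : 1.0f;
        history[i].r += alpha * (current[i].r - history[i].r);
        history[i].g += alpha * (current[i].g - history[i].g);
        history[i].b += alpha * (current[i].b - history[i].b);
    }
}
```

With alpha around 0.05 it takes on the order of a hundred frames for the history to converge, which lines up with the multi-second "fade-in" described above, and every disocclusion starts that clock again.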
 
Last edited:

Ascend

Member
What is that supposed to tell me? Since neither of us has access to their low-level drivers, we can't verify that they are just using algorithms instead of custom chips for the RT.
It's supposed to tell you that AMD was busy with RT well before 2016. This means that they were not actually late with pursuing RT tech. That they didn't prioritize it for gaming is another story.

nVidia has simply found its next TWIMTBP, PhysX, and GameWorks in RT. That's it.
 
Last edited:

FireFly

Member
So it's not the full monty like it's supposed to be. Which is my point.
You said that the performance indicated they were "completely software accelerated".

I hate to be the guy nitpicking all the time, but it is important to be precise, since there is already a lot of FUD about AMD's solution.
 

Ascend

Member
That's what annoys me about people who are strictly platform-biased. You can never have a decent conversation about technology in general, regardless of whether it's Nvidia or AMD. It's literally just shill, shill, hate Nvidia, and shill some more. Then the most ironic post: "Are you on Nvidia's payroll?!"
The hypocrisy is oozing from this post. The fact that you pretend to be neutral and then suddenly have to call out the shilling against nVidia says everything. Keep on virtue signaling...

Have you ever called out any shilling against AMD? If so, prove it. I'll wait.
 
Last edited:

regawdless

Banned
Uhh, I fully support and commonly post Nvidia threads here, and I am an Xbox fan - you are absurdly off base if (A) you believe PCGamer when they state this is a path tracing benchmark and (B) you believe that this benchmark reflects actual path tracing performance.

And you're completely wrong in your assumption that I am pro-AMD. As an artist, this benchmark masquerading as a path tracing benchmark does a huge disservice to people like yourself who have no grasp of what that entails - and that is a hardware-agnostic statement.

I quoted llien. It wasn't directed at you.
 

Mithos

Member
It feels like Quake II RTX v1.4.0 gave a small boost in performance.

v1.3 could not hold 60 fps on my 2060S at 1080p unless I dropped the RT settings to low/medium and 2-4 rays, or switched to 720p.
With v1.4 and the new drivers I can hold 60 fps at max RT settings (high / 8 rays) if I also use resolution scaling (min 80% / max 100%).
 
Last edited:
Yea, yea. Of course the GPUs are far, far away from having the power for real RT. But they gotta start somewhere. At least they aren't using rasterization techniques in its place, so that's something. RT on real-time hardware absolutely requires denoising to be viable. But even offline rendering uses denoising now. We don't need to fire so many rays in interior shots to try to get the fidelity we need for bounced light when the only light source coming in is from outside. The frame goes WAY beyond budget in that case, so yea, film needs it too.
We are at 8-bit levels of ray tracing support, and at zero levels of real-time path tracing.

Diffuse cubemap sampling to speed up denoising in "offline rendering" or for "previsualizing" a frame is not a comprehensive solution to path-traced rendering; otherwise render farms would not still require minutes to hours to render a frame at 1024 bits. While it may speed up basic geometry/light passes for artist-curated frames in particular, that is hardly cause for celebration when claiming the focus has shifted to a real-time path tracing solution at Nvidia - which is pertinent if you consider that Nvidia -just- had a conference stating their focus moving forward is not ray tracing but a fully functional real-time path tracing solution for gaming at 60 FPS.
 
Last edited:

Buggy Loop

Member
Are you trying to imply that ALL of AMD's benchmarks that show slower RT performance than Nvidia are falsified?

Didn't you know? While Microsoft and the Khronos Group (Vulkan) were setting things up and asking Nvidia and AMD for feedback all those years to make a hardware-agnostic API, Nvidia set it up only to run well on their hardware! AMD was sidelined and could not do anything! Poor poor AMD /tears

AMD was like the guy in class who sleeps through the teacher's entire course and then, when he opens his eyes, there's an exam in front of him and he doesn't know how to answer it. Years of API development, two years to study Turing and improve upon it, but no, they saw Nvidia's paved road and decided to go off-road onto a shortcut that forced them to go at half speed to reach the destination.

Bravo

Should brand their ray tracing technology /r/AyyMD
 

Buggy Loop

Member
You are truly something special. Like, wow. I've corrected you three times already, and you've been warned by the mods for spreading lies about the WD Legion benchmarks. The raytracing is bugged on AMD cards and is extremely low quality, so it's not comparable at all.

There's a patch that was released six days ago that should fix it. But the article you're citing is older.

You are so incredibly desperate, your dedication is kinda impressive tbh.

On top of that, Dirt 5 has a special variable shading algorithm specifically optimized for RDNA 2, as per AMD's own page on Dirt 5, that is simply borked on Nvidia cards. The rasterization performance is 30% higher at 1440p, while the RT drop is virtually the same (~20 fps).

That game is some fuckery right off the bat in rasterization.
 
The hypocrisy is oozing from this post. The fact that you pretend to be neutral and then suddenly have to call out the shilling against nVidia says everything. Keep on virtue signaling...

Have you ever called out any shilling against AMD? If so, prove it. I'll wait.
The only negativity I've spoken about AMD is in regards to raytracing and DLSS. I've given them props, especially on their CPUs. I don't harp over and over in every thread on how evil Nvidia is, or constantly downplay DLSS, raytracing, etc. If you've ever noticed, I don't defend Nvidia. You love to defend AMD every chance you get. Big difference there.


Although, it's kinda funny to see you and llien fighting so hard to protect AMD on GAF, and elsewhere I'm sure. I am neutral, and it kills you that I've been using Nvidia for the past couple of years, ever since they had better performance. If that changes, I might switch back, just like I went from Intel to AMD. See, I'm not against AMD - I have a Ryzen CPU and I'm waiting until the 5xxx CPUs are back in stock. How am I not neutral again? Imagine if Nvidia competed in the CPU industry right now; you'd turn soooo red.

Again, it's ironic you say virtue signaling. 😂
 

magaman

Banned
lol.

"This nvidia card is like TWICE AS GOOD as this other card at rendering raytracing in this decades-old game!"

Technology is not ready for real-time raytracing in games. This is an absolute joke.
 