
Quake 2 RTX Vulkan Ray Tracing/Path Tracing Benchmarks. RTX 3080 vs. RX 6800 XT.

lukilladog

Member
Vulkan is open source and paid for by both NVIDIA and AMD (and loads of other companies).

In that sense, whatever differences there are must come down to hardware and have nothing to do with NVIDIA rigging the fight.
I think we all know now NVIDIA has the far better ray tracing solution right now.

Considering AMD has only just started doing ray tracing, I would expect improvements with RDNA 3, but of course NVIDIA is not going to sit around doing nothing.

It's not that simple. When GPU or processor manufacturers get involved in the development of games and demos, they will naturally avoid or minimize the use of functions that don't work well on their hardware while favouring the stuff they are good at, so in the end we get very biased results. When independent game devs start to familiarize themselves with AMD hardware, we will see what their hardware is capable of. At this point this is just free marketing for Nvidia.
 

lukilladog

Member
Nah man, these benchmarks are real. I actually thought in the beginning that AMD would be able to compete with NVIDIA when it came to ray-tracing performance. I was pretty naive when it came to this by not knowing the R&D process NVIDIA had going on for years before all this to get to where they are now with real-time ray-tracing, but then I read the white paper for Ampere and learned just how much of the RT process was hardware-accelerated.


They have HW acceleration for BVH traversal, ray/triangle and bounding-box intersections, and instance transformation (someone correct me if I'm wrong on this, but I'm guessing this is HW acceleration for rapidly updating asset changes like breaking glass, where every bit of shattered glass is updated in the BVH and ray-traced in real time instead of the object being removed from the BVH entirely). The Ampere cards are level 3 cards in ray tracing; there are six different levels of RT, and achieving FULL HW-accelerated RT takes more time and research (and of course, more custom hardware).



AMD's RDNA 2 cards are level 2 cards in ray tracing, since they only have custom hardware acceleration for ray/triangle and ray/bounding-box intersection tests; that's pretty much it.
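
If it helps to picture that split, here is a rough CPU-side sketch (toy code added for illustration, not either vendor's actual hardware path): the box test inside the loop is the kind of per-node work both architectures accelerate in silicon, while the traversal loop and stack handling around it is what RDNA 2 still leaves to shader code and what Turing/Ampere run inside the RT core.

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Ray  { float ox, oy, oz, dx, dy, dz; };
struct AABB { float mn[3], mx[3]; };
struct Node { AABB box; int left = -1, right = -1; int prim = -1; }; // prim >= 0 means leaf

// Classic "slab" test: does the ray enter the box within [0, tMax]?
// This per-node box test is the sort of work both vendors accelerate in hardware.
bool rayBoxHit(const Ray& r, const AABB& b, float tMax) {
    float o[3] = { r.ox, r.oy, r.oz };
    float d[3] = { r.dx, r.dy, r.dz };
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float inv   = 1.0f / d[a];
        float tNear = (b.mn[a] - o[a]) * inv;
        float tFar  = (b.mx[a] - o[a]) * inv;
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// The traversal "decisions": which nodes to visit, what to push and pop.
// On RDNA 2 this loop lives in shader code; on Turing/Ampere the RT core runs it.
// Returns the index of a leaf whose bounds the ray enters (a real traversal would
// then run a triangle test and keep track of the closest hit).
int traverse(const std::vector<Node>& nodes, const Ray& r) {
    std::vector<int> stack = { 0 };  // start at the root
    float tMax = std::numeric_limits<float>::max();
    int hitPrim = -1;
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        const Node& n = nodes[i];
        if (!rayBoxHit(r, n.box, tMax)) continue;
        if (n.prim >= 0) { hitPrim = n.prim; continue; }  // leaf
        stack.push_back(n.left);
        stack.push_back(n.right);
    }
    return hitPrim;
}

int main() {
    // A tiny 3-node BVH: a root box split into two leaf boxes holding primitives 7 and 9.
    std::vector<Node> nodes(3);
    nodes[0].box = {{-1, -1, -1}, {1, 1, 1}};  nodes[0].left = 1; nodes[0].right = 2;
    nodes[1].box = {{-1, -1, -1}, {0, 1, 1}};  nodes[1].prim = 7;
    nodes[2].box = {{ 0, -1, -1}, {1, 1, 1}};  nodes[2].prim = 9;
    Ray r{ -5.0f, 0.5f, 0.5f, 1.0f, 0.1f, 0.1f };
    std::printf("ray reaches leaf holding primitive %d\n", traverse(nodes, r));
}
```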


The part highlighted in red here is what the RX 6000 series cards (level 2 RT solution) have HW acceleration for, versus what NVIDIA has HW acceleration for (level 3 RT solution).
ZPQT8Cf.jpg


NVIDIA’s so ahead they even moved on to ray-traced MOTION BLUR.
lRgqNtN.jpg


I think AMD will start to improve RT performance a lot with RDNA 3 and 4. I don't think it's fair to just shit on them since this is their first attempt at it; it'll only get better from here. But I think NVIDIA will probably achieve level 5 by the time the RTX 50 or 60 series cards are out because of how early they started R&D on this. I was actually planning to get the 3080 but might as well wait for the 3070 Ti next year.

Thanks but I will reserve my judgement until we get independent benchmark/tests.
 

Buggy Loop

Member
It's not that simple. When GPU or processor manufacturers get involved in the development of games and demos, they will naturally avoid or minimize the use of functions that don't work well on their hardware while favouring the stuff they are good at, so in the end we get very biased results. When independent game devs start to familiarize themselves with AMD hardware, we will see what their hardware is capable of. At this point this is just free marketing for Nvidia.

AMD already released the recommendations for developers with the best practices for ray tracing on RDNA 2.



Too many active ray query objects in scope at any time?
Too large a threadgroup size?
Not limiting group shared memory enough?

To me these sound mostly the same as DXR's recommendations, or, for the memory structure, gimp your RT like the consoles do because it would collapse our GPU. Not some secret sauce that will catch up to 2 times the performance in path tracing. We're compliant with DXR, but please avoid this, and that...

They only have a BVH intersection accelerator block. That's it. The rest relies on shaders for every decision. There's only so much optimization that can be squeezed out of that. It's the bare minimum above just software ray tracing.

I guess 3DMark's full path tracing benchmark does not count either? The one that welcomed AMD into the RT arena so it's no longer an Nvidia monopoly? Feel free to @ me when there's an independent full path tracing game where you feel AMD was not left on the sidelines.
 
Last edited:

BluRayHiDef

Banned
Nah man, these benchmarks are real. I actually thought in the beginning that AMD would be able to compete with NVIDIA when it came to ray-tracing performance. I was pretty naive when it came to this by not knowing the R&D process NVIDIA had going on for years before all this to get to where they are now with real-time ray-tracing, but then I read the white paper for Ampere and learned just how much of the RT process was hardware-accelerated.


They have HW acceleration for BVH traversal, ray/triangle and bounding-box intersections, and instance transformation (someone correct me if I'm wrong on this, but I'm guessing this is HW acceleration for rapidly updating asset changes like breaking glass, where every bit of shattered glass is updated in the BVH and ray-traced in real time instead of the object being removed from the BVH entirely). The Ampere cards are level 3 cards in ray tracing; there are six different levels of RT, and achieving FULL HW-accelerated RT takes more time and research (and of course, more custom hardware).



AMD's RDNA 2 cards are level 2 cards in ray tracing, since they only have custom hardware acceleration for ray/triangle and ray/bounding-box intersection tests; that's pretty much it.


The part highlighted in red here is what the RX 6000 series cards (level 2 RT solution) have HW acceleration for, versus what NVIDIA has HW acceleration for (level 3 RT solution).
ZPQT8Cf.jpg


NVIDIA’s so ahead they even moved on to ray-traced MOTION BLUR.
lRgqNtN.jpg


I think AMD will start to improve RT performance a lot with RDNA 3 and 4. I don't think it's fair to just shit on them since this is their first attempt at it; it'll only get better from here. But I think NVIDIA will probably achieve level 5 by the time the RTX 50 or 60 series cards are out because of how early they started R&D on this. I was actually planning to get the 3080 but might as well wait for the 3070 Ti next year.

How far behind do you think that Intel will be when they finally release their discrete GPUs/ graphics cards? Do you think that they'll even have dedicated ray-tracing hardware?
 

VFXVeteran

Banned
We are at 8 bit levels of RayTracing support, and at 0 levels of real time path tracing.

Again, you gotta start somewhere. I don't see the issue here. Are they supposed to advertise that they have 8-bit levels of ray-tracing on their boxes?

Diffuse cubemap sampling to speed up denoising "offline rendering" or for "previsualizing" a frame is not a comprehensive solution to path-traced rendering; otherwise render farms would not still require minutes to hours to render a frame at 1024 bits.

I never said they did.

While it may speed up basic geometry/light passes for artist curated frames in particular, that is hardly cause for celebration when citing the focus has shifted to a real time path tracing solution at Nvidia - which is pertinent if you consider Nvidia -just- had a conference stating their focus moving forward is not Ray Tracing but a fully functional Real Time Path Tracing solution for gaming at 60FPS.

Nothing wrong with Nvidia trying to change the narrative and get away from rasterization entirely. It's about time. If we are still 30yrs off from the goal, so be it. It's better than continuing to waste time trying to come up with rasterization tricks yet again for who knows how long.
 

VFXVeteran

Banned
It's telling just how inefficient and underdeveloped realtime raytracing is when the best examples for it are Minecraft and Quake - games that are decades old or otherwise graphically unimpressive. This tech really needs another generation before it's ready. Prebaked lighting runs better and looks 90% the same.

No it doesn't. Cyberpunk looks incredible with RT. You will never get any prebaked lighting solution to mimic what you see.
 
We are at 8 bit levels of RayTracing support, and at 0 levels of real time path tracing.

Diffuse cubemap sampling to speed up denoising "offline rendering" or for "previsualizing" a frame is not a comprehensive solution to path-traced rendering; otherwise render farms would not still require minutes to hours to render a frame at 1024 bits. While it may speed up basic geometry/light passes for artist curated frames in particular, that is hardly cause for celebration when citing the focus has shifted to a real time path tracing solution at Nvidia - which is pertinent if you consider Nvidia -just- had a conference stating their focus moving forward is not Ray Tracing but a fully functional Real Time Path Tracing solution for gaming at 60FPS.
And yet Nvidia's path-traced Marbles demo looks better than some of the stuff in the latest Hollywood films, such as Avengers: Infinity War.
 
And yet Nvidia's path-traced Marbles demo looks better than some of the stuff in the latest Hollywood films, such as Avengers: Infinity War.
That isn't a legitimate path tracing solution either; it's a ray tracing solution with 3 low passes of path tracing. Watch Nvidia's latest GTC, where they say outright that actual path tracing is too computationally demanding. The issue I've raised isn't that Nvidia aim to offer a path tracing solution for gamers; it is that Nvidia on one hand blatantly leads the consumer to believe demos such as Marbles/Quake RTX have solved the path tracing problem, when on the other hand they insist an actual path tracing solution is too computationally demanding. Meanwhile, people like me have to point out that nothing shown has actually implemented meaningful real-time path tracing.

3 passes is akin to half a second of render computation depending on the settings (perhaps minutes at actual high-level path tracing standards); they are using path tracing data to better implement RTX features/calculations.

They are not using path tracing data to actually deliver path-traced images/CGI-level renders to the user. A path tracing snapshot is made at an extremely low pass, that data is aggregated back to the RTX interloper, and the path-traced data is then discarded, as it is not usable as a rendering method or for delivering imagery to the screen.

In reality, an actual path tracing solution would be capable of delivering that image to the screen, at 1024-bit image/CGI levels.
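
To put rough numbers on why a handful of passes leans so heavily on denoising (reading "passes" loosely as samples per pixel, which is my assumption rather than Nvidia's terminology): Monte Carlo noise only falls off as 1/sqrt(N), so 100x more samples buys only about 10x less noise. A throwaway sketch of that behaviour, nothing to do with Nvidia's actual pipeline:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double PI = 3.14159265358979323846;
    const double truth = 2.0 / PI;  // exact value of the integral of sin(pi*x) over [0,1]
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Estimate the same integral with an increasing number of random samples.
    // The error shrinks roughly as 1/sqrt(N): 100x more samples ~ 10x less noise.
    const int counts[] = { 3, 30, 300, 3000, 30000 };
    for (int samples : counts) {
        double sum = 0.0;
        for (int i = 0; i < samples; ++i)
            sum += std::sin(PI * u(rng));
        double estimate = sum / samples;
        std::printf("%6d samples -> estimate %.4f (error %.4f)\n",
                    samples, estimate, std::fabs(estimate - truth));
    }
}
```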
 
Last edited:

3liteDragon

Member
How far behind do you think that Intel will be when they finally release their discrete GPUs/ graphics cards? Do you think that they'll even have dedicated ray-tracing hardware?
I'm pretty sure they confirmed back in August that the upcoming GPU they're working on does have hardware-accelerated ray tracing, but I'm not expecting them to surpass NVIDIA anytime soon on their first try. My guess is that it's probably gonna be something similar to AMD's current solution with RDNA 2.
 
Last edited:

regawdless

Banned
There's a world of difference in terms of visual sharpness and clarity between DLSS on & native. 1440p DLSS means sub 1080p. It's whatever. It also gimps RT effects like reflections so you get to enjoy 960p RT reflections instead of 1440p. So, let's not kid ourselves about how good it is - it might still be worth using for the performance, but you're nowhere near native.
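
For context on where the "960p" figure comes from: DLSS Quality mode is commonly quoted as rendering at roughly two thirds of the output resolution per axis, and RT effects tied to the internal resolution inherit that. The per-mode scale factors in this quick sketch are the usually cited approximate values, not numbers from this thread:

```cpp
#include <cstdio>

int main() {
    // Output resolution the quote is talking about.
    const int outW = 2560, outH = 1440;

    // Commonly quoted per-axis render-scale factors for DLSS 2.x modes (approximate).
    struct Mode { const char* name; double scale; };
    const Mode modes[] = {
        { "Quality",           2.0 / 3.0 },  // ~1706x960: the "960p" figure
        { "Balanced",          0.58      },
        { "Performance",       0.50      },
        { "Ultra Performance", 1.0 / 3.0 },
    };

    for (const Mode& m : modes)
        std::printf("%-17s -> internal render resolution ~%dx%d\n",
                    m.name, (int)(outW * m.scale), (int)(outH * m.scale));
}
```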






I went back and compared it. It's honestly better than I thought. With some ReShade thrown on it, the difference is tiny. Do you see drastic differences here that would make you turn off all ray tracing and go native without it? I think you have to backtrack on your "nowhere near native" claim.

Both are 1440p; one is native, the other DLSS Quality + ReShade. It's actually rather easy to tell which one uses DLSS, because DLSS recovers more detail than native.

u9E0MZa.jpg


Swloant.jpg
 

Ascend

Member
I ain't mad at them on that one, they're making sure you have 2 of their cards to run their stuff. That's straight up protecting an investment.
You wouldn't need two nVidia cards... Just the one.

You guys bully Ascend into this.
Good to see that at least someone sees it for what it is.

He probably accepted this incident or turns a blind eye to it


That's why we say AMD is not your friend. Neither Nvidia nor AMD are my friends; they're corporations that make cards. I give my money to either when it makes the most sense, period. YouTubers don't deserve your simping either, ffs 🤦‍♂️ especially not for a freebie.
Your link sucks.
Here's the original article;

Let me quote something;

On the 11th June AMD informed us via email that the upcoming FIJI hardware was reserved for KitGuru, as would normally be the case. We subsequently set a plan in motion to analyse the hardware for launch and were awaiting the arrival of the sample. Earlier this week I had a call from Christine Brown, Senior Manager, EMEA Communications at AMD to let me know that the company had withdrawn their sample from KitGuru labs and that we would now not be involved at all in the launch next week.
Christine Browne informed me directly on the phone that the reason for withdrawing the sample was based on ‘KitGuru’s negative stance towards AMD’. She said that with limited product they wanted to focus on giving the samples to publications that are ‘more positive’ about AMD as a brand, and company. I was not informed during this call of anything we have published that was factually incorrect, we were also not told to edit or remove any content we had published. Based on what AMD had seen via KitGuru editorial in recent weeks it was felt that overall coverage was just too negative.


Shitty? Yes. But there are a couple of differences here.

1) They initially had the intention to send a sample.
2) They made the effort to make a call to the reviewer before launch day, informing that KitGuru will not be receiving a sample. This is unlike nVidia which simply ghosted Hardware Unboxed, where Steve had to reach out THREE times to get an answer at all after weeks.
3) AMD mentioned they had limited samples, and that those samples would be going to more positive outlets. This makes perfect sense. If you have 10 reviewers but 5 cards, you will send cards to the ones that are generally more positive towards your brand. It's not the same as having cards but actively shutting someone out for being negative towards you.
4) AMD did not tell KitGuru that they would have to change their way of doing reviews if they want to receive samples in the future, which nVidia clearly said to Hardware Unboxed.

Going by this, it's the difference between not having enough cards for everyone and telling the reviewer they are prioritizing someone else, versus deliberately not supplying someone you don't like while you do have the cards. It could very well be that AMD simply did the same thing but worded it better. But even then, at least they had enough respect to inform the reviewer beforehand and didn't try to force their editorial direction the way nVidia did in its late response.

Oh. And as a bonus, maybe you should read those comments below the article. There are many people that share my views. Not that we need a majority for the truth. But too many of you love ganging up on people that you disagree with.

Holy shit, these AMD vs NVIDIA arguments are worse than freakin' console wars.

Jesus.
Indeed they are. Can't possibly imagine why...

Too many active ray query objects in scope at any time?
Too large a threadgroup size?
Not limiting group shared memory enough?

To me these sound mostly the same as DXR's recommendations, or, for the memory structure, gimp your RT like the consoles do because it would collapse our GPU. Not some secret sauce that will catch up to 2 times the performance in path tracing. We're compliant with DXR, but please avoid this, and that...
Limits apply to both AMD and nVidia cards. That's called efficient development. Or would you want to have your performance tank with zero visual difference? Like Tessellation x64?

They only have a BVH intersection accelerator block. That's it. The rest relies on shaders for every decision. There's only so much optimization that can be squeezed out of that. It's the bare minimum above just software ray tracing.
BVH is the most taxing aspect of implementing RT, so it's fine. Both AMD and nVidia tailored their hardware to their needs. Quoting myself:

AMD's implementation has the advantage that you could write your own traversal code to be more efficient and optimize on a game per game basis. The drawback is that the DXR API is apparently a black box, which prevents said code from being written by developers, a limit the consoles do not have. AMD does have the ability to change the traversal code in their drivers, meaning, working with developers becomes increasingly important.

nVidia's implementation has the advantage that the traversal code is accelerated, meaning, whatever you throw at it, it's bound to perform relatively well. It comes at the cost of programmability, which for them doesn't matter much for now, because as mentioned before, DXR is a black box. And it saves them from having to keep writing drivers per game.

That doesn't mean that nVidia's is necessarily better, but in the short term, it is bound to be better, because little optimization is required. Apparently developers are liking Sony's traversal code on the PS5 as is, so maybe something similar will end up in the AMD drivers down the line, if Sony is willing to share it with AMD.

I hinted at this a while back: on the AMD cards, the number of CUs dedicated to RT is variable. There is an optimal balancing point somewhere, where the CUs are divided between RT and rasterization, and that point changes on a per-game and per-setting basis.
For example, if you only have RT shadows, maybe 20 CUs dedicated to the traversal are enough, and the rest are for the rasterization portion, and both would output around 60 fps, thus they balance out. But if you have many RT effects, having a bunch of CUs for rasterization and only a few for RT makes little sense, because the RT portion will output only 15 fps and the rasterization portion will do 75 fps, and the unbalanced distribution will leave all those rasterization CUs idling after they are done, because they have to wait for the RT to finish anyway.

AMD's approach makes sense, because it has to cater to both the consoles and the PC. nVidia's approach also makes sense, because for them, only the PC matters.
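
A back-of-the-envelope version of that balancing argument, with completely made-up per-CU throughput numbers (a sketch of the model described above, not of how the hardware scheduler actually behaves):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Invented numbers, purely to show the shape of the trade-off.
    const int totalCUs = 72;            // an RX 6800 XT-class part has 72 CUs
    const double rasterFpsPerCU = 1.5;  // made-up: fps the raster workload gains per CU
    const double rtFpsPerCU     = 0.9;  // made-up: fps the RT workload gains per CU

    // If the frame waits on whichever side finishes last, the effective frame rate
    // is roughly the minimum of the two partitions.
    int bestRtCUs = 0;
    double bestFps = 0.0;
    for (int rtCUs = 1; rtCUs < totalCUs; ++rtCUs) {
        double rtFps     = rtFpsPerCU * rtCUs;
        double rasterFps = rasterFpsPerCU * (totalCUs - rtCUs);
        double frameFps  = std::min(rtFps, rasterFps);
        if (frameFps > bestFps) { bestFps = frameFps; bestRtCUs = rtCUs; }
    }
    std::printf("balanced split: %d CUs on RT, %d on raster -> ~%.1f fps\n",
                bestRtCUs, totalCUs - bestRtCUs, bestFps);

    // An RT-heavy scene with too few CUs on RT leaves the raster side idling.
    const int fewRtCUs = 12;
    std::printf("unbalanced:     %d CUs on RT -> RT ~%.1f fps vs raster ~%.1f fps\n",
                fewRtCUs, rtFpsPerCU * fewRtCUs,
                rasterFpsPerCU * (totalCUs - fewRtCUs));
}
```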


I guess 3DMark's full path tracing benchmark does not count either? The one that welcomed AMD into the RT arena so it's no longer an Nvidia monopoly? Feel free to @ me when there's an independent full path tracing game where you feel AMD was not left on the sidelines.
The one game that confirms that nVidia is faster at RT is The Riftbreaker. AMD-sponsored, yet nVidia is still faster.
 

Bolivar687

Banned
I went back and compared it. It's honestly better than I thought. With some ReShade thrown on it, the difference is tiny. Do you see drastic differences here that would make you turn off all ray tracing and go native without it? I think you have to backtrack on your "nowhere near native" claim.

Both are 1440p; one is native, the other DLSS Quality + ReShade. It's actually rather easy to tell which one uses DLSS, because DLSS recovers more detail than native.

u9E0MZa.jpg


Swloant.jpg

This is the first time I've ever seen Reshade have such little impact on colors and contrast. Is A better?
 

regawdless

Banned
This is the first time I've ever seen Reshade have such little impact on colors and contrast. Is A better?

I didn't change the colors in ReShade. I only applied some AA, clarity, and a little bit of sharpening.
DLSS introduces a bit of noise on certain edges, which is a negative point though.

From my personal view, the difference is not so big that I would want to disable all raytracing effects.
 

supernova8

Banned
It's not that simple. When GPU or processor manufacturers get involved in the development of games and demos, they will naturally avoid or minimize the use of functions that don't work well on their hardware while favouring the stuff they are good at, so in the end we get very biased results. When independent game devs start to familiarize themselves with AMD hardware, we will see what their hardware is capable of. At this point this is just free marketing for Nvidia.

You're right, maybe AMD will do a bit better, but I think it's pretty clear NVIDIA actually has the better ray tracing solution as it stands right now. I don't think any amount of getting familiar with AMD hardware is going to miraculously make their RT better than NVIDIA's.
 

magaman

Banned
No it doesn't. Cyberpunk looks incredible with RT. You will never get any prebaked lighting solution to mimic what you see.

Lol. Puddle reflection technology, right? Not worth it, my man. Call me next generation when the hardware can handle RT effectively.
 
Lol. Puddle reflection technology, right? Not worth it, my man. Call me next generation when the hardware can handle RT effectively.
Call me when cars are fully autonomous. Or when battery efficiency is at least quadruple what we currently have. Higher power efficiency, with less heat dissipation. Or when machines can replace humans completely. The list can go on and on. But it's stupid to dismiss technology until it's fully there, because you need to take the baby steps to get there. I'll take half-assed ray tracing, which lowers framerate, over no ray tracing at all. Yeah, it has its cons, but I'm all for progression rather than being stagnant.

That's why Nvidia is killing it, regardless of all the AMD cultists/naysayers/religionists. They have put so much time and research into it, and are constantly improving between hardware iterations, with software alone.
 

magaman

Banned
Call me when cars are fully autonomous. Or when battery efficiency is at least quadruple what we currently have. Higher power efficiency, with less heat dissipation. Or when machines can replace humans completely. The list can go on and on. But it's stupid to dismiss technology until it's fully there, because you need to take the baby steps to get there. I'll take half-assed ray tracing, which lowers framerate, over no ray tracing at all. Yeah, it has its cons, but I'm all for progression rather than being stagnant.

That's why Nvidia is killing it, regardless of all the AMD cultists/naysayers/religionists. They have put so much time and research into it, and are constantly improving between hardware iterations, with software alone.

I can agree with most of this.
 
I can agree with most of this.
I wish someone could wake me up when it doesn't require DLSS to get the performance, or when GPUs have enough bandwidth that RT won't tank performance. That would be fucking sick. But for now, I'll take any solution that maximizes visual quality, as upping the resolution alone won't change much compared to when we were at 720p as the maximum.
 
Last edited:

Rikkori

Member
I went back and compared it. It's honestly better than I thought. With some ReShade thrown on it, the difference is tiny. Do you see drastic differences here that would make you turn off all ray tracing and go native without it? I think you have to backtrack on your "nowhere near native" claim.

Both are 1440p; one is native, the other DLSS Quality + ReShade. It's actually rather easy to tell which one uses DLSS, because DLSS recovers more detail than native.

u9E0MZa.jpg


Swloant.jpg
If you want to do a comparison you need to capture PNG & upload to flickr, otherwise imgur compression ruins what's generally the native advantage. There's a clear sheen & sharpness (and I'm not talking things like CAS) to a native 4K image that simply gets softened out of existence with DLSS (or other temporal injection type TAA variants). I'd also never ever not use CAS in a game with TAA, it just makes no sense to leave it out.
Again - I have no issue recommending DLSS-Q for performance reasons but to me it misses out on the whole reason for being at higher resolution in the first place, which is the sheer crispness of the native image, and that's something you lose with DLSS. I sit close to a 55" 4K screen, so maybe that's why it's more obvious to me.
 
Seeing as you guys are discussing DLSS, I saw this video recently by GamersNexus which I thought was a really fair analysis using Cyberpunk.



Goes through some of the pros and cons of DLSS in a pretty fair and unbiased way. I'd recommend it as a good watch for both the DLSS die hards and the haters.
 

Antitype

Member
Seeing as you guys are discussing DLSS, I saw this video recently by GamersNexus which I thought was a really fair analysis using Cyberpunk.



Goes through some of the pros and cons of DLSS in a pretty fair and unbiased way. I'd recommend it as a good watch for both the DLSS die hards and the haters.


I don't have the time right now to watch the video, but is he aware that the 1.04 update added a forced sharpening filter to "native"? Did he remove that filter or add it to the DLSS output to do a fair comparison?

1.03 vs 1.04:



Edit: Btw, there's a trick to improve the crispness of DLSS by tweaking the LOD Bias with Nvidia Inspector.
 
Last edited:
I don't have the time right now to watch the video, but is he aware that the 1.04 update added a forced sharpening filter to "native"? Did he remove that filter or add it to the DLSS output to do a fair comparison?

I don't think he messed with any sharpening filters at all from what I remember.
 
Nah man, these benchmarks are real. I actually thought in the beginning that AMD would be able to compete with NVIDIA when it came to ray-tracing performance. I was pretty naive when it came to this by not knowing the R&D process NVIDIA had going on for years before all this to get to where they are now with real-time ray-tracing, but then I read the white paper for Ampere and learned just how much of the RT process was hardware-accelerated.


They have HW acceleration for BVH traversal, ray/triangle and bounding-box intersections, and instance transformation (someone correct me if I'm wrong on this, but I'm guessing this is HW acceleration for rapidly updating asset changes like breaking glass, where every bit of shattered glass is updated in the BVH and ray-traced in real time instead of the object being removed from the BVH entirely). The Ampere cards are level 3 cards in ray tracing; there are six different levels of RT, and achieving FULL HW-accelerated RT takes more time and research (and of course, more custom hardware).



AMD's RDNA 2 cards are level 2 cards in ray tracing, since they only have custom hardware acceleration for ray/triangle and ray/bounding-box intersection tests; that's pretty much it.


The part highlighted in red here is what the RX 6000 series cards (level 2 RT solution) have HW acceleration for, versus what NVIDIA has HW acceleration for (level 3 RT solution).
ZPQT8Cf.jpg


NVIDIA’s so ahead they even moved on to ray-traced MOTION BLUR.
lRgqNtN.jpg


I think AMD will start to improve RT performance a lot with RDNA 3 and 4. I don't think it's fair to just shit on them since this is their first attempt at it; it'll only get better from here. But I think NVIDIA will probably achieve level 5 by the time the RTX 50 or 60 series cards are out because of how early they started R&D on this. I was actually planning to get the 3080 but might as well wait for the 3070 Ti next year.


It's hard to know how well games can perform with RT on RDNA 2; the ones we have available today are made with Nvidia GPUs in mind. Of course Nvidia's solution is better, but when optimized for RDNA 2 the performance can be surprising.

Just see this comparison.
Benchmark made to use Nvidia's RT hardware:

index.php


Benchmark keeping in mind RDNA2 limitations:

index.php



We just need to hope that games will have this fallback for RDNA2.
 

Buggy Loop

Member
Oh man... things looking bad for AMD.... who could have predicted that?

Naw. As much as I like poking them about ray tracing, and even though I would pick Nvidia if the opportunity presented itself to choose between them, in the end they'll all be sold out for a while considering the market is hungry to upgrade now.

Most people will just pick what's available on the shelves. Now it's a question of wafer production and the flow of components, because everything sells within 5 seconds.
 

lukilladog

Member
AMD already released the recommendations for developers with the best practices for ray tracing on RDNA 2.



Too many active ray query objects in scope at any time?
Too large a threadgroup size?
Not limiting group shared memory enough?

To me these sound mostly the same as DXR's recommendations, or, for the memory structure, gimp your RT like the consoles do because it would collapse our GPU. Not some secret sauce that will catch up to 2 times the performance in path tracing. We're compliant with DXR, but please avoid this, and that...

They only have a BVH intersection accelerator block. That's it. The rest relies on shaders for every decision. There's only so much optimization that can be squeezed out of that. It's the bare minimum above just software ray tracing.

I guess 3DMark's full path tracing benchmark does not count either? The one that welcomed AMD into the RT arena so it's no longer an Nvidia monopoly? Feel free to @ me when there's an independent full path tracing game where you feel AMD was not left on the sidelines.


Nvidia and AMD have always had recommendations on how to use their hardware; it's nothing new. Their driver engineers are leagues beyond the average dev, so it only makes sense. And the 3DMark ray tracing benchmark sits on Nvidia's VK_NV_ray_tracing extension, so I'd take that with an ocean of salt. We should not embrace so quickly all the biased testing Nvidia is producing for marketing.
 

Deleted member 17706

Unconfirmed Member
Oh man... things looking bad for AMD.... who could have predicted that?

The 6800 still looks like a great value for anyone who doesn't care about the Nvidia features. Especially if they have an x570 board and Ryzen 5000 processor.
 

Kenpachii

Member
The 6800 still looks like a great value for anyone who doesn't care about the Nvidia features. Especially if they have an x570 board and Ryzen 5000 processor.

Too expensive; the card would have been great if it was $399.
 
Last edited:

Deleted member 17706

Unconfirmed Member
Too expensive; the card would have been great if it was $399.

They should definitely have tried to match the 3070 price tag at least. Either way, they have so little stock that they are selling out everything easily. We'll see if they do a price change once things settle down.
 
They should definitely have tried to match the 3070 price tag at least. Either way, they have so little stock that they are selling out everything easily. We'll see if they do a price change once things settle down.

I think a lot of people are forgetting that the 6800 is the 3070 Ti competitor; it is clearly ahead of the 3070, so it is designed and priced to compete with the 3070 Ti that will probably come in February of next year.

I agree they could have dropped the price on both the 6800 and 6800 XT by $50 to enhance their appeal, but that logic only makes sense in a non-supply-constrained world.

With current market conditions they will easily sell everything they can make for the next 3-6 months, so it makes little sense for them to drop prices. Maybe by April/May they might.
 