
[DF] AMD Radeon 6800 XT/6800 vs Nvidia GeForce RTX 3080/3070 Review - Which Should You Buy?

Ascend

Member
To say the majority won't use RTX or DLSS is fucking ridiculous.
DLSS, maybe. RTX, most people will turn it off in many of their games. Who wants to buy a 144Hz monitor with a $600+ graphics card to play below 100 fps?

Those are killer features that help push Nvidia as the better value. If we're talking about RTX Voice, or the webcam thing, then yeah, most will probably not find a ton of use in those things. However, they're nice extras that AMD doesn't have.

But having good-to-great ray tracing performance and getting FPS essentially for free, with no image loss, via DLSS are both key features of the cards and something AMD is sorely lacking.
DLSS having no image quality loss is not true.
DLSS is indeed a key feature, with the weakness that it must be manually implemented in each game. Technically you can run a game at 80% resolution with sharpening. It won't quite reach the quality of DLSS2.0, but most people wouldn't be able to tell the difference between that and native resolution anyway. And it works in all games, on nVidia cards as well. The main issue is that it's not a toggle like DLSS, but needs some manual input. The effort is minor, but still more than DLSS requires, which is why most people don't bother.
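For a rough sense of the numbers here, a quick back-of-the-envelope comparison of shaded pixel counts in Python (purely illustrative; it assumes DLSS Quality mode renders at roughly two-thirds of the output resolution per axis, which is the commonly cited figure, and it ignores everything besides pixel count):

# Rough pixel-count comparison of native rendering, an 80% resolution scale,
# and DLSS Quality mode. Only counts shaded pixels; real frame times depend
# on far more than this, so treat it as an illustration.
def render_pixels(width, height, scale):
    return int(width * scale) * int(height * scale)

native = (3840, 2160)  # 4K output as an example
for label, scale in [("native", 1.0),
                     ("80% scale + sharpening", 0.8),
                     ("DLSS Quality (~67% per axis)", 2 / 3)]:
    px = render_pixels(*native, scale)
    print(f"{label}: {px:,} px ({px / (native[0] * native[1]):.0%} of native)")

So an 80% scale still shades about 64% of the pixels, while DLSS Quality shades roughly 44%, which is part of why DLSS tends to buy more performance than a simple resolution scale plus sharpening.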

Your love of AMD is kinda scary dude.
I love gaming. nVidia has repeatedly made the gaming scene worse by dangling carrots in front of the donkeys to keep riding on their backs.
AMD has also innovated a bunch of things, but most people dismiss their efforts and only value nVidia's. For one, the only reason we have Vulkan is because of AMD.
Tessellation was actually ATi technology.
AMD was the first to have a unified shading architecture.
AMD was the first and only one to support DX10.1.
AMD was the first and only one to support DX11 FL11_1.
AMD was the first to support DX12 features in their GPUs.
AMD was the first to have rapid packed math on their GPUs.
AMD was the first to have asynchronous compute on their GPUs.

I am not saying nVidia does not innovate. They were first with support for ROVs, RT, VRS etc. But people pretend that only nVidia has innovations and drives hardware forward, while that is simply not the case.
 
Last edited:
Keep in mind Hardware Unboxed has been pretty much against ray tracing from the get-go. They dismiss it, they barely benchmark one or two games with it when reviewing cards, and they have stated in their articles that they are not fans of it. So when watching one of their videos on the subject, keep in mind that's the position they are coming from.
 

mitchman

Gold Member
That's assuming NVIDIA doesn't have a big performance jump up their sleeve.
There's little to hint at that so far with the last two NVidia generations. Brute forcing it will only get you so far before costs skyrocket and prices end up in a range that isn't realistic for many consumers.
 

spyshagg

Should not be allowed to breed
Keep in mind Hardware Unboxed has been pretty much against ray tracing from the get-go. They dismiss it, they barely benchmark one or two games with it when reviewing cards, and they have stated in their articles that they are not fans of it. So when watching one of their videos on the subject, keep in mind that's the position they are coming from.

It's more legitimate than you make it sound. They ran a poll asking what their audience is interested in. The audience answered.

When RT really matters, there will be new generations of cards in the market. This is true even if we are talking about just 1 year from now.
 

Ascend

Member
It's more legitimate than you make it sound. They ran a poll asking what their audience is interested in. The audience answered.
This. Everyone is slamming Hardware Unboxed because they didn't prioritize RT, while over 75% of their audience doesn't prioritize RT, as posted by...;

No.

GUWQnkj.png


I would say it's a vocal minority skewing perception, but the truth is that this has nothing to do with Ray Tracing or DLSS. Nvidia fans shitpost every AMD launch no matter what.

Hardware Unboxed recommended the GTX 970 over the R9 390, which to me was one of the stupidest recommendations back then. They are definitely not AMD fanboys. They are as neutral as can be. They cater to their audience, which obviously doesn't care about RT. Yet here we are again, with people trying to shove RT down everyone's throats.

And people get mad at me when I say the goal posts are always shifted in favor of nVidia, and that RT is only being touted because it's nVidia. If it were an AMD-only feature, nobody would care.
 
Last edited:
It's more legitimate than you make it sound. They ran a poll asking what their audience is interested in. The audience answered.

When RT really matters, there will be new generations of cards in the market. This is true even if we are talking about just 1 year from now.


I didn't say it's legitimate or not, just that this is what I observed when reading their articles. They dismiss this technology. If they personally think it's not worthy and a waste of time, that bias is going to affect what they believe and how they talk about it. The poll they ran had a quarter saying they want ray tracing before rasterization. That's 1 in 4 people. It's significant, considering most people don't even have a ray-tracing-capable card.

The way they talk about ray tracing is similar to any random person on an internet forum who dismisses it: there aren't enough games, I don't see the difference. It didn't strike me as something a professional should do. If there is new tech available and you're reviewing the hardware, I expect you to cover it from a place of neutrality, not dismiss it or ignore it completely because of your personal preference.
 
Last edited:

johntown

Banned
Unless you are really tight on cash or an AMD fanboy I don't see why anyone would really bother to buy one of their cards. They are always behind NVIDIA or just barely keeping up. If it was not for consoles they might not even be in the GPU market.

NVIDIA just has better features, support and cards. The only thing I don't like is their prices and I appreciate AMD being competition to NVIDIA.

Anyone claiming RT and DLSS doesn't matter is a little strange. I admit when RT first came out I didn't have much interest and even skipped all of those cards. I thought it was just some fad like Hairworks etc. RT is here to stay, and probably most new games will have some implementation of it. If you really don't care about it then sure, AMD may be the right card for you. Like DF said, though, if they start baking this stuff in by default, AMD cards are going to be pretty bad.
 

spyshagg

Should not be allowed to breed
I didn't say it's legitimate or not, just that this is what I observed when reading their articles. They dismiss this technology. If they personally think it's not worthy and a waste of time, that bias is going to affect what they believe and how they talk about it. The poll they ran had a quarter saying they want ray tracing before rasterization. That's 1 in 4 people. It's significant, considering most people don't even have a ray-tracing-capable card.

The way they talk about ray tracing is similar to any random person on an internet forum who dismisses it: there aren't enough games, I don't see the difference. It didn't strike me as something a professional should do. If there is new tech available and you're reviewing the hardware, I expect you to cover it from a place of neutrality, not dismiss it or ignore it completely because of your personal preference.

They don't dismiss it. Their audience dismisses it.

Naturally, they budget their limited review efforts accordingly. It's a big difference.
 
Last edited:

spyshagg

Should not be allowed to breed
Unless you are really tight on cash or an AMD fanboy I don't see why anyone would really bother to buy one of their cards. They are always behind NVIDIA or just barely keeping up. If it was not for consoles they might not even be in the GPU market.

NVIDIA just has better features, support and cards. The only thing I don't like is their prices and I appreciate AMD being competition to NVIDIA.

Anyone claiming RT and DLSS doesn't matter is a little strange. I admit when RT first came out I didn't have much interest and even skipped all of those cards. I thought it was just some fad like Hairworks etc. RT is here to stay, and probably most new games will have some implementation of it. If you really don't care about it then sure, AMD may be the right card for you. Like DF said, though, if they start baking this stuff in by default, AMD cards are going to be pretty bad.

AMD (ATI) is not the weakling you make it out to be.


Before these past 7 years, AMD was constantly putting Nvidia against the wall. Constantly. Nvidia has permanent battle scars from AMD (ATI). Nvidia has PTSD from AMD. The 2000~2013 fight between them was amazing to see. AMD was innovating everywhere non-stop since 2005. Nvidia could not catch a break.


Nvidia belittles everyone. But never AMD.
If Nvidia even hears a rumor that AMD has a good product incoming, they react instantly and ferociously. They have backup plans B, C, and D to fight AMD, because if they let it rise to the top again, these past 7 years will disappear instantly.


7 years seems like a lot, but it's less than the amount of time Intel was behind AMD in the CPU space (1999~2007). Yet Intel rebounded for the next 10 years. Intel did not have a plan B, C, or D. They let the dice roll against AMD, and now they are fucked for another 5 years minimum.


AMD is not what you think.
 
Last edited:

RedVIper

Banned
DLSS, maybe. RTX, most people will turn it off in many of their games. Who wants to buy a 144Hz monitor with a $600+ graphics card to play below 100 fps?

I think there are way more people worried about 4K than about playing games at 144Hz.
DLSS having no image quality loss is not true.

Sure, it has some artifacts.

DLSS is indeed a key feature, with the weakness that it must be manually implemented in each game.

If your game has TAA it can have DLSS; maybe you're thinking of DLSS 1?


Technically you can run a game at 80% resolution with sharpening. It won't quite reach the quality of DLSS2.0, but most people wouldn't be able to tell the difference between that and native resolution anyway.

Hard disagree. Unless you're doing it at 4K, there's going to be a lot more aliasing and less fine detail. CAS sharpening really isn't something new; I could have applied a sharpening filter to games way before AMD came up with CAS.
 
They don't dismiss it. Their audience dismisses it.

Naturally, they budget their limited review efforts accordingly. It's a big difference.


He does dismiss it. Like I've already said, he says outright the same uninformed things you would see on a forum such as this during brand wars: there aren't enough games, I don't see much difference, the framerate impact is too great, etc. He outright says it again, just now.





More than that, he misrepresents the performance of the card he is reviewing. He calls the 3060 Ti 20% faster than a 5700 XT and says it could be better. A quick look at another website shows it averaging 30% over the 5700 XT across 17 games. That's a pretty big discrepancy. At this point, after numerous missteps by this website, I'm starting to have second thoughts about them. It seems like it's an ongoing thing.




6GEfvYW.png
 
Last edited:

Ascend

Member
I think there are way more people worried about 4K than about playing games at 144Hz.
Both are niche. But so is paying over $600 for a graphics card. And, surprise, so is RT. No developer is going to focus so heavily on RT that they single out the rest of the market. RT will remain an optional add-on for now, which will inevitably limit its usage.

Sure, it has some artifacts.
Well. It's the first time I see someone arguing for RT casually admit that DLSS is not perfect. So kudos to you.

If your game has TAA it can have DLSS; maybe you're thinking of DLSS 1?
The goal is to allow DLSS2.0 to work with every game. As far as I know, it is currently still manually implemented. Correct me if I'm wrong.
Assuming that is the case... If we are going to count future features or implementations, then we have to account for AMD's future scaling implementation as well to keep things balanced. And that goes into futile territory because we lack information.

Hard disagree. Unless you're doing it at 4K, there's going to be a lot more aliasing and less fine detail. CAS sharpening really isn't something new; I could have applied a sharpening filter to games way before AMD came up with CAS.
It's definitely not something new. But it is a compromise if you want a similar experience that is less taxing. There are many ways to do upscaling. Consoles have done it for ages. Are they as good as DLSS 2.0? Not likely. But everyone has to determine for themselves how much visual quality they are willing to sacrifice for how much performance.
DLSS-like technology is worth it in my book. RT isn't. At least not yet.
 

RedVIper

Banned
Both are niche. But so is paying over $600 for a graphics card. And, surprise, so is RT. No developer is going to focus so heavily on RT that they single out the rest of the market. RT will remain an optional add-on for now, which will inevitably limit its usage.

With the new consoles supporting 4K, I think it's going to get a lot more common. On the other hand, even now, most casuals don't care about fps.


Well. It's the first time I see someone arguing for RT casually admit that DLSS is not perfect. So kudos to you.

I mean, no technology is perfect, but I still think the benefits far outweigh the drawbacks.

The goal is to allow DLSS2.0 to work with every game. As far as I know, it is currently still manually implemented. Correct me if I'm wrong.

Well, it still needs to be implemented in every engine, yes, but DLSS 2.0 already uses a generalized model, so Nvidia doesn't need to train the AI for every game anymore.
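For context on what "implemented in every engine" actually involves: a DLSS-style temporal upscaler needs the engine to hand it a few per-frame inputs. A rough, hypothetical sketch of that interface in Python (the field names are illustrative, not NVIDIA's actual SDK):

from dataclasses import dataclass

# Hypothetical sketch of the per-frame data a temporal upscaler typically
# needs from the engine. The point is that motion vectors and sub-pixel
# jitter must already exist, which is why engines with TAA are much easier
# to integrate.
@dataclass
class UpscalerFrameInputs:
    color_low_res: object        # frame rendered at reduced resolution
    depth_low_res: object        # depth buffer at the same resolution
    motion_vectors: object       # per-pixel screen-space motion, as used by TAA
    jitter_offset: tuple         # sub-pixel camera jitter applied this frame
    output_resolution: tuple     # resolution to reconstruct to
    reset_history: bool = False  # discard accumulated history, e.g. on a camera cut

That wiring is the per-engine work; the trained model itself is shared across games, as you said.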

Assuming that is the case... If we are going to count future features or implementations, then we have to account for AMD's future scaling implementation as well to keep things balanced. And that goes into futile territory because we lack information.

I hope AMD comes up with something, but I'm not sure it will be as good as DLSS, simply because they don't have dedicated hardware for it. It's not like AMD can't do image upscaling; it's that without that hardware, upscaling a 1080p image could end up slower than just rendering native 4K.


It's definitely not something new. But it is a compromise if you want a similar experience that is less taxing.

Sure, and I think it works fine at high resolutions, but CAS completely falls apart at lower resolutions, often making aliasing/shimmering worse.

There are many ways to do upscaling. Consoles have done it for ages. Are they as good as DLSS 2.0? Not likely. But everyone has to determine for themselves how much visual quality they are willing to sacrifice for how much performance.

Sure, upscaling has existed for ages; hell, AI upscaling has existed for quite a while. DLSS is impressive because it's done in real time and is faster than rendering a native image.

DLSS-like technology is worth it in my book. RT isn't. At least not yet.

I agree. The only game that somewhat impresses me with RT is Minecraft, of all things, probably because that game doesn't have a lighting engine in the first place.

In the end I think RT will be more beneficial for developers than consumers.
 

waylo

Banned
DLSS having no image quality loss is not true.
There are literally games where DLSS looks better than native resolution. Obviously not all, but even in games where it doesn't look as good, it's a negligible difference. Certainly not a big enough decrease in visual fidelity to turn it off and lose a shitload of frames.
 

Ascend

Member
There are literally games where DLSS looks better than native resolution. Obviously not all, but even in games where it doesn't look as good, it's a negligible difference. Certainly not a big enough decrease in visual fidelity to turn it off and lose a shitload of frames.
Well... Let's just say that a lot more is gained than lost with DLSS. You gain performance, you gain a smoother image, and you lose some details. And in many cases, what is lost is not detectable to most people while gaming. So yeah. For most people it's worth the trade-off, if they like a smooth look.

Enjoy;
s2comparison3jhk4b.png
 

waylo

Banned
Please stop saying this. It makes all PC gamers look bad.
Stop saying things that are true, because it disproves the narrative AMD fanboys try to spread that DLSS is shit and does nothing but smudge the IQ?
Comparisons between the two techniques are fascinating but the big takeaway is that DLSS image reconstruction from 1440p looks cleaner overall than native resolution rendering. We've reached the point where upscaling is quantifiably cleaner and more detailed - which sounds absurd, but there is an explanation. DLSS replaces temporal anti-aliasing, where all flavours of TAA exhibit softening or ghosting artefacts that Nvidia's AI upscaling has somehow managed to mostly eliminate.

Nah, I'll keep speaking facts.
 

johntown

Banned
AMD (ATI) is not the weakling you make it out to be.


Before these past 7 years, AMD was constantly putting Nvidia against the wall. Constantly. Nvidia has permanent battle scars from AMD (ATI). Nvidia has PTSD from AMD. The 2000~2013 fight between them was amazing to see. AMD was innovating everywhere non-stop since 2005. Nvidia could not catch a break.


Nvidia belittles everyone. But never AMD.
If Nvidia even hears a rumor that AMD has a good product incoming, they react instantly and ferociously. They have backup plans B, C, and D to fight AMD, because if they let it rise to the top again, these past 7 years will disappear instantly.


7 years seems like a lot, but it's less than the amount of time Intel was behind AMD in the CPU space (1999~2007). Yet Intel rebounded for the next 10 years. Intel did not have a plan B, C, or D. They let the dice roll against AMD, and now they are fucked for another 5 years minimum.


AMD is not what you think.
giphy.gif



Sure! Atari used to be the king of gaming. I don't dispute anything you said about the past. Right now my loyalty is with NVIDIA because they make, IMO, a superior product. If AMD comes out with something that is better, I have no issue switching to team Red. I do PC gaming because I want the best in my gaming.

AMD has great CPUs, I give them that for sure. When I build my next PC (next year) it will probably have an AMD CPU for multiple reasons.

Until they give me a reason to get away from NVIDIA I have no reason to switch. If they announce some AI deep learning system that blows DLSS away I would be highly tempted to switch. Right now though, with everything I am seeing, AMD in the PC GPU scene is not looking good.
 
giphy.gif



Sure! Atari used to be the king of gaming. I don't dispute anything you said about the past. Right now my loyalty is with NVIDIA because they make, IMO, a superior product. If AMD comes out with something that is better, I have no issue switching to team Red. I do PC gaming because I want the best in my gaming.

AMD has great CPUs, I give them that for sure. When I build my next PC (next year) it will probably have an AMD CPU for multiple reasons.

Until they give me a reason to get away from NVIDIA I have no reason to switch. If they announce some AI deep learning system that blows DLSS away I would be highly tempted to switch. Right now though, with everything I am seeing, AMD in the PC GPU scene is not looking good.
I wish the die-hard AMD fanboys would realize that! Most normal gamers don't have some weird allegiance and sacrificial blood oath with companies. That is fucking cringey as hell. If AMD actually had the better hardware this time around, I'd switch in a heartbeat! But since they don't, I'll stick with Nvidia this time around, again. I already switched from Intel on CPUs, and I'm probably gonna snag their new 5xxx series of CPUs early next year. For now, the countdown to Cyberpunk with ray tracing is the only thing I'm excited about.
 

regawdless

Banned
Your assertion that no game uses even 9GBs of VRAM is false. Watch Dogs: Legion uses 9GBs of VRAM at 4K with maximum settings. When I had an RTX 3080, the VRAM usage - according to MSI Afterburner - was over 9GBs; on both my RTX 3090s, the VRAM usage is a little more.

Note: Disregard the MSI Afterburner overlay that lists the graphics card as the RTX 3090 in the first screen shot; I hadn't bothered to rename the listing when I reinstalled my RTX 3080 to run the benchmark.

3QlFvZc.png


EnZBL9L.png

Just imagine how much VRAM GTA6 will need when it's finally released. 10GBs is going to be a problem for 4K gaming with maximum settings very soon.

Just checked, and it really spikes above 9 GB. Weird thing though: at 5K it's not increasing. So I just couldn't get it to surpass the 10 GB of my 3080 to see what happens.
This is with the texture pack, of course.
 
Just checked, and it really spikes above 9 GB. Weird thing though: at 5K it's not increasing. So I just couldn't get it to surpass the 10 GB of my 3080 to see what happens.
This is with the texture pack, of course.
I wonder if it's a weird allocation limiter on there, as 5K is a good bit more pixels than 4K and would definitely go past 9 GB of RAM, if that was its actual usage.
 
I am pretty sure most PC gamers feel this way. It is not like a console, where if you switch from one to the other you could lose all your friends, games, etc.
Precisely. All your applications, games, etc. simply just work. That's why it's so weird to have fanboys on either side, especially AMD fanboys. There's not really a big difference in price, but more so a big difference in feature set.
 

regawdless

Banned
I wonder if it's a weird allocation limiter on there, as 5K is a good bit more pixels than 4K and would definitely go past 9 GB of RAM, if that was its actual usage.

Resolution doesn't make that much of a difference, it seems. Even 1440p maxed gets above 8.5 GB and comes close to 9 GB.

I want to go over 10 GB and see what happens, but I have no idea in which game I could do that.
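One possible explanation for why resolution barely moves the number: the buffers that actually scale with resolution are small compared to the texture pool. A rough back-of-the-envelope in Python (the render target count and bytes per pixel are made-up, illustrative values, not Legion's actual frame layout):

# Rough estimate of the VRAM taken by resolution-dependent buffers only.
# Assumes ten full-screen render targets at 8 bytes per pixel; real engines
# differ, so the absolute numbers are only illustrative.
def render_target_mb(width, height, bytes_per_pixel=8, count=10):
    return width * height * bytes_per_pixel * count / (1024 ** 2)

for label, (w, h) in [("1440p", (2560, 1440)), ("4K", (3840, 2160)), ("5K", (5120, 2880))]:
    print(f"{label}: ~{render_target_mb(w, h):,.0f} MB of resolution-dependent buffers")

In this sketch, going from 4K to 5K adds only about half a gigabyte, while the HD texture pack occupies the same multi-gigabyte pool regardless of resolution, which would fit what you're both seeing.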
 

Elias

Member
Nvidia's and AMD's GPUs are both great. Pick whichever best meets your needs when they are available. I don't see why people would even argue about this. RDNA 2 wins when it comes to pure rasterization; Nvidia wins when it comes to ray tracing. Either way, ray tracing is a massive performance hit and won't be mainstream until it isn't such a performance hog.
 

Mister Wolf

Member
There are literally games where DLSS looks better than native resolution. Obviously not all, but even games where it doesn't look as good, it's a negligible difference. Certainly not a big enough decrease in visual fidelity to turn it off and lose a shitload of frames.

It doesn't even need to look better than native, just better than whatever scaled internal resolution or custom resolution produces the same framerate. I don't know if the detractors overlook this or if they're just being disingenuous.
 
It doesn't even need to look better than native, just better than whatever scaled internal resolution or custom resolution produces the same framerate. I don't know if the detractors overlook this or if they're just being disingenuous.
"My favorite gpu company doesn't have an answer to that, so DLSS sucks!"



Also



"But when they do have an answer, DLSS will be acceptable"
 
Last edited:

BluRayHiDef

Banned
Just checked and it really spikes at above 9gb. Weird thing though, at 5k it's not increasing. So I just couldn't get it to surpass the 10gb of my 3080 to see what happens.
This is with the texture pack of course.

Very informative findings. I guess that you need to use an RTX 2080 Ti or an RTX 3090 or any RX 6000 Series card to determine whether or not the VRAM usage would increase above 10GBs at resolutions above 4K.

However, as you can see below, Watch Dogs: Legion is running on an RTX 3070 (8GBs of GDDR6) in 4K without any issues. So, perhaps the game isn't actually using as much VRAM as it appears to be using on your RTX 3080.



EDIT:

On second thought, at 2:44 the player switches to native 4K (i.e. no DLSS) and the frame rate at which the RTX 3070 renders the game plummets to 5 FPS. So, perhaps 8GBs is not enough, at least 8GB of GDDR6; 8GB of GDDR6X may perform better.
 
Last edited:

regawdless

Banned
Very informative findings. I guess that you need to use an RTX 2080 Ti or an RTX 3090 or any RX 6000 Series card to determine whether or not the VRAM usage would increase above 10GBs at resolutions above 4K.

However, as you can see below, Watch Dogs: Legion is running on an RTX 3070 (8GBs of GDDR6) in 4K without any issues. So, perhaps the game isn't actually using as much VRAM as it appears to be using on your RTX 3080.



Huh, now that's interesting. VRAM usage is pretty confusing in some ways.
 

BluRayHiDef

Banned
Huh, now that's interesting. VRAM usage is pretty confusing in some ways.

On second thought, at 2:44 the player switches to native 4K (i.e. no DLSS) and the frame rate at which the RTX 3070 renders the game plummets to 5 FPS. So, perhaps 8GBs is not enough, at least 8GB of GDDR6; 8GB of GDDR6X may perform better.
 
DLSS, maybe. RTX, most people will turn it off in many of their games. Who wants to buy a 144Hz monitor with a $600+ graphics card to play below 100 fps?

DLSS having no image quality loss is not true.
DLSS is indeed a key feature, with the weakness that it must be manually implemented in each game. Technically you can run a game at 80% resolution with sharpening. It won't quite reach the quality of DLSS2.0, but most people wouldn't be able to tell the difference between that and native resolution anyway. And it works in all games, on nVidia cards as well. The main issue is that it's not a toggle like DLSS, but needs some manual input. The effort is minor, but still more than DLSS requires, which is why most people don't bother.


I love gaming. nVidia has repeatedly made the gaming scene worse by dangling carrots in front of the donkeys to keep riding on their backs.
AMD has also innovated a bunch of things, but most people dismiss their efforts and only value nVidia's. For one, the only reason we have Vulkan is because of AMD.
Tessellation was actually ATi technology.
AMD was the first to have a unified shading architecture.
AMD was the first and only one to support DX10.1.
AMD was the first and only one to support DX11 FL11_1.
AMD was the first to support DX12 features in their GPUs.
AMD was the first to have rapid packed math on their GPUs.
AMD was the first to have asynchronous compute on their GPUs.

I am not saying nVidia does not innovate. They were first with support for ROVs, RT, VRS etc. But people pretend that only nVidia has innovations and drives hardware forward, while that is simply not the case.

It won't quite reach the quality of DLSS2.0, but most people wouldn't be able to tell the difference between that and native resolution anyway

Not even close to true on a 4K display of any decent size. Not to mention the fact that 1080p or thereabouts looks like garbage on most 1440p monitors.

Literally just tested this on CoD, and the DLSS implementation, yielding a ~15-20% performance increase or more (quality mode), looks a lot better than the corresponding drop in resolution. It looks better in motion too, because of far less shimmering.
 

supernova8

Banned
Sure, ray tracing is not the be-all and end-all, but when you're spending upwards of $600 on a GPU, you most likely do want the shiny features.

The RX 6800 cards essentially win in a couple of games and then shit the bed on RT performance, lack any DLSS-like feature at all, and on top of that you also get slower video encoding and weaker performance in other productivity tasks.

Really, the only reason you'd say they are good choices is if you really, really needed 16 GB of VRAM, but from the tests it looks like you don't. Even the 3090, which has 24 GB, wasn't that much better than the 3080; surely that was a hint of things to come.
 
Last edited:

rnlval

Member
DLSS, maybe. RTX, most people will turn it off in many of their games. Who wants to buy a 144Hz monitor with a $600+ graphics card to play below 100 fps?

DLSS having no image quality loss is not true.
DLSS is indeed a key feature, with the weakness that it must be manually implemented in each game. Technically you can run a game at 80% resolution with sharpening. It won't quite reach the quality of DLSS2.0, but most people wouldn't be able to tell the difference between that and native resolution anyway. And it works in all games, on nVidia cards as well. The main issue is that it's not a toggle like DLSS, but needs some manual input. The effort is minor, but still more than DLSS requires, which is why most people don't bother.


I love gaming. nVidia has repeatedly made the gaming scene worse by dangling carrots in front of the donkeys to keep riding on their backs.
AMD has also innovated a bunch of things, but most people dismiss their efforts and only value nVidia's. For one, the only reason we have Vulkan is because of AMD.
Tessellation was actually ATi technology.
AMD was the first to have a unified shading architecture.
AMD was the first and only one to support DX10.1.
AMD was the first and only one to support DX11 FL11_1.
AMD was the first to support DX12 features in their GPUs.
AMD was the first to have rapid packed math on their GPUs.
AMD was the first to have asynchronous compute on their GPUs.

I am not saying nVidia does not innovate. They were first with support for ROVs, RT, VRS etc. But people pretend that only nVidia has innovations and drives hardware forward, while that is simply not the case.
For rapid packed math with mixed precision (a.k.a. dot math) on NVIDIA GPUs, read https://developer.nvidia.com/blog/mixed-precision-programming-cuda-8/

DP4A_DP2A-624x223.png


Tesla_Pascal_Numerical_Throughput-624x93.png


GP102 "Pascal" is also used for GTX 1080 Ti and Titan XP.

GP102 has rapid pack math for integer formats with mixed precision, but GP102 is missing FP16 and INT4
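For anyone unfamiliar with what those dot instructions actually do: DP4A multiplies four packed 8-bit values from each operand pairwise and adds the products into a 32-bit accumulator. A scalar Python sketch of the arithmetic (illustrative only, not the real intrinsic):

# Scalar sketch of a DP4A-style packed dot product:
# four int8 x int8 products summed into a 32-bit accumulator.
def dp4a(a_bytes, b_bytes, acc):
    assert len(a_bytes) == len(b_bytes) == 4
    return acc + sum(a * b for a, b in zip(a_bytes, b_bytes))

# e.g. dp4a([1, 2, 3, 4], [5, 6, 7, 8], 100) -> 100 + 5 + 12 + 21 + 32 = 170

The other dot variants follow the same pattern with different operand widths, accumulating into a wider result.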


AMD Vega 56/64's rapid packed math has 16-bit operands with 16-bit results. LOL, stupid AMD can't do tensor math correctly!

-------------
For AMD's Radeon VII or Instinct MI50 (aka GFX906, Vega 20) and some NAVI GPUs

HUobUPa.png




For NAVI 21 / GFX1030 aka RX 6800, RX 6800 XT and RX 6900 XT

t6KSLH5.jpg



From https://github.com/llvm/llvm-project/commit/9ee272f13d88f090817235ef4f91e56bb2a153d6


Big Navi/GFX1030/Navi 21/Sienna Cichlid/RX 6800 series does support those Instructions

case GK_GFX1030:
  Features["ci-insts"] = true;
  Features["dot1-insts"] = true;
  Features["dot2-insts"] = true;
  Features["dot5-insts"] = true;
  Features["dot6-insts"] = true;
  Features["dl-insts"] = true;
  Features["flat-address-space"] = true;
  Features["16-bit-insts"] = true;
  Features["dpp"] = true;


From https://github.com/llvm-mirror/llvm/blob/master/lib/Target/AMDGPU/AMDGPU.td
def FeatureDot1Insts : SubtargetFeature<"dot1-insts",
  "HasDot1Insts",
  "true",
  "Has v_dot4_i32_i8 and v_dot8_i32_i4 instructions"
>;

def FeatureDot2Insts : SubtargetFeature<"dot2-insts",
  "HasDot2Insts",
  "true",
  "Has v_dot2_f32_f16, v_dot2_i32_i16, v_dot2_u32_u16, v_dot4_u32_u8, v_dot8_u32_u4 instructions"
>;

def FeatureDot3Insts : SubtargetFeature<"dot3-insts",
  "HasDot3Insts",
  "true",
  "Has v_dot8c_i32_i4 instruction"
>;

def FeatureDot4Insts : SubtargetFeature<"dot4-insts",
  "HasDot4Insts",
  "true",
  "Has v_dot2c_i32_i16 instruction"
>;

def FeatureDot5Insts : SubtargetFeature<"dot5-insts",
  "HasDot5Insts",
  "true",
  "Has v_dot2c_f32_f16 instruction"
>;

def FeatureDot6Insts : SubtargetFeature<"dot6-insts",
  "HasDot6Insts",
  "true",
  "Has v_dot4c_i32_i8 instruction"
>;
 
Last edited:

spyshagg

Should not be allowed to breed
giphy.gif



Sure! Atari used to be the king of gaming. I don't dispute anything you said about the past. Right now my loyalty is with NVIDIA because they make, IMO, a superior product. If AMD comes out with something that is better, I have no issue switching to team Red. I do PC gaming because I want the best in my gaming.

AMD has great CPUs, I give them that for sure. When I build my next PC (next year) it will probably have an AMD CPU for multiple reasons.

Until they give me a reason to get away from NVIDIA I have no reason to switch. If they announce some AI deep learning system that blows DLSS away I would be highly tempted to switch. Right now though, with everything I am seeing, AMD in the PC GPU scene is not looking good.

You said it yourself in this post: AMD has great CPUs. You could not have said that 3 years ago.
I keep an eye on the past, present, and future. You can only look at today and claim this will remain true forever. Remember what I said: AMD is not the weakling you think it is. It never was.

It's not your fault that you are ignorant of computing history (as recent as 3 years ago), but you sure are guilty of assuming the present will be the future. It's the fastest way in the west to eat crow.
 
Last edited:
The RX 6900 XT seems to be 13% faster than the RX 6800 XT according to its performance in an OpenCL benchmark.


I wouldn't necessarily take that as an indicator of actual gaming performance; it is a synthetic, non-gaming workload after all. But it's nice to see a notable improvement, which should show up in productivity applications/workloads.

To the best of my knowledge, the base and boost clocks of the 6900 XT are the same as the 6800 XT's. There is a higher maximum clock limit in the BIOS, though, and the chips are binned better, so even at stock it should probably boost a little higher. It should have a lot more OC headroom than the 6800 XT, but this will only really manifest on AIB models.

Strangely, AMD seems to be power limiting the 6000 series pretty heavily, only allowing a max power limit of +15% even on AIB models of the 6800 XT (although it should be noted that AIB models have a higher base power draw, so the +15% sits on top of that higher base; this is also why AIB 6800 XTs clock higher than reference and scale real-world performance better with those higher clocks). I'm assuming this will likely be the same for the 6900 XT, at least for reference models. Maybe they will allow some partner models of the 6900 XT to go beyond the +15% power limit, but right now it seems unlikely.
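To put rough numbers on that +15% cap (the 300 W figure is AMD's reference board power spec for the 6800 XT; the 320 W AIB base is just a hypothetical example, since exact AIB figures vary by model):

# Illustrative math for the +15% power limit slider.
# 300 W is the reference RX 6800 XT total board power; 320 W is a made-up
# example of an AIB model with a raised base power.
for label, base_watts in [("reference", 300), ("AIB (hypothetical)", 320)]:
    print(f"{label}: {base_watts} W base -> {base_watts * 1.15:.0f} W at the +15% cap")

So even with the same percentage cap, AIB boards end up with a meaningfully higher absolute power ceiling.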

If the 6900 XT doesn't allow more than a +15% power limit even for AIB models, then it does seem strange that AMD would artificially hold back the performance of their entire 6000 series lineup, which by all accounts is a little power starved and would perform even better with a higher power limit. It seems like they have PTSD from previous GPU generations, where they had much higher power draw than Nvidia and got slammed by the press and gamers for being inefficient power hogs.

It sounds like this time they were determined to stay below Nvidia in power draw no matter the circumstances, which seems a little short-sighted to me, but I guess they really wanted that perf-per-watt/power-draw/efficiency crown. Either that, or there is some kind of engineering reason why they would limit power draw so much; maybe it's a safety measure, or maybe past a certain point errors occur too often and start hurting performance. It will be interesting to see if they explain this in some way in their white paper once it is released.

Regarding overall 6900 XT performance: with the same clocks and power limits, only 8 more CUs, and better binned chips, I don't see it blowing the 6800 XT out of the water. Maybe 5-10% more performance at stock? Although, with the chips being the best binned and the higher clock limit in the BIOS, I expect AIB models to be able to sustain 2800 MHz+ on a manual OC if the AIB 6800 XTs are anything to go by, which should give a very nice performance boost. The only question is whether AMD will allow AIB models to raise the power limit past +15%; if they do, that could allow for even more performance and better scaling with clocks.
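As a sanity check on the 5-10% estimate, the CU counts alone put a theoretical ceiling on the uplift at equal clocks (80 and 72 CUs are the published specs; real games scale worse than raw CU count):

# Theoretical compute uplift from CU count alone, assuming identical clocks.
# Real-world gaming gains will typically land below this upper bound.
cu_6900xt, cu_6800xt = 80, 72
print(f"CU-count uplift: {cu_6900xt / cu_6800xt - 1:.1%}")  # ~11.1%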
 
Last edited:

BluRayHiDef

Banned
I wouldn't necessarily take that as an indicator of actual gaming performance; it is a synthetic, non-gaming workload after all. But it's nice to see a notable improvement, which should show up in productivity applications/workloads.

To the best of my knowledge, the base and boost clocks of the 6900 XT are the same as the 6800 XT's. There is a higher maximum clock limit in the BIOS, though, and the chips are binned better, so even at stock it should probably boost a little higher. It should have a lot more OC headroom than the 6800 XT, but this will only really manifest on AIB models.

Strangely, AMD seems to be power limiting the 6000 series pretty heavily, only allowing a max power limit of +15% even on AIB models of the 6800 XT (although it should be noted that AIB models have a higher base power draw, so the +15% sits on top of that higher base; this is also why AIB 6800 XTs clock higher than reference and scale real-world performance better with those higher clocks). I'm assuming this will likely be the same for the 6900 XT, at least for reference models. Maybe they will allow some partner models of the 6900 XT to go beyond the +15% power limit, but right now it seems unlikely.

If the 6900 XT doesn't allow more than a +15% power limit even for AIB models, then it does seem strange that AMD would artificially hold back the performance of their entire 6000 series lineup, which by all accounts is a little power starved and would perform even better with a higher power limit. It seems like they have PTSD from previous GPU generations, where they had much higher power draw than Nvidia and got slammed by the press and gamers for being inefficient power hogs.

It sounds like this time they were determined to stay below Nvidia in power draw no matter the circumstances, which seems a little short-sighted to me, but I guess they really wanted that perf-per-watt/power-draw/efficiency crown. Either that, or there is some kind of engineering reason why they would limit power draw so much; maybe it's a safety measure, or maybe past a certain point errors occur too often and start hurting performance. It will be interesting to see if they explain this in some way in their white paper once it is released.

Regarding overall 6900 XT performance: with the same clocks and power limits, only 8 more CUs, and better binned chips, I don't see it blowing the 6800 XT out of the water. Maybe 5-10% more performance at stock? Although, with the chips being the best binned and the higher clock limit in the BIOS, I expect AIB models to be able to sustain 2800 MHz+ on a manual OC if the AIB 6800 XTs are anything to go by, which should give a very nice performance boost. The only question is whether AMD will allow AIB models to raise the power limit past +15%; if they do, that could allow for even more performance and better scaling with clocks.

I expect the 6900 XT to trade blows - in rasterization - with the RTX 3090 at 1440p and to be a bit slower at 4K, similar to how the RX 6800 XT performs relative to the RTX 3080. In regard to ray tracing, I expect it to be blown away by the RTX 3090.
 

johntown

Banned
You said it yourself in this post: AMD has great CPUs. You could not have said that 3 years ago.
I keep an eye on the past, present, and future. You can only look at today and claim this will remain true forever. Remember what I said: AMD is not the weakling you think it is. It never was.

It's not your fault that you are ignorant of computing history (as recent as 3 years ago), but you sure are guilty of assuming the present will be the future. It's the fastest way in the west to eat crow.
mmmmm-crow.jpg
 

Hairsplash

Member
Very easy: the AMD. Nvidia is making too much money... (yes, the Nvidia card is probably better... BUT the AMD is good enough, and someone HAS TO STOP THE MADNESS THAT IS NVIDIA...)
 