
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

Ascend

Member
Probably because it really doesn't hold a candle to DLSS 2+.
Of course you would know that in advance, considering it isn't even out yet 🤷‍♂️

Take Control, for example: maxed out at 4K native on my 3080 (undervolted, 1920MHz @ 850mV) it runs at 36fps; DLSS Quality gets 64fps, Balanced 75fps, Performance 87fps, and Ultra Performance 117fps. Up to and including Balanced mode it looks indistinguishable from native during gameplay (I mean without having to resort to zoomed-in screenshots). You're essentially doubling the fps for free. Even Performance mode looks OK if you don't scrutinize every little detail, and at that point that's about 2.5 times native performance.
DLSS 2+ is in a league of its own. And the fact that it runs on specialized/dedicated hardware makes it even better compared to whatever solution AMD will come up with.
This statement has zero basis. It's like arguing that HairWorks would always be better on nVidia because AMD's tessellation sucked. AMD came up with a completely different alternative in TressFX that didn't use tessellation at all and worked just as well.
Just because DLSS uses machine learning doesn't mean its final result is guaranteed to be better. Look at what happened with the first version of DLSS, where a simple 80% resolution scale with sharpening was superior to it. That was also running on specialized/dedicated hardware. Didn't help, now did it...?
Sure, DLSS 2.0 is much better than the first. That does not mean there are no alternatives that can achieve a similar result. The whole idea that DLSS 2.0 is untouchable is simply bias and the mindshare of nVidia being untouchable. It's blind fanaticism and nothing more. And it doesn't surprise me who supported this post. Confirmation bias is very prevalent, apparently.
 
I'm ready for the real-world benchmarks this week. Let's see if these cards can compete with the RTX 3090 in the real world! Bring. It. On.
I'm particularly interested in how close they can get to Turing or even Ampere, and in how much the performance drops when it's enabled vs disabled. I'm itching to pull the trigger on a 3080 that's in my cart right now.
 

BluRayHiDef

Banned
I'm particularly interested in how close they can get to Turing or even Ampere, and in how much the performance drops when it's enabled vs disabled. I'm itching to pull the trigger on a 3080 that's in my cart right now.

I personally wouldn't get a card with less VRAM than what AMD is offering, which is 16GB. I returned my RTX 3080 to Micro Center, but when I had it, I noticed that Watch Dogs: Legion consumed all 10GB of its VRAM, and that title is among the very first games of this generation and is therefore indicative of what we should expect going forward in terms of VRAM requirements. Also, Godfall requires 12GB of VRAM for its 4K x 4K textures, and Marvel's Avengers runs at the pace of a slideshow with less than 10GB of VRAM.

Article reporting that Godfall needs more than 10GBs of VRAM for its 4K x 4K textures:


________________________________________________________________________________​

Article reporting that Marvel's Avengers doesn't run well with less than 10GBs of VRAM:

Article said:
Speaking of 4K, we did notice some VRAM limitations when using the game's HD Texture Pack. On Ultra Settings/Ultra Textures, our performance went downhill in the following scene. As you can see, our RTX 2080Ti was used to its fullest and pushed 17fps.

However, when we used Ultra Settings/High Textures, our performance skyrocketed to 42fps. This appears to be a VRAM limitation. As we can see, the game’s Ultra textures used 10.5GB of VRAM, whereas High textures used 8GB of VRAM. Thus, it will be interesting to see whether the NVIDIA RTX3080 will be able to handle this game in 4K with its 10GB VRAM.

 
 

BluRayHiDef

Banned

Does the Epic preset include 4K x 4K textures or do the 4K x 4K textures have to be downloaded separately as a texture pack?
 

Great Hair

Banned
Probably because it really doesn't hold a candle to DLSS 2+.
Take Control, for example: maxed out at 4K native on my 3080 (undervolted, 1920MHz @ 850mV) it runs at 36fps; DLSS Quality gets 64fps, Balanced 75fps, Performance 87fps, and Ultra Performance 117fps. Up to and including Balanced mode it looks indistinguishable from native during gameplay (I mean without having to resort to zoomed-in screenshots). You're essentially doubling the fps for free. Even Performance mode looks OK if you don't scrutinize every little detail, and at that point that's about 2.5 times native performance.
DLSS 2+ is in a league of its own. And the fact that it runs on specialized/dedicated hardware makes it even better compared to whatever solution AMD will come up with.

Care to take some HQ screenshots of each DLSS mode in Control? Thanks.
 

BluRayHiDef

Banned
Care to take some HQ screenshots of each DLSS mode in Control? Thanks.

Check out this thread in which I posted screenshots of Control at native 4K and at 4K via different DLSS modes.

 

Great Hair

Banned
Check out this thread in which I posted screenshots of Control at native 4K and at 4K via different DLSS modes.

4K native seems to deliver the best IQ. I don't see much of a difference tbh, except for a sharper background... do you happen to know what the FPS were at each resolution?

4K native > 1440p native > 1080p to 4K > 1080p native
[Screenshots at 225% zoom: 4K native, 1080p to 4K, 1080p native, 1440p native]
 

Antitype

Member
Of course you would know that in advance, considering it isn't even out yet 🤷‍♂️

Well, Sony's checkerboard rendering is out and so is DLSS 2. So you bet I would know.

This statement has zero basis. It's like arguing that HairWorks would always be better on nVidia because AMD's tessellation sucked. AMD came up with a completely different alternative in TressFX that didn't use tessellation at all and worked just as well.
Just because DLSS uses machine learning doesn't mean its final result is guaranteed to be better. Look at what happened with the first version of DLSS, where a simple 80% resolution scale with sharpening was superior to it. That was also running on specialized/dedicated hardware. Didn't help, now did it...?
Sure, DLSS 2.0 is much better than the first. That does not mean there are no alternatives that can achieve a similar result. The whole idea that DLSS 2.0 is untouchable is simply bias and the mindshare of nVidia being untouchable. It's blind fanaticism and nothing more. And it doesn't surprise me who supported this post. Confirmation bias is very prevalent, apparently.

You misunderstood what I meant. Whatever AMD comes up with for its Super Resolution will take shader resources away, whereas on Nvidia it's handled by separate hardware. AMD will only be able to allocate a few ms of render time to it, or it would become pointless because there would be no performance uplift. They'll never be able to get as good a performance uplift while maintaining great IQ.
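To put rough numbers on that render-time budget (all figures below are hypothetical, picked only to illustrate why a shader-based upscaler has to cost just a few ms):

```python
# Back-of-envelope frame-time budget for a shader-based upscaler.
# All numbers are hypothetical; they only illustrate the trade-off.
native_4k_ms = 1000 / 36   # ~27.8 ms per frame at 36 fps native 4K
internal_ms  = 1000 / 70   # ~14.3 ms per frame at the lower internal res

print(f"headroom vs native 4K: {native_4k_ms - internal_ms:.1f} ms")

# The upscale pass eats into that headroom, since it shares the shader cores:
for upscale_cost_ms in (2, 5, 10):
    total_ms = internal_ms + upscale_cost_ms
    print(f"upscale {upscale_cost_ms:>2} ms -> {1000 / total_ms:.0f} fps "
          f"(vs {1000 / native_4k_ms:.0f} fps native)")
```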

Btw, calls of fanaticism are comical coming from an AMD super fan lol
 
wordslaughter

You were saying?

Told you to just wait and see man.

Yeah, that is pretty amazing. Imagine how far the custom AIB models will be able to OC with their (normally) improved cooling solutions/fan designs.

Hell, out of the box they should come with a solid factory OC, so even without any manual OC there could already be a very solid perf bump.
 

Ascend

Member
Well, Sony's checkerboard rendering is out and so is DLSS 2. So you bet I would know.
If you had actually read it properly, you would have noticed that I said it could and should be combined with other techniques to improve its quality. Comparing it on its own is dishonest and not representative of what can potentially be achieved.

You misunderstood what I meant. Whatever AMD comes up with for its Super Resolution will take shader resources away, whereas on Nvidia it's handled by separate hardware. AMD will only be able to allocate a few ms of render time to it, or it would become pointless because there would be no performance uplift. They'll never be able to get as good a performance uplift while maintaining great IQ.
Yeah... I'm pretty sure I didn't misunderstand, considering the same logic applies to DLSS1 and the 80%res+sharpening, where the additional cores didn't help at all.

Btw, calls of fanaticism are comical coming from an AMD super fan lol
I labeled your behavior (correctly, by the way) and you label me personally out of resentment. Very mature.
 

Ascend

Member
Ok, bye. I have better things to do than waste my time arguing with fanboys.
Then you should have remained quiet from the beginning and found another thread to post in. This is the RX6000 series thread, not the "let's advertise for nVidia"-thread. But I guess your fanaticism was too strong, and now that you're called on it, you run away. But by all means, if it makes you feel better, go ahead. Good riddance.
 
Last edited:

rnlval

Member
Of course you would know that in advance, considering it isn't even out yet 🤷‍♂️


This statement has zero basis. It's like arguing that HairWorks would always be better on nVidia because AMD's tessellation sucked. AMD came up with a completely different alternative in TressFX that didn't use tessellation at all and worked just as well.
Just because DLSS uses machine learning doesn't mean its final result is guaranteed to be better. Look at what happened with the first version of DLSS, where a simple 80% resolution scale with sharpening was superior to it. That was also running on specialized/dedicated hardware. Didn't help, now did it...?
Sure, DLSS 2.0 is much better than the first. That does not mean there are no alternatives that can achieve a similar result. The whole idea that DLSS 2.0 is untouchable is simply bias and the mindshare of nVidia being untouchable. It's blind fanaticism and nothing more. And it doesn't surprise me who supported this post. Confirmation bias is very prevalent, apparently.
TressFX and HairWorks both use a more or less optimized compute-based simulation. TressFX supports master and slave hair strands to get better performance.

On the rendering side, TressFX and HairWorks are significantly different. The vertices per hair strand are configurable in TressFX, and it doesn't use tessellation. This kind of effect doesn't even need tessellation, because 64 vertices per strand are always enough. HairWorks employs tessellation and creates a large number of vertices per strand. This is a serious design flaw, because all the hair strands are rendered into an 8xMSAA render target, and HairWorks also uses a geometry shader to extrude segments into front-facing geometry. This is why HairWorks has very high resource usage.

TressFX uses a vertex shader to extrude segments into front-facing polygons, which is a faster solution with the same result, and it doesn't use MSAA at all. But anti-aliasing is critical for hair rendering, so TressFX uses an analytical approach specially designed for thin geometry, which is much faster and yields higher quality compared to HairWorks.

TressFX also uses per-pixel linked lists as an order-independent transparency solution, which is not a cheap algorithm, but this is why TressFX hair looks like hair and not like spaghetti. HairWorks doesn't use any order-independent transparency solution, which negatively affects the overall quality.
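To illustrate the per-pixel linked list idea described above, here is a minimal CPU-side sketch in Python; it is not TressFX's actual GPU code (the real thing runs in pixel/compute shaders with a head-pointer texture and atomic appends), and the fragment data is made up:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    depth: float        # distance from camera (larger = farther away)
    color: tuple        # (r, g, b)
    alpha: float        # opacity/coverage of this hair fragment

# One fragment list per pixel; on the GPU this is a head-pointer texture plus
# a linked-list buffer that fragments are appended to with atomic counters.
width, height = 4, 4
pixel_lists = [[[] for _ in range(width)] for _ in range(height)]

def store_fragment(x, y, frag):
    """Shading pass: just append the fragment, no sorting needed yet."""
    pixel_lists[y][x].append(frag)

def resolve_pixel(x, y, background=(0.0, 0.0, 0.0)):
    """Resolve pass: sort the per-pixel list back-to-front and alpha-blend."""
    frags = sorted(pixel_lists[y][x], key=lambda f: f.depth, reverse=True)
    color = background
    for f in frags:
        color = tuple(f.alpha * fc + (1.0 - f.alpha) * bc
                      for fc, bc in zip(f.color, color))
    return color

# Two hypothetical overlapping hair fragments on one pixel:
store_fragment(1, 1, Fragment(depth=2.0, color=(0.4, 0.3, 0.2), alpha=0.6))
store_fragment(1, 1, Fragment(depth=1.5, color=(0.6, 0.5, 0.3), alpha=0.5))
print(resolve_pixel(1, 1))
```

Sorting and blending every covered pixel is what keeps overlapping strands looking like hair instead of spaghetti, at the cost of extra memory and bandwidth for the per-pixel lists.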
 

VFXVeteran

Banned
I personally wouldn't get a card with less VRAM than what AMD is offering, which is 16GB. I returned my RTX 3080 to Micro Center, but when I had it, I noticed that Watch Dogs: Legion consumed all 10GB of its VRAM, and that title is among the very first games of this generation and is therefore indicative of what we should expect going forward in terms of VRAM requirements. Also, Godfall requires 12GB of VRAM for its 4K x 4K textures, and Marvel's Avengers runs at the pace of a slideshow with less than 10GB of VRAM.

Article reporting that Godfall needs more than 10GBs of VRAM for its 4K x 4K textures:


________________________________________________________________________________​

Article reporting that Marvel's Avengers doesn't run well with less than 10GBs of VRAM:




I remember telling people about my experience with VRAM requirements, both from my own development work and from testing games on my PC. I declared that 10GB of VRAM was too little for this generation. Instead of hearing me out, I got the "usage doesn't mean allocation" backlash over and over again, used to justify some invisible number that a person THINKS the graphics engine actually needs.

Bottom Line - 16GB or more of VRAM is required for good, consistent FPS and a streamlined pipeline. It'd be nice if people actually listened and asked questions instead of strong-arming my recommendations, only to be proven wrong by some random website. :messenger_beaming:
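For what it's worth, part of why the "usage vs allocation" argument never gets settled is that overlay numbers come from driver-level allocation counters, not from what the engine actually touches each frame. A minimal sketch of reading that driver-side figure, assuming an NVIDIA GPU and the pynvml bindings (the nvidia-ml-py package):

```python
# Reads driver-reported VRAM figures via NVML. Note this is memory that has
# been *allocated* on the device, which is not necessarily the working set a
# game needs every frame - exactly the distinction argued over above.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"VRAM used:  {mem.used  / 1024**2:.0f} MiB")
print(f"VRAM total: {mem.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()
```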
 

Ascend

Member
Not bad at all...

Shadow of the Tomb Raider;
6800XT 82fps
2080Ti 71 fps
3080 96 fps

Metro Exodus;
6800XT 67fps
2080Ti 70 fps
3080 98 fps

I used Techspot as a reference. Assuming they are the same settings, it's not bad at all. It's surprisingly hard to find results of the other games. Two games is not enough of a sample to know how it performs. Ray traced shadows seem to work fine, considering its performance in Shadow of the Tomb Raider.
Metro Exodus uses Global Illumination, which doesn't seem as good for the 6800XT.

For me, this is good enough. Can't wait for the reviews tomorrow
 

FireFly

Member
Unacceptable. Why would they post 1440p numbers, where they look better at the lower resolution, and not tax the system to the fullest with native 4K? Native 4K is the ONLY benchmark that will truly tell what kind of RT power these boards have.
According to AMD, these cards are designed to run ray tracing at 1440p, not 4K. For 4K, you will likely need their Super Resolution technology.
 

Ascend

Member
Unacceptable. Why would they post 1440p numbers, where they look better at the lower resolution, and not tax the system to the fullest with native 4K? Native 4K is the ONLY benchmark that will truly tell what kind of RT power these boards have.
Weren't you one of the ones touting DLSS, which is not native 4k?

RT is taxing enough to know its performance, even for 1080p. No GPU is currently going to be CPU bottlenecked at 1440p with RT.
 
Last edited:

VFXVeteran

Banned
According to AMD, these cards are designed to run ray tracing at 1440p, not 4K. For 4K, you will likely need their Super Resolution technology.
Then they are out of the race already. Run the benchmarks at 4K with the SR tech. Run them at 4K native. That's the only comparison I'm concerned about. Both high-end boards will struggle with RT at that resolution, but that's where the tech is being pushed hardest, and that's where we need to evaluate where the generation is going and what their most powerful card can do with all the features.

Weren't you one of the ones touting DLSS, which is not native 4k?

RT is taxing enough to know its performance, even for 1080p. No GPU is currently going to be CPU bottlenecked at 1440p with RT.

I do tout DLSS for sure, but only because the difference is pretty negligible. BUT I still want to know what the raw GPU can do with no tricks.

CPU may not be bottlenecked at 1440p, but 2160p will cast way more rays and will give you a good picture of how fast the board really is when the ray count is pushed to the limit.
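The resolution point is easy to quantify: with one primary ray per pixel, 2160p casts 2.25x as many primary rays as 1440p (secondary rays for shadows and reflections scale on top of that):

```python
# Primary-ray count scaling with resolution, assuming one ray per pixel.
res_1440p = 2560 * 1440
res_2160p = 3840 * 2160
print(res_2160p / res_1440p)   # 2.25 -> 4K traces 2.25x the primary rays
```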
 

evanft

Member

Oooooh now this is getting interesting.

           Battlefield V   COD: MW   Crysis RM   Metro Exodus   SOTTR
6800 XT    70              95        90          67             82
3080 FE    100.2           126       ???         75             69

Benchmarks sources:
FPS Review
Digital Foundry
WCCF Tech

I couldn't find a Crysis benchmark that used the combo of settings they used (High w/high RTX).
 

FireFly

Member
Then they are out of the race already. Run the benchmarks at 4K with the SR tech. Run them at 4K native. That's the only comparison I'm concerned about. Both high-end boards will struggle with RT at that resolution, but that's where the tech is being pushed hardest, and that's where we need to evaluate where the generation is going and what their most powerful card can do with all the features.
Tomorrow you will be able to see benchmarks at 4K. 4K benchmarks might not be representative of 1440p performance, though, if at this resolution the BVH can't fit fully in the InfinityCache.
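To get a feel for the scale involved here, a rough sketch follows; the node size and triangle counts are assumptions for illustration, not RDNA 2 specifics, and in practice the BVH also competes with framebuffer and texture traffic that grows with resolution:

```python
# Ballpark BVH footprint vs a 128 MB on-die cache. All sizes are assumed.
CACHE_MB = 128
BYTES_PER_NODE = 64                 # assumed size of one BVH node

for tris in (100_000, 1_000_000, 5_000_000):
    nodes = 2 * tris - 1            # binary BVH: leaves plus internal nodes
    size_mb = nodes * BYTES_PER_NODE / 1024**2
    verdict = "fits" if size_mb <= CACHE_MB else "does not fit"
    print(f"{tris:>9,} triangles -> ~{size_mb:,.0f} MB BVH ({verdict} in cache)")
```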
 

VFXVeteran

Banned
Tomorrow you will be able to see benchmarks at 4K. 4K benchmarks might not be representative of 1440p performance, though, if at this resolution the BVH can't fit fully in the InfinityCache.

This will be good. Is this an NDA being lifted starting tomorrow? I was wondering why they haven't run any tests with the cards yet. I thought the cards were already released.
 

Ascend

Member
I do tout DLSS for sure, but only because the difference is pretty negligible. BUT I still want to know what the raw GPU can do with no tricks.

CPU may not be bottlenecked at 1440p, but 2160p will cast way more rays and will give you a good picture of how fast the board really is when the ray count is pushed to the limit.
If you're interested in the RT scaling, you should compare the percentage drop from 1440p to 4k using rasterization with the percentage drop from 1440p to 4K with RT.
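For example, a quick way to run that comparison (the fps values below are placeholders, not real benchmark results):

```python
# Compare the 1440p -> 4K fps drop with rasterization vs with RT enabled.
# If the RT drop is much larger, RT throughput (not raster) is the limiter.
def pct_drop(fps_1440p, fps_4k):
    return 100 * (1 - fps_4k / fps_1440p)

raster_1440p, raster_4k = 140.0, 80.0   # placeholder numbers
rt_1440p, rt_4k = 82.0, 40.0            # placeholder numbers

print(f"raster drop: {pct_drop(raster_1440p, raster_4k):.0f}%")
print(f"RT drop:     {pct_drop(rt_1440p, rt_4k):.0f}%")
```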
 

VFXVeteran

Banned
If you're interested in the RT scaling, you should compare the percentage drop from 1440p to 4k using rasterization with the percentage drop from 1440p to 4K with RT.

Yeap, good idea. I'm sure someone will post that on their website at some point.
 
Not bad at all...

Shadow of the Tomb Raider;
6800XT 82fps
2080Ti 71 fps
3080 96 fps

Metro Exodus;
6800XT 67fps
2080Ti 70 fps
3080 98 fps

I used Techspot as a reference. Assuming they are the same settings, it's not bad at all. It's surprisingly hard to find results of the other games. Two games is not enough of a sample to know how it performs. Ray traced shadows seem to work fine, considering its performance in Shadow of the Tomb Raider.
Metro Exodus uses Global Illumination, which doesn't seem as good for the 6800XT.

For me, this is good enough. Can't wait for the reviews tomorrow
Am I crazy or is this really bad? Isn't the 6800XT supposed to battle the 3080? What am I missing here?
 

Ascend

Member
Am I crazy or is this really bad? Isn't the 6800XT supposed to battle the 3080? What am I missing here?
The 6800XT competes with the 3080 in traditional rasterization performance. AMD didn't share anything regarding RT performance. Leaks suggest RT performance better than Turing's but not matching the Ampere cards.

Tomorrow we will know for sure. I am not expecting much from their RT performance, to be honest. Maybe I will be pleasantly surprised, or maybe we get what I expect.
Considering AMD just stated that their RT targets 1440p performance, I don't think we can expect 60 fps at native 4k with RT on.
 

Rikkori

Member


Lisa Su, why do you do this to us??

 
