
AMD Radeon RX6800/RX6800XT Reviews/Benchmarks Thread |OT|

Kenpachii

Member
€979 is what my colleague paid for his EVGA 3080 FTW3 on Alternate.de the other day. It's sad that that can actually be considered a fair price for that card at the moment, as it's never in stock anywhere :/ (and btw my Zotac performs a bit better, and like 8°C cooler than the FTW...) I snagged a ZOTAC 3080 OC the other day, and to be honest, the RTX games I've tried to this point (Quake 2, Fortnite, Deliver us the Moon, Control, COD Cold War) haven't really wowed me that much. And COD with RTX runs worse than it does on a €400 PS5.

I'm actually considering selling on the 3080 and trying to grab a PS5 and Demon's Souls...

It's pretty insane how much value those consoles give at this point. PC is cooked with those ridiculous prices.
 

Rikkori

Member
The sad thing is that we'll see card refreshes before the prices on these actually stabilise. As someone said earlier, yeah, feeling very lucky to have snagged a ref 6800 - and to think I was having doubts lol. :messenger_astonished: I'll probably end up keeping it 4 years; my damn CPU needs an upgrade too, so that's 2-3 years out waiting for DDR5.

I do wonder, though: how far can prices be pushed? Clearly Nvidia hasn't stopped at $1299 -> $1499, and AMD has put up prices too, even if less so, but when will you be able to get better price/performance WITHOUT having to jump over that $500 bracket-fence? Clearly the $200 market has been fucked for about 4 years now, with at least another year to go before there's any hope of improvement.

It's a real shame.
 

MadYarpen

Member
There is also another thing that surprises me: the RX 5700 XT is starting to disappear :(

I need a 1440p card, preferably from AMD due to my FreeSync monitor, and I don't know what to do now. Buy any 5700 XT that is left and hold off on a serious upgrade until RDNA 3? Hope to get a 6800 somehow?

It would be easier if I knew whether the Cyberpunk requirements are for 60 fps.
 
AMD had Nvidia on a plate; it is amazing to see this so fucked up. It's not even that bots are buying everything, there just are no cards.

I am as far from being an AMD fanboy as possible; it is just incredible when you look at this from a business perspective.

I kinda understand AMD from a business perspective, as the margins on these are not as good as on some of the competing 7nm products.

But really, this would have been such an easy marketing win against Nvidia if they had just dedicated a bit more wafer capacity to the GPU division.
 

llien

Member
Goddamn bitcoin doubled in price, I hope it didn't turn GPU mining into something viable again.

Hey llien, remember our talk in the RTX thread? So how did your predictions turn out, with the RTX launch being a panic paper launch now that AMD cards are available in even worse quantities? :messenger_tears_of_joy:

"Predictions" is too strong a word; I pointed out what was very obvious, and it obviously still stands.
I know it's painful for you and others who "predicted" that AMD would barely scratch the 3070 to read this, but let me remind you nevertheless:

1) Pricing on Ampere cards is an anomaly caused by RDNA2 lineup (GA104 cannot compete with "big navi")
2) Nonsensical memory configurations are caused by NV dropping GPUs by a tier, hold on, again GA104 cannot compete with "big navi"

At the moment even Zen3 is nowhere to be found and the upgrade itch in folks is high; I'd ring the alarm bells if RDNA2 GPUs are not selling en masse by mid-January.
 
Last edited:

BluRayHiDef

Banned
Goddamn bitcoin doubled in price, I hope it didn't turn GPU mining into something viable again.



"Predictions" is too strong a word; I pointed out what was very obvious, and it obviously still stands.
I know it's painful for you and others who "predicted" that AMD would barely scratch the 3070 to read this, but let me remind you nevertheless:

1) Pricing on Ampere cards is an anomaly caused by RDNA2 lineup (GA104 cannot compete with "big navi")
2) Nonsensical memory configurations are caused by NV dropping GPUs by a tier, hold on, again GA104 cannot compete with "big navi"

At the moment even Zen3 is nowhere to be found and the upgrade itch in folks is high; I'd ring the alarm bells if RDNA2 GPUs are not selling en masse by mid-January.

What do you mean that GA104 cannot compete with Big Navi? It trades blows with it; they're about even in rasterization and GA104 performs better in ray tracing.
 

llien

Member
What do you mean that GA104 cannot compete with Big Navi? It trades blows with it; they're about even in rasterization...
What the heck is wrong with you guys?
Even the 6800 stomps all over the 3070; the 6800XT wipes the floor with it.


[benchmark chart]


 
Last edited:

Ascend

Member
Goddamn bitcoin doubled in price, I hope it didn't turn GPU mining into something viable again.
No one is going to mine Bitcoin with their GPU; it simply is not profitable. ASICs have taken over Bitcoin. Other cryptocurrencies, however, are another story. Not that I think there will be a huge boom right now; the market is still too uncertain.
 

BluRayHiDef

Banned
What the heck is wrong with you guys?
Even the 6800 stomps all over the 3070; the 6800XT wipes the floor with it.


[benchmark chart]



Oops, I mistook GA104 for GA102. However, it's not fair to compare the RTX 3070 to the RX 6800 or the RX 6800XT; the 3070 is priced below them both.
 

llien

Member
Sapphire's GPU can OC to 2.7 GHz on air, the ASUS WC one even beyond, motherfucker... :messenger_alien:

Hefty pricing on AIBs, but I think it's better that way than the money going to scalpers.

Oops, I mistook GA104 for GA102. However, it's not fair to compare the RTX 3070 to the RX 6800 or the RX 6800XT; the 3070 is priced below them both.
The actual street price of the 3070 is 700 Euro (Mindfactory sold 600+ of those last week at that price).
This is more than most Europeans (VAT differs between countries) paid for a 6800XT (!!!).

MSRP vs MSRP, it would be $580 vs $500 (maybe in 6 months?) for a faster GPU with double the VRAM. I think it is still fair.
 
Last edited:

ZywyPL

Banned
There's a good section in the recent Hardware Unboxed Q&A about what's more relevant to focus on for future-proofing: ray tracing vs more VRAM.

The golden rule of building a PC: buy for the now, and enjoy. Because really, no one knows where the industry is heading, except for the very few companies at the top of the food chain that create said industry. I find this whole ongoing VRAM discussion to be so flawed, driven by blind assumptions that A) everyone suddenly plays in 4K, B) people are too dumb to tweak the texture quality down from Ultra to Very High/High if they really run out of VRAM, and C) that for whatever reason, games will suddenly start using higher-than-4K textures, which PCs have been using for quite some time now without ever running out of VRAM.

And who knows, maybe in a few years DX12U will kick in and textures will indeed be streamed from the SSD, and VRAM will never be an issue ever again? Maybe upscaling algorithms will become more and more commonly utilized and also reduce VRAM usage? Nobody knows; it's all in the hands of developers, which, then again, is determined by market adoption: how fast people abandon PS4/XB1 and jump onto the newer consoles, and how fast people upgrade to Big Navi/Turing/Ampere cards, and no one has a crystal ball to tell whether such a transition will occur within the next year or five. So why bother with something you can neither predict nor control?

Because the bottom line is, if anyone is really THAT concerned about the future and the upcoming years, I think it would be better for them to simply skip the current gen of GPUs altogether and wait for the next ones to arrive, because I feel like some people are stuck choosing the lesser evil and will regret their choice regardless.
 

llien

Member
Because the bottom line is, if anyone is really THAT concerned about the future and the upcoming years, I think it would be better for them to simply skip the current gen of GPUs

Because why buy a similarly priced (and more power-efficient) GPU with more VRAM right now.
Makes sense.

You are missing the point: consoles already have 10GB of VRAM for GPU stuff ON TOP OF using the SSD in crazy ways.
The trick with "textures do not fit" has ALREADY BEEN used by DF to mislead on 2080 vs 3080 performance in their stinky "early preview".
 

BluRayHiDef

Banned
2600 MHz core clock speeds... I never thought I'd see the day...

According to Bitwit, the higher clockspeeds don't scale linearly with real-world game performance.

Go to 16:58 and listen to his commentary on the benchmark results; at 2660 MHz, the RX 6800 XT Red Devil ran faster in synthetic benchmarks but not in games at 1440p. However, at 4K, it ran 5% to 8% faster.

 
Last edited:

Ascend

Member
The golden rule of building a PC: buy for the now, and enjoy. Because really, no one knows where the industry is heading, except for the very few companies at the top of the food chain that create said industry. I find this whole ongoing VRAM discussion to be so flawed, driven by blind assumptions that A) everyone suddenly plays in 4K, B) people are too dumb to tweak the texture quality down from Ultra to Very High/High if they really run out of VRAM, and C) that for whatever reason, games will suddenly start using higher-than-4K textures, which PCs have been using for quite some time now without ever running out of VRAM.

And who knows, maybe in a few years DX12U will kick in and textures will indeed be streamed from the SSD, and VRAM will never be an issue ever again? Maybe upscaling algorithms will become more and more commonly utilized and also reduce VRAM usage? Nobody knows; it's all in the hands of developers, which, then again, is determined by market adoption: how fast people abandon PS4/XB1 and jump onto the newer consoles, and how fast people upgrade to Big Navi/Turing/Ampere cards, and no one has a crystal ball to tell whether such a transition will occur within the next year or five. So why bother with something you can neither predict nor control?

Because the bottom line is, if anyone is really THAT concerned about the future and the upcoming years, I think it would be better for them to simply skip the current gen of GPUs altogether and wait for the next ones to arrive, because I feel like some people are stuck choosing the lesser evil and will regret their choice regardless.
RT also increases VRAM usage by a very significant amount. Just sayin'.
 
Last edited:
According to Bitwit, the higher clockspeeds don't scale linearly with real-world game performance.

Go to 16:58 and listen to his commentary on the benchmark results; at 2660 MHz, the RX 6800 XT Red Devil ran faster in synthetic benchmarks but not in games at 1440p. However, at 4K, it ran 5% to 8% faster.



That is likely because at 1440p there is some CPU limiting going on; nowhere near as much as at 1080p, but it is certainly present.

At 4K you remove that limitation somewhat and let the GPU go full hog. Also, in a general sense, scaling up cores (CUs/SMs) or clock speed never results in a directly linear increase. The scaling is not perfect and never has been on any GPU; however, the simple fact is that increasing clocks does increase performance in a general sense.

Looking at the titles he tests, there are only a few, and AoS specifically is a mostly CPU-intensive/bound game at almost any resolution.

I would like to see someone do a proper benchmark run with these AIB models showing stock and manual OC across a suite of 15-20 games. It would be great to get an idea of the average uplift and compare them to the reference model. As with anything, the games chosen to benchmark will have a huge impact on the final results.
 

ZywyPL

Banned
Because why buy a similarly priced (and more power-efficient) GPU with more VRAM right now.
Makes sense.

What does "more power efficient" even mean? Sorry, but if I had to choose today between two equally priced cards that go toe-to-toe in benchmarks, except one can actually do RT at playable framerates and the other one can't but has tons of unused VRAM instead, then the choice is really simple. Because how much "future proofing" is it when you can't even fully enjoy today's games, like the upcoming CP2077 for example? Maybe spending $1,000 to play at 1080p seems like a good investment to you, but I don't share that mindset, and I'm afraid neither do many gamers around the globe.


You are missing the point: consoles already have 10GB of VRAM for GPU stuff ON TOP OF using the SSD in crazy ways.
The trick with "textures do not fit" has ALREADY BEEN used by DF to mislead on 2080 vs 3080 performance in their stinky "early preview".

Consoles have their 13GB of RAM shared between CPU and GPU, somewhere between 50:50 and 30:70, so if consoles can do native 4K + 4K textures + RT with that amount of VRAM, so will PCs. And bear in mind this is the baseline for the next 8 years or so; if we're talking about future-proofing concerns, games won't be made with more VRAM in mind until the PS6/next Xbox arrive.
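For scale, taking the 13GB figure and the split range above at face value, the GPU share works out to roughly:

```latex
0.5 \times 13\,\text{GB} = 6.5\,\text{GB}
\quad\text{up to}\quad
0.7 \times 13\,\text{GB} \approx 9.1\,\text{GB}
```

of the shared pool left for GPU data, depending on where a given game lands in that range.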
 

Dr.D00p

Member
Consoles have their 13GB of RAM shared between CPU and GPU, somewhere between 50:50 and 30:70, so if consoles can do native 4K + 4K textures + RT with that amount of VRAM, so will PCs.

That's not how it works. If console games come with 4K textures, then the PC ports will start using 8K textures and higher-sample-count ray tracing.

PCs will always be where the next big thing starts... it's how the PC gaming scene continues to thrive, despite the ever-increasing costs and shit-show paper launches like we keep seeing.
 

llien

Member
What does "more power efficient" even mean?
Means it consumes less electricity and, hence, generates less heat to do the same work.

There is more to it than just some watt figures, as a power-hungry card might force people to buy new PSUs.

RT in playable framerates
3070/2080Ti levels of perf (which is what the 6800XT gets in green-crap-sponsored games like Control) is now "unplayable framerates", mkay, thanks.
But then we have:

[RT benchmark charts]


Consoles have their 13GB of RAM shared between CPU and GPU, somewhere between 50:50 and 30:70
Code doesn't eat much.
Microsoft has a 10 + 6 split, with the 10GB being faster than the remaining 6, as if they're hinting at something... :messenger_beaming:
I don't want to continue the "we do not need more VRAM" denialism, sorry, let's agree to disagree here, enjoy your 8/10GB....
 
Means it consumes less electricity and, hence, generates less heat to do the same work.

There is more to it than just some watt figures, as a power-hungry card might force people to buy new PSUs.


3070/2080Ti levels of perf (which is what the 6800XT gets in green-crap-sponsored games like Control) is now "unplayable framerates", mkay, thanks.
But then we have:

[RT benchmark charts]


Code doesn't eat much.
Microsoft has a 10 + 6 split, with the 10GB being faster than the remaining 6, as if they're hinting at something... :messenger_beaming:
I don't want to continue the "we do not need more VRAM" denialism, sorry, let's agree to disagree here, enjoy your 8/10GB....

Wow, that's impressive RT for the RDNA2 cards. It's very curious that there are few actual game ray-tracing benches out there from the big sites. Some have used synthetic benches to show Nvidia with a big lead, or used just two or so Nvidia-RT-favouring titles like Minecraft in its RT mode, both very, very misleading.

Here's another useful bench by ExtremeTech:

[Metro Exodus Extreme RT Ultra benchmark, RX 6800]


RT on 6800XT seems pretty robust actually.
 
Last edited:

regawdless

Banned
Means it consumes less electricity and, hence, generates less heat to do the same work.

There is more to it than just some watt figures, as a power-hungry card might force people to buy new PSUs.


3070/2080Ti levels of perf (which is what the 6800XT gets in green-crap-sponsored games like Control) is now "unplayable framerates", mkay, thanks.
But then we have:

[RT benchmark charts]


Code doesn't eat much.
Microsoft has a 10 + 6 split, with the 10GB being faster than the remaining 6, as if they're hinting at something... :messenger_beaming:
I don't want to continue the "we do not need more VRAM" denialism, sorry, let's agree to disagree here, enjoy your 8/10GB....

I'm quoting you for the third fucking time now on this.

STOP SPREADING FALSE INFORMATION BECAUSE OF YOUR BIAS.

The raytracing in WD Legion is bugged and uses extremely low quality settings, way below the minimum PC settings.
 

Ascend

Member
Since many people missed this post regarding RT apparently... I'm quoting it;

An update on the whole RT situation, based on a post by Dictator at Beyond3D, with some of my own thoughts mixed in.

The main difference between nVidia's RT and AMD's RT is that nVidia's RT cores cover both BVH and ray traversal, while AMD's RAs cover only BVH, with ray traversal done on the CUs.

AMD's implementation has the advantage that you could write your own traversal code to be more efficient and optimize on a game per game basis. The drawback is that the DXR API is apparently a black box, which prevents said code from being written by developers, a limit the consoles do not have. AMD does have the ability to change the traversal code in their drivers, meaning, working with developers becomes increasingly important.

nVidia's implementation has the advantage that the traversal code is accelerated, meaning, whatever you throw at it, it's bound to perform relatively well. It comes at the cost of programmability, which for them doesn't matter much for now, because as mentioned before, DXR is a black box. And it saves them from having to keep writing drivers per game.

That doesn't mean that nVidia's is necessarily better, but in the short term, it is bound to be better, because little optimization is required. Apparently developers are liking Sony's traversal code on the PS5 as is, so maybe something similar will end up in the AMD drivers down the line, if Sony is willing to share it with AMD.

I hinted at this a while back: on the AMD cards, the number of CUs dedicated to RT is variable. There is an optimal balancing point somewhere, where the CUs are divided between RT and rasterization, and that point changes on a per-game and per-setting basis.
For example, if you only have RT shadows, maybe 20 CUs dedicated to the traversal are enough, and the rest are for the rasterization portion, and both would output around 60 fps, thus they balance out. But if you have many RT effects, having a bunch of CUs for rasterization and only a few for RT makes little sense, because the RT portion will output only 15 fps and the rasterization portion will do 75 fps, and the unbalanced distribution will leave all those rasterization CUs idling after they are done, because they have to wait for the RT to finish anyway.

AMD's approach makes sense, because it has to cater to both the consoles and the PC. nVidia's approach also makes sense, because for them, only the PC matters.
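To put some arithmetic behind the hypothetical numbers above (they are purely illustrative, not measurements): if the RT portion of a frame would run at 15 fps and the rasterization portion at 75 fps, and the rasterization CUs have to wait for the RT work to finish, the frame is bounded by the slower part:

```latex
t_{\text{frame}} \approx \max\!\left(\tfrac{1}{15},\ \tfrac{1}{75}\right)\,\text{s} \approx 66.7\,\text{ms},
\qquad
t_{\text{idle}} \approx 66.7 - 13.3 = 53.4\,\text{ms} \approx 80\%\ \text{of the frame}
```

which is why an unbalanced CU split wastes most of the rasterization hardware in heavy-RT scenes.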
 

llien

Member
Ascend
What does "cover BVH" mean?
It is just a way to structure data.
Exactly what is "covered" if traversal is not?
 
Last edited:

Ascend

Member
Ascend
What does "cover BVH" mean?
It is just a way to structure data.
Exactly what is "covered" if traversal is not?
It means it's hardware accelerated. Traversal does not have dedicated hardware acceleration on AMD hardware and is done on the CU, hence the programmability part. BVH is directly hardware accelerated.
nVidia does both.
 
Last edited:

MadAnon

Member
Sapphire's GPU can OC to 2.7 GHz on air, the ASUS WC one even beyond, motherfucker... :messenger_alien:

Hefty pricing on AIBs, but I think it's better that way than the money going to scalpers.


The actual street price of the 3070 is 700 Euro (Mindfactory sold 600+ of those last week at that price).
This is more than most Europeans (VAT differs between countries) paid for a 6800XT (!!!).

MSRP vs MSRP, it would be $580 vs $500 (maybe in 6 months?) for a faster GPU with double the VRAM. I think it is still fair.

I repeat: where are those below-700-EUR 6800XTs that most Europeans are buying?! Stop spreading misinformation, you uninformed buffoon. Seriously! 6800XTs in Europe go for 800+ EUR. In my country they are listed at 900+.
 

llien

Member
It means it's hardware accelerated. Traversal does not have dedicated hardware acceleration on AMD hardware and is done on the CU, hence the programmability part. BVH is directly hardware accelerated.
nVidia does both.

BVH is just a data structure.
Data cannot be "hardware accelerated".
What kind of operation on it is "hardware accelerated"?
 

Ascend

Member
BVH is just a data structure.
Data cannot be "hardware accelerated".
What kind of operation on it is "hardware accelerated"?
BVH is not just a data structure; it is THE technique used to accelerate ray tracing.

"The simplest method to accelerate ray tracing can't necessarily be considered as an acceleration structure, however this is certainly the simplest way to cut down render times. Each one of the grid from the teapot has potentially more than a hundred triangles: with 8 divisions, the grid contains 128 triangles, which requires a similar number of intersection test for each ray cast into the scene. What we can do instead, is ray trace a box containing all the vertices of the grid. Such box is called a bounding volume: it is the tightest possible volume (a box in that case but it could also be a sphere) surrounding the grid. "

"this technique still suffers from the fact that the rendering time is proportional to the number of objects in the scene. To improve the performance of this method a step further, Kay and Kajiya suggest to use a hierarchy of volumes.
For each such node we compute a bounding volume. Then if a ray misses the bounding volume at a given node in the hierarchy, we can reject that node's entire subtree from further consideration. Using a hierarchy causes the cost of computing an image to behave logarithmically in the number of objects in the scene. "

"Grouping the bounding volumes we have described in the previous chapter into a hierarchy of bounding volumes is called a Bounding Volume Hierarchy or BVH. It generally provides (compared to other possible acceleration structures) very good results."


In other words, BVH is the grouping of object data in such a way as to reduce the number of intersection tests each ray requires, i.e. reducing ray traversal work. It requires compute power to do that grouping.

Enjoy reading;
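To make the prose above concrete, here is a minimal, self-contained C++ sketch of the two ideas being argued about: a bounding box with a ray-intersection test, and a traversal that rejects a whole subtree when the ray misses its bounds. It is a textbook illustration only, not a claim about how RDNA2 or RTX hardware actually implements either step.

```cpp
#include <algorithm>
#include <vector>

// Axis-aligned bounding box: the "bounding volume" from the quoted text.
struct AABB {
    float lo[3], hi[3];

    // Standard slab test: does the ray (origin o, inverse direction invD)
    // hit the box anywhere in [0, tMax]?
    bool hit(const float o[3], const float invD[3], float tMax) const {
        float t0 = 0.0f, t1 = tMax;
        for (int a = 0; a < 3; ++a) {
            float tNear = (lo[a] - o[a]) * invD[a];
            float tFar  = (hi[a] - o[a]) * invD[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;   // slabs don't overlap -> miss
        }
        return true;
    }
};

// One node of the hierarchy: an internal node with two children,
// or a leaf holding a range of triangle indices.
struct BVHNode {
    AABB bounds;
    int  left = -1, right = -1;   // child node indices, -1 for a leaf
    int  firstTri = 0, triCount = 0;
};

// Recursive traversal: if the ray misses a node's box, its entire subtree is
// skipped -- that pruning is where the logarithmic cost quoted above comes from.
// hitLeaf stands in for the actual ray/triangle tests.
bool traverse(const std::vector<BVHNode>& nodes, int nodeIdx,
              const float o[3], const float invD[3], float tMax,
              bool (*hitLeaf)(int firstTri, int triCount)) {
    const BVHNode& n = nodes[nodeIdx];
    if (!n.bounds.hit(o, invD, tMax)) return false;   // prune whole subtree
    if (n.left < 0)                                   // leaf: test its triangles
        return hitLeaf(n.firstTri, n.triCount);
    bool hitL = traverse(nodes, n.left,  o, invD, tMax, hitLeaf);
    bool hitR = traverse(nodes, n.right, o, invD, tMax, hitLeaf);
    return hitL || hitR;
}
```

Building that node array (or refitting its bounds when objects move) is the "structuring" part of the argument; walking it per ray is the traversal part.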
 
Last edited:

Ascend

Member
Dude...

Anyhow, let's go along that route: what operation do you think RDNA2 hardware supports on BVH?
You're acting as if that data is already there, but it's not. That data needs to be constructed. So BVH is not a data structure that is already there. It is the actual structuring of object data in preparation for ray traversal.

What do you mean "what operation do you think RDNA2 hardware supports on BVH"? BVH is an operation.
Every frame, the bounding volume needs to be calculated, because all objects can shift. The further away the object, the larger the "box" of the object can be. This is not a preset, but requires compute power.
Then all those boxes of all those objects need to be put in a hierarchy, meaning that if one is behind the other, it is discarded from the ray calculation to increase efficiency. This is also a calculation and is not a preset.
Then, ray traversal comes into play.
 
Last edited:

CuNi

Member
Goddamn bitcoin doubled in price, I hope it didn't turn GPU mining into something viable again.



"Predictions" is too strong a word; I pointed out what was very obvious, and it obviously still stands.
I know it's painful for you and others who "predicted" that AMD would barely scratch the 3070 to read this, but let me remind you nevertheless:

1) Pricing on Ampere cards is an anomaly caused by RDNA2 lineup (GA104 cannot compete with "big navi")
2) Nonsensical memory configurations are caused by NV dropping GPUs by a tier, hold on, again GA104 cannot compete with "big navi"

At the moment even Zen3 is nowhere to be found and the upgrade itch in folks is high; I'd ring the alarm bells if RDNA2 GPUs are not selling en masse by mid-January.

It obviously does not stand. The 3080 is equal to the 6800XT, and in RT it's even way above it.
VRAM is not a "weird" config; it's good enough for what the card does. GPU tech is just slowly moving into 4K. All cards and consoles advertise 4K, and look what it did: we barely have any real next-gen-looking games; instead we only got a resolution bump which eats away nearly all the extra GPU power we gained. RDNA3 is rumored to have yet another 50% perf-per-watt uplift, and I bet we can expect similar from Nvidia, so if you're planning on future-proofing, you should skip this gen completely, because even your 6GB of extra VRAM will not translate to much better performance, since that 6800XT with 16GB will have the compute power of the then-available entry cards.

I have no issue admitting that I did not expect AMD, based on previous track records, to end up this competitive. But, you see, this is a good thing. They are back in the game, which means we all win, as there is more pressure on Nvidia moving forward.
 

llien

Member
You're acting as if that data is already there, but it's not. That data needs to be constructed. So BVH is not a data structure that is already there. It is the actual structuring of object data in preparation for ray traversal.
That's an interesting take, but I frankly doubt that updating BVHs is what is done by the RDNA2 RT cores, or done at all (for most objects), as it is too expensive to do:

[chart on BVH build cost]



It obviously does not stand. The 3080 is equal to the 6800XT, and in RT it's even way above it.
The 3080 is GA102... It is faster than the 2080Ti yet has less VRAM.
It was obviously planned as a 20GB card at a higher price, but GA104 is too slow to be sold as a 3080.
 

mr.dilya

Banned
It's kinda hilarious watching these GPU reviews. They're pretty much telling their subscribers: hey, look at this brand new card AMD sent me for free; too bad you'll never be able to get one yourself, even if you paid double. LMAO.
 

ZywyPL

Banned
Means it consumes less electricity and, hence, generates less heat to do the same work.

WOW, like this is the reason people buy high-end PC parts... Well, fortunately, if you're so concerned about saving on electricity, as Don Mattrick would've said: there's a product for you, it's called a laptop.



Of all the 14 bars shown here, only 3 are above 60; I think we have a different understanding of what "playable" means... But that's where NV has DLSS to back up the performance, whereas with AMD you have to either settle for those 30-40 FPS or just turn RT completely off, so that's not even choosing the lesser evil, you're just screwed either way. And that wouldn't even be an issue at all if the cards didn't cost $500/700/1000; it's the same situation as with the Turing GPUs back in 2018-2019, where yeah, the tech is there, but it tanks the performance so much you have to turn it off, and that's not what people paid so much money for.

I don't want to continue the "we do not need more VRAM" denialism

We don't. There are a lot of concerns floating around the web (strangely, mostly from AMD folks, though), but there hasn't been even a single actual real-life example recorded so far, because even the most demanding games out there don't use more than 6-8GB, and that's at 4K; in 1080p/1440p it goes down by 10-40% even, depending on the game. And even at 4K, again, thanks to DLSS it goes down by quite a margin. So I don't know, call me back when people really start experiencing stutters due to insufficient VRAM, not just in your nightmares.
 

llien

Member
Ascend

I stand corrected, here is an interesting doc on DXR.

DirectX acceleration structures are opaque, with the driver and underlying hardware determining data structure and memory layout. Existing implementations rely on BVHs, but vendors may choose alternate structures. DXR acceleration structures typically get built at runtime on the GPU and contain two levels: a bottom and a top level. Bottom-level acceleration structures (BLAS) contain geometric or procedural primitives. Top-level acceleration structures (TLAS) contain one or more bottom-level structures. This allows geometry instancing by inserting the same BLAS into the TLAS multiple times, each with different transformation matrices. Bottom-level structures are slower to build but deliver fast ray intersection. Top-level structures are fast to build, improving flexibility and reusability of geometry, but overuse can reduce performance. For best performance, bottom-level structures should overlap as little as possible. Instead of rebuilding the BVH in dynamic scenes, acceleration structures can be “refit” if geometry topology remains fixed (only node bounds change). Refits cost an order of magnitude less than rebuilds, but repeated refits usually degrade ray tracing performance over time. To balance tracing and build costs, use an appropriate combination of refits and rebuilds.
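A conceptual sketch of what that two-level layout and the refit-vs-rebuild trade-off look like in code. These are NOT the real D3D12/DXR types or calls, just a mock-up of the structure the quoted doc describes; the rebuild-every-N-frames policy is an arbitrary illustration.

```cpp
#include <vector>

// "BLAS": owns actual geometry. Slow to build, fast to trace against.
struct BottomLevelAS {
    std::vector<float> vertices;     // flattened triangle data
    bool topologyChanged = false;    // true if triangles were added/removed
    void build() { /* full, expensive hierarchy construction */ }
    void refit() { /* cheap pass that only re-expands node bounds */ }
};

// One entry of the top level: the same BLAS can be referenced many times
// with different transforms (the instancing described in the quote).
struct Instance {
    const BottomLevelAS* blas;
    float objectToWorld[12];         // 3x4 transform matrix
};

// "TLAS": fast to build, typically rebuilt every frame over the instance list.
struct TopLevelAS {
    std::vector<Instance> instances;
    void build() { /* cheap rebuild over instances */ }
};

// Refit-vs-rebuild policy from the quoted text: refits are roughly an order of
// magnitude cheaper, but repeated refits degrade trace performance, so force a
// periodic full rebuild (or rebuild whenever topology actually changed).
void updateBLAS(BottomLevelAS& blas, int frameIndex, int rebuildEveryN = 16) {
    if (blas.topologyChanged || frameIndex % rebuildEveryN == 0)
        blas.build();
    else
        blas.refit();
}
```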


...you have to either settle for those 30-40FPS or just turn RT completely off [or downscale and use TAA derivative upscaling sprinkled with NN]
Ok.

WOW, like this is the reason people buy high-end PC parts...
I'm unlikely to buy a GPU that cannot be operated at around 200W, I don't want to argue what "people" buy.

We don't. There are a lot of concerns floating around the web (strangely, mostly from AMD folks, though), but there hasn't been even a single actual real-life example recorded so far

DF's pathetic preview, when they crippled 2080 perf with textures fitting into 10GB but not 8GB.
 
Last edited:

Ascend

Member
VRAM is not a "weird" config; it's good enough for what the card does. GPU tech is just slowly moving into 4K. All cards and consoles advertise 4K, and look what it did: we barely have any real next-gen-looking games; instead we only got a resolution bump which eats away nearly all the extra GPU power we gained. RDNA3 is rumored to have yet another 50% perf-per-watt uplift, and I bet we can expect similar from Nvidia, so if you're planning on future-proofing, you should skip this gen completely, because even your 6GB of extra VRAM will not translate to much better performance, since that 6800XT with 16GB will have the compute power of the then-available entry cards.
I learned my lesson about not going for the graphics card with more RAM. Back when I was less knowledgeable, I bought an HD 6850 with 1GB. That was on the advice that the graphics card would become too slow to make use of 2GB of RAM anyway. A few years passed, and it still had the power to run any game I wanted to play, but some games simply stopped supporting graphics cards with less than 2GB of RAM. That forced me to upgrade earlier than I was planning to, despite the card actually being capable GPU-wise.
Right now I have an R9 Fury. It has the power to run most things for my setup, but the 4GB is a limit yet again. I wouldn't have had to upgrade right now if it had 6GB or 8GB. The argument was the same, with the addition that HBM would supposedly use less RAM than traditional GDDR due to its bandwidth, which was obviously a lie. What can I say? Mistakes make you learn stuff.
For my next purchase, I'm going for double the RAM people are saying is necessary. Right now everyone seems to think 8GB is enough, so to me, 12GB is the bare minimum, and I'd prefer 16GB to be on the safe side. I am not someone who upgrades every two or three years. And looking at the market, with DX12 and Vulkan only now lifting off the ground, anything you buy now will be useful for quite a long time... So to me, the amount of VRAM is paramount. I've been 'burned' twice by it, even though I have now been using my current card for over 4 years.

I have no issue admitting that I did not expect AMD, based on previous track records, to end up this competitive. But, you see, this is a good thing. They are back in the game, which means we all win, as there is more pressure on Nvidia moving forward.
It's not a good thing if they actually did a bait and switch. Hardware Unboxed just confirmed that the prices we see on Newegg for the 6800 series cards are literally the MSRPs for these AIB cards. They are $120 to $150 over the reference MSRP. That is simply scummy. If they don't do better with their other Navi chips, I'm skipping this gen entirely.
 

ZywyPL

Banned

TAA? The thing that blurs even native 4K when applied, let alone lower-than-native resolutions? No thanks. And like I said, not at that price range; for $149-299 you obviously expect sacrifices to be made, but not on close-to-a-grand GPUs. There's literally no excuse; there wasn't any for Turing back in the day, and there isn't any now for Big Navi.


I'm unlikely to buy a GPU that cannot be operated at around 200W, I don't want to argue what "people" buy.

Well, that's just, like, you know, your own issue. But you're not representative of the billions of gamers out there, far from it, sorry to break it to you. You've put some silly limitations on yourself and then went on the internet desperately trying to prove how you're right and everyone else is wrong. Strange approach, I have to say, but whatever floats your boat. But the reality is really simple: people just want to play games, and they want as much performance as possible when they buy new parts; everything else is more or less irrelevant.
 

llien

Member
TAA? The thing
Yep, that thing:

Unfortunately, DLSS 1.0 never quite lived up to its promise. NVIDIA took a very image-centric approach to the process, relying on an extensive training program that involved creating a different neural network for each game at each resolution, training the networks on what a game should look like by feeding them ultra-high resolution, 64x anti-aliased images. In theory, the resulting networks should have been able to recognize how a more detailed world should work, and produce cleaner, sharper images accordingly.

So for their second stab at AI upscaling, NVIDIA is taking a different tack. Instead of relying on individual, per-game neural networks, NVIDIA has built a single generic neural network that they are optimizing the hell out of. And to make up for the lack of information that comes from per-game networks, the company is making up for it by integrating real-time motion vector information from the game itself, a fundamental aspect of temporal anti-aliasing (TAA) and similar techniques. The net result is that DLSS 2.0 behaves a lot more like a temporal upscaling solution, which makes it dumber in some ways, but also smarter in others.
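Stripped down, the mechanism that the quoted paragraph describes is motion-vector reprojection plus a blend against a history buffer. A minimal conceptual sketch of that generic temporal-upscaling idea (the per-pixel weighting here is a plain lerp; in DLSS 2.0 a neural network decides it, which this does not attempt to model):

```cpp
struct Color { float r, g, b; };

// One output pixel of a TAA-style upscaler:
//  - currentLowRes: the freshly rendered, jittered low-resolution sample
//  - historyHighRes: last frame's output, fetched at (uv - motionVector)
//  - confidence: 0 where reprojection failed (disocclusion), 1 where the
//    surface is stable under motion
Color upscalePixel(Color currentLowRes, Color historyHighRes, float confidence) {
    float w = 0.9f * confidence;   // lean on history where it can be trusted
    return { historyHighRes.r * w + currentLowRes.r * (1.0f - w),
             historyHighRes.g * w + currentLowRes.g * (1.0f - w),
             historyHighRes.b * w + currentLowRes.b * (1.0f - w) };
}
```

The trade-offs listed below (cleaner edges on grass and hair, but lost fine detail and smearing on fast motion) are the classic failure modes of exactly this kind of history blend.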



[4K vs DLSS comparison image]


You remember one of the Ampere GPU owners tried to "gotcha" me, asking which of the two pics was true 4K and which was NV's TAA-derived upscaling? Remember how it went?
It improves lines (very visible on grass and hair), but it also has downsides, like losing fine details, struggling with quickly moving targets, and blurring in general.
It is so easy to see that I don't think you've ever honestly tried to evaluate it.

Well, that's just, like, you know, your own issue. But you're not representative of the billions of gamers out there, far from it.
Got to meet someone who is certified to speak for billions of gamers.
I don't say everyone cares about power consumption; I do think a number of gamers (especially Germans) do.
Some gamers obviously don't.
So what.
 
Last edited:

regawdless

Banned
Yep, that thing:

Unfortunately, DLSS 1.0 never quite lived up to its promise. NVIDIA took a very image-centric approach to the process, relying on an extensive training program that involved creating a different neural network for each game at each resolution, training the networks on what a game should look like by feeding them ultra-high resolution, 64x anti-aliased images. In theory, the resulting networks should have been able to recognize how a more detailed world should work, and produce cleaner, sharper images accordingly.

So for their second stab at AI upscaling, NVIDIA is taking a different tack. Instead of relying on individual, per-game neural networks, NVIDIA has built a single generic neural network that they are optimizing the hell out of. And to make up for the lack of information that comes from per-game networks, the company is making up for it by integrating real-time motion vector information from the game itself, a fundamental aspect of temporal anti-aliasing (TAA) and similar techniques. The net result is that DLSS 2.0 behaves a lot more like a temporal upscaling solution, which makes it dumber in some ways, but also smarter in others.



[4K vs DLSS comparison image]


You remember one of the Ampere GPU owners tried to "gotcha" me, asking which of the two pics was true 4K and which was NV's TAA-derived upscaling? Remember how it went?
It improves lines (very visible on grass and hair), but it also has downsides, like losing fine details, struggling with quickly moving targets, and blurring in general.
It is so easy to see that I don't think you've ever honestly tried to evaluate it.


Got to meet someone who is certified to speak for billions of gamers.
I don't say everyone cares about power consumption; I do think a number of gamers (especially Germans) do.
Some gamers obviously don't.
So what.

Curious why German gamers should care more about energy consumption than others. Care to elaborate?
 
Curious why German gamers should care more about energy consumption than others. Care to elaborate?

I'm not German, but if I were to guess, I think it is kind of a geopolitical issue. Simply put, Germany does not have enough power generation of its own to supply and heat the homes of its 80-90 million people.

This requires a reliance on Russian gas pipelines to generate the needed electricity, which in turn raises prices, so in simple terms electricity is very expensive in Germany.

Also, as a general rule, people in major cities in European countries tend to earn a lot less per year than their US counterparts (20-30% less on average depending on the role, etc., sometimes far more in places like San Francisco and generally across the US in high-experience/executive positions), so that makes the extra charge sting even more.

At least that is my assumption. I live in Ireland and electricity is reasonably priced here, I think? At least I don't tend to pay massive amounts, but probably much more than in the US.
 

Patrick S.

Banned
I'm not German, but if I were to guess, I think it is kind of a geopolitical issue. Simply put, Germany does not have enough power generation of its own to supply and heat the homes of its 80-90 million people.

This requires a reliance on Russian gas pipelines to generate the needed electricity, which in turn raises prices, so in simple terms electricity is very expensive in Germany.

Also, as a general rule, people in major cities in European countries tend to earn a lot less per year than their US counterparts (20-30% less on average depending on the role, etc., sometimes far more in places like San Francisco and generally across the US in high-experience/executive positions), so that makes the extra charge sting even more.

At least that is my assumption. I live in Ireland and electricity is reasonably priced here, I think? At least I don't tend to pay massive amounts, but probably much more than in the US.

I'm German, I have a small household of three, and I pay around €90 per month for electricity. AFAIK, Germany is the country with the highest costs for electricity. I think I pay around 30 cents per kilowatt hour.
 
Last edited:

Ascend

Member
I'm German, I have a small household of three, and I pay around €90 per month for electricity. AFAIK, Germany is the country with the highest costs for electricity. I think I pay around 30 cents per kilowatt hour.
Caribbean is worse... We have a below average electricity cost for the Caribbean... The monthly rate in US dollars for me is;

<= 250 kWh /month = $0.30 / kWh
>250 <=350 kWh /month = $0.36 / kWh
> 350 kWh /month = $0.39 / kWh
 
Last edited:

regawdless

Banned
I'm not German, but if I were to guess, I think it is kind of a geopolitical issue. Simply put, Germany does not have enough power generation of its own to supply and heat the homes of its 80-90 million people.

This requires a reliance on Russian gas pipelines to generate the needed electricity, which in turn raises prices, so in simple terms electricity is very expensive in Germany.

Also, as a general rule, people in major cities in European countries tend to earn a lot less per year than their US counterparts (20-30% less on average depending on the role, etc., sometimes far more in places like San Francisco and generally across the US in high-experience/executive positions), so that makes the extra charge sting even more.

At least that is my assumption. I live in Ireland and electricity is reasonably priced here, I think? At least I don't tend to pay massive amounts, but probably much more than in the US.

I was genuinely interested in his reasons. We are talking about people who buy a GPU for 600+ Euro and most likely have PCs with other expensive high-end components, investing a lot of money in their hobby.

Not sure these people are very sensitive to the comparatively low cost of 100W more or less from the GPU, or whether it influences their purchasing decision in any significant way.

I'm sure there are some, but I don't think it's a lot of people.
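For a rough sense of scale (assuming 100W of extra draw, four hours of gaming a day, and the roughly €0.30/kWh German rate mentioned above; the usage figure is just an assumption):

```latex
0.1\,\text{kW} \times 4\,\tfrac{\text{h}}{\text{day}} \times 30\,\text{days} \times 0.30\,\tfrac{\text{€}}{\text{kWh}} \approx 3.60\ \text{€ per month}
```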
 

ZywyPL

Banned
Yep, that thing:

<posts articles about DLSS instead of TAA, nice goalpost shifting there buddy>

If you really want to know, I've been playing at native 4K on a 1080p display for the past few years, so it's really damn hard to satisfy/impress me now in that regard; hard to go back after years of supersampling, you know. And just recently I've switched to a 4K TV, and I find most post-processing AA to be utter shit, with TAA at the "top" of the list; it simply kills the whole idea of playing in 4K to begin with.


Got to meet someone who is certified to speak for billions of gamers.
I don't say everyone cares about power consumption; I do think a number of gamers (especially Germans) do.
Some gamers obviously don't.
So what.

FPS is all that matters, but if you want to bend/deny reality and keep lying to yourself, feel free. It's interesting, though, that you constantly post FPS charts, but (unsurprisingly) only when AMD has an edge.
 

wachie

Member
The availability is a joke, so much for AMD coming to save PC gamers. Somewhere Nvidia is breathing a sigh of relief.
 

psorcerer

Banned
BVH is just a data structure.
Data cannot be "hardware accelerated".
What kind of operation on it is "hardware accelerated"?

The RT blocks do the traversal in HW.
I.e. you set up a structure in memory with shaders that run on specific "events" (hit, miss, etc.),
and then you launch ray probes which just call these shaders automatically.
AMD's solution has the RAs accelerating the intersection lookup, i.e. the generation of these "events" is done with compute.
Which means that all the optimizations that NV does under the hood need to be implemented manually here.
On the other hand, it gives more flexibility. For example, you can stop the ray check early; unfortunately, DXR was designed by NV, so there is no API for early stop and other optimizations yet, even in 1.1.
AFAIK
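A toy, CPU-side caricature of that event model, just to make the control flow concrete. This is not the real DXR shader API; the "early stop" return is there to illustrate the flexibility a compute-driven traversal could expose (e.g. shadow rays that only need any hit, not the closest one):

```cpp
#include <functional>
#include <vector>

struct Ray { float origin[3]; float dir[3]; float tMax; };
struct Hit { int primitiveId; float t; };

struct RayPipeline {
    // "Shaders" registered up front, invoked automatically per event.
    std::function<bool(const Ray&, const Hit&)> onHit;   // return false = stop early
    std::function<void(const Ray&)> onMiss;

    // 'candidates' stands in for whatever the traversal/intersection hardware
    // (or compute code) would produce while walking the acceleration structure.
    void trace(const Ray& ray, const std::vector<Hit>& candidates) const {
        bool hitAnything = false;
        for (const Hit& h : candidates) {
            if (h.t > ray.tMax) continue;      // outside the ray's range
            hitAnything = true;
            if (!onHit(ray, h)) return;        // early termination, e.g. shadow ray
        }
        if (!hitAnything) onMiss(ray);
    }
};
```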
 

Kenpachii

Member
I wouldn't look at Control for the sake of it; the whole game was built around RTX for Nvidia. They even
The availability is a joke, so much for AMD coming to save PC gamers. Somewhere Nvidia is breathing a sigh of relief.

The real battle for them is the next series of cards. If AMD has a DLSS 2.0 alternative and a 50% performance uplift, they will be in a real pickle. So far their 3080 can hold itself perfectly fine performance-wise against a 6800XT.
 