
Why you shouldn't underestimate Radeon 6000/Big Navi

Ascend

Member
Introduction
With nVidia announcing their RTX 3000 series, everyone is jumping up and down with excitement. The cards are extremely fast and relatively well-priced, and everyone is already planning which card they will get. Many are saying there is no way AMD can catch up, so I'm going to try to explain why we shouldn't immediately discount AMD, despite the impressive performance and prices nVidia announced.

This is going to be speculation, but hopefully it will be of some value to you. Obviously, we're going to take the 5700XT, PS5, and Xbox Series X as references to extrapolate what we can expect from RDNA2. But before that, we HAVE a leak of an AMD engineering sample beating a 2080 Ti...:

[Image: AdlnLJp.png — leaked benchmark result]
I simply wanted to share that, so we know that the least we can expect is that kind of performance. That leak was discovered about 7 months ago. So at least 7 months ago, an AMD card was already benchmarking 17% faster than a 2080 Ti. To anyone thinking AMD won't be able to keep up: I'll say it again, don't underestimate AMD this time. Things have likely only improved since then, so let's dive in.

The meat of it...
I'm going to assume there is no IPC increase from RDNA1 to RDNA2, which is likely wrong, but we have no reference for what the IPC increase would be. I analyzed the specs and performance of the AMD cards to project what Big Navi will likely be. The 5700XT is a good baseline, simply because we know its performance and characteristics. The PS5 is a good indication of the clocks we can at least expect from RDNA2. The Xbox Series X is a good reference for RDNA2 die size and power consumption.
So, putting it all together we get (see the notes below the table for additional explanations):

| GPU | 5700XT (RDNA1) | PS5 (RDNA2) | Xbox Series X (RDNA2) | Big Navi (est.) | RTX 3090 | RTX 3080 | RTX 2080 Ti | RTX 2080 |
|---|---|---|---|---|---|---|---|---|
| Compute units / SM | 40 | 36 | 52 | 80 | 82 | 68 | 68 | 46 |
| Frequency (GHz) | ~1.8 | ~2.2 | 1.8 | 2.2 (2)*** | 1.7 | 1.7 | 1.5 | 1.7 |
| Die area (mm²) | 251 | ? | ~300* | ~480**** | 627 | 627 | 754 | 545 |
| Power consumption (W) | 225 | ? | ~150** | 300***** | 350 | 320 | 250 | 215 |
| Power consumption per CU (W) | 5.6 | ? | 3.4 | 3.8 | 4.3 | 4.7 | 3.7 | 4.7 |
| Transistor density (M/mm²) | 41 | ? | 43 | 43 | 45 | 45 | 25 | 25 |
| TFLOPS | 9.8 | 10.3 | 12.2 | 22.9 (21.7) | 35.6 | 29.8 | 13.5 | 10.1 |
| AMD performance factor (% vs 5700XT) | 100 | 105 | 124 | 234 (221) | X | X | X | X |
| nVidia performance factor (% vs 2080) | X | X | X | X | 216 | 180 | 117 | 100 |
| Performance normalized to 5700XT (%) | 100 | 105 | 124 | 234 (221) | 248 | 207 | 135 | 115 |
| Performance normalized to 2080 Ti (%) | 74 | 78 | 92 | 173 (164) | 184 | 153 | 100 | 98 |
| Performance normalized to 3080 (%) | 48 | 51 | 60 | 113 (107) | 120 | 100 | 65 | 64 |
| Performance normalized to Big Navi (%) | 42 (45) | 45 (48) | 53 (56) | 100 (100) | 106 (112) | 88 (93) | 58 (61) | 56 (60) |
| Performance per TF (IPC vs RDNA1) | 10.2 | 10.2 | 10.2 | 10.2 (10.2) | 7.0 | 6.9 | 10 | 11.3 |
| Relative die size (% vs 5700XT) | 100 | ? | 120 | 191 | 250 | 250 | 300 | 217 |
| Relative die size (% vs 3080) | 40 | ? | 48 | 77 | 100 | 100 | 120 | 87 |
| Relative die size (% vs Big Navi) | 52 | ? | 63 | 100 | 131 | 131 | 157 | 114 |
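As a sanity check on the TFLOPS row, the figures follow from the usual formula: units × FP32 lanes per unit × 2 ops per clock (FMA) × frequency. A minimal sketch; the boost clocks used here are my own assumptions for the check, not official table values:

```python
# Sanity check of the table's TFLOPS row.
# FP32 TFLOPS = units * lanes * 2 (FMA = two ops) * clock (GHz) / 1000
# RDNA has 64 FP32 lanes per CU; Ampere has 128 per SM (doubled vs Turing),
# which is also why Ampere's per-TFLOP row in the table looks so low.

def tflops(units, lanes, ghz):
    return units * lanes * 2 * ghz / 1000

# (units, lanes, assumed boost clock in GHz)
gpus = {
    "5700XT":   (40, 64, 1.905),
    "PS5":      (36, 64, 2.23),
    "XSX":      (52, 64, 1.825),
    "Big Navi": (80, 64, 2.23),   # speculative: 80 CUs at PS5-like clocks
    "RTX 3080": (68, 128, 1.71),
    "RTX 3090": (82, 128, 1.695),
}

for name, (units, lanes, ghz) in gpus.items():
    print(f"{name:9s} {tflops(units, lanes, ghz):5.1f} TFLOPS")
```

Running this reproduces the table's TFLOPS row to within rounding.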

* XSX GPU portion only, estimated from the die shot. Confirmed with a secondary method: since we know the Zen 2 chiplet size, it was subtracted from the total die size. Not fully accurate, but it should be close enough.

** There was a picture indicating the Xbox Series X could draw up to ~310W. No console is going to run near the maximum capability of its PSU, so count on 250W max. Subtract the power the CPU, SSD, and other I/O components need, and 150W for the GPU is a reasonable estimate. Additionally, assuming AMD delivers their claimed 50% power reduction for the same performance, we can say:
52/40=1.3 (the XSX has 30% more CUs)
1.3*225W = 293W
50% power consumption is 293*0.5=146W
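The three lines above can be run directly as a quick sketch (the 50% figure is AMD's marketed perf-per-watt claim, taken at face value):

```python
# Rough estimate of the XSX GPU power budget, per the note above.
navi10_cus, navi10_power = 40, 225   # 5700XT: 40 CUs, 225 W board power
xsx_cus = 52

scaled = navi10_power * xsx_cus / navi10_cus  # ~293 W at RDNA1 perf/W
rdna2_est = scaled * 0.5                      # apply AMD's claimed 50% power cut
print(f"{scaled:.1f} W -> {rdna2_est:.1f} W")
```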

*** If a console can reach 2.2 GHz, a PC part should have no problem reaching it. It may mean power efficiency decreases, though. I added a secondary 2 GHz value in case higher clocks are not reached.

**** Calculated using double the CU count of the 5700XT at the XSX transistor density. This could easily be 500 mm² (or slightly more), though.

***** Scaling the XSX GPU power by the Big Navi/XSX CU ratio (80/52) nets approximately 230W; accounting for the higher clocks nets around 272W. Some further efficiency loss is expected at those clocks, so 300W seems reasonable. Back-calculating the power per CU gives 3.75W; multiplying by 1.5 (a 50% power increase, i.e. undoing RDNA2's claimed efficiency gain) lands on the 5700XT's per-CU figure. It all balances out.
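The back-calculation in the note above, written out as a sketch (the 300W figure is the note's assumption, not a spec):

```python
# Back-of-envelope check of the Big Navi 300 W estimate from the note above.
xsx_gpu_power, xsx_cus = 150, 52   # estimated XSX GPU power (see note **)
big_navi_cus = 80

base = xsx_gpu_power * big_navi_cus / xsx_cus  # ~231 W at XSX clocks
estimate = 300                                 # headroom for ~2.2 GHz clocks
per_cu = estimate / big_navi_cus               # 3.75 W per CU
rdna1_per_cu = per_cu * 1.5                    # undo the claimed 50% efficiency gain
print(base, per_cu, rdna1_per_cu)              # compare 5700XT: 225/40 = 5.6 W/CU
```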


So... Conclusions:
  • RTX series cards cannot clock much higher than 1.7 GHz, while RDNA2 should have no trouble hitting at least 2 GHz, possibly more than 2.2 GHz. (The conclusion on the Ampere clocks has turned out to be incorrect. However, the performance metrics remain exactly the same. Edited on Sep 18th)
  • Power consumption per SM has not increased from the RTX 2080 to the RTX 3080. This is a good thing, since performance has increased.
  • Even if 2 GHz is the maximum frequency Big Navi can reach, it should still be around 5% faster than the RTX 3080, provided it has 80 CUs at RDNA1 IPC and isn't RAM-bottlenecked.
  • Ampere/the RTX 3000 series has atrocious IPC for gaming compared to both Turing and RDNA. Per TFLOP, Ampere delivers around 70% of either.
  • The Big Navi die is expected to be slightly smaller than the RTX 2080's, while performing like an RTX 3080. The RTX 3080 die is around 15% larger than the RTX 2080's.
  • A smaller die means AMD can possibly charge less for it than nVidia does for the RTX 3080. I say possibly, because TSMC 7nm is more expensive than Samsung 8nm, but yields are better too.
  • Big Navi will be around RTX 3080 performance, likely even slightly higher. It could reach or even surpass RTX 3090 performance if there is an IPC increase, while consuming less power.
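The "around 70% per TFLOP" claim can be reproduced from the table's own numbers; a quick sketch (values copied from the table above):

```python
# Perf-per-TFLOP comparison from the table (performance normalized to the 5700XT).
perf = {"5700XT": 100, "RTX 2080": 115, "RTX 3080": 207}
tf = {"5700XT": 9.8, "RTX 2080": 10.1, "RTX 3080": 29.8}

per_tf = {gpu: perf[gpu] / tf[gpu] for gpu in perf}
ratio = per_tf["RTX 3080"] / per_tf["5700XT"]
print(f"Ampere per-TFLOP vs RDNA1: {ratio:.0%}")  # roughly 70%
```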

Could Big Navi be even better? Yes, IF:
  • Clocks are higher than expected (possible)
  • The IPC increase is significant (I don't expect more than 5%)
  • Power consumption is significantly better than expected (unlikely)
  • It is priced below the RTX 3080 while being equal or better

Could Big Navi be significantly worse? Yes, IF:
  • AMD is unable to reliably produce 80-CU dies and needs to cut them down to 72 CUs or fewer (unlikely)
  • Clocks cannot go higher than XSX/5700XT clocks (unlikely)
  • The die area needed to reach 80 CUs is much larger than expected, at least RTX 2080 size or larger (unlikely)
  • Power consumption is significantly higher than expected (possible)
  • IPC decreases, just like Ampere's (unlikely)
  • It is priced at or above the RTX 3080 while only matching it

Lastly, some final notes:
  • It has been confirmed that Big Navi will release before the consoles. That puts the release window in October of this year, or the first week of November.
  • Despite the short wait, supplies of the RTX 3080 and RTX 3090 especially are apparently low, so if you want one of those, don't wait too long.
  • Don't be surprised if AMD releases Big Navi for $599.
  • For those wondering about AMD's DLSS alternative... FidelityFX exists. And even if that doesn't catch on, DirectML will likely be leveraged to create a DLSS alternative.
  • I would be surprised if AMD doesn't have their own RTX I/O alternative, especially since the Xbox Series X supports the same thing. Not to mention RDNA2 has been confirmed to be DX12_2 compliant.
  • I know that larger dies are less efficient than smaller ones. That is also one of the reasons I did not account for any IPC increase: I believe any IPC gain will be offset by the scaling losses of the larger CU count. It should balance out.

Final disclaimer:
The table may contain mistakes. I checked and verified it, but I cannot guarantee full accuracy. This is a logical deduction based on what we know, with some basic assumptions.
I personally believe Big Navi's performance will not be far off from the above estimates, but it is your job to keep your own expectations in check.
 
Last edited:
BigNavi clocks won't be 2.2 GHz or something as ridiculous, 1.8 - 1.9 GHz sounds about right. But yes, BigNavi can probably match 3080 in real world gaming benchmarks, at least in not-RT applications. However, if the BigNavi doesn't have DLSS type functionality, it'll be in massive performance disadvantage in all DLSS supporting games compared to the RTX cards.
 

pawel86ck

Banned
FidelityFX is just standard upscaling with a very good sharpening filter, while DLSS 2.0 indeed looks like native 4K. I hope DirectML will run and look very good, though.

The biggest question, however, is RT support/performance in current games. Ampere has 2x faster RT compared to Turing; do you really think AMD can match it? :p
 

martino

Member
Frequencies on the consoles are not conservative at all this time.
The 2.23 GHz is not "maintainable" for 36 CUs.
The XSX indicates the sweet spot for 52 CUs is 1.8 GHz.
And you expect a bigger chip to clock faster?
That would be more of a surprise than the reverse.
 
Hmm, FidelityFX does look pretty good. I'm hard pressed to tell it apart from DLSS at the distance I sit. On the other hand I expect you'd be able to reconstruct from even lower resolutions with DLSS and still make it look good.

 

Krappadizzle

Gold Member
I have absolutely no confidence that AMD can meaningfully compete at the high end with Nvidia, and I have no reason to think that's going to change. They went in the completely wrong direction with Radeon VII, and while they have made some strides, they are so outclassed in the GPU department that I just don't think they'll ever meaningfully catch up. I'd love for them to compete; I just don't see it happening. But who knows: what they've done on the CPU side is nothing short of impressive, and they lagged in that area for quite a while too, so who really knows.
 

Abriael_GN

RSI Employee of the Year
[Quotes Ascend's opening post in full.]

And the drivers are still going to suck, which means that whatever they deliver, it's still going to be objectively inferior in any real-world use.
 

Ascend

Member
Frequencies on the consoles are not conservative at all this time.
The 2.23 GHz is not "maintainable" for 36 CUs.
The XSX indicates the sweet spot for 52 CUs is 1.8 GHz.
And you expect a bigger chip to clock faster?
That would be more of a surprise than the reverse.
Consoles are much more constrained by a power budget than PCs are. There's nothing stopping RDNA2 on PC from using twice the power it uses in the XSX or PS5.
 

RoboFu

One of the green rats
Why not to get your hopes up... yet.

RDNA is still in its infancy. Like Ryzen, it will take two or three revisions to really get to where it needs to be.

But RDNA has a lot of promise, so I do believe they will get there; they just need time to mature it.
 

Skifi28

Member
Frequencies on the consoles are not conservative at all this time.
The 2.23 GHz is not "maintainable" for 36 CUs.
The XSX indicates the sweet spot for 52 CUs is 1.8 GHz.
And you expect a bigger chip to clock faster?
That would be more of a surprise than the reverse.

We don't really know where console clocks fall without anything to compare them to. For all we know, they could be as conservative as usual, meaning a desktop card with a proper cooler will blow past 2.2 GHz. That'd be fun.
 

martino

Member
We don't really know where console clocks fall without anything to compare them to. For all we know, they could be as conservative as usual, meaning a desktop card with a proper cooler will blow past 2.2 GHz. That'd be fun.
You're right, but it's speculated and expected that the next-gen systems target 300W or more this time, so it's not the same target as usual.
 

Ascend

Member
I'd like AMD to actually tell us why we shouldn't underestimate it rather than constant speculation, hopes and dreams from forum posters.

Yours, an AMD shareholder.
You mean like they did with Vega, ending up disappointing everyone? Has no one noticed that they are basically silent this time? That's quite different from their previous marketing strategies.

The sentiment here is that AMD will fail. I can understand that, considering the past. It's the same sentiment people had about Ryzen being able to challenge Intel. And in many ways, that flip seemed a lot harder than the GPU one. In hindsight it's easy to say that Intel was lazy, but everyone was worshipping Intel's CPUs at the time, pretty much like people are worshipping nVidia GPUs now.

If there is a valid reason why AMD will top out at RTX 3070 performance, for example, I'd love to hear it. As of now, everything points to at least RTX 3080 performance. And nVidia is not pricing these cards so low for no reason.
 

Jonsoncao

Banned
Consoles are much more restrained to a power budget than PCs are. There's nothing stopping RDNA2 on PC from using twice the power it is using on the XSX or PS5.
The 5700XT does not scale well with frequency. AMD's RT solution means the final performance in RT-enabled games is largely CU-bound, which RDNA2 inherits from the 5700XT.

Meanwhile, NV's RT solution is much more flexible. I can see something like an ROG 3080 OC = 110% of Big Navi if the estimated specs of Big Navi are accurate.
 

Pagusas

Elden Member
I hope they come out with something competitive, I just don't want to get my hopes up as they are good at dashing them. For the sake of the market though, I hope you are right, I'm just going to expect a dud from them and hopefully be surprised.
 
Jay actually made a pretty decent video discussing the position of each company and speculates a little on what he hopes is for the future on both teams.
 

Ascend

Member
The 5700XT does not scale well with frequency. AMD's RT solution means the final performance in RT-enabled games is largely CU-bound, which RDNA2 inherits from the 5700XT.

Meanwhile, NV's RT solution is much more flexible. I can see something like an ROG 3080 OC = 110% of Big Navi if the estimated specs of Big Navi are accurate.
You mean its power consumption doesn't scale well with frequency? That is true. It's too bad we don't have power data for the PS5, because that would shed more light on RDNA2 power consumption.
 
Sorry but I just can't bring myself to hype an AMD GPU.

They have spent years now releasing GPUs that always wind up being less powerful and more expensive than the rumors led us to believe (and usually releasing FAR later than expected, too).

If AMD can release a new GPU that's actually compelling on the high end, then that's great, but I'm not expecting it.
 
Hmm, FidelityFX does look pretty good. I'm hard pressed to tell it apart from DLSS at the distance I sit. On the other hand I expect you'd be able to reconstruct from even lower resolutions with DLSS and still make it look good.



What is this? This is nothing.

Does the person who made this video even understand what it is they are comparing?

What FidelityFX settings were used? What resolution is it upscaling from? It doesn't say.

What DLSS settings were used? What resolution is it upscaling from? (Performance or Quality?) It doesn't say.

What performance was gained by using either technique? It doesn't say. There's no FPS shown at all.

This video isn't designed to inform. It's designed to make the average Joe think that maybe AMD actually has a response to DLSS.
 
Hmm, FidelityFX does look pretty good. I'm hard pressed to tell it apart from DLSS at the distance I sit. On the other hand I expect you'd be able to reconstruct from even lower resolutions with DLSS and still make it look good.


You're missing the main point of DLSS: the output not only looks very comparable to native resolution, but is also much less taxing on the hardware. With DLSS you get a huge boost in performance while the image still looks similar to native.
 

MiguelItUp

Gold Member
AMD definitely has a chance to put up some good competition against NVIDIA; I just don't know if they will. And if so, by how much? I mean, how far can they really go in competing on price and technology? I honestly hope they surprise us, but man, I really don't know...
 

nochance

Banned
These topics are becoming more worrying by the minute. AMD needs to come out with some news, just for the sake of their fanbase (which is currently married to the console fanboys).

I assume that both SONY and Microsoft are running hotlines with AMD after the NVidia presser, and no one quite knows what to do.
 

nochance

Banned
AMD definitely has a chance to put up some good competition against NVIDIA; I just don't know if they will. And if so, by how much? I mean, how far can they really go in competing on price and technology? I honestly hope they surprise us, but man, I really don't know...
How exactly? They already made the move to the new manufacturing process and the new architecture. If they massively overhaul it (at which point they would not call it RDNA2) by adding the compute units necessary to compete with NVidia, then they will end up with the same bottlenecks and generational limitations that NVidia faced with Turing two years ago.

What they probably did is the same thing they did with Ryzen: blow up the size of the chip with everything else remaining more or less equal. That's great for rasterization performance (they might land between the RTX 3070 and 3080 on that front), but this generation is all about ray tracing, and Ampere has a full-blown solution: increased rasterization performance, dedicated RT hardware, and dedicated ML hardware along with the technology to utilize it, allowing for high-resolution gaming with RT enabled without sacrificing rasterization performance (in fact, freeing up resources on that front).
 

SF Kosmo

Al Jazeera Special Reporter
Hmm, FidelityFX does look pretty good. I'm hard pressed to tell it apart from DLSS at the distance I sit. On the other hand I expect you'd be able to reconstruct from even lower resolutions with DLSS and still make it look good.


"From the distance I sit" is a testament to the unimportance of 4K rather than the quality of FidelityFX. Zoom ins of the two show they're not even remotely comparable, and yes, playing at 1440p with a little smoothing/sharpening is probably "good enough" for a lot of people but DLSS can push it a lot further and still look good. The difference between the two is really night and day up close, which means you can't really use FFX to ratchet down the res very aggressively.

I think nVidia's pricing tells us that Big Navi is going to be a lot more competitive than last gen, at least in traditional shader performance, but I think the AI and RT stuff matters, and I don't think AMD is going to be able to close that gap entirely.
 

Ascend

Member
"RTX series cards cannot go much higher than 1.7 GHz "

you know you're just baiting constantly why pretend ?
So you have no real answers, just condescending, baseless, derailing remarks.
All the announced RTX 3000 series cards top out at around 1.7 GHz, if you haven't noticed.
I guess we're done here. Bye.

Even if RDNA2 matches 3080 in SM performance, they will lose significantly to RTX and Tensor DLSS. This generation is about RT. You need performance across the board.
We have no information on their RT performance, so that one is still a wait-and-see. No one is expecting it to be as good as Ampere's. Time will tell; we should know in a month or so, if not earlier.
 
All the announced RTX 3000 series cards top out at around 1.7 GHz if you haven't noticed.
And that means that the 30 series maxes out at 1.7 GHz? Please.

You somehow conveniently "missed" that all 20 series FE cards, including the 2080 Ti, 2080, and 2070, could run at 2 GHz and even higher.

Oh no, the 2080 Ti with its tiny 1.55 GHz boost clock is LOWER than the 3080's official 1.7 GHz by a whopping 200 MHz. Does that mean the 3080 will do 2.2 GHz, since the 2080 Ti easily does 2 GHz? NO, of course not. But that is still a higher possibility than maxing out at 1.7 GHz...
 
| GPU | 5700XT (RDNA1) | PS5 (RDNA2) | Xbox Series X (RDNA2) | Big Navi (est.) | RTX 3090 | RTX 3080 | RTX 2080 Ti | RTX 2080 |
|---|---|---|---|---|---|---|---|---|
| Compute units / SM | 40 | 36 | 52 | 80 | 82 | 68 | 68 | 46 |
| Frequency (GHz) | ~1.8 | ~2.2 | 1.8 | 2.2 (2)*** | 1.7 | 1.7 | 1.5 | 1.7 |

*** If a console can reach 2.2 GHz, a PC part should have no problem reaching it. It may mean power efficiency decreases, though. I added a secondary 2 GHz value in case higher clocks are not reached.

At first, when I saw your Big Navi clockspeed estimation, I was like [reaction image]

then I saw your triple-asterisk note and I was like [reaction image]

If Cerny does not deliver on his promise of 2.23 GHz "most of the time", he will have caused so much misinformation with his PS5 presentation that he should go down in history as the equal of the PS3 Motorstorm and Killzone trailers.


Other than this, nice topic (y)
We will see at some point what AMD can bring to the table. I am waiting for them too.
We will see at some point what AMD can bring to the table. I am waiting for them too.
 

Gamerguy84

Member
I'm not writing them off at all.

I'll wait to see what all the mad scientists have to say about Big Navi before I make my next build.

I'm not primarily a PC gamer, but my daughter is. She doesn't go on forums and has no need for the greatest card out there.

I was thinking of a 3070 or 2080, but we will see. What we have now works great ATM.
 

PhoenixTank

Member
I'm not quite that optimistic and wider parts tend to have lower sustainable clocks overall, but I don't really rule them out until the cards are in play, y'know? We're nearly there now.
To me this thread isn't much more than a "probably worth it to wait and see" but I still consider that to be good advice unless your GPU is literally dying right now.
 

ZywyPL

Banned
One thing many seem to be missing/forgetting is the new SM units, which now pack 128 cores as opposed to the usual 64. It might be hard to fill all those 8-10k cores, but once we get those promised hundreds of millions of polygons per scene in next-gen games, I think that is when the Ampere cards will truly shine compared to Turing/Navi. Not to mention no one even talks about 1080p and 1440p anymore; that's how overpowered those cards are.


I'd like AMD to actually tell us why we shouldn't underestimate it rather than constant speculation, hopes and dreams from forum posters.

Don't forget PowerPoint slides, lots of them ;)
 

MetalRain

Member
I think the pricing of the Nvidia 3000 series tells us that upcoming AMD products will challenge the 3080 in price/performance, and that's great.

I think 2.2 GHz is probably not going to happen, and maybe 72 CUs is the sweet spot between performance/yield/price. That would make it slightly cheaper than the 3080 and slightly slower, but only time will tell.
 
Yep, waiting for the RDNA2 6000 series is the most sensible course of action. I'm expecting the 6900XT to land in between the 3080 and 3090, with more memory than the paltry 10GB on the 3080, and slightly faster to boot.

I'm not quite that optimistic and wider parts tend to have lower sustainable clocks overall, but I don't really rule them out until the cards are in play, y'know? We're nearly there now.
To me this thread isn't much more than a "probably worth it to wait and see" but I still consider that to be good advice unless your GPU is literally dying right now.

It is not. All rumours (aside from the troll one yesterday) point to performance around a 3080. Napkin math leads to the same prediction, looking at the next-gen consoles with their 2080-class performance in just a ~150W GPU power budget, one of them achieving this with just 36 CUs.
 

Boss Mog

Member
People need to realize that most gamers don't need or even want a 3080 or 3090. The 3070 is slightly better than the 2080 Ti, and how many PC gamers have a 2080 Ti now? A very small percentage, I'd wager. A 3070 will be plenty for the vast majority of gamers. So if Big Navi competes with the 3070 but launches for $100-150 less, AMD will be the people's champ despite not holding the overall performance crown.
 
[Quotes the reaction-image post above.]

Oh please. Don't now bring your trolling in here mate, it derails every time.

The asterisk is not to question Cerny's claim at all, it's because those would be high clocks for something that is much higher than 36 CUs. Give it a rest.
 
Oh please. Don't now bring your trolling in here mate, it derails every time.

The asterisk is not to question Cerny's claim at all, it's because those would be high clocks for something that is much higher than 36 CUs. Give it a rest.
...now if only there was an AMD card with 36 CUs so that we could test this.

Oh snap! The plain 5700 has 36 CUs, and also has 448 GB/s of bandwidth, just like the PS5.
What do we know from the 5700? Game clock 1.625 GHz // boost clock 1.725 GHz.
But let's say those limits were there just to make way for the 5700XT.
With BIOS tweaking, you can unlock the restrictions and overclock as much as you like. No matter the cooling solution, it can't get close to 2.2 GHz for the love of its life.

Let's move on to its bigger brother, the 5700 XT: game clock 1.755 GHz // boost clock 1.905 GHz.
Here is a "no limits" overclock test for the 5700 XT, where the BIOS and registry were touched, a fat liquid-cooling block was attached, and the card was overclocked to its absolute highest limit.
*Notice that even though all barriers were lifted and the card could be set to 2.300 GHz, the silicon was no good for anything more than 2.100 GHz. So that was set as the ceiling, along with a lock so it doesn't spike below 2.050 GHz.


As anyone who reads the test can see, overclocking the 5700 XT up to its hardware limits, taking temperatures completely out of the equation, an 18% increase in clockspeed cost a 40% increase in power consumption for only a ~7% average performance increase.

The article calls the results "disappointing" but enlightening as to why AMD put these limits in place, and suggests that it makes much more sense to buy a stronger GPU than to spend money on fancy cooling to raise the 5700XT's clocks.
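The trade-off in that overclocking result is easy to quantify; a quick sketch using the quoted figures:

```python
# Perf/W change implied by the quoted 5700 XT overclocking result.
clock_gain, power_gain, perf_gain = 0.18, 0.40, 0.07

perf_per_watt = (1 + perf_gain) / (1 + power_gain)  # ~0.76x of stock efficiency
perf_per_clock = (1 + perf_gain) / (1 + clock_gain) # ~0.91x: poor clock scaling
print(f"perf/W: {perf_per_watt:.2f}x, perf/clock: {perf_per_clock:.2f}x")
```

In other words, the card gives back roughly a quarter of its efficiency for a single-digit performance gain, which is exactly why AMD capped the clocks where it did.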


Now, what efficiency improvement margin do we expect for AMD's latest? 5%? 8%? Even at 10%, Cerny's magic numbers do not compute.

I guess we will find out this Christmas who the troll is. It could prove to be me, as I go through these numbers and draw conclusions, but I'd suggest you don't be so hasty to rule out that Cerny was the troll all along.
 