I feel like this should be pretty obvious.
Big Navi has 80 Compute Units (5120 shaders), and the RTX 3080 has 68 Streaming Multiprocessors (4352 shaders).
You just need to take a look at the 5700 XT vs the 2070 Super. Both have 2560 shaders. If you lock both at the same frequency, they perform pretty much identically. The only advantage Turing had over Navi 10 was that it could clock higher, and scaled a bit better at those clocks.
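The clock-for-clock point boils down to simple arithmetic: theoretical FP32 throughput is shaders × 2 FLOPs per clock (one FMA) × clock speed, so identical shader counts at a locked frequency give identical theoretical throughput. A quick sketch (the 1.8 GHz lock is an arbitrary illustrative value, not either card's actual boost clock):

```python
# Theoretical FP32 throughput in TFLOPS, assuming 2 FLOPs per shader
# per clock (one fused multiply-add), which holds for both Navi 10
# and Turing.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

# Both cards locked to a hypothetical 1.8 GHz:
navi10 = fp32_tflops(2560, 1.8)  # 5700 XT
tu104  = fp32_tflops(2560, 1.8)  # 2070 Super
print(navi10, tu104)  # identical by construction
```

Same shaders, same clock, same theoretical number; any remaining gap comes down to architecture efficiency, which is why the clock-locked comparison is the interesting one.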
Ampere won't have such an advantage, as there appears to be no significant clock-speed increase this time around. Even if it could clock higher than the 1700MHz rated boost (according to leaks), it's already consuming a ridiculous 320W. The thermal headroom you'd need for higher clocks would be ludicrous.
This is exactly what Nvidia wants.
Can't have a competitive market, if nobody buys the competing product.
It's this guy
Note the date. It's also a reply to an earlier tweet of his own that got the GA100 die size reasonably accurate (leaked approx. 800mm², the actual die is 826mm² - good enough) and got the transistor count pretty damn accurate as well (leaked approx. 55 billion transistors, the actual die has 54.2 billion transistors - again, good enough).
He also had a tweet from Feb 2019 (which unfortunately I cannot find), where he noted that the fully unlocked GA100 die would have 128 SMs (8192 CUDA cores), which was absolutely correct. This tweet has him calling that the final A100 chip would have 1 GPC cut down, so only 108 SMs (6912 CUDA cores) active. He nailed the chip specs around a year in advance, and the final active processor specifications a full month in advance. He's been basically spot on with pretty much every Nvidia leak so far.
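Those numbers are easy to sanity-check yourself (a quick sketch; the 64 FP32 CUDA cores per SM figure is from Nvidia's published GA100 configuration, and the rest are the leaked vs. actual figures quoted above):

```python
# GA100 leak sanity check: 64 FP32 CUDA cores per SM.
CORES_PER_SM = 64

assert 128 * CORES_PER_SM == 8192   # fully unlocked GA100
assert 108 * CORES_PER_SM == 6912   # shipping A100 (20 SMs disabled)

# Relative error of the leaked die size and transistor count:
die_err  = abs(800 - 826) / 826     # leaked ~800mm² vs actual 826mm²
xtor_err = abs(55 - 54.2) / 54.2    # leaked ~55B vs actual 54.2B transistors
print(f"{die_err:.1%}, {xtor_err:.1%}")  # roughly 3% and 1.5% off
```

Being within a few percent on die size and transistor count a year out is what "good enough" means here.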
Here's another tweet:
Where he leaked the existence of GDDR6X, which nobody believed because there were no JEDEC specs for GDDR6X. A few weeks before the launch of gaming Ampere, lo and behold, Micron confirmed the existence of GDDR6X memory, which happens to be a non-JEDEC-certified VRAM spec.
A very reliable source of information.
Thanks.
But note that 3080 is supposed to be GA102, not GA104 this time.
In other words, he's saying AMD will comfortably beat the 3070, to the point that a 3070 Ti would be useless (still slower than Big Navi).
Makes me wonder how big the 3080 and Big Navi chips are, and how much the former would cost.
AT BEST Big Navi will be on par with a 3080 but with far worse raytracing performance and no answer for DLSS.
RT performance at this point is less relevant than GPU PhysX was; only a handful of games support it, and there is nothing in the foreseeable future that would change that.
The answer to DLSS is "use your brain".
Based on the leak above Big Navi wipes the floor with GA104, but not GA102.
If the 3080 is indeed GA102 (which, given its power consumption of 320W, is likely the case), it means AMD doesn't beat it.
So good luck with the price on both 3080 and 3090, chuckle.
But, remember, #TheMoreYouBuyTheMoreYouSave lol.
Hey everybody!! I just came from the "Only 1% of the audience cares about backwards compatibility!!" thread to remind you all that the RTX 3090 "doesn't matter", anyway!! AMD are geniuses just shooting for that BIG 3080 money!!
Hell yeah!
I mean, 1% here and 1% there, you're a genius for noticing that astonishing similarity.
1% is uber important, mind you.
For instance, the Sun contains about 99% of all the mass in our solar system, so all the planets together are just ~1% of the total mass.
See? 1%, again.
I think it is hard to overestimate the importance of that 1% of something.
Now we just need to convince game developers to invest major effort in supporting 1% of the market. Any plans on how we do that?