
Rumor: AMD's Big Navi performance on par with RTX 3080

PhoenixTank

Member
When companies manufacture a product they have to take into consideration the cost of maintaining that product. For a GPU, that means driver support. Regardless of the number of sales, the company needs a plan for how it will implement driver support, how often it will deploy updates, and the most efficient way to make that happen. AMD wasn't a nobody when it came to GPUs. They sold enough GPUs that they should have had the resources available for driver development.
Being fair, AMD scraped along with losses for quite a long period of time. Literally selling and leasing back major buildings to keep afloat. That is not an environment which would attract and keep the talent required for a sprawling development project like GPU drivers, IMHO. I swear they were avoiding WHQL cert at one point to save cash, but no idea if that was actually true.
I am cautiously optimistic that they'll do okay now that they have had some cash to play with and are on the second generation of a new architecture. It all remains to be seen, though.
 

AGRacing

Member
Hey everybody!! I just came from the "Only 1% of the audience cares about backwards compatibility!!" thread to remind you all that the RTX 3090 "doesn't matter", anyway!! AMD are geniuses just shooting for that BIG 3080 money!!

Have a great day! And remember: "If it isn't for me - screeeeeew you!!!"

 

IFireflyl

Gold Member
Being fair, AMD scraped along with losses for quite a long period of time. Literally selling and leasing back major buildings to keep afloat. That is not an environment which would attract and keep the talent required for a sprawling development project like GPU drivers, IMHO. I swear they were avoiding WHQL cert at one point to save cash, but no idea if that was actually true.
I am cautiously optimistic that they'll do okay now that they have had some cash to play with and are on the second generation of a new architecture. It all remains to be seen, though.

You might be right. I'm certainly not an expert on how AMD did their business. Either way, I look forward to seeing what they bring to the table going forward.
 

Papacheeks

Banned
We will just file this under "no shit"

seriously, AMD is no match for Intel or nVidia. The only way AMD can win is by shrinking their chips, but once the big guys shrink, they're simply more efficient and powerful and better designed. For instance, AMD at 7nm is only slightly better than Intel at 14nm+(repeated). Intel will destroy them at 10nm. nVidia will destroy them at 8nm.

PLEASE STOP. You have no clue about the improvements they have iterated into their design and architecture. Their version of hyper-threading, SMT, shits on Intel's. Latency between CCXs has improved drastically, which is why, in almost all other applications, they are destroying Intel CPUs as time goes on.

Hell, even the updates from the past 6 months have brought improvements in Adobe workloads, even on first-generation Ryzen. Games included.

Their core tech has been running circles around Intel in real workloads, and notice how their performance improves with each iteration while they still keep modest overclocks and TDP.

Just stop.

Ryzen is a viable platform, and the X570 and B550 chipsets have features that Intel won't get for another year.

They are also going to be first to market when it comes to chiplets for GPUs.

That's something Nvidia has been trying to do for some time. AMD is bringing the chiplet tech they already have for CPUs to their GPUs, and RDNA 3-4 will be the first showing.
 

supernova8

Banned
While this news might be disappointing for some, and means Nvidia’s flagship RTX 3080 Ti could have no direct competition, AMD is reportedly planning to price its RDNA 2 flagship to undercut Nvidia’s GeForce RTX 3080 in a bid to offer consumers better value in the high-end market.

If this part is true then I'm pretty happy. RTX 3080 performance at closer to RTX 3070 pricing would be a godsend, but I'm not sure I trust AMD to massively undercut NVIDIA just yet.
 

supernova8

Banned
Well you're not gonna get that.
That exact mentality is what has led to the current market situation. AMD, and ATi before them, have been thought of as a mechanism for getting Nvidia to lower their prices. You can't provide the software development resources for driver development etc. if nobody buys your GPUs. That costs money.

Since the HD 5000 series, people have patted AMD on the back for delivering very good value products, and then promptly gone and bought an Nvidia card instead as soon as the price dropped. Back then, AMD's drivers weren't even that bad, which is the amazing thing. Nvidia had their fair share of appalling drivers from that era. There was one infamous case where a driver accidentally removed the software thermal limiter on the GPU and literally caused the GPU die to go into thermal runaway and catch fire. Nvidia have had Bumpgate, a problem where the "bumps" on Nvidia chips would wear and break due to thermal stress (lol Fermi). This problem was awful, and yet Nvidia never accepted fault and instead blamed OEMs. If you want to know why Apple to this day do not use Nvidia GPUs, Bumpgate is the cause.
Then there was the infamous GTX 970 3.5GB fiasco, as recently as Maxwell. The GTX 970 was advertised as a core-reduced 980 with 4GB of RAM and 64 ROPs. The reality was that Nvidia had not just disabled CUDA cores; they had also disabled part of a memory controller and its associated ROPs. So only 3.5GB of the VRAM was usable at full speed (the last 0.5GB sat on a crippled, much slower segment) and only 56 ROPs were active.
More recently than that, did people forget how frequently 2080 Tis failed and needed to be RMA'd?

The reality is that Nvidia has had their fair share of problems, with drivers and even hardware failures. During the GCN era, AMD's drivers were nearly faultless, since each succeeding architecture was a derivative of the previous one; the drivers had matured and everything was perfectly fine. They improved so much, in fact, that the HD 7970 (Tahiti), which started off slower than the GTX 680 (GK104), is in some circumstances now faster than the GTX 780 Ti (GK110). The only time I remember drivers being a problem was Navi 10, where the move to RDNA was handled exceptionally poorly on the driver side.

Why has everyone forgotten all this shit? Instead AMD and ATI have had to suffer this "bad driver" meme that's been haunting them for decades.

I am by no means an AMD fanboy, but if that's how you perceive me then so be it. I've been following this market for years now, and I've noticed a pattern. Nvidia's problems are either forgotten or are framed as user error. Meanwhile, AMD's problems are not only remembered, they hang over them like a spectre and are used to caveat any GPU launch before it's even happened. "Well they might have the performance crown, but their drivers are bad, so I'll buy Nvidia anyway".

I suppose you could blame a combination of wilful ignorance and/or foolhardy brand loyalty on the part of the PCMR community, plus the fact that Nvidia deployed a systematic astroturfing campaign in partnership with Arbuthnot Entertainment Group (AEG), offering freebies to prominent members of the tech community and forums to talk up Nvidia products, while under an NDA never to mention that they were receiving kickbacks. The current state of the GPU market is the logical conclusion of a campaign set in motion over a decade ago.

So now you can look at the 5700/5700 XT as an example of AMD no longer playing the price-cut game. They'll undercut Nvidia by as little as they can get away with. Why bother cutting their own margins into oblivion when Nvidia will just cut GeForce prices to sit $50-100 above the equivalent Radeon and people buy GeForce anyway? They will now try to maximise the profit on any GPU they sell. If Nvidia raise prices by $200 per tier, AMD will happily set their prices $50-100 below that benchmark.

If they don't sell that well? Well, who cares? AMD sell 20-30 million console APUs every year, which creates great revenue even if margins are lower. The rest of their energy will be spent on the datacentre, where margins are much higher than in PC gaming.

The PCMR has spoken, and they seem to be willing to pay more for Nvidia at any given price or tier. So I guess we'll just have to deal with it.

If I were AMD I would run a really aggressive marketing campaign with discounts for people who buy a Ryzen/Radeon bundle whether it's self-build or via an OEM. Even if people end up abusing it and selling the Radeon GPUs online, people will end up buying them and their mind share will start to creep up. It'll be like "oh yeah Radeon GPUs are pretty good actually!"

At the moment it's a case of do I go for the safe bet at $400 or the slightly dodgy bet at $350. If it were $400 vs $299 you'd probably take a punt but as you say NVIDIA can and will compete on price when necessary to discourage AMD from doing just that.
 

llien

Member
I feel like this should be pretty obvious.
Big Navi has 80 Compute Units (5120 shaders), and the RTX 3080 has 68 Streaming Multiprocessors (4352 shaders).

You just need to take a look at the 5700XT vs the 2070 Super. Both have 2560 shaders. If you lock both at the same frequency, they perform pretty much identically. The only advantage Turing had over Navi 10 was the fact that it could clock higher, and could scale a bit better to those clocks.
Ampere won't have such an advantage, as there appears to be no significant clock speed increase this time around. Even if it could clock higher than the 1700MHz rated boost (according to leaks), it's already consuming a ridiculous 320W. The thermal headroom you'd need for higher clocks would be ludicrous.
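
For anyone who wants the back-of-the-envelope maths behind that, here's a rough sketch (throwaway Python, purely illustrative; the unit counts are the rumored/leaked figures from this thread, and it assumes performance scales with shaders x clock, which is the argument above, not a confirmed fact):

Code:
    # Assumption: performance roughly scales with shader count x clock, all else equal.
    # Unit counts are the rumored/leaked figures quoted in this thread, not confirmed specs.
    def shaders(units, per_unit=64):
        # CU/SM count -> shader count (64 ALUs per CU/SM, pre-Ampere style counting)
        return units * per_unit

    big_navi = shaders(80)   # 80 CUs -> 5120 shaders
    rtx_3080 = shaders(68)   # 68 SMs -> 4352 shaders

    print(big_navi, rtx_3080)            # 5120 4352
    print(f"{big_navi / rtx_3080:.2f}")  # ~1.18x raw shader advantage at equal clocks

Same logic as the Navi 10 vs Turing comparison: 2560 shaders on both the 5700 XT and 2070 Super, near-identical performance at matched clocks.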




This is exactly what Nvidia wants.
Can't have a competitive market if nobody buys the competing product.


It's this guy:



Note the date. It's also a reply to an earlier tweet of his own that got the GA100 die size reasonably accurate (the leak said approx. 800mm², the actual die is 826mm², good enough) and also got the transistor count pretty damn accurate (the leak said approx. 55 billion transistors, the actual die has 54.2 billion, again good enough).

He also had a tweet from Feb 2019 (which unfortunately I cannot find) where he noted that the fully unlocked GA100 die would have 128 SMs (8192 CUDA cores), which was absolutely correct. The tweet above has him calling that the final A100 chip would have 1 GPC cut down, so only 108 SMs (6912 CUDA cores) active. He nailed the chip specs around a year in advance, and the final active processor specifications a full month in advance. He's been basically spot on with pretty much every Nvidia leak so far.
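
If you want to sanity-check the core counts in that claim, the arithmetic is trivial (quick Python, assuming GA100's layout of 64 FP32 CUDA cores per SM):

Code:
    # GA100 has 64 FP32 CUDA cores per SM, so:
    full_ga100 = 128 * 64   # fully unlocked die -> 8192 CUDA cores
    a100       = 108 * 64   # shipping A100 with SMs disabled -> 6912 CUDA cores
    print(full_ga100, a100)  # 8192 6912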

Here's another tweet:


Where he leaked the existence of GDDR6X, which nobody believed because there were no JEDEC specs for GDDR6X. Then, a few weeks before the launch of gaming Ampere, lo and behold, Micron confirmed the existence of GDDR6X memory, which happens to be a non-JEDEC-certified VRAM spec.

A very reliable source of information.


Thanks.
But note that 3080 is supposed to be GA102, not GA104 this time.
In other words, he's saying AMD will comfortably beat the 3070, to the point that a 3070 Ti would be useless (still slower than Big Navi).
Makes me wonder how big the 3080 and Big Navi chips are, and how much the former will cost.

AT BEST Big Navi will be on par with a 3080 but with far worse raytracing performance and no answer for DLSS.
RT performance at this point is less relevant than GPU PhysX was; only a handful of games support it, and there is nothing in the foreseeable future that would change that.

The answer to DLSS is "use your brain".

Based on the leak above, Big Navi wipes the floor with GA104, but not GA102.
If the 3080 is indeed GA102 (which, given its power consumption of 320W, is likely the case), it means AMD doesn't beat it.
So good luck with the prices on both the 3080 and 3090, chuckle.

But, remember, #TheMoreYouBuyTheMoreYouSave lol.


Hey everybody!! I just came from the "Only 1% of the audience cares about backwards compatibility!!" thread to remind you all that the RTX 3090 "doesn't matter", anyway!! AMD are geniuses just shooting for that BIG 3080 money!!
Hell yeah!
I mean, 1% here and 1% there, you're a genius for noticing that astonishing similarity.
1% is uber important, mind you.
For instance, the Sun has absorbed 99% of all the matter in our solar system, so all the planets together are just 1% of the total mass.
See? 1%, again.

I think it is hard to overestimate the importance of 1% of something.

Now we just need to convince game developers to invest major effort to support 1% of the market. Any plans on how we do that?
 

supernova8

Banned
Thanks.
But note that 3080 is supposed to be GA102, not GA104 this time.
In other words, he's saying AMD will comfortably beat the 3070, to the point that a 3070 Ti would be useless (still slower than Big Navi).
Makes me wonder how big the 3080 and Big Navi chips are, and how much the former will cost.


RT performance at this point is less relevant than GPU PhysX was; only a handful of games support it, and there is nothing in the foreseeable future that would change that.

The answer to DLSS is "use your brain".

Based on the leak above, Big Navi wipes the floor with GA104, but not GA102.
If the 3080 is indeed GA102 (which, given its power consumption of 320W, is likely the case), it means AMD doesn't beat it.
So good luck with the prices on both the 3080 and 3090, chuckle.

But, remember, #TheMoreYouBuyTheMoreYouSave lol.

I'd say ray tracing is a pretty big one if you consider that NVIDIA has based a lot of their product line on the whole RTX thing. It's not like they came out with a "GeForce PTX 780" or something when they acquired PhysX. It's only insignificant right now because the performance isn't really there for developers to ship a fully ray-traced game (along with nice textures, models, etc.) without absolutely tanking the framerate.
 
RT performance at this point is less relevant than GPU PhysX was; only a handful of games support it, and there is nothing in the foreseeable future that would change that.
So... the release of the next-gen consoles, which both support RT at the hardware level, is outside of your "foreseeable future"?
That nearly all bigger games will come with some sort of RT implementation in the future is a given at this point, now that the standard platforms get it.
Thus, claiming that RT performance doesn't matter for new hardware releases is downright stupid.
And with RT, features like Nvidia's DLSS will become more and more important, too.
 

martino

Member
So... the release of the next-gen consoles, which both support RT at the hardware level, is outside of your "foreseeable future"?
That nearly all bigger games will come with some sort of RT implementation in the future is a given at this point, now that the standard platforms get it.
Thus, claiming that RT performance doesn't matter for new hardware releases is downright stupid.
And with RT, features like Nvidia's DLSS will become more and more important, too.

IMO the current context didn't do justice to the tech Nvidia put in the RTX 2xxx cards (despite their ugly prices), because most of it is still not used and won't be used for at least another year (VRS advanced tier / mesh shaders / RT).
I expect those cards to age well enough because of that.
 

llien

Member
So... the release of the next-gen consoles, which both support RT at the hardware level, is outside of your "foreseeable future"?
Release of "RTRT" supporting consoles, as well as release of VR supporting consoles, is part of the foreseable future/past, the same cannot be said about RTRT/VR being extremely popular in games though.
 