
The Ampere 3090 36 tflop beast...is only 10% faster than the 3080

So much for that Nvidia marketing show blitz at the start of this month that painted Ampere as the second coming.



ComputerBase similarly found the gap at just 11%:

At 4K, the 3090 is 10% faster than the 3080. Despite having 36 tflops, well over 100% more than the 13-tflop 2080 Ti, it is merely 45% faster than the old Turing card on 12nm.

AMD actually has a shot to get close to this thing, seeing as the top Navi 21 card is confirmed at 80 CUs....
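To put rough numbers on that, here's a back-of-the-envelope perf-per-tflop check (a sketch using the approximate figures quoted in this thread, not lab measurements):

```python
# Rough perf-per-tflop comparison using the figures quoted in this thread
# (4K performance relative to a 2080 Ti; all numbers approximate).
cards = {
    "2080 Ti (Turing, 12nm)": {"tflops": 13.0, "relative_perf": 1.00},
    "3080 (Ampere, 8nm)":     {"tflops": 30.0, "relative_perf": 1.32},  # ~1.45 / 1.10
    "3090 (Ampere, 8nm)":     {"tflops": 36.0, "relative_perf": 1.45},  # ~45% faster
}

for name, c in cards.items():
    print(f"{name}: {c['relative_perf'] / c['tflops']:.4f} relative perf per tflop")
# Turing lands around 0.077 per tflop, Ampere around 0.040 -- on these
# numbers, each Ampere "tflop" buys roughly half the gaming performance.
```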
 
So much for that Nvidia marketing show blitz at the start of this month that painted Ampere as the second coming.



At 4K, the 3090 is 10% faster than the 3080. Despite having 36 tflops, well over 100% more than the 13-tflop 2080 Ti, it is merely 45% faster than the old Turing card on 12nm.

AMD actually has a shot to get close to this thing, seeing as the top Navi 21 card is confirmed at 80 CUs....

Wait, you think AMD actually has a shot at beating the 3080 or even 3090?! Lol, they barely traded blows with the 2070S. There's still the 2080, 2080 Ti, and RTX Titan (just to throw in there). I'm pretty sure AMD would be shouting from the mountaintops about their new GPU if it could even compete with Nvidia. Silence speaks louder than words.
 

Kuranghi

Member
This must be "the underdogs have suddenly gotten the upper hand even though things haven't really changed much"-week.

Are we maybe getting ahead of ourselves a bit?

 

Max_Po

Banned
This has been posted numerous times.

It's a card for video editors/professionals, not gaming.

I bet they are waiting for AMD to reveal Big Navi, and will then reveal their 12, 16, or 20 GB 3080 Super/Ti cards.

Actually, I should have added: this has been mentioned by hardware outlets, like that PC gaming Jebus on YouTube... GamingNexus?
 
Wait, you think AMD actually has a shot at beating the 3080 or even 3090?! Lol, they barely traded blows with the 2070S. There's still the 2080, 2080 Ti, and RTX Titan (just to throw in there). I'm pretty sure AMD would be shouting from the mountaintops about their new GPU if it could even compete with Nvidia. Silence speaks louder than words.

There is very, very little extra performance left on the table for an Ampere Titan. Maybe a couple of percent.

And yes, I do think AMD will get 'close', seeing as their aim is to double the 40CU 5700 XT (2070S performance) with an 80CU card. The memory bandwidth looks low, but there are rumours they are doing something with cache to circumvent that. They'll likely fall short, but that is a far cry from the sentiment just after Jensen Huang's dishonest marketing show earlier this month, when people were saying AMD is a generation behind.

Nvidia made a mistake choosing Samsung 8nm; yields are terrible, hence hardly anyone has been able to buy one.
 

manfestival

Member
This has been posted numerous times.

It's a card for video editors/professionals, not gaming.

I bet they are waiting for AMD to reveal Big Navi, and will then reveal their 12, 16, or 20 GB 3080 Super/Ti cards.

Actually, I should have added: this has been mentioned by hardware outlets, like that PC gaming Jebus on YouTube... GamingNexus?
Nvidia failed with its marketing. You can't blame the target audience for being disappointed with the results. That is just silly.
 

acm2000

Member
Isn't this more down to games, drivers, and engines not being able to make use of the power, and the CPU and other parts causing bottlenecks?
 
Isn't this more down to games, drivers, and engines not being able to make use of the power, and the CPU and other parts causing bottlenecks?

Not in this case, no. The performance increase in RT is also tiny compared to Turing, but that may be held back by the games.
 
Can someone explain this to me? How is it possible that a 36-tflop device is only 10% faster than the... Never mind, the RTX 3080 has 30... seems right, but the price differential is too much for what you get.

We should be asking how a 36-tflop card that has (according to Nvidia's marketing slides) a 1.9x performance-per-watt increase over Turing is in reality only 45% faster than the 13-tflop 2080 Ti, whilst drawing 40% more power on top.
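Worked out explicitly (a quick sketch using the rough figures from this thread; the operating points Nvidia used for its 1.9x slide are unknown, so this only compares the numbers as reviewed):

```python
# Claimed vs. observed perf-per-watt gain, 3090 over 2080 Ti,
# using the rough figures from this thread (not lab measurements).
claimed_gain = 1.9   # Nvidia marketing slide: "1.9x perf/watt over Turing"

perf_gain  = 1.45    # 3090 ~45% faster than a 2080 Ti at 4K
power_gain = 1.40    # while drawing ~40% more power

observed_gain = perf_gain / power_gain
print(f"Observed perf/watt gain: {observed_gain:.2f}x (claimed: {claimed_gain}x)")
# ~1.04x at these operating points -- nowhere near the 1.9x slide.
```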
 
Can someone explain this to me? How is it possible that a 36-tflop device is only 10% faster than the... Never mind, the RTX 3080 has 30... seems right, but the price differential is too much for what you get.
The 3090 is a Titan replacement, so it's only worth it if you use it for production workloads. But Nvidia also knows there is a group of people who have no problem forking over a lot of money to get the best. Hence all the BS 8K hype.
 
Yep lol

I went and got mine cause my homie at Micro Center was holding it, and sold it in the parking lot for $3900 LOL. Just gonna wait for a 3080, or a 3080 20GB if they're real. My 2080 Super is perfect for all of the games coming out.
Another buddy of mine bought his and got damn near $4500 for it on OfferUp this morning.
 

Great Hair

Banned
The Canadian sandals lover thought it was misleading, even though he has a sponsored Nvidia 8K video on his channel, playing Doom with Nvidia's settings.

Looks like a 3080 with slightly better specs and more RAM, though a bit slower at times.

 

LordOfChaos

Member
Here's the flipside: at significantly less cost, the 3080 performs almost as well. AMD has its work cut out to catch up. The VII has 60 CUs; scale to 80, add the IPC gain... eh. What I'll say is they have to execute this very well, like perfectly, to have a hope of catching up. This is without considering DLSS. Nvidia may market the 3090 as an 8K card because they're going to sell it to whoever's willing to buy it, but I agree that it's more of an upgraded 3080 with more VRAM for workloads that need it, though it falls short of a Titan, being gimped in several important ways. That said, with DLSS on, the 3090 is absolutely the best experience at 8K you can get today.
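On "significantly less cost": plugging in the launch MSRPs (assumed here as $699 for the 3080 and $1,499 for the 3090) against the ~10% review delta makes the value gap explicit:

```python
# Price vs. performance at launch MSRP (assumed: $699 / $1499),
# against the ~10% 4K gap the reviews above found.
price_3080, price_3090 = 699, 1499
perf_3080, perf_3090 = 1.00, 1.10   # 3080 as baseline

print(f"Price premium: {price_3090 / price_3080 - 1:.0%}")
print(f"Performance premium: {perf_3090 / perf_3080 - 1:.0%}")
print(f"Relative perf per dollar: {(perf_3090 / price_3090) / (perf_3080 / price_3080):.2f}")
# ~114% more money for ~10% more frames; ~0.51x the perf per dollar.
```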


 
Here's the flipside: at significantly less cost, the 3080 performs almost as well. AMD has its work cut out to catch up. The VII has 60 CUs; scale to 80, add the IPC gain... eh. What I'll say is they have to execute this very well, like perfectly, to have a hope of catching up.

Radeon VII is a bad measuring stick for RDNA2; it's like two generations of architecture behind, so the IPC jump from that old dog will be absolutely huge, with 20 more CUs and much higher clocks.
 

TheMan

Member
I guess I don't understand why the practical speed increase is relatively small if the 3090 has so many more TFs? Is it software not being able to take advantage of the powah?
 

LordOfChaos

Member
Radeon VII is a bad measuring stick for RDNA2; it's like two generations of architecture behind, so the IPC jump from that old dog will be absolutely huge, with 20 more CUs and much higher clocks.

Whether you look at the VII or the 40CU 5700 in comparison, everything has to work out very well for them to catch up. Not saying they can't reach the same ballpark. But AMD's shader occupancy has usually trailed Nvidia's, and they'll need this scaling to be about perfect.

Plus, again, there still seems to be nothing like tensor cores, and the advantage of DLSS really can't be overstated: 4K went from sometime in the future to entirely feasible, and even 8K with DLSS isn't entirely bullshit.

I guess I don't understand why the practical speed increase is relatively small if the 3090 has so many more TFs? Is it software not being able to take advantage of the powah?

Looks like Nvidia engaged in some ALU spam here. Keeping those ALUs filled with useful work is a hard job, and one I expect the next generation to take another big step forward on, but they can still be useful in compute loads, which have an easier time filling them than the varied work going on in games. Thing is, AMD was also usually behind Nvidia here, so to scale to 80 CUs they really need to hit all the right notes at the same time to compete.
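A toy model of the "ALU spam" point: treat delivered performance as peak tflops times an occupancy factor (the fraction of shader ALUs doing useful work each cycle). The occupancy values below are illustrative guesses, not measured figures:

```python
# Toy occupancy model: effective throughput = peak tflops * occupancy.
# Occupancy values are illustrative guesses, NOT measured figures.
def effective_tflops(peak_tflops: float, occupancy: float) -> float:
    """Usable shader throughput once idle ALUs are discounted."""
    return peak_tflops * occupancy

turing_2080ti = effective_tflops(13.0, 0.80)  # assume Turing keeps ~80% busy
ampere_3090   = effective_tflops(36.0, 0.40)  # doubled FP32 ALUs idle more often

print(f"2080 Ti effective: {turing_2080ti:.1f} tflops")
print(f"3090 effective:    {ampere_3090:.1f} tflops")
print(f"Ratio: {ampere_3090 / turing_2080ti:.2f}x")
# ~1.38x -- in the neighbourhood of the ~1.45x gaming uplift, despite a
# 2.8x gap in peak tflops. Compute workloads with long, uniform math
# can push occupancy (and thus the gap) much higher.
```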
 

TaySan

Banned
I was originally gung-ho on getting the 3090, but I was lucky enough to snag a 3080 FE last night, and after seeing the reviews I think I will just save the money and look into Zen 3.
 
Ok Lisa...

A large majority of us have been saying for weeks that the 3090 wasn't going to be more than 15% better than the 3080 at best. This is no surprise. It only surprised people who don't know how GPUs work.
 
Here's the flipside: at significantly less cost, the 3080 performs almost as well. AMD has its work cut out to catch up. The VII has 60 CUs; scale to 80, add the IPC gain... eh. What I'll say is they have to execute this very well, like perfectly, to have a hope of catching up. This is without considering DLSS. Nvidia may market the 3090 as an 8K card because they're going to sell it to whoever's willing to buy it, but I agree that it's more of an upgraded 3080 with more VRAM for workloads that need it, though it falls short of a Titan, being gimped in several important ways. That said, with DLSS on, the 3090 is absolutely the best experience at 8K you can get today.

The Radeon VII was on the GCN architecture, essentially part of the Vega line of cards like the Vega 64, etc.

The 5700 XT was the follow-up on the new RDNA architecture, with 40 CUs, and it trades blows with a 2070S, as it should given its price range; it was a mid/mid-high range card.

The 6000 series cards that will be unveiled in October are RDNA2, which brings improvements in IPC, performance per watt, clock speeds, and thermals. They also include ray tracing.

The XSX is a 52CU RDNA2 APU with some customisations. (We don't know exactly how much it will differ from the final PC RDNA2 cards.) On a small APU (which includes the CPU as well) with low power draw, the XSX is said to be roughly on par with a 2080 (possibly better?).

We have seen one leaked benchmark of an unknown Radeon card that matches a 2080 Ti (presumably the 6700? Which would compete well with a 3070).

A few months back, we saw an OpenVR benchmark of an unknown Radeon card coming in roughly 30% more powerful than a stock 2080 Ti, which would put it roughly on par with a 3080. We have no idea what that card was, or what the configuration, drivers, etc. were, but normally things like this improve over time rather than get worse.

With all of that in mind, I think it is very likely that RDNA2 will trade blows with the 3000 series this time and compete well across (most of) the Nvidia stack. Given what we know, an 80CU RDNA2 card (with the IPC and perf-per-watt improvements RDNA2 brings over RDNA1, plus whatever other architectural improvements come with it), on a larger die (500+ mm²), with higher power draw (270-300W), at roughly PS5 clock speeds (2.2 GHz boost?), could be something quite good.

Nobody is sure how its ray tracing performs yet or how it compares to Ampere, but given that Ampere did not live up to the hype in terms of RT gains over Turing, it is very possible AMD could be quite competitive here. Maybe Ampere will still have better RT performance overall, but it could be closer than a lot of people thought a few weeks ago. We won't know for sure until the reveal and benchmarks, but I think they could perform reasonably well here.

DLSS? At the moment it seems like cool tech, but it's only supported in around six games so far, so I wouldn't base a purchase solely on that, although it can be a nice bonus. Do AMD have anything to compete with it? Possibly; they have been very tight-lipped so far and haven't mentioned anything, so we can only wait and see.

Plus, Nvidia is on an inferior Samsung node, hence the huge power draw and extravagant cooling solutions. I think AMD might surprise many and compete quite well, but who knows, maybe they will fuck it up somehow? Hard to say for certain, but all the evidence we have so far points to them having something good cooking.
 

llien

Member
Not saying they can't reach the same ballpark
The gap is imaginary.
GPUs are fairly straightforward compared to what CPUs are doing.
People project the "starved R&D" era onto today; that makes little sense.
Which GPU AMD will take on will depend on the maximum chip size they've settled on.

Regardless, the 3080/3090 are basically #Fermi2.
 

LordOfChaos

Member
Yeah. 8k with TAA-ish upscaling. Amazing progress.

As much as I enjoy saying DLSS is just spicy TAA as a meme, TAA generally doesn't shit out image quality that is in some cases superior to the native source. Going from 4K being a stretch to being entirely viable with barely any difference in image quality from native is definitely an advantage.
 

LordOfChaos

Member
We have seen one leaked benchmark of an unknown Radeon card that matches a 2080 Ti (presumably the 6700? Which would compete well with a 3070).


You're referring to the Benchmark of the Singularity result right? I mean...

I'd love to see AMD kick Nvidia in the teeth here for going with an inferior node to pocket more profit, guys. I've just been on this cycle too many times and prefer to keep my hopes low, if they do it, marvelous.
 

llien

Member
As much as I enjoy saying DLSS is just spicy TAA as a meme, TAA generally doesn't shit out image quality that is in some cases superior to the native source. Going from 4K being a stretch to being entirely viable with barely any difference in image quality from native is definitely an advantage.
DLSS 2.0 is TAA-based, with all the consequences. The "better than original" claim is highly dubious, to put it mildly.

1.0 was the true AI take, which could in theory beat "native" in certain cases. But in reality it was losing to FidelityFX (which, mind you, works on all GPUs). Now there is no place for "better than native" to come from.
 

LordOfChaos

Member
DLSS 2.0 is TAA-based, with all the consequences. The "better than original" claim is highly dubious, to put it mildly.

1.0 was the true AI take, which could in theory beat "native" in certain cases. But in reality it was losing to FidelityFX (which, mind you, works on all GPUs). Now there is no place for "better than native" to come from.

And why is Contrast Adaptive Sharpening comparable to upscaling? It can appear slightly sharper because of the added sharpening, but sharpening can also introduce artifacts and grain.
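On that last point, a minimal unsharp-mask sketch (generic sharpening in numpy, not AMD's actual CAS shader) shows how boosting the high-frequency residual amplifies edges and noise alike:

```python
# Minimal unsharp-mask demo (generic sharpening, NOT AMD's CAS shader):
# sharpened = image + amount * (image - blurred). The amplified
# high-frequency residual contains both real edges and noise/grain.
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Crude box blur via edge padding and neighbourhood averaging."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + img.shape[0],
                          radius + dx : radius + dx + img.shape[1]]
    return out / (2 * radius + 1) ** 2

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    return np.clip(img + amount * (img - box_blur(img)), 0.0, 1.0)

rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)                          # a flat grey patch
noisy = np.clip(flat + rng.normal(0, 0.02, flat.shape), 0, 1)

sharpened = unsharp_mask(noisy, amount=2.0)
print(f"noise std before sharpening: {(noisy - 0.5).std():.4f}")
print(f"noise std after sharpening:  {(sharpened - 0.5).std():.4f}")
# The grain roughly triples: exactly the artifact trade-off described above.
```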

DSOGaming called DLSS 2.0 slightly better than both native and FidelityFX's contrast adaptive sharpening. Yes, the "better than native in some scenarios" applies to the second version; the difference is that it no longer requires per-game training.

How is this "not a true AI take"?

 

GlockSaint

Member
The "8k ready" hype is so cringy and sleezy. I can't find any 8k performance by the 3080 which will probably have similar performance. I mean just think about it, we are just now getting good framerates at 4k without crazy setups, what makes us think a card suddenly out of the blue will give us playable games at a resolution four times the pixel count of 4k. Pure bs marketing. Only doom can run it native cause it's probably the most well optimized port of the generation.
 
You're referring to the Benchmark of the Singularity result right? I mean...

I'd love to see AMD kick Nvidia in the teeth here for going with an inferior node to pocket more profit, guys. I've just been on this cycle too many times and prefer to keep my hopes low, if they do it, marvelous.

Yeah, I think that was the one. Look, I get where you are coming from entirely; I'm somewhat wary of hoping for too much myself and getting badly burned.

But at the moment there are a lot of signs, evidence, and differences from the past that seem to point to them having something good. At this point the ball really is in their hands; if they drop it, that is going to suck for Radeon GPUs and for competition in the GPU market, bringing us one step closer to an Nvidia monopoly. I would prefer that didn't happen, but who knows, maybe they mess it up?

Having said that, the evidence so far all seems to point towards something solid; let me summarize below:

  • AMD as a company are no longer starved of revenue to invest in R&D, thanks to Ryzen's success.
  • Raja Koduri, who was a strong proponent of GPU compute, now works at Intel instead of heading up the Radeon group.
  • RDNA1 was AMD's first entirely new architecture in years; previously they were iterating on GCN.
  • RDNA1 brought massive performance-per-watt and IPC gains over GCN. It was designed to be modular, aiming towards a future chiplet design. (RDNA3?)
  • With RDNA1, power efficiency was not quite good enough to build a larger (80CU?) card than the 5700 XT, as it would have drawn maybe 400W, so AMD waited.
  • RDNA2 is said to bring a 50+% performance-per-watt increase over RDNA1 (see the rough scaling sketch after this list).
  • The XSX, on a 52CU, low-power, low-clocked RDNA2 APU, manages to reach 2080+ levels of performance.
  • AMD have been aggressively embracing new node shrinks on Ryzen and their GPUs; this time they have a TSMC node superior to Nvidia's.
  • The leaked AoS benchmark shows an unknown Radeon card matching a 2080 Ti (presumably the 6700?).
  • A much earlier leaked OpenVR benchmark shows an unknown Radeon card beating a 2080 Ti by roughly 30% (3080-level performance).
  • The biggest "Big Navi" is supposed to be an 80CU card on an estimated 500+ mm² die.
  • The biggest card is rumoured to draw 270-300W.
  • AMD are no longer a year or more late to the party. They will launch only about two months after Nvidia; that is a huge step forward.
  • Nvidia rushed the release of the 3000 series cards before they had enough stock, and reduced prices; they wouldn't have done this unless they were worried about what AMD was cooking and wanted to launch first to define the generation and gain mindshare.
I've been following the leaks and rumours very closely; hopefully AMD don't mess it up and drop the ball, but so far things are looking very promising for them.
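A rough sketch of where those rumoured figures land (every input below is leak- or rumour-derived, so treat this as illustrative only):

```python
# Rough Big Navi projection from the rumoured figures listed above.
# All inputs are leaks/rumours -- purely illustrative, not a prediction.

# 5700 XT (RDNA1) baseline: 40 CUs, ~225 W board power, ~2070S performance.
power_5700xt, perf_5700xt = 225, 1.0

perf_per_watt_gain = 1.50   # rumoured "50+% perf/watt over RDNA1"
big_navi_power     = 300    # top of the rumoured 270-300 W range

big_navi_perf = perf_5700xt * (big_navi_power / power_5700xt) * perf_per_watt_gain
print(f"Projected Big Navi: {big_navi_perf:.1f}x a 5700 XT")
# ~2.0x a 5700 XT -- exactly the "double the 5700 XT with 80 CUs" target
# mentioned earlier in the thread, IF the perf/watt rumour and near-perfect
# CU scaling both hold.
```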
 

Iamborghini

Member
DLSS 2.0 is TAA-based, with all the consequences. The "better than original" claim is highly dubious, to put it mildly.

1.0 was the true AI take, which could in theory beat "native" in certain cases. But in reality it was losing to FidelityFX (which, mind you, works on all GPUs). Now there is no place for "better than native" to come from.


DLSS 2.0 vs FidelityFX
Tell me where and how FidelityFX is better than DLSS 2.0??
 
Ok, so I'm not nearly as smart as you guys lol, but can someone tell me in idiot terms if I'm wrong for having my heart set on an EVGA FTW3 3090? And if so, what the hell should I buy, or what route should I take? To cut to the chase: we all know games will look exponentially better in 2-3 years, real next-gen games if you will, and I want to know which route will cover those games at 4K, at least at 60 fps. Will a hypothetical Red Dead 3 or GTA 6 have my 3090 struggling to reach 4K 60 fps? Will a 3090 cover this generation as far as 4K 60 fps goes? Please, someone lmao 😂
 
10% better... until games need more than 10GB of VRAM.

Well, I agree. The 10GB 3080 is slightly gimped; I would not touch it, and would wait for the 20GB version, or even the 3070 Ti 16GB, which has already been spotted.
[Image: Before Reviews]

[Image: After Reviews]

You've dodged a bullet there, mate!

Because a teraflop isn't a synonym for performance, contrary to popular belief.

Well, I was telling the console boys this, and the 3090's performance goes to highlight how useless a metric it is, as one Ampere tflop is not equal to one Turing tflop; it's less efficient.
 