
NVIDIA Ada Lovelace GPUs For GeForce RTX 40 Series To Bring The Same Generational Jump As Maxwell To Pascal

3liteDragon

Member
Mar 3, 2020
NVIDIA GeForce RTX 40 Series graphics cards featuring the Ada Lovelace GPU architecture are expected to deliver the same generational performance jump that we saw moving from the 900-series Maxwell GPUs to the 10-series Pascal GPUs. The rumor comes from Ulysses, who has been talking about NVIDIA's next-gen parts for a while now.
NVIDIA GeForce RTX 40 GPUs With Ada Lovelace Architecture Expected To Deliver Same Generational Jump As We Saw With Maxwell To Pascal

The NVIDIA GeForce GTX 10 series graphics cards based on the Pascal GPU architecture were a huge performance jump over their Maxwell-based GeForce 900 series predecessors. The 16nm chips delivered major improvements in performance, efficiency, and overall value, also marking one of the biggest leaps in 'Ti' graphics performance. The GeForce GTX 1080 Ti is still regarded as the best 'Ti' graphics card ever made, a performance jump that NVIDIA has been unable to match with its Turing and Ampere flagships.



Now the NVIDIA GeForce RTX 40 series is expected to deliver the same generational performance jump over the GeForce RTX 30 series. Based on the Ada Lovelace GPU architecture, the GeForce RTX 40 series graphics cards are expected to utilize TSMC's 5nm process node, and while they will be very power-hungry, their efficiency numbers should still go up tremendously thanks to the huge performance jump.

There are also some other details regarding clock speeds and the launch timeframe. The GeForce RTX 40 series is still a long way off, and the rumor is that these cards won't launch until late Q4 2022. This is partly because NVIDIA will reportedly be offering an intermediate SUPER refresh of its GeForce RTX 30 series lineup in 2022. If that lineup comes before the RTX 40 series, the launch could slip further into Q1 2023. This means that AMD might just have its RDNA 3 lineup out by the time NVIDIA launches its new GPU family.



In terms of clock speeds, the Ada Lovelace-powered NVIDIA GeForce RTX 40 Series GPUs are said to offer boost clocks between 2.2 and 2.5GHz. This is a nice improvement over the 1.7-1.9GHz clocks that the Ampere architecture currently averages. The Pascal architecture also clocked impressively and was the first GPU architecture to breach the 2.0GHz barrier; however, it is AMD that has since taken the clock speed throne with its RDNA 2 GPU architecture, which can hit clocks beyond 2.5GHz with ease.

Based on these numbers, if the RTX 4090 (or whatever SKU the AD102 GPU ends up in) features 18,432 CUDA cores, then we are looking at up to ~81 TFLOPs of FP32 compute performance at 2.2GHz, which is insane and more than double the single-precision throughput of the RTX 3090. These numbers also align with rumors that we can expect up to a 2.5x performance jump with the Ada Lovelace-based NVIDIA GeForce RTX 40 graphics cards.
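As a sanity check on that figure, peak FP32 throughput for NVIDIA GPUs is conventionally estimated as CUDA cores × 2 FLOPs (one fused multiply-add) × clock speed. A minimal sketch of the arithmetic; note the AD102 core count and boost clock are rumored, not confirmed specs:

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPs.

    Each CUDA core can retire one FMA (2 FLOPs) per clock,
    so peak TFLOPs = cores * 2 * clock_GHz / 1000.
    """
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# Rumored AD102: 18,432 CUDA cores at a 2.2 GHz boost clock
print(round(fp32_tflops(18432, 2.2), 1))   # 81.1

# RTX 3090 for comparison: 10,496 CUDA cores at ~1.7 GHz boost
print(round(fp32_tflops(10496, 1.7), 1))   # 35.7
```

This is also where the table's 16.1 and 37.6 TFLOPs figures come from, applied to the respective full-die core counts and clocks.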


NVIDIA CUDA GPU (Generational Comparison) Preliminary:

GPU                                | TU102         | GA102       | AD102
Architecture                       | Turing        | Ampere      | Ada Lovelace
Process                            | TSMC 12nm NFF | Samsung 8nm | TSMC 5nm
Graphics Processing Clusters (GPC) | 6             | 7           | 12
Texture Processing Clusters (TPC)  | 36            | 42          | 72
Streaming Multiprocessors (SM)     | 72            | 84          | 144
CUDA Cores                         | 4,608         | 10,752      | 18,432
Theoretical TFLOPs                 | 16.1          | 37.6        | ~80?
Flagship SKU                       | RTX 2080 Ti   | RTX 3090    | RTX 4090?
TGP                                | 250W          | 350W        | 400-500W
Release                            | Sep. 2018     | Sep. 2020   | 2022 (TBC)
But the biggest question remains: will the NVIDIA GeForce RTX 40 Series graphics cards with Ada Lovelace GPUs bring back Pascal-level pricing? Not only was the GTX 1080 Ti the best card for its performance, it also offered the best value of any 'Ti' graphics card, which is why we all loved the Pascal family. The $699 US price made the GTX 1080 Ti an amazing value, and NVIDIA also dropped pricing on its standard lineup to $499 (GTX 1080) and $379 (GTX 1070), levels we definitely should be going back to. However, the rise in component & logistics costs may mean we never see that level of pricing again.

NVIDIA GeForce GPU Segment/Tier Prices

Graphics Segment      | 2014-2016                    | 2016-2017          | 2017-2018                       | 2018-2019                       | 2019-2020                                                                      | 2020-2021
Titan Tier            | Titan X (Maxwell)            | Titan X (Pascal)   | Titan Xp (Pascal)               | Titan V (Volta)                 | Titan RTX (Turing)                                                             | GeForce RTX 3090
Price                 | $999 US                      | $1199 US           | $1199 US                        | $2999 US                        | $2499 US                                                                       | $1499 US
Ultra Enthusiast Tier | GeForce GTX 980 Ti           | GeForce GTX 980 Ti | GeForce GTX 1080 Ti             | GeForce RTX 2080 Ti             | GeForce RTX 2080 Ti                                                            | GeForce RTX 3080 Ti
Price                 | $649 US                      | $649 US            | $699 US                         | $999 US                         | $999 US                                                                        | $1199 US
Enthusiast Tier       | GeForce GTX 980              | GeForce GTX 1080   | GeForce GTX 1080                | GeForce RTX 2080                | GeForce RTX 2080 SUPER                                                         | GeForce RTX 3080
Price                 | $549 US                      | $549 US            | $549 US                         | $699 US                         | $699 US                                                                        | $699 US
High-End Tier         | GeForce GTX 970              | GeForce GTX 1070   | GeForce GTX 1070                | GeForce RTX 2070                | GeForce RTX 2070 SUPER                                                         | GeForce RTX 3070 Ti / RTX 3070
Price                 | $329 US                      | $379 US            | $379 US                         | $499 US                         | $499 US                                                                        | $599 / $499 US
Mainstream Tier       | GeForce GTX 960              | GeForce GTX 1060   | GeForce GTX 1060                | GeForce GTX 1060                | GeForce RTX 2060 SUPER / RTX 2060 / GTX 1660 Ti / GTX 1660 SUPER / GTX 1660    | GeForce RTX 3060 Ti / RTX 3060 12 GB
Price                 | $199 US                      | $249 US            | $249 US                         | $249 US                         | $399 / $349 / $279 / $229 / $219 US                                            | $399 / $329 US
Entry Tier            | GTX 750 Ti / GTX 750         | GTX 950            | GTX 1050 Ti / GTX 1050          | GTX 1050 Ti / GTX 1050          | GTX 1650 SUPER / GTX 1650                                                      | TBA
Price                 | $149 / $119 US               | $149 US            | $139 / $109 US                  | $139 / $109 US                  | $159 / $149 US                                                                 | TBA
NVIDIA and AMD have been moving the price bar up for their high-end cards for a while now, and unless there's some really heated GPU competition between the two with enough supply in the channels, we don't expect 'Ti' variants to fall back below the $1,000 US mark.
 

Rikkori

Member
May 9, 2020
That's not that great (performance) actually. MCM RDNA gonna smash their cheeks in. Good news is they'll probably flood the market though, so it's going to be a great time to pick up a low/mid-range GPU after this drought.
 


KuraiShidosha

Member
Jun 17, 2020
1080 Ti still going strong. Held off on Turing and Ampere because I knew they were a colossal ripoff for what I was getting. Only true upgrades were the 2080 Ti which SUCKS for the money, the 3080 which was a VRAM capacity downgrade but decent performance, and finally a 3090 which is the only true upgrade path and again is a major ripoff.

4090 here I come. Come and obsolete the crap out of 20 and 30 series cards. Can't wait.
 


Buggy Loop

Member
Jun 9, 2004
That's not that great (performance) actually. MCM RDNA gonna smash their cheeks in. Good news is they'll probably flood the market though, so it's going to be a great time to pick up a low/mid-range GPU after this drought.




And the cycle begins again..

Nobody is getting caught with their pants down on MCM. It's been in literally every high-end silicon designer's R&D for years now. It's a matter of finding the limits of monolithic improvements on the new nodes. It's a balance/optimization between lower yields on monolithic designs (thus more expensive) versus higher yields on multiple smaller chips, but with a shitload more expense in the communication buses between them and reduced efficiency compared to an equivalent monolithic design (duh). It's heading there whenever foundries hit a wall, but so far, TSMC keeps advancing nicely.

But to think that the first gen of MCM is a killer of anything is quite silly, it’s an optimization calculation and different architectures will have different « go / no-go » results for MCM.
 

Kenpachii

Member
Mar 23, 2018
Disappointing news. AMD isn't pushing them hard enough; they should have released RDNA3 in Q1 2022, or Q2 at the latest.

I am in for an upgrade and really hoped RDNA3 would push Nvidia forward so we could see massive improvements in performance over the next 2 years.

However, they are going to cruise through next year entirely with a SUPER refresh, which will probably still be lackluster, then drop insane-wattage cards, which are also a no-go for me. At least the performance jump will finally make RTX something interesting in games without the need for DLSS.

However, by 2023 we will surely be seeing some next-gen games hitting already, and frankly I wonder how well those cards will hold their own at higher settings at that point, which again will trigger me to wait for Hopper.

At least the SUPERs could have some more VRAM, I guess. Highly doubt it. It's Nvidia, after all.
 

tusharngf

Member
Oct 25, 2017
80 teraflops, holy shit. I still remember people on the Beyond3D forums discussing how a 1-billion-transistor GPU would change everything. Now the 3090 packs almost 28 billion transistors on chip with 36 TFLOPs of power. Absolutely insane level of jump in graphics performance in the past 15 years.
 

jigglet

Member
May 18, 2020
Not all games will run 60 fps @ 4K with everything at max.

RDNA3 is needed.

Not all, or not any? Can any non-indie game run native 4K at a locked 60 (so the 0.01% lows need to stay above 60) even on a 3090?
 

Deleted member 17706

Unconfirmed Member
Damn, the 1080 Ti, while it's already outdated now, is almost 100% faster than its predecessor, the 980 Ti. One of the best price/perf cards ever produced for sure, besides the 8800 GT.

Yeah, in hindsight, the 1080 Ti had to have been one of the best ever in terms of GPU value.
 

hlm666

Member
Feb 25, 2021
Are they called Lovelace because they suck?
They have been naming their GPUs after scientists/mathematicians.

"Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation, and to have published the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer."
 

rnlval

Member
Jun 26, 2017
The advantage of the GTX 1080 Ti over the GTX 980 Ti is about 46% (less against a GTX 980 Ti AIB OC, which performs similarly to a GTX 1070).
The advantage of the GTX 1080 Ti over a GTX 980 Ti OC (similar to a GTX 1070) is about 40%.


The advantage of the RTX 3080 Ti over the RTX 2080 Ti is about 32%, in mostly last-gen, raster-only games.
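For reference, relative-advantage figures like these fall straight out of average benchmark scores. A trivial sketch; the 146/100 and 132/100 scores below are hypothetical values for illustration, not measurements:

```python
def pct_advantage(new_score: float, old_score: float) -> float:
    """Percentage advantage of new_score over old_score.

    E.g. a card averaging 146 FPS vs. a 100 FPS baseline is 46% faster.
    """
    return (new_score / old_score - 1.0) * 100.0

print(round(pct_advantage(146, 100)))  # 46
print(round(pct_advantage(132, 100)))  # 32
```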
 

kiphalfton

Member
Dec 27, 2018
It will perform 30% better than its previous-generation counterpart, like always, and people will still act surprised when it happens.
 

Rikkori

Member
May 9, 2020
Nobody is getting caught their pants down with MCM. It’s been on literally every high end silicon designer’s R&D for years now. It’s a matter of finding the limits of monolithic improvements on the new nodes.
Right, and so were chiplets & IF and yet AMD got there quicker than Intel and ate their lunch. Just because everyone is aware of something doesn't mean everyone can get to it at the same time. There are many reasons why AMD is better situated than Nvidia for MCM at least initially, hence RDNA 3 will smash their cheeks in. Only real question is what they'll do for RT and if they can add FSR 2.0 w/ ML in time for the new gen.
 

Buggy Loop

Member
Jun 9, 2004
Right, and so were chiplets & IF and yet AMD got there quicker than Intel and ate their lunch. Just because everyone is aware of something doesn't mean everyone can get to it at the same time. There are many reasons why AMD is better situated than Nvidia for MCM at least initially, hence RDNA 3 will smash their cheeks in. Only real question is what they'll do for RT and if they can add FSR 2.0 w/ ML in time for the new gen.

Again, the Ryzen argument with little critical thinking..

I've been on Ryzen since day 1, and upgraded to a 5600X earlier this year. It took 3 generational architectures, 4 iterations, just to get their heads above water and beat Intel by a few %, and they did so by also raising their prices considerably, to the point where sometimes Intel is the better value now with its lower prices, even more so considering how RAM kits for a Zen setup are pickier about speed and latency, which raises the price.

This was against a stubborn Intel insisting on using its own foundry's 14nm+++++++++++ node and jamming integrated GPUs everywhere without so much as offering a version without one as a second thought. Also a complacent Intel that did not think competition would ever appear, and that reduced engineering staff thinking yearly upgrades with minimal % differences would be its business for the rest of time because AMD was almost bankrupt.

Now, I like a good underdog story like anyone; picking Zen 1 over Intel was more of a fuck-you to Intel than it being more powerful (and god, the first AM4 boards were disasters with memory... not a good choice in retrospect), and we'll get a healthier CPU market because of it, as the Intel dragon seems to have woken up.

Intel will be dangerous again, and on monolithic, because their foundry has finally unclogged and they also have TSMC silicon reserved. Their density is even better than TSMC's, so that will be interesting. Gaming-wise, at least. I'm sure AMD will be king of the 64+ core monsters, as those only work with chiplets, but that doesn't impact gaming/laptops.

But Nvidia ain't no Intel. They are complete opposites. Nvidia didn't get complacent with what worked; they basically hand-held and carried Microsoft on their shoulders for the RT & ML API implementations, and they've been involved with virtually every university doing computer engineering or deep learning for ages. More specifically, there's CUDA, going strong for a decade with still no sign of AMD even thinking about an alternative; CUDA is that deeply entrenched in everything work- and research-related. They are working with universities and producing a shitload of research papers in collaboration with them. Their R&D has not been sitting idle at all. And unlike CPUs, which fundamentally have not changed in nearly 30 years, GPUs do change, and they just had a paradigm shift where the focus moved to supporting RT & ML, which will only grow going forward.

So please, keep that zen shit comparison for brainlets on r/amd, it doesn’t hold any fucking water.
 

Rikkori

Member
May 9, 2020
It took 3 generational architectures , 4 iterations just to get their heads out of the water just to beat Intel by a few %
That's the wrong way to look at it. Seeing only gaming numbers is myopic; the gains AMD has achieved in terms of perf-per-watt, core count, and overall scalability have been there since day 1. Moreover, Intel is still playing catch-up with no end in sight.
and they did so by also raising their prices considerably to the point where sometimes Intel is a better value now with the lower prices, even more so considering how ram kits for a zen setup are more picky on speed and latency, which raises the price.
Irrelevant. Current prices are entirely down to supply & demand, and even then Intel's situation is not as rosy as you paint it. And again, that's only applicable to desktop builders.
This was against stubborn Intel insisting on using their foundry on 14nm+++++++++++ node and jamming integrated GPUs everywhere or at least, offering a version without as a second thought.
They didn't "insist"; they didn't have a choice, because progress isn't guaranteed and their node jumps weren't successful. As for the iGPU, there are many good reasons why you keep that, from both a business standpoint and the manufacturing & design POV. Go check Intel's Q&A on Reddit from the launch of Rocket Lake.

But Nvidia ain’t no Intel. They are completely opposite. They didn’t get complacent with what worked, they basically hand holded and carried Microsoft on their shoulders for RT & ML API implementation, they’re involved in virtually all universities that do computer engineering or deep learning for so long, and more specifically CUDA which has been going on for a decade with still no sign of AMD even thinking of an alternative, CUDA is that deeply entrenched in everything that is work/ research. They are working with and producing a shitload of research papers in collaboration with universities. Their R&D has not been sitting idle like at all.
Who says they're sitting idle? Again, just because you want something doesn't mean you'll get it. Just because RDNA3 will smash their cheeks in on the desktop's highest-end card doesn't mean Nvidia will fail, or that they'll lose their ML dominance (something I didn't even claim). There are many ways to skin a cat, and simply having the most powerful GPU isn't everything. It will sure be fun to watch, though.
And unlike CPUs which fundamentally has not changed since nearly 30 years,
Utter ignorance.
GPUs do and just had a paradigm shift recently where the focus is shifted to support RT & ML and will be supported going forward more than ever.
Let's play a guessing game: what does RT & ML performance scale with? In fact, let's not. Cramming in more cores and innovating on memory bandwidth (a la Infinity Cache) is exactly how you increase performance in those realms. Though on the ML side, for AMD it's less the performance that's missing than the software support. For what they need (a DLSS equivalent), that's irrelevant anyway; the performance requirements are low enough that you could run it fine even on current RDNA GPUs.
So please, keep that zen shit comparison for brainlets on r/amd, it doesn’t hold any fucking water.

 

Malakhov

Member
Jun 6, 2004
Thinking of selling my desktop and buying a gaming laptop instead. I want the benefit of being able to play wherever I want.

How is a 3060 laptop? Decent with 4K output on a 4K TV?
 

TrueLegend

Member
Jun 7, 2021
Thinking of selling my desktop and buy a gaming laptop instead. Want the benefits to be able to play wherever I want

How is a 3060 laptop? Decent with a 4k output on a 4k tv?
Save yourself. Watch a lot of videos. The laptop GPUs are barely beating a 1660. Many 3000-series laptops are 3000 series in name only.
 

Rickyiez

Member
Jan 20, 2020
Save yourself. Watch a lot of videos. The laptop gpus are barely beating 1660. Many 3000 series laptop are 3000 series in name only.
No, not really. A 3060 laptop with proper power delivery (115W) is faster than a 1660 by 50%. Power delivery is the most important spec for mobile RTX 30; it can make or break the performance.



source
 

Malakhov

Member
Jun 6, 2004
No not really . A 3060 laptop with proper power delivery (115W) is faster than 1660 by 50% . Power delivery is the most important spec for mobile RTX30 , it can make or break the performance .



source
But still not worth it? I wanted to be able to play wherever I want, and output at 2K as well.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Mar 31, 2011
Just bought RTX 3000 cards now this come on

You should probably wait for Hopper then... ohh wait, ~2 years after that we'll have the next next generation... so you should probably wait even longer.
 

Rickyiez

Member
Jan 20, 2020
But still not worth it? Wanted to be able to play this wherever I wanted, and output in 2k as well.
Worth it or not depends on how much you're willing to spend. There's always a compromise in substituting a laptop for a desktop: the performance will be a downgrade. My point is that the mobile RTX 3060 isn't that bad, and with DLSS it's worth something, but you should expect it to perform around a desktop RTX 2060 SUPER at best.

You will want at least a 3070 laptop to play newer games at 1440p/2K comfortably.
 