
Next-Gen NVIDIA GeForce RTX 4090 With Top AD102 GPU Could Be The First Gaming Graphics Card To Break Past 100 TFLOPs

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
You're right, I'm more concerned about gaming cards, so I'm excluding the workstation ones. But simply by looking at the leaked specs (if they're accurate) you can tell the performance gap is so historically enormous that a card is clearly missing between them.

Clearly the listed 4080 specs look like nothing more than what the 4070 should be, and the real 4080 isn't coming at launch but soon after, named 4080 Ti instead, to put a higher-than-usual price on the xx80 card.
F, are they really gonna try to sell the 4070 as a 4080 and the 4060 as a 4070? If true, that's a really greedy move :messenger_angry: I'd expect the 4080 Ti to come out the door very soon after.


It's more that the 30 series kinda spoiled us.
The 3080 and its 12GB variant are basically range-toppers, the 3080 Ti is barely worth the extra dollars, and the 3090 is a VRAM machine that should have been badged as a Titan.

Nvidia is going back to its more usual lineup.
The xx80 Ti (Titan, true range-topper) was always on the second-biggest chip, the xx102.
The xx80 was some way down the queue on a smaller chip.
The xx70 was a cut-down xx80, which is why I always bought xx70s; the performance gap wasn't huge like it is from the 3070 to the 3080.

In terms of gaming performance we have to see how that cache helps with games, because it's an absolutely massive increase in cache.
In raw numbers it doesn't look like a big upgrade for 3080 owners.
I am hoping that the reduced memory interface makes these new cards pretty bad for mining but still really great for gaming... AMD's Infinity Cache helps them so much in gaming but isn't particularly good for mining.
Nvidia following suit, while still having "quick" memory as well, should get a good, near-free increase in performance.
Add in RTX IO and in time we should see the true strengths of Ada.

The 4090 looks like a joke of a card though. It's on the large memory interface and it's a near-full AD102.
If I had that kind of disposable income it would be a sure bet to last the generation.
For now I'll wait to see what the cache actually does in gaming scenarios and decide whether it's worth upgrading from Ampere... might skip Ada and just wait it out playing at console settings.
 

Xyphie

Member
What? Absolutely no chance. Where did you hear that, an Nvidia press release?

2x really isn't unreasonable.

You go from 84 -> 144 SMs on the unbinned chip, that's a ~70% increase. Then you'll likely get some clock increases. My RTX 3080 clocks just shy of ~2GHz out of the box. +500MHz shouldn't be unreasonable given that TSMC N7 peaks at like 2.8GHz on RDNA2. On top of that there'll be some architectural improvements.
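That back-of-the-envelope scaling can be sketched in a few lines (the SM counts and clocks are rumors, and the ~10% architectural uplift is a placeholder assumption, not a known figure):

```python
# Naive upper bound: performance scales with SM count x clock x arch uplift.
def scaling_estimate(sm_old, sm_new, clk_old_ghz, clk_new_ghz, arch_gain=1.10):
    return (sm_new / sm_old) * (clk_new_ghz / clk_old_ghz) * arch_gain

# 84 -> 144 SMs, ~2.0 -> ~2.5 GHz, plus an assumed ~10% architectural gain
est = scaling_estimate(84, 144, 2.0, 2.5)
print(f"~{est:.2f}x")  # ~2.36x as a ceiling; real games scale worse than this
```

So a ~2x real-world result would only need the SM and clock gains to hold, with games capturing most (not all) of the theoretical uplift.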
 

BlueHawk

Neo Member
The 2080 literally traded blows with a 1080 Ti in rasterization. The 3080 was a 3090 chip with a few faulty cores disabled. The 4080 is the 3070 chip because you idiots bought out $1500+ GPUs like they were free candy. You get arguably the largest piece of the puzzle in the CUDA core count: +20% core count, plus a probable if not guaranteed clock bump, plus memory changes, plus architectural improvements. A multi-game average of +40% is optimistic if anything. Performance doesn't scale linearly with more cores and higher clocks.



High-power PC gaming has actually been on a dramatic incline for several years now. The Steam Deck is still in the low hundreds of thousands of units while the 3070 and higher-tier desktop chips (i.e. new $800+ high-end GPUs) have sold around 6 million units. The share of Steam Deck buyers who also own a "high-power PC" is probably more than 95%. They're the same customer, in other words. The Steam Deck hasn't even begun, or pretended, to demonstrate demand outside of hardcore PC gamers.
Thank you for the informative reply! Such a nice change. I am quite surprised high-end GPUs have sold 6M+; everyone I know who used to be a PC gamer has moved to consoles or phones. Maybe it's because I'm older and have kids, and my circle is in similar circumstances? Interesting, nonetheless.

I'm kinda there with you. My 6800 XT draws less power than a 3080 and beats/matches it at my resolution (1440p). If I do go AMD for my next GPU, I hope they keep their performance-per-watt approach, honestly.
I had an RTX 2070 when I built my PC but got disappointed in the price for performance; it was very similar to an Xbox Series X, and I actually ended up selling my PC (at a profit, weirdly) and felt a bit burnt out from building again for a while. Hopefully prices go back to what they were like "in the old days", when PC was at one point cheaper than consoles.

This is why I'm hoping the Steam Deck becomes a huge success, because if it does, a Pro version or a "next gen" upgrade might just make PC gaming attractive for price/performance again, personally.
 

winjer

Member
2x really isn't unreasonable.

You go from 84 -> 144 SMs on the unbinned chip, that's a ~70% increase. Then you'll likely get some clock increases. My RTX 3080 clocks just shy of ~2GHz out of the box. +500MHz shouldn't be unreasonable given that TSMC N7 peaks at like 2.8GHz on RDNA2. On top of that there'll be some architectural improvements.

It wasn't the 7nm process that gave RDNA2 its higher clocks. RDNA1 was also on 7nm, and clocked much lower.
AMD had some of its CPU engineers go to the RTG division to improve the new GPU arch.
To what extent Nvidia is able to do the same is something we'll have to wait to find out.
 

Xyphie

Member
It wasn't the 7nm process that gave RDNA2 its higher clocks. RDNA1 was also on 7nm, and clocked much lower.
AMD had some of its CPU engineers go to the RTG division to improve the new GPU arch.
To what extent Nvidia is able to do the same is something we'll have to wait to find out.

There was significant node maturity from N7 to N7P. You can see this very clearly with all products on that node, not just AMD's but Apple's, Qualcomm's, etc. It's the same reason Zen 3 clocks significantly higher than Zen 2.
 

winjer

Member
There was significant node maturity from N7 to N7P. You can see this very clearly with all products on that node, not just AMD's but Apple's, Qualcomm's, etc. It's the same reason Zen 3 clocks significantly higher than Zen 2.

N7P didn't bring an increase in clock speeds, just slightly better performance and power efficiency.

TSMC's N7P uses the same design rules as the company's N7, but features front-end-of-line (FEOL) and middle-end-of-line (MOL) optimizations that enable customers either to boost performance by 7% at the same power or to lower power consumption by 10% at the same clocks. The process technology is already available to TSMC customers, the contract chipmaker revealed at the 2019 VLSI Symposium in Japan, yet the company does not seem to advertise it broadly.

N7P uses proven deep ultraviolet (DUV) lithography and does not offer any transistor density improvements over N7. Those TSMC clients that need an ~18-20% higher transistor density are expected to use the N7+ and N6 process technologies, which use extreme ultraviolet (EUV) lithography for several layers.
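To put the quoted N7P figure in perspective, here's the arithmetic (the 1.9 GHz baseline is a hypothetical RDNA1-class clock chosen for illustration, not a measured figure):

```python
# What TSMC's quoted N7 -> N7P gain means in clock terms (figure from the quote).
base_clock_ghz = 1.9           # hypothetical RDNA1-class clock, for illustration
perf_uplift = 0.07             # +7% performance at the same power

same_power_clock = base_clock_ghz * (1 + perf_uplift)
print(f"{same_power_clock:.2f} GHz")  # ~2.03 GHz: nowhere near RDNA2's ~2.5+ GHz
```

Which supports the point above: a 7% node-level uplift alone can't explain RDNA2's clock jump.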
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
2x really isn't unreasonable.

You go from 84 -> 144 SMs on the unbinned chip, that's a ~70% increase. Then you'll likely get some clock increases. My RTX 3080 clocks just shy of ~2GHz out of the box. +500MHz shouldn't be unreasonable given that TSMC N7 peaks at like 2.8GHz on RDNA2. On top of that there'll be some architectural improvements.

That's for the 4090.
We are talking about 4080s.
Their increases aren't that large going from the 3080 to the 4080.*
Even worse if you consider the 3080 12G.

Nvidia going back to reserving the xx102 for the 80 Ti and above makes the regular 4080 not look that attractive for anyone already on a 3080.
I wouldn't be shocked if the 4080 merely trades blows with the 3090 Ti; we expect gen-on-gen xx70s to beat the range-topper of the previous generation.
It's just down to how much of a difference the cache makes.
The 3080 12G is on a 384-bit interface; the 4080 is likely 256-bit.
If the cache can't make that up, the 3080 12G will be within spitting distance at higher resolutions.

*Numbers are based on rumored specs.
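The raw-bandwidth gap behind that "spitting distance" point works out as follows (the 4080's 256-bit bus and 21 Gbps memory are rumored, not confirmed):

```python
# Raw memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps) = GB/s.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

rtx_3080_12g = bandwidth_gbs(384, 19.0)   # 912.0 GB/s (known card)
rumored_4080 = bandwidth_gbs(256, 21.0)   # 672.0 GB/s, before any cache benefit
print(rtx_3080_12g, rumored_4080)
```

On paper the older card has ~36% more raw bandwidth, so the 4080 would lean entirely on its larger cache at high resolutions.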
 

Shtef

Member
I was planning to upgrade my GTX 970 to a 3060 Ti this summer. I reckon now is not the best time to get it?
 

Hezekiah

Member
2x really isn't unreasonable.

You go from 84 -> 144 SMs on the unbinned chip, that's a ~70% increase. Then you'll likely get some clock increases. My RTX 3080 clocks just shy of ~2GHz out of the box. +500MHz shouldn't be unreasonable given that TSMC N7 peaks at like 2.8GHz on RDNA2. On top of that there'll be some architectural improvements.
No mate, talking about the 4080, so looking at 68 SMs to 80-82 SMs.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I was planning to upgrade my GTX 970 to a 3060 Ti this summer. I reckon now is not the best time to get it?
The 4070 might be out by the end of July and will probably cost near the same price as a 3060 Ti.
Might as well wait it out, honestly... unless you can get a really good deal and sell right before the paper launch.

Pray to the elder gods Nvidia doesn't decide to simply up the MSRP on everything because they have seen people buy $400 GPUs for $900.
I considered a $500 MSRP high for an xx70 already; if they up the xx70 any more, fuck higher resolutions, I'll be sticking with 1440p till I die.
 
It wasn't the 7nm process that gave RDNA2 its higher clocks. RDNA1 was also on 7nm, and clocked much lower.
AMD had some of its CPU engineers go to the RTG division to improve the new GPU arch.
To what extent Nvidia is able to do the same is something we'll have to wait to find out.

Didn't Nvidia do this in the past with Pascal?
 

tusharngf

Member

NVIDIA GeForce RTX 4090 Ti & RTX 4090 'Preliminary' Specs:​

Graphics Card Name | NVIDIA GeForce RTX 4090 Ti | NVIDIA GeForce RTX 4090 | NVIDIA GeForce RTX 3090 Ti | NVIDIA GeForce RTX 3090
GPU Name | Ada Lovelace AD102-350? | Ada Lovelace AD102-300? | Ampere GA102-350 | Ampere GA102-300
Process Node | TSMC 4N | TSMC 4N | Samsung 8nm | Samsung 8nm
Die Size | ~600mm2 | ~600mm2 | 628.4mm2 | 628.4mm2
Transistors | TBD | TBD | 28 Billion | 28 Billion
CUDA Cores | 18432 | 16128 | 10752 | 10496
TMUs / ROPs | TBD / 384 | TBD / 384 | 336 / 112 | 328 / 112
Tensor / RT Cores | TBD / TBD | TBD / TBD | 336 / 84 | 328 / 82
Base Clock | TBD | TBD | 1560 MHz | 1400 MHz
Boost Clock | ~2800 MHz | ~2600 MHz | 1860 MHz | 1700 MHz
FP32 Compute | ~103 TFLOPs | ~90 TFLOPs | 40 TFLOPs | 36 TFLOPs
RT TFLOPs | TBD | TBD | 74 TFLOPs | 69 TFLOPs
Tensor-TOPs | TBD | TBD | 320 TOPs | 285 TOPs
Memory Capacity | 24 GB GDDR6X | 24 GB GDDR6X | 24 GB GDDR6X | 24 GB GDDR6X
Memory Bus | 384-bit | 384-bit | 384-bit | 384-bit
Memory Speed | 24.0 Gbps | 21.0 Gbps | 21.0 Gbps | 19.5 Gbps
Bandwidth | 1152 GB/s | 1008 GB/s | 1008 GB/s | 936 GB/s
TGP | 600W | 450W | 450W | 350W
Price (MSRP / FE) | $1999 US? | $1499 US? | $1999 US | $1499 US
Launch (Availability) | July 2022? | July 2022? | 29th March 2022 | 24th September 2020


Just for comparison's sake:

  • NVIDIA GeForce RTX 4090 Ti: ~103 TFLOPs (FP32) (Assuming 2.8 GHz clock)
  • NVIDIA GeForce RTX 4090: ~90 TFLOPs (FP32) (Assuming 2.8 GHz clock)
  • NVIDIA GeForce RTX 3090 Ti: 40 TFLOPs (FP32) (1.86 GHz Boost clock)
  • NVIDIA GeForce RTX 3090: 36 TFLOPs (FP32) (1.69 GHz Boost clock)
Source: https://wccftech.com/roundup/nvidia-geforce-rtx-4090-ti-rtx-4090/
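For anyone wondering where those headline numbers come from: FP32 compute is just CUDA cores × 2 ops per clock (FMA) × clock speed. A quick sketch using the leaked figures (the Ada core counts and 2.8 GHz clock are rumors):

```python
# FP32 TFLOPs = CUDA cores * 2 ops per clock (FMA) * clock (GHz) / 1000.
def fp32_tflops(cuda_cores, boost_ghz):
    return cuda_cores * 2 * boost_ghz / 1000

print(f"{fp32_tflops(18432, 2.8):.1f}")   # 4090 Ti rumor: ~103.2 TFLOPs
print(f"{fp32_tflops(16128, 2.8):.1f}")   # 4090 rumor:    ~90.3 TFLOPs
print(f"{fp32_tflops(10752, 1.86):.1f}")  # 3090 Ti:       ~40.0 TFLOPs
```

Note the ~90 TFLOPs figure for the 4090 only holds at an assumed ~2.8 GHz, above the ~2600 MHz boost in the table.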
 

Hezekiah

Member
The 4070 might be out by the end of July and will probably cost near the same price as a 3060 Ti.
Might as well wait it out, honestly... unless you can get a really good deal and sell right before the paper launch.

Pray to the elder gods Nvidia doesn't decide to simply up the MSRP on everything because they have seen people buy $400 GPUs for $900.
I considered a $500 MSRP high for an xx70 already; if they up the xx70 any more, fuck higher resolutions, I'll be sticking with 1440p till I die.
I can't see them not doing it.

They can probably just inflate prices, and then drop them if AMD delivers on its promises.
 

Sanepar

Member
The 4070 will be capped at 12GB of RAM. Nvidia has insisted on shortchanging the xx70 on RAM ever since the 970, forcing you to replace the GPU every gen. The power will be there, but 12GB will limit its lifetime or force you down to a lower resolution.
 

Celcius

°Temp. member

If this is real then the 4090 and 4090 Ti performance is going to be monstrous. Ouch at that 600 watts though.
 

paolo11

Member

Noob question: I have a 3090 Ti and a Ryzen 5900X with a 750W PSU. Do I need to upgrade my PSU if I go for an RTX 4090?
 
If this is real then the 4090 and 4090 Ti performance is going to be monstrous. Ouch at that 600 watts though.
It just hit me. Only six short years ago the top GPU, the Pascal Titan, was a little over 10 TF, and now this year we're at over 100 TF :messenger_face_screaming: What will GPU TF numbers look like in another six years?
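Taking the Titan X Pascal at ~11 TF, the implied annual growth rate works out like this (a rough sketch, treating the ~103 TF leak as real):

```python
# Implied annual growth: ~11 TF (Titan X Pascal, 2016) -> ~103 TF (2022 rumor).
start_tf, end_tf, years = 11.0, 103.0, 6
cagr = (end_tf / start_tf) ** (1 / years) - 1
print(f"~{cagr:.0%} per year")  # roughly +45% a year, if the leak holds
```

Whether that rate holds for another six years is exactly the open question.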
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
The 4070 will be capped at 12GB of RAM. Nvidia has insisted on shortchanging the xx70 on RAM ever since the 970, forcing you to replace the GPU every gen. The power will be there, but 12GB will limit its lifetime or force you down to a lower resolution.

They basically had/have no choice but to stick it with 12GB... it was either that or 24GB, and there's no way they were gonna give the xx70 24GB.

The xx70 is on a 192-bit interface but has increased cache to make up for the reduced interface.
Considering it'll likely be a 3080-level card, it's kinda logical that it's a straight-up match for the 3080 12G.
 

Orta

console wars 2020 - participant
I have a 2070 Super and run everything well. That's not even a top-tier card.

Only in the past few months have I started turning off a few graphical effects on my battle-scarred old 1070! (Gaming at 1080p.)

I'm as much a PC gaming nerd as the next guy, but is there any need for these cards right now?

*Saying that, I'd be first in line for a 4090 Ti if I wasn't pish poor.
 

Sanepar

Member
They basically had/have no choice but to stick it with 12GB... it was either that or 24GB, and there's no way they were gonna give the xx70 24GB.

The xx70 is on a 192-bit interface but has increased cache to make up for the reduced interface.
Considering it'll likely be a 3080-level card, it's kinda logical that it's a straight-up match for the 3080 12G.
Do you think a 4070 will perform like a 3080? Only 20% more? Or like a 3070 Ti? A 3090 Ti at minimum is my guess, probably 5-10% more than a 3090 Ti.
 

CuNi

Member
I'm sticking with my 3080 even though it's only 10GB, simply because I can undervolt it, losing ~5% performance but lowering power draw by somewhere between 70 and 100 watts.

By the sound of it, the 4xxx series simply brute-forces more performance with more power. I've already been somewhat disappointed by the heat generation of the 5900X, and seeing this trend continue is rather disappointing.

I hope the 5xxx, or at the very latest the 6xxx, cards will slowly start to be less power-hungry again, though with the new power spec and new connectors being introduced, I am not sure that will happen any time soon, sadly.
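The perf-per-watt math on that undervolt looks like this (illustrative numbers: a 320W stock 3080, and the midpoint of the quoted 70-100W saving):

```python
# Perf-per-watt gain from the described undervolt (illustrative numbers).
stock_perf, stock_watts = 1.00, 320        # RTX 3080 at its 320 W stock TGP
uv_perf = stock_perf * 0.95                # ~5% performance lost
uv_watts = stock_watts - 85                # midpoint of the quoted 70-100 W saving

gain = (uv_perf / uv_watts) / (stock_perf / stock_watts) - 1
print(f"+{gain:.0%} perf per watt")        # ~29% better efficiency for -5% perf
```

A large efficiency win for a small performance cost, which is the whole appeal of undervolting Ampere.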
 
With 100 TFLOP performance, do you guys think there is going to be a push for 8K?

With all this brute force, what are gamers going to most likely use these cards for?

A) 4K with ray tracing, HDR, high settings at 120-144Hz on high-refresh gaming monitors and 4K UHDTVs with VRR 120Hz
B) 8K with ray tracing, HDR, high settings at 60-80Hz on 8K 60Hz UHDTVs
 

Senua

Member
With 100 TFLOP performance, do you guys think there is going to be a push for 8K?

With all this brute force, what are gamers going to most likely use these cards for?

A) 4k with Raytracing, HDR, High settings at 120hz-144hz on gaming monitors with high refresh rates and UHDTV's with VRR 120hz
B) 8K with Raytracing, HDR, High settings at 60hz-80hz on UHDTVs with 8K 60hz
8K can suck my saggy nads. We can't even do 4K with all the bells and whistles yet.
 

Sanepar

Member
With 100 TFLOP performance, do you guys think there is going to be a push for 8K?

With all this brute force, what are gamers going to most likely use these cards for?

A) 4k with Raytracing, HDR, High settings at 120hz-144hz on gaming monitors with high refresh rates and 4K UHDTV's with VRR 120hz
B) 8K with Raytracing, HDR, High settings at 60hz-80hz on UHDTVs with 8K 60hz
Current cards are not enough for 4K. No current card can max out Cyberpunk at 4K at even 60 fps.

These next-gen GPUs will probably be the first to offer decent performance in all games at 4K/60 fps.

8K is useless.
 
I think the amount of people with "free money" being severely limited this time will increase the chances of getting a GPU at launch. Last time, when the 3000 series launched, so many people had the money to buy GPUs; I had never seen such buying power on the move for one component. Imagine getting $2000+ for free when you're out of school and you've always wanted a powerful GPU.

This time, the plebs can take a back seat. Couple this with crypto being on the decline, and it could be feasible to get a GPU at launch at MSRP/"normal" sale prices. Depending on when the 4090 launches, I will step up to it (unless it is more than 450 watts, fuck that). I have two and a half months or so left. *crosses fingers*
 

John Wick

Member
With 100 TFLOP performance, do you guys think there is going to be a push for 8K?

With all this brute force, what are gamers going to most likely use these cards for?

A) 4k with Raytracing, HDR, High settings at 120hz-144hz on gaming monitors with high refresh rates and 4K UHDTV's with VRR 120hz
B) 8K with Raytracing, HDR, High settings at 60hz-80hz on UHDTVs with 8K 60hz
Definitely 8k with Ray tracing at 60fps.........
Lmfao..........
 

John Wick

Member
I think the amount of people with "free money" being severely limited this time will increase the chances of getting a GPU at launch. Last time, when the 3000 series launched, so many people had the money to buy GPUs; I had never seen such buying power on the move for one component. Imagine getting $2000+ for free when you're out of school and you've always wanted a powerful GPU.

This time, the plebs can take a back seat. Couple this with crypto being on the decline, and it could be feasible to get a GPU at launch at MSRP/"normal" sale prices. Depending on when the 4090 launches, I will step up to it (unless it is more than 450 watts, fuck that). I have two and a half months or so left. *crosses fingers*
I've bought three GPUs since 2016 and sold them every time for at least double. There is always a thirst for new, powerful GPUs.
 
I'm thinking of grabbing a 4090 (cause why not) and passing my 3090 down to my wife, who currently uses a 3080. She doesn't game nearly as much as me, but a 3090 is still insanely good; plus she uses a 1440p monitor, and I'm on a 4K one.
 

VFXVeteran

Banned
With 100 TFLOP performance, do you guys think there is going to be a push for 8K?

With all this brute force, what are gamers going to most likely use these cards for?

A) 4k with Raytracing, HDR, High settings at 120hz-144hz on gaming monitors with high refresh rates and 4K UHDTV's with VRR 120hz
B) 8K with Raytracing, HDR, High settings at 60hz-80hz on UHDTVs with 8K 60hz
They aren't that strong. RT will still need DLSS in order to be viable. I will probably skip this generation and wait two more years. At the risk of sounding like a broken record, I don't think any of these games will have features good enough to warrant upgrading to the 4x00 series.
 
What makes me concerned for AMD/RDNA3 is that Ampere is fabbed at Samsung, while Ada should switch to TSMC.
Not only is the process newer, TSMC's process is better than Samsung's.
Today Qualcomm announced a refresh of the current Snapdragon. It's the same chip but with higher frequencies and lower power consumption. This was achieved by switching production to TSMC: https://www.anandtech.com/show/1739...n-1-moving-to-tsmc-for-more-speed-lower-power

So, no matter how good Ada is as an architecture, Nvidia will gain a lot, proportionally even more, from the newer process.
 

WitchHunter

Member
Weren't the rumors updated so that RX 7000 is expected to peak at around 75 TFLOPs?

If AMD really has dropped the ball and is weaker in pure raster while already being weaker in RT, AMD might have a tough fight coming up.

AMD might also have a Navi 30(?) chip with the full complement of compute units active that actually competes with the 4090.

But man, 100 TFLOPs... Nvidia really aren't fucking around, are they?
With mining on the decline and LHR 3.0 already proving to be a pretty big problem for miners, we might actually be able to purchase these cards near MSRP (wishful thinking).

Here's hoping I can snag an xx80 at MSRP within the first six months, then forget upgrading my GPU till the next generation of consoles, cuz I only play at 3440x1440.
Wasn't LHR cracked by the NiceHash group?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Do you think a 4070 will perform like a 3080? Only 20% more? Or like a 3070 Ti? A 3090 Ti at minimum is my guess, probably 5-10% more than a 3090 Ti.
On paper, the 4070 should match the 3080 10G.
We don't know how the increased cache plus the narrower memory interface will actually work.
It's all speculation until we actually have them in our PCs.
Beyond the 4090 it doesn't look like a huge gain gen-on-gen.
3080 owners basically have to skip the gen or get a 4090... or the 80 Ti we know is coming; AD102 is huge.
Yeah, remember the Nvidia hack? They didn't pay, so now everyone has the answers to LHR... even LHRv3 is basically cracked.
Come LHRv4, and considering every card is on a narrower memory interface, mining might be dead on these new cards... good for us.
 

Sanepar

Member
On paper, the 4070 should match the 3080 10G.
We don't know how the increased cache plus the narrower memory interface will actually work.
It's all speculation until we actually have them in our PCs.
Beyond the 4090 it doesn't look like a huge gain gen-on-gen.
3080 owners basically have to skip the gen or get a 4090... or the 80 Ti we know is coming; AD102 is huge.

Yeah, remember the Nvidia hack? They didn't pay, so now everyone has the answers to LHR... even LHRv3 is basically cracked.
Come LHRv4, and considering every card is on a narrower memory interface, mining might be dead on these new cards... good for us.
What you're saying doesn't make sense. A 3070 Ti is 5-10% slower than a 3080 at 300W. The node shrink alone, from Samsung's crap 8nm to TSMC 5nm, will give us a 40% improvement in performance per watt. Considering arch improvements, and with all the leakers saying the improvement will be around 60-70%, the 4070 will probably be on par with a 3090 Ti or 5-10% ahead.
 