
New Nvidia RTX 4000 Ada Lovelace Cards Announced | RTX 4090 ($1,599) October 12th | RTX 4080 ($1,199)

Kenpachii

Member


Don't see the issue.

We've had 2x 8-pin connectors pulling 300 W out of a PSU for a while now; 600 W over four connectors isn't going to change anything. The rail issue isn't really a problem either: you either have the capacity on your power supply or you don't, and most PSUs are probably single-rail at this point anyway.
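A minimal back-of-the-envelope sketch of that power math, assuming the usual ballpark ratings of roughly 150 W per PCIe 8-pin connector and 600 W for the 12VHPWR plug fed by a 4x 8-pin adapter:

```python
# Rough sanity check of the connector power budgets discussed above.
# Assumed ratings: ~150 W per PCIe 8-pin, up to 600 W for 12VHPWR.

EIGHT_PIN_W = 150      # assumed rating per PCIe 8-pin connector
TWELVE_VHPWR_W = 600   # assumed maximum rating of the 12VHPWR connector

two_eight_pin = 2 * EIGHT_PIN_W   # 300 W, the familiar 2x 8-pin setup
adapter_equiv = 4 * EIGHT_PIN_W   # 600 W, i.e. the 4x 8-pin adapter

print(f"2x 8-pin budget:        {two_eight_pin} W")
print(f"4x 8-pin adapter:       {adapter_equiv} W")
print(f"Matches 12VHPWR rating: {adapter_equiv == TWELVE_VHPWR_W}")
```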

The bending issue can easily be fixed with a more robust connector/plug, and with a "do not bend" warning label it will be perfectly fine.

As for the 30-cycle plug/unplug limit: I had a PSU for 12 years and probably plugged and unplugged it about 20 times in total. So honestly, who even plugs that thing in and out more than 30 times?

So yeah, seems like a whole lot of nothing.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
You don't attach your GPU to the motherboard, you attach the motherboard to the GPU.

ASUS-ROG-STRIX-OC-1.jpg
 

Danknugz

Member
It's so hard not to pull the trigger on a 4090... but I realize I have no games worthy of needing it. My 3090 is still a beast, and the games I'm looking forward to most (Starfield, Elder Scrolls 6, Witcher 3 Remastered, Witcher 4, GTA 6, Squadron and so on) are so far off that by the time any of them are out we'll be on the 5xxx series, or at least the 4090 Ti will be on shelves.

But man, do I like new shiny expensive hardware...
You could resell your 3090 for more the sooner you jump on the 4090, if you do that kind of thing.
 

Chiggs

Member

skneogaf

Member
Someone offered me £800 for my 3090 Ti so I can put the money towards a 4090.

3090 Tis are £1100 now, so that doesn't seem so bad, but I bought mine for £1879 five months ago.

My 3090 Ti isn't quite fast enough for 4K@120Hz; the fans go nuts trying to hit 120fps, and I hate the sound of that.

I'm likely to keep the 3090 Ti, but that means I'll own all of these:

1080ti
2080ti
3090ti
4090

NVIDIA keeps me working class!
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I had a dream last night that I got a 4090; it was the size of a PS2, so I couldn't fit it in my case. Then I woke up. 🤣
Ironically the 4090 Aorus Master is bigger than a PS2.
So yeah....that dream of yours might have been a premonition.

[Images: Gigabyte GeForce RTX 4090 AORUS Master]




Those fans are 110mm each, basically case fans... this thing is taller (longer) than some cases.
 
Last edited:

yamaci17

Member
Choose wisely :messenger_beaming:

M7Jo5wR.png
2070S, tbh. I wish we had a 16 GB 3070; that would last me a whole gen.

I will wait until they actually make a no-nonsense 16 GB 70-class card at a 500-600 dollar price point.

Until then, I'll carry on with the 3070: play with Series S equivalent textures if I have to, or give up on ray tracing.

I really don't want to spend any money on 10-12 GB cards at this point, especially after seeing the artificial 80% allocation limits in Forza Horizon 5 (Xbox exclusive) and Spider-Man (PS exclusive). Their justification for doing this? Microsoft's DX12 documentation says to leave healthy headroom for other apps. So yeah, having a 10 GB GPU does not give you a 10 GB budget, which means the 3080 doesn't even have a VRAM budget equivalent to the Series X and PS5. Dreadful.

Both games cap out at 80% of VRAM, so they won't use more than 6.4 GB on an 8 GB card. 10 GB means you get an 8 GB usable budget, and 12 GB means 9.6 GB. 9.6 GB sounds somewhat safe, but then you run into situations where the card has more grunt than its VRAM. Say 10 GB is meant for Series X/PS5 level settings; the 4080 is much more powerful than those consoles, so Series X/PS5 equivalent settings should already fill that 9.6 GB usable budget, and how do you then put something like an RT Overdrive mode on top of it? Only last-gen games like Cyberpunk plus heavy RT may have enough headroom to make use of 10 GB of usable VRAM, and that will change too past 2024, so the point of having a card capable of super RT Overdrive modes would also be obsoleted.
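A minimal sketch of that budget arithmetic, assuming the 80% allocation cap described above (the cap is the poster's observation about these two games, not a documented constant):

```python
# Usable VRAM under an assumed 80% allocation cap, as described above.
ALLOCATION_CAP = 0.80  # fraction of physical VRAM the game allows itself

def usable_vram_gb(physical_gb: float, cap: float = ALLOCATION_CAP) -> float:
    """Budget a game would actually allocate out of `physical_gb`."""
    return physical_gb * cap

for card_gb in (8, 10, 12, 16):
    print(f"{card_gb:2d} GB card -> {usable_vram_gb(card_gb):.1f} GB usable")
# 8 -> 6.4, 10 -> 8.0, 12 -> 9.6, 16 -> 12.8
```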

For example, take Series X and FH5: with Series X equivalent settings at 4K, the game maxes out a 10 GB budget (which is why 8 and 10 GB cards are seeing rainbow textures). Now imagine adding a heavy RT path on top of that; you'd definitely need more than 12 GB.

That is why the 4080 12 GB is a scam product. Even if it is powerful, it is a scam, not only on its power/performance ratio but also by virtue of its VRAM.

NVIDIA expects to sell a product that should outperform the Series X/PS5, but with a Series X/PS5 equivalent memory budget. It is simply illogical. It only works, and will only keep working, in current games that tap out at 7-8 GB.

Cyberpunk with the RT Overdrive mode at 4K/DLSS Performance should already push 12 GB to its maximum. Now imagine a next-gen game with enhanced textures that needs 16 GB: all of a sudden you have a GPU capable of running it but without enough VRAM. So much for "super high quality new ray tracing tech".

At this point they're really trolling the entire industry with their VRAM logic. Imagine a GTX 1080 hamstrung by only a 4 GB budget: in practice it would have to stick to PS4-equivalent settings despite having enormous power by comparison. That is exactly what is happening with a potential 10 GB 4070 and the (so-called) 12 GB 4080.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
2070S, tbh. I wish we had a 16 GB 3070; that would last me a whole gen.

I will wait until they actually make a no-nonsense 16 GB 70-class card at a 500-600 dollar price point.

Until then, I'll carry on with the 3070: play with Series S equivalent textures if I have to, or give up on ray tracing.

I really don't want to spend any money on 10-12 GB cards at this point, especially after seeing the artificial 80% allocation limits in Forza Horizon 5 (Xbox exclusive) and Spider-Man (PS exclusive). Both games cap out at 80% of VRAM, so they won't use more than 6.4 GB on an 8 GB card. 10 GB means you get an 8 GB usable budget, and 12 GB means 9.6 GB. 9.6 GB sounds somewhat safe, but then you run into situations where the card has more grunt than its VRAM. Say 10 GB is meant for Series X/PS5 level settings; the 4080 is much more powerful than those consoles, so Series X/PS5 equivalent settings should already fill that 9.6 GB usable budget, and how do you then put something like an RT Overdrive mode on top of it? Only last-gen games like Cyberpunk plus heavy RT may have enough headroom to make use of 10 GB of usable VRAM, and that will change too past 2024, so the point of having a card capable of super RT Overdrive modes would also be obsoleted.

For example, take Series X and FH5: with Series X equivalent settings at 4K, the game maxes out a 10 GB budget (which is why 8 and 10 GB cards are seeing rainbow textures). Now imagine adding a heavy RT path on top of that; you'd definitely need more than 12 GB.

That is why the 4080 12 GB is a scam product. Even if it is powerful, it is a scam, not only on its power/performance ratio but also by virtue of its VRAM.

NVIDIA expects to sell a product that should outperform the Series X/PS5, but with a Series X/PS5 equivalent memory budget. It is simply illogical. It only works, and will only keep working, in current games that tap out at 7-8 GB.

Cyberpunk with the RT Overdrive mode at 4K/DLSS Performance should already push 12 GB to its maximum. Now imagine a next-gen game with enhanced textures that needs 16 GB: all of a sudden you have a GPU capable of running it but without enough VRAM. So much for "super high quality new ray tracing tech".

At this point they're really trolling the entire industry with their VRAM logic. Imagine a GTX 1080 hamstrung by only a 4 GB budget: in practice it would have to stick to PS4-equivalent settings despite having enormous power by comparison. That is exactly what is happening with a potential 10 GB 4070 and the (so-called) 12 GB 4080.
You've got a few options, really.
  • Skip the generation - next-gen cards will hopefully start at 16GB.
  • Buy the 4080 16GB - at 1200+ dollars that sounds like a scam to me.
  • Wait for a 4080 Ti 20GB - there is so much space left on that AD102 die it's a given... still probably a scam.
  • Buy a cheap 3090/Ti and ride the generation out.
  • Save and get the 4090 - believe it or not, the best value Ada card.

Cuz there is not going to be a ~500-600 dollar 4070 that makes sense.
The 4070 they will release is that 10GB garbo; it'll be around 3080 Ti level in terms of performance and probably cost 700 dollars... so you might as well just get a cheap 3080 Ti/3090 if you don't mind missing out on DLSS Frame Generation.
 

GHG

Gold Member



Someone who wants to do a little write-up might want to break this out into a new thread, but I'm not doing it for such a blatant puff piece.

You should have seen how much Alex was clearly dying inside when John was critical of Nvidia on their latest podcast.

As always, wait for real independent reviews.
 
Last edited:

GymWolf

Member
You don't attach your GPU to the motherboard, you attach the motherboard to the GPU.

ASUS-ROG-STRIX-OC-1.jpg
Ironically the 4090 Aorus Master is bigger than a PS2.
So yeah....that dream of yours might have been a premonition.

[Images: Gigabyte GeForce RTX 4090 AORUS Master]




Those fans are 110mm each, basically case fans... this thing is taller (longer) than some cases.
I guess you need a big full-tower case for this shit, right?

Midtower ain't gonna do it...
 

Killer8

Member

Nvidia snake oil strikes again. 2xxx/3xxx owners should sleep easy tonight knowing they are missing out on absolutely nothing.

It's so pointless seeing a 4x gain in performance with little improvement in latency. In all test cases it's worse than DLSS2 with Reflex (and in some cases the latency is even worse than native lmao).

Nice if you want to pay a premium for smoother-looking images, I suppose.
 

Fredrik

Member
Steam Deck is awesome; I will deffo upgrade to a 6800U model though, as I'm completely sold on and invested in the platform now. Mind-blowing price of kit.

A 4090 will be mine at some point. It's just ridiculous though, almost the size of a PS5.
When I noticed that the Founders Edition 4090 is the same size as my 3070 Ti Suprim X, I stopped worrying about the size; it fits nicely in an old Corsair Obsidian 550D 👍
No idea about the partner cards though.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I guess you need a big full-tower case for this shit, right?

Midtower ain't gonna do it...
Nah, midtowers are still fine.
A lot of them will still hold up to 40cm GPUs.
The Aorus Master is actually getting close to that... which is scary if the 4090 Ti ends up being any bigger.

I sold my Evolv X for a P600S (yes, I know I could have cut holes into the front of the Evolv, but fuck it, let it be someone else's problem).


Fuck the 4-slot shit though.
If by some miracle or curse I find myself buying one of those 4-slot behemoths... I'm going to have to nut up and buy a Torrent.
I really like the P600S; I don't want to sell it.
fractal-deisng-torrent-review-banner.jpg


Nvidia snake oil strikes again. 2xxx/3xxx owners should sleep easy tonight knowing they are missing out on absolutely nothing.

It's so pointless seeing a 4x gain in performance with little improvement in latency. In all test cases it's worse than DLSS2 with Reflex (and in some cases the latency is even worse than native lmao).

Nice if you want to pay a premium for smoother-looking images, I suppose.
Frame Generation is not for latency-dependent games.
So definitely not for any FPS, but if you are playing a slower "cinematic" experience, DLSS Frame Generation could be beneficial for playing at 4K or even 8K.
 

Killer8

Member
DF's preview suggests there are a lot fewer artifacts with Nvidia's method than with the traditional interpolation techniques found in video software like Topaz. Even if this is ultimately worthless for games, I'd actually kill for something that could interpolate videos this well on the GPU.
 
Yep, looks like it adds frames, but the latency stays the same. Which is actually expected.
DLSS 3's "frame generation" feature not only adds frames compared to DLSS 2, but increases input delay to the point where it gets close to native resolution, as if DLSS was entirely OFF. Remember, DLSS 3 has DLSS 2 reconstruction active as well.
 

lukilladog

Member
I don't believe for a second that DLSS 3 really requires 40-series hardware. Many, many years ago, when they started showing AI used to process video, I said they could use it to double frame rates in games... obviously they knew it too and have been working on it for several years. The optical flow accelerator is also part of Ampere, but now they're saying it's too slow. Nobody needs 4x the output anyway, right? Maybe 2x will do for us Ampere owners?
 
Last edited:

CrustyBritches

Gold Member
I'm probably a retard, but how can the latency be faster than native?
Higher frame rate = lower latency. DLSS starts from a lower resolution and uses AI to upscale the image to the target res. For 4K, the base resolution might be 1080p, which is then AI-upscaled to 4K, whereas native 4K has to render a true 4K frame, which takes longer.

DLSS 3 additionally inserts AI-generated frames, which increases input latency compared to DLSS 2, which only does AI upscaling on rendered frames.
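A minimal sketch of the frame-time arithmetic behind that, with invented example frame rates rather than measured numbers:

```python
# Toy illustration: latency tracks frame time, so fewer pixels -> higher fps
# -> lower per-frame latency. The fps values below are assumptions, not benchmarks.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

native_4k_fps = 45   # hypothetical native 4K frame rate
dlss_perf_fps = 90   # hypothetical rate when rendering internally at 1080p

print(f"Native 4K : {frame_time_ms(native_4k_fps):.1f} ms per frame")
print(f"DLSS Perf : {frame_time_ms(dlss_perf_fps):.1f} ms per frame")
# Each rendered frame arrives sooner, so input is sampled and shown with
# less delay than at native resolution.
```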
 
Last edited:
Motion interpolation isn't something new, but TV motion upscalers add around 100ms on average (totally unplayable results). Only recently have Samsung TVs started adding motion interpolation to game mode with somewhat acceptable results (20ms), but here DLSS 3 basically offers input lag similar to native resolution without motion interpolation, and that in itself is truly revolutionary. What's also interesting is that the DLSS 3 tech uses TAA motion vectors, so the picture should be less prone to motion artifacts than TV motion upscalers.
 

Diogodx

Neo Member
I'm probably a retard, but how can the latency be faster than native?
Native 4K is 3840x2160.
DLSS 3 runs in Performance mode, which for 4K renders at 1920x1080.

Lowering the resolution increases the frame rate, which decreases latency.

Compare DLSS 3 with DLSS 2 + Reflex and you see how much latency the interpolated frames add. Now add that on top of native rendering, or even DLSS Quality, and I think games will feel like they are running at lower than native fps.

Just imagine the situation: the game looks good and is smooth at 60 fps, and I go and enable the interpolated frames. Great! The game now runs at 90 fps according to my fps counter, but when I start to play it feels like it's running at 40 fps.
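A minimal sketch of that mismatch as a toy model; the doubling factor, the interpolation overhead, and the drop in base frame rate are all illustrative assumptions, not measurements:

```python
# Toy model of frame generation: each rendered frame gets one interpolated
# frame inserted after it, so displayed fps roughly doubles, but input is
# only sampled on rendered frames (plus a small interpolation delay).

def frame_gen(base_fps: float, interp_overhead_ms: float = 5.0):
    displayed_fps = base_fps * 2                          # what the fps counter shows
    input_latency_ms = 1000.0 / base_fps + interp_overhead_ms
    felt_fps = 1000.0 / input_latency_ms                  # responsiveness-equivalent rate
    return displayed_fps, felt_fps

# Assume the 60 fps game drops to a ~45 fps base once frame generation's cost is paid.
displayed, felt = frame_gen(base_fps=45)
print(f"fps counter: ~{displayed:.0f} fps, input feels closer to ~{felt:.0f} fps")
```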
 

Crayon

Member
I'd have to try the frame-generated mode to know for sure, but I'm not imagining a great experience playing a fake 60fps game with the logic running at 30fps. I know 30 looks very choppy when you switch right from 60, but I get used to the look in a couple of hours; it's the response that hurts more. A lotta people won't care, so no problem for them. I think the mismatch will be unpleasant, tho.
 
Native 4K is 3840x2160.
DLSS 3 runs in Performance mode, which for 4K renders at 1920x1080.

Lowering the resolution increases the frame rate, which decreases latency.

Compare DLSS 3 with DLSS 2 + Reflex and you see how much latency the interpolated frames add. Now add that on top of native rendering, or even DLSS Quality, and I think games will feel like they are running at lower than native fps.

Just imagine the situation: the game looks good and is smooth at 60 fps, and I go and enable the interpolated frames. Great! The game now runs at 90 fps according to my fps counter, but when I start to play it feels like it's running at 40 fps.
The slides show that DLSS 3 latency is at least on par with, or lower than, native, so regardless it seems like a win.
 
Native 4K is 3840x2160.
DLSS 3 runs in Performance mode, which for 4K renders at 1920x1080.

Lowering the resolution increases the frame rate, which decreases latency.

Compare DLSS 3 with DLSS 2 + Reflex and you see how much latency the interpolated frames add. Now add that on top of native rendering, or even DLSS Quality, and I think games will feel like they are running at lower than native fps.

Just imagine the situation: the game looks good and is smooth at 60 fps, and I go and enable the interpolated frames. Great! The game now runs at 90 fps according to my fps counter, but when I start to play it feels like it's running at 40 fps.
Is it safe to say that with the more capable Optical Flow Accelerator in the 50-series cards, the latency increase from frame generation will be negligible?
 

Diogodx

Neo Member
Motion interpolation isn't something new, but TV motion upscalers add around 100ms on average (totally unplayable results). Only recently have Samsung TVs started adding motion interpolation to game mode with somewhat acceptable results (20ms), but here DLSS 3 basically offers input lag similar to native resolution without motion interpolation, and that in itself is truly revolutionary. What's also interesting is that the DLSS 3 tech uses TAA motion vectors, so the picture should be less prone to motion artifacts than TV motion upscalers.
Did you watch the video?

Latency added by DLSS 3 (which is DLSS 2 + Reflex + interpolation) on top of DLSS 2 + Reflex:

Portal: +3ms
Spider-Man: +15ms
Cyberpunk: +23ms

If TVs can do 20ms interpolation, I don't see how this is revolutionary.
 
Did you watch the video?

Latency added by DLSS 3 (which is DLSS 2 + Reflex + interpolation) on top of DLSS 2 + Reflex:

Portal: +3ms
Spider-Man: +15ms
Cyberpunk: +23ms

If TVs can do 20ms interpolation, I don't see how this is revolutionary.

I doubt TV interpolation is even remotely comparable in quality, as the DF video included other highly regarded tools and they paled in comparison. But yeah, it would be funny if, say, LG OLED interpolation were tested as well, but then how would you capture it? Off-screen?
 
Last edited:

yamaci17

Member
Based on how DLSS 3 works, can I assume it will lower CPU requirements, or am I understanding the tech wrong?
It won't, but it will let you work around CPU bottlenecks.

As I said previously and suspected, NVIDIA is pushing a CPU-bound narrative. For example, the RTX 4090 at native 4K with ray tracing in Spider-Man gets around 100 FPS. With DLSS 2 Performance it gets between 102-130 FPS, and the times it sits at 102-105 FPS tell us the game is running into a huge CPU limitation. DLSS 3 frame generation practically interpolates those 100 frames into 200 without doing anything about the CPU. What NVIDIA is trying to tell us here is that CPU bottlenecks stop DLSS 2 from being performant, while frame generation takes the ball and runs with it. In other cases, like Portal Remix, DLSS 2 Performance still gives a huge boost, however. That is all down to the CPU, still: you've got to have the CPU for a high baseline framerate to begin with.

So... you won't need super beefy CPUs to drive 144/240 Hz screens, at least in situations like this.

But if you're CPU-bound at 45-60 FPS, it could still be problematic to some extent.

But yeah, overall, chasing super high framerates usually puts you in CPU-bound situations, and frame generation circumvents that issue. There is no CPU on the market that can drive 150+ FPS CPU-bound with ray tracing in Spider-Man. Most CPUs also get clogged around 60-80 FPS in Cyberpunk with all ray tracing turned up to max; even a 12900K will usually have CPU-bound frame drops to the mid 60s, and I'm not even kidding. So even if you had a GPU that could drive, say, 1440p/120 FPS in Cyberpunk with ray tracing enabled, you would run into CPU limitations. Then again, your native framerate without interpolation is still limited by the CPU.

If you're CPU-bound at 30-45 FPS, however, I don't think this tech would work great in those cases.
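A minimal sketch of that bottleneck logic as a toy model; the min() formulation, the flat 2x factor, and the example frame rates are simplifying assumptions, not how the driver actually schedules work:

```python
# Toy model: delivered fps is capped by whichever of CPU or GPU is slower,
# and frame generation roughly doubles the displayed rate without adding CPU
# work. Numbers below are invented examples.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

def with_frame_generation(base_fps: float) -> float:
    return base_fps * 2  # interpolated frames need no new CPU-side simulation

base = delivered_fps(cpu_fps=100, gpu_fps=170)   # CPU-limited at ~100 fps
print(f"DLSS 2, CPU-bound:       ~{base:.0f} fps")
print(f"With frame generation:   ~{with_frame_generation(base):.0f} fps")
# Input latency still tracks the ~100 fps baseline, not the doubled number.
```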
 
Last edited: