
NVIDIA’s DLSS 2.0 Is Confirmed for Control and Mechwarrior 5. Over 2x Performance Jump At 4K Max Settings.

Leonidas

Member
The driver is available now. Control will be updated to DLSS 2.0 on March 26, the day the expansion launches.


DLSS 2.0 offers several key enhancements over the original version:
  • Superior Image Quality - DLSS 2.0 offers image quality comparable to native resolution while rendering only one quarter to one half of the pixels. It employs new temporal feedback techniques for sharper image details and improved stability from frame to frame.
  • Great Scaling Across All GeForce RTX GPUs and Resolutions - A new AI network more efficiently uses Tensor Cores to execute 2X faster than the original. This improves frame rates and eliminates previous limitations on which GPUs, settings, and resolutions could be enabled.
  • One Network For All Games - The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalized network that works across games. This means faster game integrations, and ultimately more DLSS games.
  • Customizable Options - DLSS 2.0 offers users 3 image quality modes - Quality, Balanced, Performance - that control the game’s internal rendering resolution, with Performance mode enabling up to 4X super resolution (i.e. 1080p → 4K). This means more user choice, and even bigger performance boosts.
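The mode descriptions above can be sketched as a quick calculation. This is illustrative only: the Performance-mode factor of 0.5 per axis matches the "one quarter of the pixels" / 1080p → 4K figure in the post, while the Quality and Balanced factors are assumptions, not confirmed values.

```python
# Illustrative sketch: internal render resolution per DLSS 2.0 mode.
# Per-axis scale factors: Performance's 0.5 matches the post's
# "quarter of the pixels" figure; Quality/Balanced are assumptions.
MODE_SCALE = {
    "Quality": 2 / 3,     # ~44% of output pixels
    "Balanced": 0.58,     # ~34% of output pixels
    "Performance": 0.5,   # 25% of output pixels (4x super resolution)
}

def internal_resolution(out_w, out_h, mode):
    """Resolution the game actually renders before DLSS upscales it."""
    s = MODE_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

So a 4K output in Performance mode is rendered internally at 1080p, which is where the big framerate gains come from.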

DLSS 2.0 is an absolute game changer. DLSS for UE4 is now possible.

[Benchmark chart: Control, 3840×2160, ray tracing, NVIDIA DLSS 2.0 Performance Mode]


[Benchmark chart: MechWarrior 5, 3840×2160, NVIDIA DLSS Performance Mode, DX11]


60 FPS is now possible in Control at 4K max settings, with all ray-traced effects enabled, using DLSS 2.0 Performance Mode :lollipop_smiling_face_eyes:

Without DLSS 2.0 it would have taken at least 3 more generations for this to happen.
 

ZywyPL

Banned
I think this is exactly how NV wants to introduce RT to the low- and mid-end GPUs in their upcoming Ampere lineup: by making DLSS pretty much "open", not tied to a specific game/engine implementation, so people will be able to render at just a quarter of the target resolution, which is a mere 540p for Full HD and 720p for QHD. I fully expect DLSS to sooner or later be built into the drivers so it can run 24/7, even on the Windows desktop, like FXAA for example.
 

Shin

Banned
Good news about the image quality; that was most likely DLSS 1.0's biggest problem.
It looked like a blurry mess. Curious to see an Off/1.0/2.0 comparison to get a better idea.
 

Goliathy

Banned
Shhhhh. There are console fanboys lurking on here who will get all in a tizzy if you say things like that!

hmmm;

Machine learning is a feature we've discussed in the past, most notably with Nvidia's Turing architecture and the firm's DLSS AI upscaling. The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."


It will be interesting to see how they compare in real-world scenarios later this year.
 

darkinstinct

...lacks reading comprehension.
I'm really interested in DirectML for Xbox. Just like with mesh shading I am certain that Microsoft is cooking up their own custom hardware to have machine learning based upscaling to 4K.
 

Dr.D00p

Member
I'd still rather they released GPUs that can do native 4K + RTX at decent framerates than rely on smoke-and-mirrors tricks like DLSS, TBH.
 

Shai-Tan

Banned
I'd still rather they released GPUs that can do native 4K + RTX at decent framerates than rely on smoke-and-mirrors tricks like DLSS, TBH.

Sure, but as an owner of one of the current cards I'm glad DLSS 2.0 looks better in Control, because I really don't want to turn down RTX effects. It's unplayable at 4K without DLSS enabled.
 
Fantastic stuff. I'm so happy I opted for a 2070 over a 1080.

However, the Tensor-less DLSS in Control is still a solid implementation, and hopefully something similar can be used on consoles and last-gen GPUs.
 

Leonidas

Member
Just tested it in Control with a 2080 Super - the performance gains are incredible. It dipped below 60 sometimes with all the RTX stuff enabled; now it holds it rock solid.

But the DLSS blurriness is still somewhat noticeable, especially in motion; it's like it can't process things fast enough. Nevertheless it works, and if they keep improving it, the sky is the limit.

Control DLSS 2.0 patch comes on the 26th.
 

Ivory Blood

Member
Control DLSS 2.0 patch comes on the 26th.
Huh? So it's just regular DLSS for now?

Edit: Never actually tried this stuff in Control, so if it isn't that bad with the old version of DLSS, I can only imagine how good the new one will look.
 

Leonidas

Member
Huh? So it's just regular DLSS for now?

Edit: Never actually tried this stuff in Control, so if it isn't that bad with the old version of DLSS, I can only imagine how good the new one will look.

Control DLSS was better than original DLSS, but it still had issues, which DLSS 2.0 has seemingly solved.

On the 26th, after the patch with DLSS 2.0, it will be a marked improvement.
 
This is great news, looking forward to trying out Control with the new DLSS on my 2080 Ti. And people clowned DLSS when the RTX cards first dropped
I always defended it... but let's not act like people didn't have good reason when the RTX cards first dropped. Many of the first implementations left a lot to be desired; we should all be able to admit that. Battlefield V was the worst offender, and Metro was terrible in the beginning, but was made slightly less terrible with an update.

Still... none of the previous implementations compare to DLSS 2.0. We're past that old stuff now and have an exciting future to look forward to. It's gonna keep getting better and better.
 

ZywyPL

Banned
And people clowned DLSS when the RTX cards first dropped

It's all NV's fault for initially marketing DLSS as bumping the resolution even higher than native, thus giving superb AA and IQ, when after its actual release it turned out it does indeed bump the resolution, but from a much lower resolution than the one you actually set, hence the questionable/worse results. Until version 2.0, introduced in W:YB, DLSS just didn't make much sense other than allowing RT effects at playable FPS. But now? Even a quarter of native resolution already gives slightly better IQ while offering close to double the framerate. Now there's no reason not to use DLSS whenever it's available.
 

Dontero

Banned
I also have a way to increase my FPS like DLSS 2.0, and we even do it the same way: lower the resolution from 4K to something like 2K, then apply 4x MSAA or TAA and call it "4K".
 

Dural

Member
hmmm;




Will be interesting how they compare in real world scenarios later this year.


If the next-gen consoles could do this, then maybe we could get a true generational leap in graphics by rendering at 1080p and using this to get to 4K.



XSX:
49 TOPS 8-bit
97 TOPS 4-bit

2080 Ti:
227 TOPS 8-bit
455 TOPS 4-bit


Ugh, that's a huge difference.
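The gap in those quoted figures is roughly the same at both precisions. A quick check of the ratios:

```python
# Rough ratio of the quoted ML throughput figures: 2080 Ti Tensor
# Cores vs. Series X shader-based integer ops (TOPS, as listed above).
xsx = {"int8": 49, "int4": 97}
ti_2080 = {"int8": 227, "int4": 455}

for precision in ("int8", "int4"):
    ratio = ti_2080[precision] / xsx[precision]
    print(precision, round(ratio, 1))  # ~4.6x at int8, ~4.7x at int4
```

So on paper the 2080 Ti's dedicated Tensor Cores offer roughly 4.6x the integer ML throughput, before accounting for the fact that the XSX figure also consumes shader resources.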
 
Does it work on yet-unreleased games?
What is the length of training for the NN?
Upping FPS in a 6-month-old game is nice, but what about a 1-day-old one?
What the hell are you talking about? lol..

It works in the games it's released for...
It's now a generic model, meaning training carries over from game to game.
Upping both is great... we're happy Control is getting the update, and any future games supporting DLSS will support this new DLSS 2.0.
 

ZywyPL

Banned
Ugh, that's a huge difference.

Even worse, the XSX figures are calculated as if all CUs were dedicated solely to AI calculations, which obviously will never be the case, not even close. So the question is how effective DirectML will be: how few CUs can be used while still giving worthwhile results?
 

psorcerer

Banned
Oh so you think the model changes for each game?

It depends how they train it.
But in the generic case, yes, it should.
You can teach a model about the real world using real-world data.
But games are an "art world", which may be close to real for "realistic" games, but generally is far from it.
How would a model suddenly know how to upscale, say, Spider-Man or Valkyria Chronicles after seeing Control and MechWarrior?
There is no way.
It will look subpar unless developers train the model while developing the game.
Then it will be fine indeed.
 
Fantastic. For that time in the future when I stumble upon a job that gives me about $2,000 extra a month, for enough months to save it without spending it on something else. Then it's Area-51m all the way~
 

sendit

Member
PR and marketing says a lot of things.
How exactly does it work?

Probably a general algorithm to identify good:bad. The outcome may not be optimal. Gamers aren't going to be implementing this in games. Developers will, and assuming Nvidia is flat-out lying while showing samples isn't good for customer relations.
 

psorcerer

Banned
Probably a general algorithm to identify good:bad.

But what counts as "good" is a pretty artistic judgment.
Artists will kill you. :messenger_tears_of_joy:
I know people who did some automatic image-space sharpening, gamma correction and tone-mapping.
And the artists were trying to lynch that person. :messenger_tears_of_joy:

P.S. If DLSS can generally improve games with subpar art, that would be nice!
 

Kenpachii

Member
I think this is exactly how NV wants to introduce RT to the low- and mid-end GPUs in their upcoming Ampere lineup: by making DLSS pretty much "open", not tied to a specific game/engine implementation, so people will be able to render at just a quarter of the target resolution, which is a mere 540p for Full HD and 720p for QHD. I fully expect DLSS to sooner or later be built into the drivers so it can run 24/7, even on the Windows desktop, like FXAA for example.

Watch these Nvidia fucks launch GPUs with it permanently enabled and sell them as higher-tier versions.

Want a non-DLSS card? Oh, you need to buy our Titan now, because we brute-force it on all our other cards since it saves us money.
 
It depends how they train it.
But in the generic case, yes, it should.
You can teach a model about the real world using real-world data.
But games are an "art world", which may be close to real for "realistic" games, but generally is far from it.
How would a model suddenly know how to upscale, say, Spider-Man or Valkyria Chronicles after seeing Control and MechWarrior?
There is no way.
It will look subpar unless developers train the model while developing the game.
Then it will be fine indeed.
How about YOU don't worry about it... and leave it to the professionals?

They don't train it anymore using images on a per-game basis. It uses motion-vector data from the game in conjunction with their own AI-research NN data. The ONLY data they need from the game is vector data...

It's a single model used across all games, and they are going to continue to improve it generation to generation. All games that support DLSS 2.0 should benefit from any updates to the NN model through driver updates.
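For a feel of why motion vectors are the key input, here is a toy sketch of motion-vector-based temporal accumulation, the general idea that DLSS 2.0's temporal feedback builds on. This is not NVIDIA's actual method: the real pipeline replaces the fixed blend below with a learned network, and every name and parameter here is illustrative.

```python
import numpy as np

# Toy temporal accumulation: reproject last frame's result along the
# game's motion vectors, then blend in this frame's low-res samples.
# Purely illustrative; DLSS 2.0 feeds these inputs to a neural network
# instead of using a fixed blend weight.
def accumulate(history, current, motion, alpha=0.1):
    """history: HxW accumulation buffer from the previous frame,
    current: this frame's samples upsampled to HxW,
    motion: HxWx2 per-pixel (x, y) offsets in pixels."""
    h, w = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the motion vectors back to where each pixel was last frame.
    py = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    px = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    reprojected = history[py, px]
    # Exponential blend: mostly history, a little new information.
    return (1 - alpha) * reprojected + alpha * current
```

Because the history buffer keeps accumulating samples from jittered low-res frames, the effective sample count per output pixel grows over time, which is why a generic model can work across games: the per-game part is just supplying correct motion vectors.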
 

psorcerer

Banned
They don't train it anymore using images on a per-game basis. It uses motion-vector data from the game in conjunction with their own AI-research NN data. The ONLY data they need from the game is vector data...

It works like temporal AA then.
It's OK, and it may actually work for all games.
Quality won't be good for all of them, but probably pretty OK.
 