[TechPowerUp] AMD FSR 2.0 Quality & Performance Review - The DLSS Killer

Will you use ray tracing? If the answer is yes, then Nvidia is basically one generation ahead in terms of performance.
Whenever I ask, the only answer I get for productivity applications is Nvidia, even beyond ray tracing. The vast majority of YouTubers frame Nvidia vs. AMD purely around gaming.
 

elliot5

Member
I'm not an expert on motion vectors, so I'll remain hopeful they can implement it. FSR also requires devs to implement it, but Valve gets around that because they're using a compatibility layer.

Side note: VRS on Deck really isn't VRS, it's just half-rate shading. Maybe it's lacking the pertinent info to do actual VRS and that's what will prohibit proper FSR 2 as well? I'm not even sure Deck can do VRS *edit* without using Windows. I tried it in Riftbreaker and it didn't work.
I don't think motion vectors and depth info are exposed outside of the game engine, which is why it requires per-game integration. FSR 1.0 is injectable because it's basically just a spatial upscaler. Same with the VRS half-rate shading as Valve implemented it.
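The split described above can be sketched in a few lines of toy Python (numpy only, nothing like real FSR code): a spatial upscaler is a pure function of the finished color buffer, while a temporal upscaler also needs depth, motion vectors, and the previous accumulated frame, none of which are visible outside the engine.

```python
import numpy as np

# Toy 2x2 low-res grayscale "frame"
lo = np.array([[0.0, 1.0],
               [1.0, 0.0]])

def spatial_upscale(frame, factor=2):
    # FSR 1.0-style: a pure function of the finished color buffer,
    # which is why it can be injected without engine support.
    # (Nearest-neighbour stand-in for the real Lanczos-based filter.)
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def temporal_upscale(frame, depth, motion, history, alpha=0.5):
    # FSR 2.0-style: also needs depth, motion vectors, and the previous
    # accumulated output -- buffers only the engine can provide.
    # (depth/motion are unused in this sketch; in the real thing they
    # drive reprojection and history-sample rejection.)
    up = spatial_upscale(frame, 2)
    return alpha * up + (1 - alpha) * history

hi = spatial_upscale(lo)                                 # shape (4, 4)
out = temporal_upscale(lo, None, None, np.ones((4, 4)))  # blended with history
```

The point of the contrast is the signatures, not the math: the second function simply cannot be called from outside the engine because nothing outside it holds `depth`, `motion`, or `history`.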
 

OmegaSupreme

advanced basic bitch
Again, you just picked the technologies Nvidia uses in its marketing. AMD has a lot less money for this stuff, so they can't afford to jump on new tech too early and sacrifice their products' performance for it.
I don't understand the point you're trying to make. Nvidia pioneered those features, features that are used a great deal today. AMD followed with cheaper, inferior versions. What has AMD done that Nvidia has followed?
 
Yes, it's coming to Xbox as well (and likely PS5 too), so consoles can save on performance using this. I would like to see a 60fps and even 120fps mode in more games when using this, since the performance mode can almost double the framerate in some cases. Ultra performance mode has even bigger gains on top of that.
This. I hope this means that games don't have to compromise so heavily with ray tracing. It's almost a wasted feature, because who wants to play at 30 fps with input lag versus 60 fps without RT but with much tighter and smoother gameplay?

I'm playing Ghostwire and Cyberpunk now, and it's not just the 30 fps; RT in a lot of games comes with input lag. Control is another one.
 

01011001

Gold Member
when will this patch come out for everyone? I'm guessing midnight US Central... or East... or sometime close to that...

I am really looking forward to seeing how it compares in person.
 
I don't understand the point you're trying to make. Nvidia pioneered those features, features that are used a great deal today. AMD followed with cheaper, inferior versions. What has AMD done that Nvidia has followed?
Those are only some features, specifically the ones that were part of Nvidia's marketing plan, not all the features introduced over the years; that's my point. How would I know what AMD has done? All that matters is what Nvidia has done that AMD hasn't.

FSR itself is an example of something AMD did first: they introduced something that does the job without requiring dedicated hardware only available on specific Nvidia GPUs.
 
Last edited:

Tripolygon

Member
In stills it is really impressive, but motion is where it needs to really prove itself against DLSS. Still, job well done by AMD if it's even as good as DLSS 2.0.
 
HFW uses a TAAU implementation... which is what FSR 2.0 is...

Maybe... you... strangely missed it.

I'm gonna have to do my broken record routine about FSR 2.0 but I have a feeling a year from now we'll still see endless posts of people saying "add it to this game that already has TAAU" even if FSR 2.0 isn't outdoing other TAAU implementations.
If it's competing with DLSS, then I'm pretty sure it's doing more than current TAAU...
 

OmegaSupreme

advanced basic bitch
Those are only some features, specifically the ones that were part of Nvidia's marketing plan, not all the features introduced over the years; that's my point. How would I know what AMD has done? All that matters is what Nvidia has done that AMD hasn't.
You seem to be stanning for them, so I thought you might actually have some knowledge. Nvidia is superior (and premium); that's the point.
 
Last edited:

OmegaSupreme

advanced basic bitch
Yea, I guess I'm the one stanning for AMD... dude, buy whatever you want.
? I didn't say anything controversial. Nvidia is objectively better at the top end, with new features frequently coming to Nvidia first. AMD has the value market; that's why console makers use them.
 
Last edited:


This will be a major game changer on consoles. We could be seeing 30fps games become 60fps thanks to one software tool - no new PS5 Pro needed.

I don't expect it to be flawless, but when you also factor in VRR on both consoles, the experience should be mostly indistinguishable from native 4K 60fps.
No.

Developers already make use of reconstruction techniques for optimal performance on consoles. FSR isn't going to change that. What you may see is a bit better image quality.
 

SenjutsuSage

Halo TV Series Promoter - Live from: Reach
What will change? This frees devs to do even more on the Series X/PS5 as well; if anything it will have a bigger impact on those consoles, since they already work with much higher resolutions.

Series S has never been in competition with Series X or PS5. The moment people realize this is the moment it becomes obvious what a fantastic decision it was for Microsoft to offer it as an entry-level option into next-gen console gaming.
 

Killer8

Member

Yes.



The article shows that AMD weren't talking out their ass when they promised these big gains.

The key to 60fps is the performance modes. 'Performance mode' alone could be enough for most circumstances. The gains make sense, since you are basically feeding a 1080p image into FSR for it to work its magic.

There are many games on console which offer a native 4K30fps mode while also offering a 60fps mode at lower resolution. Games like GTAV, FF7 Remake, Star Wars Jedi, Uncharted Collection or Lost Judgement spring to mind. Essentially, these games could offer an additional 4K FSR performance mode on top of the existing modes, which would shoot for the reconstructed 4K with an unlocked framerate. As I said, even if the gain does not quite take you to a locked 60fps (maybe it fluctuates between 55-75fps, for example), VRR would clean that experience up considerably. It already has in recently VRR-patched PS5 games like Ratchet and Spider-Man. It's all about getting a 'close enough' experience - both to native 4K and to a very smooth framerate that feels perceptually in the 60fps ballpark. You would need to be some sort of human FRAPS, sat inches from your TV, to really tell the difference.
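For reference, AMD's published per-axis scale factors for FSR 2.0's quality modes make it easy to check the internal render resolutions a mode like this would use (quick sketch; the mode names and factors come from AMD's public material, the helper function is mine):

```python
# AMD's published per-axis scale factors for FSR 2.0 quality modes
FSR2_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def internal_resolution(out_w, out_h, mode):
    # Each axis of the output resolution is divided by the scale
    # factor to get the internal render resolution.
    s = FSR2_MODES[mode]
    return round(out_w / s), round(out_h / s)

# A 4K output in Performance mode renders internally at 1080p, which
# is why the "feeding a 1080p image into FSR" framing holds:
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
print(internal_resolution(3840, 2160, "Quality"))      # (2560, 1440)
```

Performance mode shading only a quarter of the output pixels is where the near-doubled framerates in GPU-bound scenes come from.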

I can also see developers potentially swapping out their often ugly dynamic res setups if FSR works well enough (looking at Metro Exodus). Of course developers could run into bottlenecks like being CPU limited. The article shows Deathloop peaks at 102fps no matter what res you run it in. But unless they are doing a lot of complex simulations, most games right now are going to be GPU limited.
 

Fafalada

Fafracer forever
And you just know Sony will let everyone get themselves banned for the next however long it takes for them to add FSR 2.0 support to their console.
This one isn't 'added to a console'; it'll be added to games. And it's open source, so there's fuck-all the platform holders need to do for it to get used anyway.

It all comes down to how it looks in motion; that's where the issues will show, and that's where DLSS has so far been ahead of the pack.

I mean, don't get me wrong - I like DLSS output in some games that use it - but motion clarity is not an attribute I've come to associate with it. It's just that we've gotten so used to low temporal resolution after two decades of motion-blur abuse, LCD, OLED and temporal AA (ok, that last one is just one decade) that standards for motion clarity are really low in general, in everything but VR software (and even there some studios decided to shit the bed - I'm looking at you, id Software).
 

DaGwaphics

Member
Wish they had included some shots with more foliage and the like, and we need to see how well it does in motion, but these are amazing results so far.
 

ChiefDada

Member
HFW uses a TAAU implementation... which is what FSR 2.0 is...

Maybe... you... strangely missed it.

I'm gonna have to do my broken record routine about FSR 2.0 but I have a feeling a year from now we'll still see endless posts of people saying "add it to this game that already has TAAU" even if FSR 2.0 isn't outdoing other TAAU implementations.

My original comment has nothing to do with FSR 2.0 which makes your reply absolutely useless. Make sure to bring your reading comprehension skills along for the next thread you hop into.
 

01011001

Gold Member
I mean, don't get me wrong - I like DLSS output in some games that use it - but motion clarity is not an attribute I've come to associate with it. It's just that we've gotten so used to low temporal resolution after two decades of motion-blur abuse, LCD, OLED and temporal AA (ok, that last one is just one decade) that standards for motion clarity are really low in general, in everything but VR software (and even there some studios decided to shit the bed - I'm looking at you, id Software).

It literally is better in motion than most modern AA methods, and all that while upsampling the image as well.

Other methods like TSR, which has come closest to DLSS 2.x so far, also look noticeably worse in motion than DLSS 2.x.
 
Last edited:

GreatnessRD

Member
I wonder what this will mean going forward in the DIY GPU space. If FSR 2.0 is the gospel like the article is saying, does this mean Nvidia may lose mindshare going forward, since it won't be just "DLSS DLSS *Orgasm*" anymore?

Fun times ahead in the GPU and console markets. Only better for us, the consumers.
 

Mister Wolf

Member
It literally is better in motion than most modern AA methods, and all that while upsampling the image as well.

Other methods like TSR, which has come closest to DLSS 2.x so far, also look noticeably worse in motion than DLSS 2.x.

It's crazy that some people in here are downplaying "better in motion" as if we stand still while playing these games. The games are always in motion.
 

elliot5

Member
I wonder what this will mean going forward in the DIY GPU space. If FSR 2.0 is the gospel like the article is saying, does this mean Nvidia may lose mindshare going forward, since it won't be just "DLSS DLSS *Orgasm*" anymore?

Fun times ahead in the GPU and console markets. Only better for us, the consumers.
DLSS is still a little better, plus there are other things like DLAA, and Nvidia's ray tracing is still leaps ahead of AMD's (even without DLSS taken into account), so no.
 

01011001

Gold Member
Can't they implement this at a driver level instead of on a per-game basis? Would do wonders for my RX 580.

Reconstruction methods like these need motion vectors and other data from the game in order to minimize artifacting. If you took away the data it needs in order to work, you would get a terrible image.

So no.

FSR 1.0 worked at the driver level because it was basically just a spatial upscale with a sharpening filter.
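A minimal reprojection sketch (toy Python, not the actual FSR 2.0 algorithm) shows why: every output pixel is fetched from where it was in the previous frame, so without per-pixel motion vectors from the engine, anything that moves smears across frames.

```python
import numpy as np

def reproject(history, motion):
    # Fetch each pixel from where it was last frame.
    # motion[y, x] = (dy, dx) in pixels; only the engine knows this.
    h, w = history.shape
    out = np.empty_like(history)
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y, x]
            sy = min(max(int(round(y - dy)), 0), h - 1)  # clamp to frame
            sx = min(max(int(round(x - dx)), 0), w - 1)
            out[y, x] = history[sy, sx]
    return out

def accumulate(current, history, motion, alpha=0.1):
    # Blend the new sample into the reprojected history. Feed this
    # zeroed or garbage motion vectors (all a driver-level injector
    # could supply) and moving objects ghost badly.
    return alpha * current + (1 - alpha) * reproject(history, motion)
```

The real implementations add depth-based rejection of stale history samples on top, which is why depth buffers are required too.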
 

MHubert

Member
If I want to do 3D rendering/modeling, is there any benefit to going with AMD, or is Nvidia still the best option?
There are no benefits to going AMD if you are aiming at that kind of productivity. In fact, I would wager that the number of professional animators or 3D artists using an AMD GPU is practically zero. Nvidia is better supported software-wise, and on top of that they simply make the better chip. That also holds true for gaming in some cases, although as we can see here, AMD is not asleep at the wheel and that gap is closing, for now. I dream of making a complete AMD build, but as things stand, with Nvidia chips being so much better at productivity while also pulling ahead in gaming, it is not a reasonable investment for a high-end PC.
 

Fafalada

Fafracer forever
It literally is better in motion than most modern AA methods, and all that while upsampling the image as well.

Other methods like TSR, which has come closest to DLSS 2.x so far, also look noticeably worse in motion than DLSS 2.x.
The only contact I've had with TSR so far is the Matrix demo, and that has some of the worst motion I've seen in the last five years. Granted, I can't be sure TSR is the main or only culprit, but the temporal IQ in that demo is absolutely dreadful.
As for DLSS - it may generate temporally stable edges, but the temporal artifacting and ghosting in its hallmark games is still quite noticeable. Frankly, this is one area where I don't really trust tech channels anymore either, as their recent track record of observing temporal artifacts has been poor. Most have talked up the Matrix demo's seamless LOD; DF literally has voice-over praising it while the video shows multiple large pop-ups (and really, the demo is full of this). HFW was also described as having a short '1-frame history TAA', but the ghosting I see is easily 5+ frames long (and can be seen in video coverage too).
 

GreatnessRD

Member
DLSS is still a little better, plus there are other things like DLAA, and Nvidia's ray tracing is still leaps ahead of AMD's (even without DLSS taken into account), so no.
That is true on the ray tracing front. But we're starting to see AMD up the ante a little. Their next-gen MCM GPUs might reach ray tracing parity with Nvidia's 40 series, but only time will tell. We'll see in a few months.
 

metaverse

Member
Again, you just picked the technologies Nvidia uses in its marketing. AMD has a lot less money for this stuff, so they can't afford to jump on new tech too early and sacrifice their products' performance for it; by the time those features become relevant, AMD usually has its answer out already.

So they're first to market because they developed the tech and funded the research to do so. This isn't a marketing gimmick; AMD is still playing catch-up to Nvidia. Unlike Intel, Nvidia isn't sitting with its thumb up its ass letting AMD sneak in for a reach-around.

Additionally, FSR 2.0 doesn't hold a candle to DLSS; every comparison on the site looks worse with FSR.
 

01011001

Gold Member
The only contact I've had with TSR so far is the Matrix demo, and that has some of the worst motion I've seen in the last five years. Granted, I can't be sure TSR is the main or only culprit, but the temporal IQ in that demo is absolutely dreadful.
As for DLSS - it may generate temporally stable edges, but the temporal artifacting and ghosting in its hallmark games is still quite noticeable. Frankly, this is one area where I don't really trust tech channels anymore either, as their recent track record of observing temporal artifacts has been poor. Most have talked up the Matrix demo's seamless LOD; DF literally has voice-over praising it while the video shows multiple large pop-ups (and really, the demo is full of this). HFW was also described as having a short '1-frame history TAA', but the ghosting I see is easily 5+ frames long (and can be seen in video coverage too).

The thing is, of course DLSS has artifacting in motion. I think the misunderstanding comes from the fact that when people say it is great in motion, they are usually comparing it to the anti-aliasing the game offers at native res, and modern games almost always only give you the choice between TAA or FXAA (or some variation thereof).

you basically have the choice between flickering with FXAA or a softer image with artifacts using TAA.

DLSS usually looks as good in motion as the TAA of most games, or even better in some cases. It also sometimes has better edge treatment than some games' TAA settings.

For example, I played Death Stranding on my TV at 4K a few weeks ago, and with the game's TAA the power lines near the city flickered horribly, and in motion you could see ghosting on many objects.
With DLSS, not only did the image look sharper, it was also less aliased and had less ghosting.

Death Stranding in particular had issues with artifacts when birds or other small objects were moving, but that has since been fixed in the Director's Cut, and you can just drop the .dll of a newer DLSS version into the original release to fix it that way too.


edit: P.S.: the dogshit image quality in the Matrix demo is not only due to TSR; it's most likely a mix of Lumen, the denoising, and TSR that makes it look like ass.
There is a build someone made with DLSS, most likely crudely implemented, and that looked better (I made comparison shots in another thread), but still like shit.

edit2:

Here's my post. Look in the bottom images: behind the character's back, in motion, you can see super weird artifacting with TSR, almost like macroblocking. That is greatly reduced with DLSS, but in actual motion both still look like ass.

Wow, this new version really runs way better. BUT DLSS also still looks pretty bad; this has to be either an issue with Lumen or with motion vectors, I assume.
It looks better than TSR, but not by much.

here TSR:


and here with DLSS Quality:


notice the RIDICULOUS performance boost with DLSS quality over TSR while also looking better in motion than the TSR results!
literally a 35% increase in performance... like... damn...


my setup again:
Ryzen 5600X
RTX 3060Ti (TUF Gaming)
16GB DDR4 @ 3200MHz

Dell monitor: 1440p 144Hz


EDIT:

I tried getting some matched motion shots. Since this has unavoidable camera motion blur, I had to find a spot where I could exactly time a screenshot while walking sideways, and HOLY HELL, I didn't expect I could line it up so well! LOL


I'm not gonna tell you which is which, one is TSR the other DLSS Quality. like I said, I think neither are properly implemented here as the amount of artifacting on display is crazy.

If you want to find out which is which, you can look at the URL names: one is called "citysampledlssmotion2skbo.png" and the other "citysampletsrmotion2mkq5.png".
So if you wanna see whether you can tell which is which, and then check if you're right, just look at the URL :)



again, how crazy well did I line this up? FIRST TRY TOO! xD I mean I was pretty good at Guitar Hero back in the day, maybe my timing is still trained from that lol
in both shots I walked all the way right against the wall first, and then tried to time my screenshot exactly in the moment that rail in the back left overlapped with the left lamp post

Specular highlights look cleaner imo with DLSS, but both have bad artifacts. It seems like DLSS almost acts as a secondary denoiser on top of the normal RT denoising, resulting in less flicker and less specular shimmering.

also with DLSS your character has less of a trail behind her compared to TSR, where it looks like heat distortion or some shit behind her lol
 
Last edited:

YCoCg

Member
What has amd done that Nvidia has followed?
AMD became the "standard" in a sense, feature-wise, because of their open-source approach. What do you think VRR on TVs evolved from? It was built on top of AMD's FreeSync and was absorbed into the industry standard via HDMI, which Nvidia then had to support alongside G-Sync. I'm not denying that Nvidia runs in first with a good idea, but because they treat everything as proprietary, that leaves AMD time to do their own version, keep it open source, and make an actually bigger long-term impact.
 

adamsapple

Or is it just one of Phil's balls in my throat?
This is good news for PS5 and Xbox Series S|X, which is what I really care about.

I mean, both yes and no ...

consoles already have a lot of custom upscaling technologies depending on developer. This is probably not gonna replace many of those, but yes in general this is good because this is a (seemingly) very good version of upscaling from a lower resolution to a convincing higher resolution that can be used on consoles without developers needing to research their own methods.
 

Killer8

Member
No.

Developers already make use of reconstruction techniques for optimal performance on consoles.. FSR isn't going to change that. What you may see is a bit better image quality.

Not all of them; it depends on the developer and circumstances. Many games which I listed earlier offer native 4K30 modes with no reconstruction at all. What I'm saying is I'd like to see a 4K FSR performance mode on top of that, with an unlocked framerate. I'm under no illusion that this is a magic 60fps button ("I don't expect it to be flawless"), but a large performance gain from this software would be very welcome when combined with VRR.

I'd also like to see how this pans out for developers already hitting 60fps. As you say, you can end up with better image quality. If the performance is already good, then any image quality wins to get it closer to '4K looking' works for me too. A game I have in mind is Metro Exodus, which uses both dynamic res and 4A's own temporal reconstruction, and together can create some ugliness in heavier scenes. It would be very interesting and welcome if FSR 2.0 could provide an upgrade over their in-house temporal technique. An example of that happening is Resident Evil Village on PC when it got FSR 1.0 patched in. It offered a tangible visual upgrade over Capcom's own checkerboard technique (used on consoles), at a level of performance which I think the latest FSR 2.0 could meet (while looking even better):


This. I hope this means that games don't have to compromise so heavily with ray tracing. It's almost a wasted feature, because who wants to play at 30 fps with input lag versus 60 fps without RT but with much tighter and smoother gameplay?

I'm playing Ghostwire and Cyberpunk now, and it's not just the 30 fps; RT in a lot of games comes with input lag. Control is another one.

Like with DLSS, it would actually lower the resolution of the RT effects, since it's upscaling from a lower base resolution. But it would improve framerates for sure. Ray tracing at 60fps could definitely become more viable for developers. Control is one game I have in mind. DF found through a photo mode trick that it has a lot of headroom above 30fps, but Remedy chose to cap it at 30. The RT mode runs at native 1440p, which is then temporally upscaled to 4K:


I posit that if they ran this in 4K performance FSR instead of native 1440p, and with VRR enabled, you could get a very nice experience that is not far away from the feeling of native 4K60.
 
Last edited:
I mean, both yes and no ...

consoles already have a lot of custom upscaling technologies depending on developer. This is probably not gonna replace many of those, but yes in general this is good because this is a (seemingly) very good version of upscaling from a lower resolution to a convincing higher resolution that can be used on consoles without developers needing to research their own methods.
This is beneficial in the same way that Unreal Engine is beneficial, especially for smaller developers. Less time to make their own engine (in the case of UE) or upscaling algorithm (in the case of FSR 2.0) now and more time to focus on other aspects of product development.

Additionally, I'd be hard pressed to believe that MS (and Sony as well) didn't at some point take FSR 2.0 into consideration when choosing their hardware designs, considering they were quite adamant about being fully DX12U and RDNA 2 compliant. FSR 2.0 would be another example of the "the tools are getting better" trope I see here so much.
 
Last edited:

Otre

Gold Member
Wonder how the Yuzu Switch emulator and RPCS3, the PS3 one, will benefit from this, since they already use FSR 1.0. Lossless Scaling on Steam lets you apply FSR to almost any game too.

Im most happy for consoles, DLSS is great and this is finally a worthy competitor, at least on Deathloop.
 
Last edited: