
AMD FidelityFX Super Resolution 2.0 - FSR 2.0 vs Native vs DLSS - The DF Tech Review

Killer8

Member
FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation movement, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video and less fanboyism. The only reason AMD did this is Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised the prices of every single model, and all but refused to allow the CPUs on older mobos. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, 'cause AMD isn't it.

I don't think many people will seriously argue that FSR 2.0 is better than DLSS universally.

The win here is that it's a platform-agnostic technique with good results and a big performance gain in some modes. We finally have something that's 'good enough', easy to implement, can run on all modern GPUs, and most importantly can also be used on the next-gen consoles (a market which only uses AMD hardware). I know DLSS is fun to talk about on PC in terms of the high-end AMD vs Nvidia dick-measuring, but that whole argument is just noise for hundreds of millions of console users, as well as people with AMD or older GPUs wanting something for their hardware. Bear in mind that if you couldn't use DLSS and FSR wasn't a thing, a lot of games would be stuck using outdated techniques like checkerboard rendering, ugly sharpening algorithms and dynamic resolution scaling. I pointed out in another thread that when Resident Evil Village implemented FSR on PC, it got an instant image quality boost over Capcom's own checkerboard solution - and that was just the shitty 1.0 version of FSR.

FSR 2.0 is just much more exciting from the point of view of practicality, not so much in terms of quality. It's an elegant solution to many problems.
 

//DEVIL//

Member
If the last 2 years are anything to go by, Nvidia will be releasing DLSS 3.0 this year.

FSR 2 still can't beat DLSS 2.3 (not saying it's horrible, but I don't see the point of using it if you have any Nvidia card from the past 5 years, RTX 20 series and up).
 

RoboFu

One of the green rats
Maybe one day I will be able to buy a new GPU and try out FSR, but DLSS is still hit or miss for me, as I am very prone to seeing artifacts and weirdness in images.

DLSS really still has trouble with some repeating patterns, especially if there are repeating lines in the texture.
 

FireFly

Member
Some of you really seem ill-equipped to follow as simple a thing as a forum conversation. Me pointing out the results from Alex's video is me being hurt? Who exactly didn't believe these results were possible? It's exactly how solutions like checkerboarding, Insomniac's solution on console, or Epic's TSR in Unreal 5 work. It does a good job, but it also comes with a number of flaws, neither small nor few in number, but it's up to each individual which compromises they want to live with to get extra performance.

Let's just hope there won't be any blocking of DLSS like AMD liked to do in its sponsored games, because DLSS is still better on every point if you have a GPU for it.
Checkerboarding in itself doesn't use temporal accumulation, and Alex explicitly calls it a first-generation technique in his video because it is designed to achieve acceptable results at 1/2 resolution scale, whereas FSR is designed to do the same at 1/4 scale. So if you think it works the same, you are disagreeing with Alex.

In addition, Alex seems to have a more positive appraisal than you do. He describes the technique as "extremely" competent in quality mode at 4K, and better than the native presentation. He calls it a "resounding success" for AMD. The main benefit is the extremely good reconstruction of static detail and the lack of ghosting in movement, whereas from what I have seen, TSR can cause substantial ghosting. (Indeed, Alex highlighted the poor resolution in motion as the biggest downside of the Ghostwire implementation; by comparison, FSR 2.0 seems to do much better in the motion tests.) The main downsides of FSR 2.0 are artifacts in animations and poor treatment of particle effects. But having best-in-class algorithmic upscaling for the other elements of the image, coming close to DLSS in quality mode, is highly worthwhile and not 'just another' TAAU solution.

They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised the prices of every single model, and all but refused to allow the CPUs on older mobos. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, 'cause AMD isn't it.
It wasn't in their interest to release new budget CPUs due to the chip shortage. At launch prices they weren't amazing value compared to the 10th-generation Intel processors, but not terrible value either. So no, AMD isn't some saint; like any company, it primarily cares about profits. But you have to weigh that against years of technology stagnation under Intel and their anti-competitive business practices. We should cheer for AMD, not because they are the "good guys", but because they bring much-needed competition. Just like we should cheer for Intel entering the GPU space. Or for a technology that lets users of old AMD and Nvidia GPUs extend their lifespan free of charge.
 

01011001

Banned
Checkerboarding in itself doesn't use temporal accumulation, and Alex explicitly calls it a first-generation technique in his video because it is designed to achieve acceptable results at 1/2 resolution scale, whereas FSR is designed to do the same at 1/4 scale. So if you think it works the same, you are disagreeing with Alex.

FSR Quality mode is not 1/4 scale. It's 67% on each axis (4K Quality mode = ~1440p; if it were 1/4, it would be 1080p),
and once you go down to 1/4 you usually get worse quality than checkerboarding.
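For reference, here's a quick Python sketch of the mode arithmetic at a 4K output. The per-axis ratios follow AMD's published FSR 2.0 presets (Quality 1.5x, Balanced 1.7x, Performance 2.0x); treat the exact mapping as an assumption of this sketch.

# Render resolutions implied by FSR 2.0's per-axis scale factors at 4K output.
MODES = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}
OUT_W, OUT_H = 3840, 2160

for mode, ratio in MODES.items():
    w, h = round(OUT_W / ratio), round(OUT_H / ratio)
    share = (w * h) / (OUT_W * OUT_H)
    print(f"{mode}: {w}x{h} ({share:.0%} of output pixels)")

# Quality: 2560x1440 (44%); Performance: 1920x1080 (25%, i.e. the 1/4 scale above).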
 

FireFly

Member
FSR Quality mode is not 1/4 scale. It's 67% on each axis (4K Quality mode = ~1440p; if it were 1/4, it would be 1080p),
and once you go down to 1/4 you usually get worse quality than checkerboarding.
I was talking about performance mode, as Alex was in his comparison with checkerboarding. Again, the comparison is Alex's, not mine.
 

YCoCg

Member
How do I install DLSS 2.3? I'm up to date on GeForce drivers. I thought I already had the latest version of DLSS.
You don't really install it via the driver. A game ships with a given version of DLSS in a DLL file (nvngx_dlss.dll); you can update the version by dropping a newer one into the game directory. Deathloop ships with 2.3.0; the latest version is 2.4.0.
The latest version as of May 13th is 2.4.3; this website keeps an archive of all the versions, so you can go back and test older ones, or indeed update other games to the current version.
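If you want to check what you have installed, here's a minimal Python sketch that scans a Steam library for DLSS DLLs and prints each one's file version. The library path is an assumption (adjust to yours), and it needs pywin32 to read the DLL's version resource.

# Scan a Steam library for games shipping a DLSS DLL and report their versions.
from pathlib import Path
import win32api  # pip install pywin32

STEAM_COMMON = Path(r"C:\Program Files (x86)\Steam\steamapps\common")  # adjust

def dll_version(path: Path) -> str:
    info = win32api.GetFileVersionInfo(str(path), "\\")
    ms, ls = info["FileVersionMS"], info["FileVersionLS"]
    return f"{ms >> 16}.{ms & 0xFFFF}.{ls >> 16}.{ls & 0xFFFF}"

for dll in STEAM_COMMON.rglob("nvngx_dlss.dll"):
    print(f"{dll.parent.name}: DLSS {dll_version(dll)}")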
 

YCoCg

Member
DLSS 2.3
[sexy black and white GIF]

VS

FSR 2.0
[sad fat woman GIF]



FSR 2 has a looooong way to go still. Too much flickering and too much artifacting.
Also, comparisons like this are hyperbole af. The tests show that FSR 2.0 Quality mode (like DLSS 2.3) produces a better image than native + TAA; that's already above and beyond most implementations out there and provides better image quality for the end user. Granted, when dropping to Balanced or Performance, FSR 2.0 doesn't hold up as well compared to DLSS 2.3, but for most people Quality is what's chosen.
 
They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised the prices of every single model, and all but refused to allow the CPUs on older mobos. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, 'cause AMD isn't it.

It wasn't in their interest to release new budget CPUs due to the chip shortage. At launch prices they weren't amazing value compared to the 10th-generation Intel processors, but not terrible value either. So no, AMD isn't some saint; like any company, it primarily cares about profits. But you have to weigh that against years of technology stagnation under Intel and their anti-competitive business practices. We should cheer for AMD, not because they are the "good guys", but because they bring much-needed competition. Just like we should cheer for Intel entering the GPU space. Or for a technology that lets users of old AMD and Nvidia GPUs extend their lifespan free of charge.

This is so annoying. People keep demanding that AMD (because "it's the good guy") make better CPUs and then sell them for as cheap as they want (same thing for the GPUs), as if AMD had to work for free for them.
I'm glad that AMD increased prices; not only was it deserved, it was needed for them to survive and be able to continue developing good products. People should be glad that AMD could increase their prices, because otherwise it wouldn't have a future.
 

bbeach123

Member
DLSS 2.3 was better, but FSR looks good enough.

But how easy is it to implement, though? Hope it's easier than DLSS and more games get it.
 

SlimySnake

Flashless at the Golden Globes
This is so annoying. People keep demanding that AMD (because "it's the good guy") make better CPUs and then sell them for as cheap as they want (same thing for the GPUs), as if AMD had to work for free for them.
I'm glad that AMD increased prices; not only was it deserved, it was needed for them to survive and be able to continue developing good products. People should be glad that AMD could increase their prices, because otherwise it wouldn't have a future.
TBH, I am not a fan of how AMD increased their CPU prices ABOVE the Intel CPUs as soon as they started beating them in thermals. The benchmarks were actually roughly on par, but because of the thermals and power consumption they had a clear lead, so they decided to sell at a higher price.

It is what it is. I bought the power-hungry, hot-as-fuck i7-11700K. Took me weeks to cool it down, but now I love it, because the higher 5.0 GHz clocks really help in Cyberpunk's ray tracing modes and especially in the CPU-bound UE5 Matrix demo. The fact that I saved $50-75 on the cheaper Intel because AMD got greedy is my gain and their loss.
 

SlimySnake

Flashless at the Golden Globes
The latest version as of May 13th is 2.4.3; this website keeps an archive of all the versions, so you can go back and test older ones, or indeed update other games to the current version.
Thanks. So since this is game by game, I have to find the DLSS DLL in my game folder and replace it? I will try this in Cyberpunk.
 

YCoCg

Member
Thanks. So since this is game by game, I have to find the DLSS DLL in my game folder and replace it? I will try this in Cyberpunk.
Yep. The easiest way is to go to your steamapps/common folder and type "nvngx_dlss.dll" into the search box; that will show you all the installed games that have DLSS. You can then just drag the new DLL over the old one to replace it.
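And if you'd rather script that drag-and-drop for every game at once, here's a minimal Python sketch. Both paths are assumptions (your library location and wherever you saved the downloaded DLL); it backs up each original before overwriting.

# Replace every nvngx_dlss.dll under a Steam library with a newer copy.
import shutil
from pathlib import Path

STEAM_COMMON = Path(r"C:\Program Files (x86)\Steam\steamapps\common")  # adjust
NEW_DLL = Path(r"C:\Downloads\nvngx_dlss.dll")  # hypothetical download location

for dll in STEAM_COMMON.rglob("nvngx_dlss.dll"):
    shutil.copy2(dll, dll.with_name(dll.name + ".bak"))  # keep the original
    shutil.copy2(NEW_DLL, dll)
    print(f"Updated {dll}")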
 

Shmunter

Member
I'm not sure if I missed it, but did Battlestarglia evaluate the thin FSR 2 shimmer with sharpening off?
 

sachos

Member
I really like Alex's concluding statement: basically, FSR 2.0 is really good and way better than FSR 1, but it is still nowhere near a DLSS killer. His conclusion feels like a dig at that TechPowerUp article lol
 

Shmunter

Member
He only tested with sharpening on because it's set by default.
DLSS 2 has sharpening off by default. Trying to match settings, especially when they're under user control, should have taken place.

The obvious point being: sharpening in all its forms adds to this effect.
 

hlm666

Member
DLSS 2 has sharpening off by default. Trying to match settings, especially when they're under user control, should have taken place.

The obvious point being: sharpening in all its forms adds to this effect.
He had a side-by-side of DLSS and FSR 2 with default (10) sharpening and 0 at the end. He left it at default so it didn't lose detail in the textures; if he had dropped it to 0, people would have asked why he didn't leave it at AMD's and the developers' tuned settings. Apparently AMD worked very closely with the devs here, so the defaults are what AMD and the devs think are best.
 

I Master l

Banned
So the question is: how the fuck, with Nvidia dedicating so much silicon to tensor and RT cores, do they still hold or beat AMD even at pure rasterization? That's the question.

Nvidia GPU dies are so much bigger than AMD's for similar performance. In pure rasterization I would say AMD is better.
 

ZywyPL

Banned
Props to AMD for catching up with DLSS so fast.

While DLSS is still a bit superior tech, it's limited to Nvidia GPUs, so it's completely not applicable on consoles, which is why I believe FSR will become the most commonly used tech in the near future.

The performance gains speak for themselves; the only question is whether the devs will use that to run their games at 60 FPS, or to maintain 30 and crank up the visuals even further. That's really my only concern with the tech. As always, time will tell.
 

winjer

Gold Member
I think there is a bug in the FSR 2.0 implementation regarding those transparencies and alpha texture effects.
This is a limitation of these upscalers. For example, Nvidia admitted that one of the things they want to improve in future iterations of DLSS is the ability to upscale transparencies, alpha textures, volumetric fog and RT reflections.
Currently DLSS can't upscale these effects, so the devs have to offset the render scale of these effects to adjust their quality to the output resolution.
This is similar to the offsets devs have to apply to MipMaps and other LODs based on render resolution.
So my guess is that FSR 2.0 can't upscale transparencies, alpha textures and RT reflections, just like DLSS,
and someone forgot to set the offset for those effects in Deathloop for FSR, the same way as with DLSS and that never-ending ghosting bug.
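As a concrete illustration of that LOD offset, here's a minimal Python sketch of the usual texture mip-bias rule for temporal upscalers: bias by the log2 of the render-to-output ratio so textures are sampled as if the game were rendering at output resolution. Integration guides differ on the exact constant, so treat this as an assumption.

# Mip LOD bias derived from render:output ratio (negative = sharper mips).
import math

def mip_lod_bias(render_width: int, output_width: int) -> float:
    return math.log2(render_width / output_width)

print(mip_lod_bias(1920, 3840))  # -1.0   (Performance mode at 4K)
print(mip_lod_bias(2560, 3840))  # ~-0.58 (Quality mode at 4K)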
 

Kumomeme

Member
I see some people still holding on to 'this brand's solution still can't beat the other brand's solution' and vice versa, to the point that some of them completely ignore the results.

This is not all about who's winning or losing; that doesn't matter.

What's most important here is that we get an alternative solution that's good enough for the job, and a glimpse of what's possible in the future. That's all.
 

Rudius

Member
I really like Alex's concluding statement: basically, FSR 2.0 is really good and way better than FSR 1, but it is still nowhere near a DLSS killer. His conclusion feels like a dig at that TechPowerUp article lol
They didn't say it was a killer because of better quality, but because it is similar while working on basically everything.
 
It's pretty cool that AMD even got close to DLSS quality without needing specific hardware like tensor cores. Doubly cool that FSR 2.0 will be available for GTX 1060 or RX 580 level cards too. It may not reach the same quality or performance gains as DLSS, but I think it's overall a great development for games and gaming.
 

Elios83

Member
It's great that they're improving this considerably and quickly.
Getting close to the latest version of DLSS without dedicated cores is impressive, and it's going to benefit everyone.
 

Mister Wolf

Member
If you ever wanted to know what Nvidia's Tensor Cores are doing for DLSS, just take a look at FSR 2.0's terrible Performance mode upscaling from 1/4 the resolution.
 

Bojji

Member
Much better than FSR 1.0 but still behind DLSS in some ways; definitely not a "DLSS killer".

I hope it comes to console games soon; I don't like seeing native 1080p games on a 4K screen.
 

Buggy Loop

Member
Nvidia GPU dies are so much bigger than AMD's for similar performance. In pure rasterization I would say AMD is better.

3080 vs 6800 XT

680 mm^2 vs 520 mm^2, +21% difference

28.3B vs 26.8B transistors, +5.6% difference

Samsung's 8nm for Ampere was estimated at 61.2 MTr/mm^2.

TSMC's N7P can give a whopping 96.5 MTr/mm^2.

That's a 57.7% density advantage for TSMC.

Approximately 20~25% of the Ampere silicon is dedicated to RT and tensor cores.

And pretty much all the launch benchmarks aggregated across sites still had the 3080 beating the 6800 XT by 3% at 1440p and 4K.

So please, tell me how RDNA 2 is better at rasterization than Ampere?
 

winjer

Gold Member
3080 vs 6800 XT

680 mm^2 vs 520 mm^2, +21% difference

28.3B vs 26.8B transistors, +5.6% difference

Samsung's 8nm for Ampere was estimated at 61.2 MTr/mm^2.

TSMC's N7P can give a whopping 96.5 MTr/mm^2.

That's a 57.7% density advantage for TSMC.

Approximately 20~25% of the Ampere silicon is dedicated to RT and tensor cores.

And pretty much all the launch benchmarks aggregated across sites still had the 3080 beating the 6800 XT by 3% at 1440p and 4K.

So please, tell me how RDNA 2 is better at rasterization than Ampere?

These numbers are not correct.
The 3080 uses the GA102, which is 628 mm², but the 3080 does not have the full die enabled. The card with the full GA102 enabled is the 3090 Ti, and it has 28,300 million transistors.
The 6800 XT uses the Navi 21 die, which is 520 mm², but it also doesn't use the full die; that would be the 6900 XT, and it has 26,800 million transistors.

TSMC 7nm is somewhat denser than Samsung 8nm, but there is another reason: the L3 cache on Navi 21.
SRAM cells are significantly denser than logic cells, and even within logic cells density can vary a lot.
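For what it's worth, a quick Python sketch reproducing the percentage deltas quoted above from the corrected figures; the density numbers are the rough estimates they were presented as.

# Verify the quoted deltas: die area, transistor count, process density.
die_nv, die_amd = 628, 520        # mm^2, full GA102 vs full Navi 21
tr_nv, tr_amd = 28_300, 26_800    # transistors, millions
dens_ss, dens_tsmc = 61.2, 96.5   # estimated MTr/mm^2, Samsung 8nm vs TSMC N7P

print(f"die area: +{die_nv / die_amd - 1:.0%}")                   # +21%
print(f"transistors: +{tr_nv / tr_amd - 1:.1%}")                  # +5.6%
print(f"TSMC density advantage: +{dens_tsmc / dens_ss - 1:.1%}")  # +57.7%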
 
You must think that going off the rails instead of staying on topic makes you sound smart. It doesn't; it just emphasises the clownery. Like I said, with the aspects I presented: instead of watching more amateurish and surface-level articles, especially the PR piece from TechPowerUp, watch Alex's video with visual proof. As in, not someone's opinion, but hard facts presented in front of your eyes.

You know what would be unhealthy? To buy an AMD card that runs FSR 2 at half the speed of Ampere. Look at the video. Alex was so baffled by this that he used two different testing methods to make sure.
There were areas in all the reviews where FSR looked better than DLSS.
The only way you would be playing this down is if you are fanboying for Nvidia.
You could actually say that it's a kick in the nuts to Nvidia that AMD got this close to them without even needing ML.
The reality is that the FSR algorithms will be improved upon, and those slight areas of flicker and break-up will be fixed.
It's a really good solution, and one that really puts a question mark over the need to invest in dedicated ML cores and supercomputer training when the results are this close and it works on every GPU brand.
 

SlimySnake

Flashless at the Golden Globes
I don't think many people will seriously argue that FSR 2.0 is better than DLSS universally.

The win here is that it's a platform-agnostic technique with good results and a big performance gain in some modes. We finally have something that's 'good enough', easy to implement, can run on all modern GPUs, and most importantly can also be used on the next-gen consoles (a market which only uses AMD hardware). I know DLSS is fun to talk about on PC in terms of the high-end AMD vs Nvidia dick-measuring, but that whole argument is just noise for hundreds of millions of console users, as well as people with AMD or older GPUs wanting something for their hardware. Bear in mind that if you couldn't use DLSS and FSR wasn't a thing, a lot of games would be stuck using outdated techniques like checkerboard rendering, ugly sharpening algorithms and dynamic resolution scaling. I pointed out in another thread that when Resident Evil Village implemented FSR on PC, it got an instant image quality boost over Capcom's own checkerboard solution - and that was just the shitty 1.0 version of FSR.

FSR 2.0 is just much more exciting from the point of view of practicality, not so much in terms of quality. It's an elegant solution to many problems.
I love what this means for consoles. There are too many native 4K 30 fps games on the PS5 that struggle to run at 60 fps without massive downgrades. This should fix that.
 

Naru

Member
The ~25% FPS increase in Quality mode for the Vega 64 is less than I was hoping for, but it's still an improvement. I'll take it. Hope they can reduce the flickering a bit and that it gets wide support.
 

adamsapple

Or is it just one of Phil's balls in my throat?
I love what this means for consoles. There are too many native 4K 30 fps games on the PS5 that struggle to run at 60 fps without massive downgrades. This should fix that.

Oh, I don't think FSR is going to literally double the performance on consoles, especially as many console developers already use their own versions of upscaling (TAA injection etc.).
 

VFXVeteran

Banned
That's all AMD needed to do: have something close enough that it doesn't really matter. Since their approach works on everything, it will become the standard.
Completely disagree. The differences in movement and transparency are so apparent; I'm not sure people are seeing the same video I'm seeing. It's not "close enough". It's better than native in some ways, but it is indeed SLOWER than DLSS, and its final image is visibly inferior to DLSS's.

I can already see the hyperbole arguments that FSR 2.0 is equal to DLSS and so now "we have a stalemate with image reconstruction". The frame-time and motion differences are significant, yet they will be considered "equal".
 

winjer

Gold Member
Curious that PCGH shows very different performance results.
The 6800 XT gains a bigger percentage with FSR 2.0 than the 3080 does.
They used the latest AMD WHQL drivers, Adrenalin 22.5.1.
I have no idea what drivers DF used for their test.

[PCGH benchmark charts]
 

yamaci17

Member
Curious that PCGH shows very different performance results.
The 6800 XT gains a bigger percentage with FSR 2.0 than the 3080 does.
They used the latest AMD WHQL drivers, Adrenalin 22.5.1.
I have no idea what drivers DF used for their test.

[PCGH benchmark charts]
Interesting results.

If true, a huge chunk of his video is invalidated, lmao. Sad, because it seems he poured tons of effort into it.

I also think Nvidia should've asked the Deathloop devs to enable DLSS sharpening by default; one having no sharpening and the other being sharpened to max muddies the entire discussion.

I don't like sharpening at all, so I tend to turn it off. Wish someone did some no-sharpening vs no-sharpening comparisons.
 

DukeNukem00

Banned
Interesting results.

If true, a huge chunk of his video is invalidated, lmao. Sad, because it seems he poured tons of effort into it.

I also think Nvidia should've asked the Deathloop devs to enable DLSS sharpening by default; one having no sharpening and the other being sharpened to max muddies the entire discussion.

I don't like sharpening at all, so I tend to turn it off. Wish someone did some no-sharpening vs no-sharpening comparisons.

In what way is his video, which is running right in front of your eyes, invalidated, lol? Look at Computerbase's various results in games at random; they never match anyone else's.
 

winjer

Gold Member
Also consider that AMD measured the FSR 2.0 cost on RDNA 2 GPUs at around 0.5 to 1.5 ms,
but Alex was measuring much higher values, sometimes around 4 ms.

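To put those millisecond figures in context, here's a small Python sketch of how a fixed per-frame upscaling cost eats into the framerate, before counting any savings from the lower render resolution. The 0.5/1.5/4 ms values are the ones from the posts above.

# FPS after adding a fixed per-frame upscaling cost (cost side only).
def fps_after_overhead(base_fps: float, overhead_ms: float) -> float:
    frame_ms = 1000.0 / base_fps
    return 1000.0 / (frame_ms + overhead_ms)

for cost_ms in (0.5, 1.5, 4.0):
    print(f"{cost_ms} ms cost: 60 fps -> {fps_after_overhead(60, cost_ms):.1f} fps, "
          f"120 fps -> {fps_after_overhead(120, cost_ms):.1f} fps")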
 

winjer

Gold Member
Alex pulling an NXG. Maybe something wrong with his PC setup?

I have no idea what is going on.
They could be using different drivers, different settings, different places in the game to bench, different Windows versions, background software, etc.
It could even be a bug in the game that only triggers on some setups.
 

elliot5

Member
Also consider that AMD measured the FSR 2.0 cost on RDNA 2 GPUs at around 0.5 to 1.5 ms,
but Alex was measuring much higher values, sometimes around 4 ms.

AMD's measurement methodology could be different from Alex's: different scene, different game, etc. As Alex mentioned, this is just one data point (Deathloop), and implementations in other games are needed to come to a better understanding.
 

yamaci17

Member
In what way is his video, which is running right in front of your eyes, invalidated, lol? Look at Computerbase's various results in games at random; they never match anyone else's.
Half of the video is him talking about the millisecond costs of FSR and DLSS and how they fare across different brands.

If FSR actually costs less on AMD hardware with a newer driver, then those parts of the video are invalidated, no? He made tons of comparisons and calculations to draw the conclusion that FSR is faster on NV hardware.
 

LordOfChaos

Member
Is this technology available on PS5/XBSX?


It doesn't need special hardware and doesn't even rely on machine learning, so yes; it even runs on Nvidia hardware. XSX support is announced, and while I don't see it explicitly for PS5, there were already FSR 1.0 games on it, and again, 2.0 doesn't require any extra hardware, so game developers can implement it.


Edit: Actually, FSR 2.0 on PS5 is already happening


Excited about the second wind this can give the consoles: performance gains, and GPU power freed up for other stuff.
 

DukeNukem00

Banned
Half of the video is him talking about the millisecond costs of FSR and DLSS and how they fare across different brands.

If FSR actually costs less on AMD hardware with a newer driver, then those parts of the video are invalidated, no? He made tons of comparisons and calculations to draw the conclusion that FSR is faster on NV hardware.

They received the same code that Alex did at the same time, supposedly, so the drivers must be the same. But even discounting that, Computerbase's numbers are ALWAYS different from everyone else's. Look at that guy's conclusion:

"As far as image reconstruction is concerned, there is a tie in Ultra HD, while WQHD and Full HD show slight advantages for DLSS for small objects, but these are only noticeable in a direct comparison. In terms of image stability, there is also a stalemate for the most part: depending on the object, FSR 2.0 and DLSS 2.0 sometimes produce a calmer image."

I mean, we have visual proof that this is not true, yet this is what the German dude saw. These outlets are not doing as in-depth an analysis as Alex did, because you can't look at all the issues Alex showed FSR has and then conclude that DLSS, which has none of them, is about the same thing at 4K.
 

SlimySnake

Flashless at the Golden Globes
So is it official that it will be supported on consoles, especially PS5?
Yes, it is supported on all GPUs: RDNA 1, RDNA 2, Vega and even Polaris.

Devs just have to add support for it; it's not at the OS level.

From what I understand, it's basically an algorithmic API that you can plug into your code.
Oh, I don't think FSR is going to literally double the performance on consoles, especially as many console developers already use their own versions of upscaling (TAA injection etc.).
FSR Performance almost doubles the framerate, while Quality offers up to a 35% boost. Something like Horizon, which runs at native 4K 30 fps, could now run at 4K 40 fps at the same resolution. With FSR Performance they should easily be able to hit 60 fps at 1440p Quality FSR.
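That back-of-the-envelope math, as a tiny Python sketch using the thread's illustrative percentages:

# Illustrative console gains: Quality ~+35%, Performance ~2x (figures from this thread).
base_fps = 30
print(f"Quality (+35%): {base_fps * 1.35:.0f} fps")    # ~40 fps
print(f"Performance (~2x): {base_fps * 1.9:.0f} fps")  # ~57 fps, 'almost doubles'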
 