
GDC 2022 - AMD FidelityFX Super Resolution 2.0

Amiga

Member
FidelityFX Super Resolution 2.0 is AMD's next-generation upscaling technology, delivering similar-to or better-than-native image quality using temporal data.
In this talk, Colin and Thomas will first discuss how this new technology is able to significantly increase image quality, diving into some of the algorithmic aspects that enable FSR 2.0 to reconstruct upscaled fine details from low-resolution input images. Then, multiple optimizations implemented during development will be presented, including how FSR 2.0 manages cross-platform performance. Lastly, game developers will learn valuable insights on how to integrate FSR 2.0 to allow for the highest visual quality upscaling at optimal performance.



kK06nW6.jpg






tRwe9lW.jpg


Can't wait to check the samples of 360p upscaling. Steam Deck just got better with a software upgrade. And other RDNA2 APUs can become full time PC "good enough" rigs that budget gamers can sit on for a longer time.
aKs4v1z.jpg
WaEqAfi.jpg
 
Last edited:

KungFucius

King Snowflake
If AMD can rival Nvidia's RT tech this coming gen, I might try to get their "7900XT" to replace my 3090. I prefer open-source tech to closed-source, even when the closed-source option is superior, because universal adoption is much more important.
 

ethomaz

Banned
Interesting… let’s see the results.

It just seems AMD is always late to the party… I mean, temporal reconstruction solutions were already being discussed in 2014.

And now, in 2022, AMD shows its temporal reconstruction solution.
 
Last edited:

01011001

Banned
I am really curious whether this can look as good as the usual TAA treatments games get... because I certainly don't trust AMD's PR screenshots one bit, knowing how shit FSR 1.0 is.
 

mrcroket

Member
I am really curious whether this can look as good as the usual TAA treatments games get... because I certainly don't trust AMD's PR screenshots one bit, knowing how shit FSR 1.0 is.
This is not the same as TAA. TAA uses samples from previous frames to multi-sample the image and fix aliasing on edges and aliasing in motion; FSR 2.0 uses these same samples to reconstruct the details of the image when going from a lower to a higher resolution.
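The core idea can be shown with a toy 1D sketch (illustrative only, not AMD's actual algorithm; all names and numbers here are made up): each frame renders at half resolution with an alternating sub-pixel jitter, and blending those jittered samples into a full-resolution history buffer gradually recovers the native-resolution signal.

```python
import numpy as np

# Toy sketch of temporal reconstruction (not FSR 2.0's real implementation).
# A static "scene" is rendered at half resolution each frame with an
# alternating sub-pixel jitter; accumulating the jittered samples into a
# full-resolution history buffer recovers the native-resolution signal.

def accumulate(history, positions, samples, alpha=0.2):
    """Blend new samples into the history buffer at their jittered positions."""
    history = history.copy()
    history[positions] = alpha * samples + (1 - alpha) * history[positions]
    return history

scene = np.sin(np.linspace(0, 8 * np.pi, 64))   # the full-res "ground truth"
history = np.zeros_like(scene)                   # reconstructed output

for frame in range(32):
    offset = frame % 2                           # per-frame jitter: 0, 1, 0, 1…
    positions = np.arange(offset, 64, 2)         # which full-res pixels we hit
    samples = scene[positions]                   # the half-res render
    history = accumulate(history, positions, samples)

# With a static scene, the history converges towards the native signal.
print(f"mean error vs native: {np.abs(history - scene).mean():.4f}")
```

With motion, real implementations additionally reproject the history using motion vectors and reject stale samples; that rejection logic is where most of the quality differences between temporal upscalers come from.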
 

01011001

Banned
This is not the same as TAA. TAA uses samples from previous frames to multi-sample the image and fix aliasing on edges and aliasing in motion; FSR 2.0 uses these same samples to reconstruct the details of the image when going from a lower to a higher resolution.

TAAU is a thing, and it's basically exactly what FSR 2.0 is doing, and TAAU also isn't that amazing. Like, the upsampling in the Matrix demo didn't fool anyone into thinking it's anything more than 1440p at best, and even then it often looked 1080p-ish at most out in the open world, with not-so-great artifacting as well.
So FSR 2.0 must have some really good secret sauce to impress, IMO...

Like I said, FSR 1.0 is absolute shit and people expected wonders from that as well. Let's see if they can turn the ship around like Nvidia did with the jump to DLSS 2.0, or if it will be just like any other temporal upsampling out there and look middling at best.

Because what FSR 2.0 needs to beat in order to be relevant is the already existing methods that most modern engines have implemented for years now in order to look good on last-gen systems.
Will it look as good as what Call of Duty has been doing for years? Because that series has had pretty damn good upsampling on console since WWII... Or will it look better than what Insomniac does? Or better than Unreal Engine's native TAAU? We'll see.
 

JeloSWE

Member
This is not the same as TAA. TAA uses samples from previous frames to multi-sample the image and fix aliasing on edges and aliasing in motion; FSR 2.0 uses these same samples to reconstruct the details of the image when going from a lower to a higher resolution.
It's literally stated at 2:22 in the GDC video that FSR 2.0 uses a temporal component to achieve an anti-aliased output.

TAAU is a thing, and it's basically exactly what FSR 2.0 is doing, and TAAU also isn't that amazing. Like, the upsampling in the Matrix demo didn't fool anyone into thinking it's anything more than 1440p at best, and even then it often looked 1080p-ish at most out in the open world, with not-so-great artifacting as well.
So FSR 2.0 must have some really good secret sauce to impress, IMO...

Like I said, FSR 1.0 is absolute shit and people expected wonders from that as well. Let's see if they can turn the ship around like Nvidia did with the jump to DLSS 2.0, or if it will be just like any other temporal upsampling out there and look middling at best.

Because what FSR 2.0 needs to beat in order to be relevant is the already existing methods that most modern engines have implemented for years now in order to look good on last-gen systems.
Will it look as good as what Call of Duty has been doing for years? Because that series has had pretty damn good upsampling on console since WWII... Or will it look better than what Insomniac does? Or better than Unreal Engine's native TAAU? We'll see.

What I don't get is how FSR 2.0 would differ from, or improve upon, a game using its own well-implemented TAA solution. Perhaps its aim is to make this more universally accessible for developers who aren't technologically savvy enough to roll their own.

FSR 1.0 is a complete joke, with hardly any real difference from regular spatial bilinear upscaling. AMD always seems so late to the party and rarely spearheads real innovation the way Nvidia does, but I like their non-proprietary, open-platform approach at least.
 

ethomaz

Banned
This is not the same as TAA. TAA uses samples from previous frames to multi-sample the image and fix aliasing on edges and aliasing in motion; FSR 2.0 uses these same samples to reconstruct the details of the image when going from a lower to a higher resolution.
FSR 2.0 uses temporal frames, aka previous frames.
It is the same kind of solution Unreal Engine, Sony, Siege, DICE, etc. have been using since 2014… we just need to see how AMD's solution compares to those others… arriving this late to the party, I expect it to be at least better than them.

Same but different gif.
 
Last edited:
I guess that's the only thing left for AMD: to include ML cores in RDNA 3 to have some sort of DLSS-like upscaling.

FSR is open source, RSR is driver-based. Perhaps Microsoft's DirectML will be utilized in RDNA 3?!
 

Irobot82

Member
I guess that's the only thing left for AMD: to include ML cores in RDNA 3 to have some sort of DLSS-like upscaling.

FSR is open source, RSR is driver-based. Perhaps Microsoft's DirectML will be utilized in RDNA 3?!
I believe AMD's cores are already capable of tensor math. They just don't have dedicated cores set aside only for that purpose.
 
It's highly likely that with an XSX Pro and PS5 Pro, they will achieve 8K 30fps, possibly 8K 60fps, through upscaling and other efficiencies with ML cores along with an FSR 3.0 release or higher, but that's just speculation on my end.
 

01011001

Banned
960p games incoming on PS5 and Series X, reconstructed to 1440p with bells and whistles and a solid 60fps, I hope… I'll take it and have no problem with this if it looks good.

it won't look good though when only using a standard temporal upsampler.

the best chance consoles have for a decent universal reconstruction method is Intel's XeSS, but that also has to show some actual real-world results first; given that it uses machine learning, though, it at least seems to do more than the usual temporal upsampling seen in almost any modern engine.

what makes DLSS 2.0 so good is the combination of temporal data and its machine-learning ability to judge how the final image should look according to how it was trained.

without that final step you will get artifacts and an overly soft image. you can of course try to sharpen the image, but that will result in what FSR 1.0 does and look like a really bad sharpening filter with flicker everywhere.

all the evidence points towards us having already pushed temporal AA upscaling to its limits, and FSR 2.0 will most likely not look even the slightest bit better than what we already have.
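For contrast, the "spatial upscale plus sharpening" recipe the post attributes to FSR 1.0 can be sketched like this (a toy illustration with made-up parameters, not AMD's code): sharpening only amplifies edges already present in the upscaled image, which is why it cannot substitute for real reconstruction.

```python
import numpy as np

# Toy sketch of spatial upscaling followed by an unsharp mask (the rough
# FSR 1.0-style recipe described above; parameters are made up). The
# sharpening step amplifies edges already present in the upscaled image;
# unlike temporal reconstruction, it cannot recover detail never sampled.

def upscale_2x(img):
    """Nearest-neighbour 2x upscale (stand-in for a fancier spatial filter)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blurred)

low = np.random.default_rng(0).random((4, 4))   # a fake 4x4 low-res frame
out = unsharp_mask(upscale_2x(low))
print(out.shape)                                 # an 8x8 "sharpened" frame
```

On a perfectly flat region the unsharp mask changes nothing, which is the point: only existing contrast gets boosted.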
 

mckmas8808

Banned
it won't look good though when only using a standard temporal upsampler.

the best chance consoles have for a decent universal reconstruction method is Intel's XeSS, but that also has to show some actual real-world results first; given that it uses machine learning, though, it at least seems to do more than the usual temporal upsampling seen in almost any modern engine.

what makes DLSS 2.0 so good is the combination of temporal data and its machine-learning ability to judge how the final image should look according to how it was trained.

without that final step you will get artifacts and an overly soft image. you can of course try to sharpen the image, but that will result in what FSR 1.0 does and look like a really bad sharpening filter with flicker everywhere.

all the evidence points towards us having already pushed temporal AA upscaling to its limits, and FSR 2.0 will most likely not look even the slightest bit better than what we already have.

I trust your judgement on this, but I'd love to see FSR 2.0 in motion first.
 

ToTTenTranz

Banned
i guess that's the only thing left for AMD, is to include ML cores in RDNA 3 to have some sort of DLSS upscaling.

FSR is open source, RSR is driver based. Perhaps Microsofts DirectML will be utilized in RDNA 3 ?!
Why would AMD include ML cores just for upscaling, if FSR 2.0 does the same with similar results?

RDNA3 is most probably not going to have ML cores. Not even CDNA2 has dedicated ML cores. At most they'll enable some kind of matrix multiplication boost like they did in CDNA, but even that I find hard to believe.
 

sankt-Antonio

:^)--?-<
The Nvidia method isn't using ML on the fly, right? It's a trained algorithm running on parts of the hardware, but it's not evolving or adapting per game in real time. So I guess AMD is also using an algorithm that was created with ML, just using different datasets while performing its task?
 
Last edited:

elliot5

Member
The Nvidia method isn't using ML on the fly, right? It's a trained algorithm running on parts of the hardware, but it's not evolving or adapting per game in real time. So I guess AMD is also using an algorithm that was created with ML, just using different datasets while performing its task?
No
 

Notabueno

Banned
The Nvidia method isn't using ML on the fly, right? It's a trained algorithm running on parts of the hardware, but it's not evolving or adapting per game in real time. So I guess AMD is also using an algorithm that was created with ML, just using different datasets while performing its task?

DLSS has to be trained with ML on each specific game; my understanding is FSR doesn't have to train on each game and can be applied on the fly to any.
 
Last edited:

Rudius

Member
960p games incoming on PS5 and Series X, reconstructed to 1440p with bells and whistles and a solid 60fps, I hope… I'll take it and have no problem with this if it looks good.
They would rather reconstruct 1080p to 4K with the performance mode.
 
The 2.0 upgrade is fantastic. I'm just salivating at the possibilities of using dedicated ML cores in an RDNA 3/4 architecture, since DLSS 3.0 will be released soon and is bound to be better than DLSS 2.3. (Yes, I know DLSS is Nvidia-only.)

Devs aren't chasing resolution; the future is RT implementation.

How does FSR/RSR/DLSS affect RT implementation, with and without ML cores, without too much tech jargon?
 
Last edited:
V6LRIFV.jpg


DLSS 2.3 has more of a sharpening/crisp effect. FSR 2.0 is so damn good, it looks just as good as native 4K with less color saturation, though not as crisp/sharp as DLSS 2.3. I guess that's why they say better-than-native for DLSS? Because it sharpens edges without aliasing artifacts?!
 
Last edited:

sircaw

Banned
Sorry for the daft question, born stupid.....

The statement "that it does not require machine learning": what are the benefits of this in the console space?
 
Last edited:

ToTTenTranz

Banned
What's the latency of no-ML FSR2 vs ML-prepared DLSS2?
It depends on the hardware and internal resolution, but apparently their overhead is quite similar:
https://wccftech.com/nvidia-dlss-2-0-behind-the-scenes-how-the-magic-happens/



HPRIOui.jpeg


This table is for performance mode, BTW; "quality" takes about 10-15% more time from what I see.




DLSS has to be trained with ML on each specific game; my understanding is FSR doesn't have to train on each game and can be applied on the fly to any.
DLSS 2.x runs the same neural network for all games. It doesn't need per-game training.
The downside is, of course, that it's made in a way that requires dedicated tensor hardware, and it's exclusive to one IHV that has no presence in home consoles.
 

Thirty7ven

Banned
How does FSR/RSR/DLSS affect RT implementation, with and without ML cores, without too much tech jargon?

In practice, upscaling solutions all do the same thing: they are much less resource-intensive than rendering at native resolution, which allows devs to allocate more resources to other areas of a game, like RT features for example.

If you have dedicated cores, Tensor cores in the case of Nvidia, you get even better results. Similar to how having a dedicated decompressor saves CPU resources.
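As a back-of-the-envelope illustration of that resource saving (assuming shading cost scales roughly with pixel count, and ignoring the upscaler's own fixed per-frame overhead):

```python
# Pixel-count arithmetic for common upscaling ratios (back-of-the-envelope;
# assumes shading cost scales with pixels shaded and ignores the upscaler's
# own fixed per-frame overhead).
native_4k     = 3840 * 2160   # 8,294,400 pixels
perf_1080p    = 1920 * 1080   # "performance" mode: 2x upscale per axis
quality_1440p = 2560 * 1440   # "quality" mode: 1.5x upscale per axis

print(f"1080p -> 4K shades {perf_1080p / native_4k:.0%} of native pixels")
print(f"1440p -> 4K shades {quality_1440p / native_4k:.1%} of native pixels")
```

So a 1080p-to-4K performance mode shades only a quarter of the native pixel count, which is the budget that can be redirected to RT and other features.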
 

ethomaz

Banned
V6LRIFV.jpg


DLSS 2.3 has more of a sharpening/crisp effect. FSR 2.0 is so damn good, it looks just as good as native 4K with less color saturation, though not as crisp/sharp as DLSS 2.3. I guess that's why they say better-than-native for DLSS? Because it sharpens edges without aliasing artifacts?!
I must be blind, because even with that lower-quality JPEG, FSR 2.0 doesn't look like native 4K.
Same for DLSS 2.3, which looks sharper but worse than native.
It is more noticeable if you look at the wall behind.
 
Last edited:

nemiroff

Gold Member
A temporal upscaler at the same level of quality and performance as an upsampler capable of reconstruction and running on optimized hardware…? Well, I want to be wrong, but we'll see. Competition is obviously great.
 

01011001

Banned
Standard?

they can act like they are doing something special, but until they show it with independent tests and not their own screenshots, I expect this to look either the same as or worse than Unreal's TAAU

so I expect this to be a very basic temporal upsampling
 

Fafalada

Fafracer forever
they can act like they are doing something special, but until they show it with independent tests and not their own screenshots, I expect this to look either the same as or worse than Unreal's TAAU
so I expect this to be a very basic temporal upsampling
I'm not saying anything about the quality of FSR 2.0, but "standard" would imply we already had some kind of agreed common implementation, which has not been my experience with temporal image algorithms. Lots of different approaches, with different trade-offs, and even the big engines haven't stuck to one "standard" for long, so that doesn't count either.

That aside, at least AMD's implementation has a publicly documented mathematical basis (not just pointing fingers at DLSS; TAAU and others are just as opaque in terms of what they "actually" do), so verifying quality can be more quantifiable and less of an exercise in opinions.
Also, the openness means you could integrate the reconstruction with other things, such as changing the pixel arrangements (e.g. using checkerboarding instead of a directly lower resolution), offering considerably more flexibility than the two above-mentioned "standard" implementations.
 
Last edited:

Rudius

Member
I must be blind, because even with that lower-quality JPEG, FSR 2.0 doesn't look like native 4K.
Same for DLSS 2.3, which looks sharper but worse than native.
It is more noticeable if you look at the wall behind.
Neither is better in every aspect, but both are superior in specific ways: basically less aliasing and better thin lines.
 

01011001

Banned
I must be blind, because even with that lower-quality JPEG, FSR 2.0 doesn't look like native 4K.
Same for DLSS 2.3, which looks sharper but worse than native.
It is more noticeable if you look at the wall behind.

which one of these do you think looks best? I mixed them around and scaled them up 200%

iKPxKBX.png


sadly the original image has JPEG compression artifacts and isn't really great quality... a high-res PNG would have been better, but oh well...
 
Last edited:

sankt-Antonio

:^)--?-<
DLSS has to be trained with ML on each specific game; my understanding is FSR doesn't have to train on each game and can be applied on the fly to any.
DLSS moved to a general algorithm as far as I know. It's not really using ML on the card per se, but a trained neural network; those are two different things. At least to my understanding.
 