
AMD FSR 3 FidelityFX Super Resolution Technology Detailed

Killer8

Member
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.

It's like the reverse of the '24fps is cinematic' meme.
 

winjer

Gold Member
The 60fps minimum input kinda limits things.

There isn't a base minimum 60 fps requirement. AMD just recommends it.
This is probably because at a 30 fps base, each fake frame will stay on screen too long and people will start to notice artifacts.
33ms is probably too much for this kind of tech.
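A quick back-of-the-envelope in Python (my own illustration, not anything from AMD's slides) shows why that 33ms figure matters: the gap between real frames sets both how far the interpolator has to guess and how long each generated frame stays on screen.

```python
# Frame timing when every real frame is followed by one generated frame.
for base_fps in (30, 45, 60):
    gap_ms = 1000 / base_fps   # time between two real samples
    shown_ms = gap_ms / 2      # how long each generated frame is displayed
    print(f"base {base_fps} fps: interpolating across {gap_ms:.1f} ms, "
          f"each generated frame visible for {shown_ms:.1f} ms")
```

At a 30 fps base you're guessing across a 33.3 ms gap and every guess is on screen for 16.7 ms; at a 60 fps base both numbers halve, which is presumably why AMD recommends it.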
 

SomeGit

Member
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.

It's like the reverse of the '24fps is cinematic' meme.

I agree. It's still a bit confusing to me: it doesn't work all that well at low framerates, and at high framerates it invalidates one of the big pros of a high framerate, which is input latency. Not to mention the games where it's outright broken, like Portal RTX.

But to be fair, I never really fell in love with DLSS/FSR like some people do. I keep seeing the "it's better than native" meme all the time, but whenever I use it, it's noticeably worse, with the only pro being that it skips awful TAA implementations.
 
Last edited:

SmokedMeat

Gamer™
Can't believe people are actually complaining about FSR. The nerve of you twits. Without it, Nvidia would add another $200 on top of their already uber expensive but not-so-great selection, just because they can.

We should be thanking AMD for this, we need the damn competition. The more open options we have, the better off we are.

The truth is we need better competition. Intel I believe in. It’s impressive what they’ve accomplished with their Arc GPUs since release, and their upscaling tech is great. Funny thing is even they’re rocking 16GB of VRAM, while Nvidia skimps.

AMD puts out nice cards, but they immediately fuck it up with the pricing. They had Nvidia by the balls and then botched the launch of their new cards.

It's bad enough gamers are just programmed to blind buy Nvidia without doing their homework. I get it; they're the first at releasing new features. But the truth is their drivers are no better, and they're fucking the shit out of customers more than ever. The 4070 Ti shouldn't have 12GB of VRAM and cut-down memory bandwidth. The 4070 will be a joke, as will the 8GB 4060 Ti. They're basically selling people the same cards as the two-year-old previous gen with a new number, and the same price.
 
Those slides were so depressing lol. Last year they made it sound like it could be ready for Q1.

Now it's:
• FSR 3 development going strong
• Still too early to share details

The good news...

The bad news:
• We can't rely on color clamping to correct the color of outdated samples
• Non-linear motion interpolation is hard with 2D screen-space motion vectors
• Interpolating final frames means all post-processing and UI needs to be interpolated
 
Last edited:

Fafalada

Fafracer forever
Can it be enabled only if frame-rate goes below 60fps?
It's open-source, so it's pretty much in the hands of developers how to use it.
Which makes this a bit more exciting as a development - maybe the same algorithm can be retrofitted to do extrapolation, and get actual latency improvements too.

What if the game reaches something like 45-50fps then it gets doubled and locked down to 60fps?
It can work as framerate compensation, but you'd get the input latency of 30fps plus interpolation overhead every time the 'real' framerate drops below 60.
So probably a bit of an odd experience: visual 60, with fluctuating input latency.
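To make that fluctuation concrete, here's a toy Python model of the 'double and lock to 60' scenario (the overhead figure is my own placeholder, not a measured number):

```python
# With a 60 Hz output cap, frame generation can only pace real frames at
# 30 fps, so whenever the game can't hold 60 natively, input latency jumps.
OUTPUT_HZ = 60
OVERHEAD_MS = 3.0  # assumed interpolation + presentation cost (made up)

def latency_ms(real_fps: float) -> float:
    if real_fps >= OUTPUT_HZ:
        return 1000 / OUTPUT_HZ            # native 60: no generation needed
    paced_fps = OUTPUT_HZ / 2              # real frames paced down to 30 fps
    return 1000 / paced_fps + OVERHEAD_MS  # one held frame + generation cost

for fps in (60, 55, 45):
    print(f"game renders {fps} fps -> ~{latency_ms(fps):.1f} ms input latency")
```

You'd oscillate between ~17 ms and ~36 ms of input latency as the real framerate crosses 60, while the screen shows a steady 60 the whole time.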
 

FingerBang

Member
It can work as framerate compensation, but you'd get the input latency of 30fps plus interpolation overhead every time the 'real' framerate drops below 60.
So probably a bit of an odd experience: visual 60, with fluctuating input latency.
I wonder how it feels. I still think that given the same input lag (let's say 40 fps), people would prefer the higher frame rate.

I personally only used DLSS3 to push to 4K/120Hz, and I have to say it's pretty good.
 

01011001

Banned
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.

It's like the reverse of the '24fps is cinematic' meme.

Well, at high framerates the artifacts will not be obvious if implemented well.
And if you have the choice between 60fps or 120fps with the same input lag either way, why not go for 120fps? You're not losing anything...

This is of course assuming the implementation works well. DLSS3 seems to be hit and miss, with some games working perfectly and others having some issues.
 
Last edited:

RoadHazard

Gold Member
When TVs do frame interpolation we hate it, when GPUs do it we love it!

(Yes, I realize the GPU can do it better because it has more information about where the interpolated pixels are coming from.)
 
Last edited:
The truth is we need better competition. Intel I believe in. It’s impressive what they’ve accomplished with their Arc GPUs since release, and their upscaling tech is great. Funny thing is even they’re rocking 16GB of VRAM, while Nvidia skimps.

AMD puts out nice cards, but they immediately fuck it up with the pricing. They had Nvidia by the balls and then botched the launch of their new cards.

It's bad enough gamers are just programmed to blind buy Nvidia without doing their homework. I get it; they're the first at releasing new features. But the truth is their drivers are no better, and they're fucking the shit out of customers more than ever. The 4070 Ti shouldn't have 12GB of VRAM and cut-down memory bandwidth. The 4070 will be a joke, as will the 8GB 4060 Ti. They're basically selling people the same cards as the two-year-old previous gen with a new number, and the same price.
Reminder: Nvidia is releasing an 8-gigabyte VRAM card in 2023.
Bunch of fucking clowns.
 

Fafalada

Fafracer forever
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.
I wouldn't write it off quite yet.
As I point out above, the same algorithm should ostensibly map to extrapolation, which does have latency benefits.
 
Last edited:

bbeach123

Member
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.

It's like the reverse of the '24fps is cinematic' meme.
Tbh it's not like games nowadays don't have artifacts...

TAA causes every game to be a blurry mess, with ghosting and artifacting.
DLSS helps, but it's still very, very visible.

If we're fine with TAA, I don't think interpolation could be any worse.
 
Last edited:

hlm666

Member
What if the game reaches something like 45-50fps then it gets doubled and locked down to 60fps?
I don't think you would lock it to 60Hz, right? It should be like DLSS3 FG, where you don't want to cap the framerate. So your 45-50fps should come out to around 90-100fps, with the latency of 45-50fps plus probably a few milliseconds on top, similar to frame generation. The 60fps in the slide refers to the base frame rate input: they recommend at least 60fps for optimum results.
 
FSR 2 is still a long way behind DLSS 2, so I'm not sure how this will turn out, but hopefully it works well, as AMD could do with a win or two.

Maybe it's the hardware, maybe it's the AI model, but DLSS is normally an auto-select for me now, as 9.5/10 times it looks better than native + TAA. But I wouldn't use FSR 2 in the RE4 Remake even if it quadrupled my FPS, as it looks like absolute crap, with awful stability on fine objects.

I'm not saying DLSS is perfect, but when the standard AA mode is still TAA, it is normally 'free' performance and 'better' IQ than native, as TAA has all manner of problems itself, some of which DLSS mitigates. I say 'better' because it depends what bothers you, I guess: the TAA artifacts or the DLSS ones. But to me DLSS has the least annoying problems compared to native + TAA or FSR.
 

Imtjnotu

Member
Tbh it's not like games nowadays don't have artifacts...

TAA causes every game to be a blurry mess, with ghosting and artifacting.
DLSS helps, but it's still very, very visible.

If we're fine with TAA, I don't think interpolation could be any worse.

I haven't seen TAA ghosting like this in a while.
 

YCoCg

Member
"We copied Nvidia's homework, how does it look?"
- AMD, the company any time Nvidia does anything
You forgot "and made it open source, so it works on anything, not JUST the latest overpriced GPUs from Nvidia".

That's a good thing btw. Why do you think we even have TVs with higher refresh rates now? VRR in the HDMI standard is an adaptation of FreeSync, which AMD pushed for after G-Sync.
 

Boy bawang

Member
If you get more fps but don't improve the input latency, it's not really useful to the gameplay. It looks better, but like DLSS 3, it's more style over substance. DLSS 2.x was the real breakthrough in my opinion.
 

Bojji

Member
Just a leak, so take it with a grain of salt...



That would mean it won't be supported as much as DLSS3. FSR2 gained popularity because it was platform agnostic; if it were AMD-only, devs wouldn't give a fuck most of the time.
 

SF Kosmo

Al Jazeera Special Reporter
Gonna be interesting how well this works after nvidia saying it was too hard to get working without their updated ada optical flow accelerator.
What nVidia is doing is totally different. Interpolated frames are nothing new - it's why your grandma's TV makes everything look like a soap opera. Oculus has used motion-vector-based frame interpolation to cover for dropped frames for years.

But, as with upscaling, nVidia's AI-based approach can do this in a way that is basically undetectable. And while AMD can run deep-learning-based algos like XeSS, the lack of hardware designed for the task means the performance benefits aren't really there.
 

JonSnowball

Member
Frame interpolation in games for FSR and DLSS is somewhat amusing in a way... that's the exact feature anyone with an interest in video would disable immediately on any media player or television. Sometimes I go to Olive Garden, pour oil all over myself, and slide around avoiding security.
 

alucard0712_rus

Gold Member
I can't imagine just how bad it will be IQ-wise.
Those super resolution and motion interpolation methods have been in TVs for ages. The point of what NVIDIA is doing is using AI to improve the image quality in the end, and it worked because of AI training and other advancements in AI. There have been no other advancements in super resolution or motion interpolation besides AI.
What AMD is doing is introducing the same technologies but built without AI, which completely defeats the goal.
FSR2 looks like crap compared even to the AI-based Performance and Ultra Performance presets, despite the same preset names and performance levels.
Expect even worse picture quality than in the latest Jedi game.
 
Last edited:

Xcell Miguel

Gold Member
I can't imagine just how bad it will be IQ-wise.
Those super resolution and motion interpolation methods have been in TVs for ages. The point of what NVIDIA is doing is using AI to improve the image quality in the end, and it worked because of AI training and other advancements in AI. There have been no other advancements in super resolution or motion interpolation besides AI.
What AMD is doing is introducing the same technologies but built without AI, which completely defeats the goal.
FSR2 looks like crap compared even to the AI-based Performance and Ultra Performance presets, despite the same preset names and performance levels.
Expect even worse picture quality than in the latest Jedi game.
It's not just AI. DLSS3 also gets motion vectors from the game engine, something that TVs and media players using frame interpolation do not have and can only guess at by comparing pixels. That's why DLSS3 FG gives higher quality than simple frame interpolation.

That's why it has to be implemented by the devs in the engine; it's not just something put on top of the final rendered image.

If FSR3 is only driver-side, would it have motion vectors from the engine? I guess not, and it would not be able to separate the UI from the 3D scene easily either.
If it's driver-side it could only use final images, so the game could still report x FPS while the driver doubles it. It could work on any game, but with worse quality than if it were baked into the game engine, I guess.
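A toy numpy sketch of the difference (my own illustration; the real DLSS3/FSR3 pipelines are vastly more involved, with occlusion handling, optical flow and UI masking):

```python
import numpy as np

def tv_style_blend(prev, curr):
    # No motion vectors: just average the two frames and hope.
    # Every moving edge turns into two half-bright ghosts.
    return (prev.astype(float) + curr.astype(float)) / 2

def mv_interpolate(prev, motion):
    # Engine motion vectors (pixels moved per frame) let us warp the previous
    # frame forward half a step, so objects land where they actually were at
    # the mid-frame time. Occlusion/disocclusion is ignored entirely here.
    h, w = prev.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(np.rint(xs - 0.5 * motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - 0.5 * motion[..., 1]).astype(int), 0, h - 1)
    return prev[src_y, src_x]

# A bright 1-pixel bar moving 4 px right per frame on a 1x16 strip:
prev = np.zeros((1, 16)); prev[0, 4] = 1.0
curr = np.zeros((1, 16)); curr[0, 8] = 1.0
motion = np.zeros((1, 16, 2)); motion[..., 0] = 4.0
print(tv_style_blend(prev, curr)[0])   # two 0.5 ghosts, at x=4 and x=8
print(mv_interpolate(prev, motion)[0]) # one full bar at x=6, the true midpoint
```

With vectors the bar ends up exactly where it should be at the in-between moment; without them you get the classic double image.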
 

winjer

Gold Member
Aight, my bad then. It's still FSR, so I don't see why 3 can't come.

A few games on the PS5 and Series S/X already use FSR2. But that is impossible for the Switch, as the overhead of a temporal upscaler is too high for such a weak console, resulting in a loss in performance.
FSR3 can probably be done on the current-gen consoles. Maybe on the GCN consoles. But very unlikely on the Switch.
 

alucard0712_rus

Gold Member
It's not just AI. DLSS3 also gets motion vectors from the game engine, something that TVs and media players using frame interpolation do not have and can only guess at by comparing pixels. That's why DLSS3 FG gives higher quality than simple frame interpolation.

That's why it has to be implemented by the devs in the engine; it's not just something put on top of the final rendered image.

If FSR3 is only driver-side, would it have motion vectors from the engine? I guess not, and it would not be able to separate the UI from the 3D scene easily either.
If it's driver-side it could only use final images, so the game could still report x FPS while the driver doubles it. It could work on any game, but with worse quality than if it were baked into the game engine, I guess.
Yeah, I forgot about the motion vectors from the engine. That might improve quality a bit.
Also, this is getting crazy: DLSS2, FSR2, XeSS, DLSS3, FSR3, XeSS2... Poor devs! :messenger_grinning_smiling:
 

Hugare

Member


If they manage to make it work properly, it would be a breakthrough.

But this is AMD we're talking about.

I've seen FSR 1 look better than FSR 2.

FSR 2 looks like shit 90% of the time.

And now they will put frame gen on top of that.

I'm curious to see this train wreck.
 

Fafalada

Fafracer forever
If FSR3 is only driver-side, would it have motion vectors from the engine? I guess not, and it would not be able to separate the UI from the 3D scene easily either.
No to either.

Oculus has used motion-vector-based frame interpolation to cover for dropped frames for years.
Well - not quite. What Oculus has access to is motion input from the headset - no motion vectors for the pixels 'inside' the frame, or any other buffers (like depth etc.).
I.e. it's still an RGB-only operation - a lot like the TV.
However - the more important thing is that what VR drivers do isn't interpolation - it's reprojection based on predicted motion and position data. Or if you want to use 'one' word - it's extrapolation. So the resulting motion-to-photon latency is equivalent to the native framerate target, not the 'more than double native' that DLSS3 produces.
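The latency difference in rough numbers (illustrative arithmetic only, not measurements of any particular headset or driver):

```python
# Motion-to-photon at a 60 fps native target. Interpolation has to hold the
# newest real frame until the next one arrives before it can show the
# in-between; extrapolation reprojects the newest frame immediately using
# predicted motion, so nothing waits on a future frame.
FRAME_MS = 1000 / 60

native_ms = FRAME_MS            # render, then display
interpolated_ms = 2 * FRAME_MS  # newest real frame delayed a full frame
extrapolated_ms = FRAME_MS      # reproject right away with the predicted pose

print(f"native ~{native_ms:.1f} ms, interpolated ~{interpolated_ms:.1f} ms, "
      f"extrapolated ~{extrapolated_ms:.1f} ms")
```

That's the whole appeal of retrofitting this kind of algorithm for extrapolation: the generated frames come for free latency-wise.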
 
Last edited:

Holammer

Member
If you get more fps but don't improve the input latency, it's not really useful to the gameplay. It looks better, but like DLSS 3, it's more style over substance. DLSS 2.x was the real breakthrough in my opinion.
If we add additional frames to your typical Sony walking simulators or Nintendo games, players won't notice much and it'll only look smoother, because of the control and camera sluggishness such games have. Not all games are CS:GO.
 

SF Kosmo

Al Jazeera Special Reporter
Well - not quite. What Oculus has access to is motion input from the headset - no motion vectors for the pixels 'inside' the frame, or any other buffers (like depth etc.).
That was true when it launched, but I'm pretty sure they added a more robust interpolation algorithm that does use motion vectors.

But, of course, the Oculus implementation is limited by the need for extremely low latency. Where DLSS3 (and likely FSR 3) use two frames to create an in-between, Oculus just uses the start point and motion vectors to create the in-between while the next frame is still rendering.
 

Fafalada

Fafracer forever
That was true when it launched, but I'm pretty sure they added a more robust interpolation algorithm that does use motion vectors.
They improved the algorithm to do spatial reprojection in addition to rotational - but it's not using anything other than RGB as input from the game; it's a driver-level reprojection.

Where DLSS3 (and likely FSR 3) use two frames to create an in-between, Oculus just uses the start point and motion vectors to create the in-between while the next frame is still rendering.
There's not really an in-between. Every frame is reprojected in VR - the only difference is how big the delta-time is.
And as you note - there's no second data point, only the current rendered frame + motion input from the headset, so - not interpolation.
 