
AMD FidelityFX Super Resolution (FSR) launches June 22nd

twilo99

Member
To me this seems to be a stopgap for older-gen chips until they get RDNA3 out and can actually use ML hardware to compete directly with DLSS. It's something, but it's not the real deal.
 

M1chl

Currently Gif and Meme Champion
Such a lack of reading comprehension, it hurts.

When your effect improves lines but adds blur/loses texture finesse, it will excel where you have lines but lack any texture in which you could detect the loss of detail. As in, wait for it, THE VERY EXAMPLE just presented above.

And, best of all, this is supposed to be an example of how great the glorified TAA derivative in question is.
In my eyes the CAS looks like shit next to DLSS 2.0.

I am also working pretty heavily now, I just didn't want to pass on the fun.
 
Honestly...Looks horrible.



There's a very obvious difference in quality in the side-by-side, which is probably why they didn't do a direct scene comparison and left it as a side-by-side: it would look much worse, and be even more noticeable, with a swipe. DLSS looks substantially better. Very underwhelmed considering how long they've been cooking it. Hopefully this is just like Nvidia's first implementation of DLSS, where the next iteration actually does what they intend.
 
Is it sarcasm?
I hope it is sarcasm.

The bush in question is shown at a plainly different time of day. The foliage is fatter in the CAS image and thinner in the DLSS image. The rocks' image quality, in terms of lines and textures, is better in the DLSS version. So the details of the bush are hard to see at the darker time of day in the DLSS version, but you specifically cut that bit out to make a point. Feels deceptive.
 

llien

Member
The bush in question is during a plainly noticeable different time of day.
That is fine.
The previous version of the explanation that I've heard was along the lines of "it actually looks the same".

I call it progress.

image quality in terms of lines
Is better (as expected)

and textures are
Worse.

PS
Dude, not to wander into nowhere land with arguments, but on this very forum a green GPU owner challenged me to guess which of the two pics was the 1440p-to-4K upscale (also known as "4K DLSS Quality", chuckle).

And, guess what, it was easy peasy: the pic that added blur and lost texture detail was the upscaled one.
Shocking eh? :messenger_beaming:
 

llien

Member
There's a very obvious difference in quality in the side-by-side, which is probably why they didn't do a direct scene comparison and left it as a side-by-side: it would look much worse, and be even more noticeable, with a swipe.
I agree with that (the blur and loss of detail are very clearly visible), but take into account that what the green world calls "Quality" is called "Ultra Quality" in this presentation.

Overall, AMD just needs a "good enough" solution.
It doesn't need to beat anything on the market that 1060 and 1080 Ti users cannot use anyhow.
 

Kenpachii

Member
I agree with that (the blur and loss of detail are very clearly visible), but take into account that what the green world calls "Quality" is called "Ultra Quality" in this presentation.

Overall, AMD just needs a "good enough" solution.
It doesn't need to beat anything on the market that 1060 and 1080 Ti users cannot use anyhow.

Pretty much. People bitching that it's not as good as DLSS isn't important when you can't get DLSS on cards that don't support it anyway. It might as well not exist.

On the 22nd, AMD will have a solution available that Nvidia does not.

Honestly, most people will just see the performance gain; on the lower end, the loss of detail isn't going to matter much. Also, open source can only get better.

So honestly I am a bit confused by people trying to slam it.
 

//DEVIL//

Member
I agree with that (the blur and loss of detail are very clearly visible), but take into account that what the green world calls "Quality" is called "Ultra Quality" in this presentation.

Overall, AMD just needs a "good enough" solution.
It doesn't need to beat anything on the market that 1060 and 1080 Ti users cannot use anyhow.
Yup. Quality on AMD is probably the same level as the middle setting on DLSS, not the Quality setting... which looks close.

I do not expect this to beat DLSS out of the gate, but this is open source and it can only get better. Especially when most games are being developed on Xbox / PlayStation / AMD hardware.

Honestly, unless it's a head-to-head YouTube video zoomed in, it's very hard for me as a hardcore gamer to even notice. But most gamers out there will see it as a welcome enhancement to their gaming experience.
 

assurdum

Banned
All fanboys from both sides do that, parroting some nonsense that they read somewhere. Xbox is using hardware ML to do Auto HDR on old games that do not have HDR.
What exactly do you mean by hardware? Because the PS5 also has hardware ML (it's not clear to me whether it's part of the GE or just a Navi feature), but that's totally different from pretending it's like having dedicated chips/cores, as MS leads its fans, who blindly believe it, to think.
The countdown to the Riky laugh emoticon has started.
 
Yup. Quality on AMD is probably the same level as the middle setting on DLSS, not the Quality setting... which looks close.

I do not expect this to beat DLSS out of the gate, but this is open source and it can only get better. Especially when most games are being developed on Xbox / PlayStation / AMD hardware.

Honestly, unless it's a head-to-head YouTube video zoomed in, it's very hard for me as a hardcore gamer to even notice. But most gamers out there will see it as a welcome enhancement to their gaming experience.

Eh, TSR is a better solution in that case. It uses fewer resources and looks better than this thing. You know how DLSS 1.0 was garbage? It was because it did things the wrong way, the same way this does. I expect a complete overhaul of how this works.
 
That is fine.
The previous version of the explanation that I've heard was along the lines of "it actually looks the same".

I call it progress.


Is better (as expected)


Worse.

PS
Dude, not to wander into nowhere land with arguments, but on this very forum a green GPU owner challenged me to guess which of the two pics was the 1440p-to-4K upscale (also known as "4K DLSS Quality", chuckle).

And, guess what, it was easy peasy: the pic that added blur and lost texture detail was the upscaled one.
Shocking eh? :messenger_beaming:


It's easy to pick out because the DLSS one has better lines. You just look for text and can instantly tell whether it is DLSS or not.
 

//DEVIL//

Member
Eh, TSR is a better solution in that case. It uses fewer resources and looks better than this thing. You know how DLSS 1.0 was garbage? It was because it did things the wrong way, the same way this does. I expect a complete overhaul of how this works.
But we don't know yet. We don't know how good this is because we haven't seen the real high-quality mode yet in a proper head-to-head. Come June 23, or whenever the release date is, you still won't see a proper head-to-head, because my gut tells me those selected AMD games won't have a DLSS option enabled on day one. (I don't think Godfall has it.)
 
But we don't know yet. We don't know how good this is because we haven't seen the real high-quality mode yet in a proper head-to-head. Come June 23, or whenever the release date is, you still won't see a proper head-to-head, because my gut tells me those selected AMD games won't have a DLSS option enabled on day one. (I don't think Godfall has it.)
Doesn't Godfall use Unreal Engine? You can probably use TSR there.
 

FrankieSab

Member
What exactly do you mean by hardware? Because the PS5 also has hardware ML (it's not clear to me whether it's part of the GE or just a Navi feature), but that's totally different from pretending it's like having dedicated chips/cores, as MS leads its fans, who blindly believe it, to think.
The countdown to the Riky laugh emoticon has started.
I never even talked about the PS5 in my post? I never said it was not present on Sony's side.
When Sony announced the PS5 specs, they didn't mention Machine Learning (ML) capabilities. Microsoft made it one of the big features to focus on in the XSX's marketing campaigns. Now it's MS fans' fault for believing simple facts?
 
Epic's comparison:

Native 4k:

FSR from 1080p:

Very interesting. There is a loss of detail, but considering it comes from 1080p it's a very good improvement versus the quality lost, and the result is not that bad (I am generally against this type of tech). A curious detail is the background: it tries to reconstruct detail from the blurred background. I wonder if a rework of how depth of field is implemented, along with some tweaks to LOD textures and mip maps, could help games using these techniques.
 
Very interesting. There is a loss of detail in FSR, but considering it comes from 1080p it's a very good improvement versus the quality lost, and the result is not that bad (I am generally against this type of tech). A curious detail is the background: it tries to reconstruct detail from the blurred background. I wonder if a rework of how depth of field is implemented, along with some tweaks to LOD textures and mip maps, could help games using these techniques.
That isn't FSR.
 

MonarchJT

Banned
What exactly do you mean by hardware? Because the PS5 also has hardware ML (it's not clear to me whether it's part of the GE or just a Navi feature), but that's totally different from pretending it's like having dedicated chips/cores, as MS leads its fans, who blindly believe it, to think.
The countdown to the Riky laugh emoticon has started.
Everyone can run ML through the GPU; the problem is how fast. The PS5 doesn't have INT4 or INT8, at least going by their presentation and what the PS5 engineers have said.
 

Killer8

Member
I've read their method won't take motion vectors into account, so it will probably be comparable to DLSS 1.0, which famously stank until 2.0 came along.

Still, if it works on my GTX 1080 and potentially makes some games where I brute-force ray tracing run at a playable 30fps, that's a win.
 

ahtlas7

Member
This is pretty awesome. DLSS took a while to get to its current state. FSR may take a bit of practical use to get up to speed.

How does this get implemented on an Nvidia card?
 

Rikkori

Member
AMD has basically given up on the GPU space with this move, at least for a few more generations. They've chosen mass adoption, hence a simple and weak solution for upscaling, but in return DLSS will wipe the floor with them from both an image quality and a performance standpoint. So now not only do they have a massive deficit to make up for on the RT front, they're also still 3+ years behind on upscaling tech, and because they're too weak to add something without also having it for consoles, there's little hope they'll change course. Nvidia's going to finally hit that magic 90% discrete GPU market share.

GG no re. After 21 years of Radeon it looks like I have to move on.
 

Kenpachii

Member
This is pretty awesome. DLSS took a while to get to its current state. FSR may take a bit of practical use to get up to speed.

How does this get implemented on an Nvidia card?

Probably the same as CAS: implemented in the game itself.
 
AMD has basically given up on the GPU space with this move, at least for a few more generations. They've chosen mass adoption, hence a simple and weak solution for upscaling, but in return DLSS will wipe the floor with them from both an image quality and a performance standpoint. So now not only do they have a massive deficit to make up for on the RT front, they're also still 3+ years behind on upscaling tech, and because they're too weak to add something without also having it for consoles, there's little hope they'll change course. Nvidia's going to finally hit that magic 90% discrete GPU market share.

GG no re. After 21 years of Radeon it looks like I have to move on.
AMD seems to be going more for the mainstream. Not only PC, but consoles, phones, etc. Nvidia is becoming broader, but at the same time more specialized with their tech. If you want performance and quality, go green. If you want a basic solution, go red. Although I love AMD CPUs.
 
AMD seems to be going more for the mainstream. Not only PC, but consoles, phones, etc. Nvidia is becoming broader, but at the same time more specialized with their tech. If you want performance and quality, go green. If you want a basic solution, go red. Although I love AMD CPUs.
There is no such thing as market segmentation when product of any sort is unavailable or sells out instantly due to an ongoing worldwide shortage.

Right now, AMD cards cost the same as or more than Nvidia cards, due either to retailer price inflation or scalping. In this market, you buy what you can get. It's unfortunate that if you only have the option of an AMD card you will be paying the same for less, but until the shortages ease in 2022/2023 that's how it's going to be.
 

Buggy Loop

Member
Wow, it keeps getting worse.

So it's not even just a per-game implementation; it would also have to be implemented and optimized in the drivers by each vendor?

Yeah, I doubt this will fly with Nvidia.
 

assurdum

Banned
I never even talked about the PS5 in my post? I never said it was not present on Sony's side.
When Sony announced the PS5 specs, they didn't mention Machine Learning (ML) capabilities. Microsoft made it one of the big features to focus on in the XSX's marketing campaigns. Now it's MS fans' fault for believing simple facts?
Cerny mentioned the PS5's ML capability in Road to PS5. Now, I never said the Series X is not ML capable. But it's not as capable as Xbox fans think, because it doesn't have dedicated cores for it. So if you want heavy ML features, the hardware resources useful for other stuff will have to be sacrificed just for that. It's simple logic.
 

Buggy Loop

Member
Devs will use the AMD solution because it's free and has the widest support. Nvidia will pay for exclusives and try to milk DLSS as much as possible, until it becomes untenable, just like G-Sync.

Free? Please tell me the cost of DLSS.
Both solutions now require implementation at the driver level, on a game-by-game basis. What's different? It's the same rollout.
 

assurdum

Banned
AMD has basically given up on the GPU space with this move, at least for a few more generations. They've chosen mass adoption, hence a simple and weak solution for upscaling, but in return DLSS will wipe the floor with them from both an image quality and a performance standpoint. So now not only do they have a massive deficit to make up for on the RT front, they're also still 3+ years behind on upscaling tech, and because they're too weak to add something without also having it for consoles, there's little hope they'll change course. Nvidia's going to finally hit that magic 90% discrete GPU market share.

GG no re. After 21 years of Radeon it looks like I have to move on.
I'm still trying to understand what the true point of your posts is, but I'm starting to think you are just here to troll about AMD and nothing more. Beyond "AMD is doomed" I barely see any other argument.
 

Kenpachii

Member
Devs will use the AMD solution because it's free and has the widest support. Nvidia will pay for exclusives and try to milk DLSS as much as possible, until it becomes untenable, just like G-Sync.

G-Sync = double the price for the screen.
Gains? Hard to explain.
FreeSync = free.
Gains? A fuck-ton of money, and every screen under the sun supporting it at no additional cost.

DLSS = NVIDIA DLSS will be added to Unity Engine before the end of 2021. ... Apparently, it'll take Unity developers only a few clicks to enable NVIDIA DLSS in their games.
Add Unreal Engine probably along with it before FSR even launches.

Yeah, I'm sure DLSS will die out as time progresses. That's just wishful thinking. Unless Nvidia's market crashes and burns in the upcoming years or FSR gives superior results, DLSS is here to stay.
 

Rikkori

Member
I'm still trying to understand what the true point of your posts is, but I'm starting to think you are just here to troll about AMD and nothing more. Beyond "AMD is doomed" I barely see any other argument.
If you're so lacking in basic reading comprehension then use the ignore button; I can't spell things out any simpler than I already have.

AMD will be fine; the CPU division will carry the day in a poetic reversal of fortune.

Your concern trolling is noted, though.
 

99Luffy

Banned
AMD has basically given up on the GPU space with this move, at least for a few more generations. They've chosen mass adoption, hence a simple and weak solution for upscaling, but in return DLSS will wipe the floor with them from both an image quality and a performance standpoint. So now not only do they have a massive deficit to make up for on the RT front, they're also still 3+ years behind on upscaling tech, and because they're too weak to add something without also having it for consoles, there's little hope they'll change course. Nvidia's going to finally hit that magic 90% discrete GPU market share.

GG no re. After 21 years of Radeon it looks like I have to move on.
The GPU market is now dictated by the laptop market, which is decided by what OEMs offer. Nvidia has been pretty much unchallenged in this market for the past... well, since discrete laptop GPUs were a thing.
But did you see the conference? It's pretty much a given that AMD will at least double their market share, simply because OEMs now seem to actually be offering AMD in their lineups.
 

llien

Member
You know how DLSS 1.0 was garbage? It was because it did things the wrong way, the same way this does.
You know what DLSS 1 was? A true AI solution, with per-game training.
Nothing else on this planet, including DLSS 2, works in any way "the same way" as DLSS 1.
DLSS 2 is 90% TAA, with some AI on top.
Tensor cores are just a lame excuse to ban Pascal.

Will never happen, so it's a dud then.
It is an open source product that Intel intends to grab and optimize.
If NV does not, well, their own fault.
It is bananas to expect AMD to optimize code that they have SHARED WITH COMPETITORS for a specific competitor's lineup.
DLSS is here to stay.
Only if supporting it is really very low effort (which I doubt).
 

llien

Member
a massive deficit to make up for on the RT front
Are you reading too much into green-sponsored pre-RDNA games?

How does the 6000 series win in WoW RT, Dirt 5, Fortnite?
Or have a close call in Godfall, and come in only somewhat behind in the latest Resident Evil?
Where is that "massive deficit"? (It's a rhetorical question.)
 

llien

Member
Both solutions now require implementation at the driver level, on a game-by-game basis
Dude, are you for real?
Did AMD show that 1060 demo running on patched NV drivers? :))

Where did the "game-by-game basis" even come from, FFS.

Jeez, you are biased.
 
You know what DLSS 1 was? A true AI solution, with per-game training.
Nothing else on this planet, including DLSS 2, works in any way "the same way" as DLSS 1.
DLSS 2 is 90% TAA, with some AI on top.
Tensor cores are just a lame excuse to ban Pascal.

Eh what.

DLSS 2.0 Selectable Modes

One of the most notable changes between the original DLSS and the fancy DLSS 2.0 version is the introduction of selectable image quality modes: Quality, Balanced, or Performance — and Ultra Performance with 2.1. This affects the game's rendering resolution, with improved performance but lower image quality as you go through that list.


With 2.0, Performance mode offered the biggest jump, upscaling games from 1080p to 4K. That's 4x upscaling (2x width and 2x height). Balanced mode uses 3x upscaling, and Quality mode uses 2x upscaling. The Ultra Performance mode introduced with DLSS 2.1 uses 9x upscaling and is mostly intended for gaming at 8K resolution (7680 x 4320) with the RTX 3090. While it can technically be used at lower target resolutions, the upscaling artifacts are very noticeable, even at 4K (720p upscaled). Basically, DLSS looks better as it gets more pixels to work with, so while 720p to 1080p looks good, rendering at 1080p or higher resolutions will achieve a better end result.
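Since the thread keeps mixing up per-axis and per-area scale factors, here is a quick back-of-the-envelope Python sketch of what the factors quoted above imply. The area factors (2x/3x/4x/9x) and the mode names come from the text; the exact rounding Nvidia applies to the internal render resolution is an assumption.

```python
# Back-of-the-envelope sketch of the upscale factors quoted above.
# The area factors come from the article; the rounding is an assumption.
import math

MODES = {                    # output pixels / rendered pixels
    "Quality": 2.0,
    "Balanced": 3.0,
    "Performance": 4.0,
    "Ultra Performance": 9.0,
}

def render_resolution(target_w: int, target_h: int, mode: str) -> tuple[int, int]:
    """Approximate internal render resolution for a given output resolution."""
    factor = MODES[mode]
    scale = 1.0 / math.sqrt(factor)          # per-axis scale
    return round(target_w * scale), round(target_h * scale)

for mode, factor in MODES.items():
    w, h = render_resolution(3840, 2160, mode)
    print(f"{mode:>17}: renders ~{w}x{h} ({100 / factor:.0f}% of the output pixels)")
```

For a 4K target this reproduces the figures in the article: Performance renders at 1080p (25% of the pixels), Ultra Performance at 720p (about 11%), with Quality and Balanced in between.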


How does all of that affect performance and quality compared to the original DLSS? For an idea, we can turn to Control, which originally had DLSS 1.0 and then received DLSS 2.0 support when it released. (Remember, the following image comes from Nvidia, so it'd be wise to take it with a grain of salt too.)



Control at 1080p with DLSS off (top), DLSS 1.0 on (middle) and DLSS 2.0 Quality Mode on (bottom) (Image credit: Nvidia)
One of the improvements DLSS 2.0 is supposed to bring is strong image quality in areas with moving objects. The updated rendering in the above fan image looks far better than the image using DLSS 1.0, which actually looked noticeably worse than having DLSS off.

DLSS 2.0 is also supposed to provide an improvement over standard DLSS in areas of the image where details are more subtle.



Control at 1440p using the original DLSS (top) and DLSS 2.0 Quality Mode (bottom) (Image credit: Nvidia)
Nvidia promised that DLSS 2.0 would result in greater game adoption. That's because the original DLSS required training the AI network for every new game that needed DLSS support. DLSS 2.0 uses a generalized network, meaning it works across all games and is trained using "non-game-specific content," as per Nvidia.

For a game to support the original DLSS, the developer had to implement it, and then the AI network had to be trained specifically for that game. With DLSS 2.0, that latter step is eliminated. The game developer still has to implement DLSS 2.0, but it should take a lot less work, since it's a general AI network. It also means updates to the DLSS engine (in the drivers) can improve quality for existing games. Unreal Engine 4 and Unity have both also added DLSS 2.0 support, which means it's trivial for games based on those engines to enable the feature.

How Does DLSS Work?


Both the original DLSS and DLSS 2.0 work with Nvidia's NGX supercomputer for training of their respective AI networks, as well as RTX cards' Tensor Cores, which are used for AI-based rendering.


For a game to get DLSS 1.0 support, Nvidia first had to train the DLSS AI neural network, a type of AI network called a convolutional autoencoder, with NGX. It started by showing the network thousands of screen captures from the game, each with 64x supersample anti-aliasing. Nvidia also showed the neural network images that didn't use anti-aliasing. The network then compared the shots to learn how to "approximate the quality" of the 64x supersampled anti-aliased image using lower-quality source frames. The goal was higher image quality without hurting the framerate too much.


The AI network would then repeat this process, tweaking its algorithms along the way so that it could eventually come close to matching the 64x quality with the base quality images via inference. The end result was "anti-aliasing approaching the quality of [64x Super Sampled], whilst avoiding the issues associated with TAA, such as screen-wide blurring, motion-based blur, ghosting and artifacting on transparencies," Nvidia explained in 2018.
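To make the "convolutional autoencoder trained against supersampled references" idea concrete, here is a toy PyTorch sketch of that training setup. It is emphatically not Nvidia's network: the architecture, tensor sizes, loss and data are placeholders, and the real training runs on NGX with game captures rather than random tensors.

```python
# Toy illustration of the idea described above: a convolutional autoencoder is
# trained to map aliased frames toward 64x-supersampled reference frames.
# NOT Nvidia's network; architecture, sizes and loss are placeholders.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # compress to a compact representation
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # expand back to image space
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                            # difference vs. the reference image

# Stand-ins for a real dataset: aliased frames and their supersampled references.
aliased = torch.rand(8, 3, 128, 128)
reference_64x_ssaa = torch.rand(8, 3, 128, 128)

for step in range(100):                          # Nvidia runs this on NGX at huge scale
    pred = model(aliased)
    loss = loss_fn(pred, reference_64x_ssaa)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```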


DLSS also uses what Nvidia calls "temporal feedback techniques" to ensure sharp detail in the game's images and "improved stability from frame to frame." Temporal feedback is the process of applying motion vectors, which describe the directions objects in the image are moving in across frames, to the native/higher resolution output, so the appearance of the next frame can be estimated in advance.
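The "motion vectors plus previous frame" part is easier to picture with a tiny NumPy sketch. This is just the general reprojection idea, with nearest-neighbour lookups and no disocclusion handling; it is not Nvidia's implementation.

```python
# Minimal sketch of temporal reprojection: use per-pixel motion vectors to warp
# the previous frame into the current frame's position so its detail can be reused.
import numpy as np

def reproject(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """prev_frame: (H, W, 3); motion: (H, W, 2) pixel offsets from prev -> current."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each current pixel, look up where it came from in the previous frame.
    src_x = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

prev = np.random.rand(1080, 1920, 3).astype(np.float32)
mvec = np.zeros((1080, 1920, 2), dtype=np.float32)
mvec[..., 0] = 2.0                     # pretend everything moved 2 px to the right
history = reproject(prev, mvec)        # warped history, ready to blend with the new frame
```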



DLSS 2.0 (Image credit: Nvidia)
DLSS 2.0 gets its speed boost through its updated AI network that uses Tensor Cores more efficiently, allowing for better framerates and the elimination of limitations on GPUs, settings and resolutions. Team Green also says DLSS 2.0 renders just 25-50% of the pixels (and only 11% of the pixels for DLSS 2.1 Ultra Performance mode), and uses new temporal feedback techniques for even sharper details and better stability over the original DLSS.

Nvidia's NGX supercomputer still has to train the DLSS 2.0 network, which is also a convolutional autoencoder. Two things go into it, as per Nvidia: "low resolution, aliased images rendered by the game engine" and "low resolution, motion vectors from the same images — also generated by the game engine."

DLSS 2.0 uses those motion vectors for temporal feedback, which the convolutional autoencoder (the DLSS 2.0 network) performs by taking "the low resolution current frame and the high resolution previous frame to determine on a pixel-by-pixel basis how to generate a higher quality current frame," as Nvidia puts it.
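Putting the last few paragraphs together, here is a rough sketch of that data flow: upscale the low-res current frame, reproject the previous high-res output with motion vectors, then combine them per pixel. A fixed exponential blend stands in for the learned network, so this shows the plumbing only, not DLSS's actual per-pixel decision logic.

```python
# Rough sketch of the data flow described above. A fixed blend factor stands in
# for the learned per-pixel decision the DLSS 2.0 network makes; everything here
# is illustrative, not Nvidia's algorithm.
import numpy as np

def upscale_nearest(img, out_h, out_w):
    """Nearest-neighbour upscale of the low-res current frame to output size."""
    h, w, _ = img.shape
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys[:, None], xs[None, :]]

def reproject(prev_frame, motion):
    """Warp the previous high-res output with per-pixel motion vectors
    (same idea as the reprojection sketch earlier)."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

def temporal_upsample(low_res_current, prev_high_res, motion, alpha=0.1):
    """Blend freshly rendered (coarse) detail with reprojected history."""
    out_h, out_w, _ = prev_high_res.shape
    current_up = upscale_nearest(low_res_current, out_h, out_w)
    history = reproject(prev_high_res, motion)
    return alpha * current_up + (1.0 - alpha) * history

low = np.random.rand(540, 960, 3).astype(np.float32)     # Performance-style 4x area input
prev = np.random.rand(1080, 1920, 3).astype(np.float32)  # previous full-res output
mvec = np.zeros((1080, 1920, 2), dtype=np.float32)       # per-pixel motion, in pixels
new_frame = temporal_upsample(low, prev, mvec)           # new full-res frame estimate
```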


The training process for the DLSS 2.0 network also includes comparing the image output to an "ultra-high-quality" reference image rendered offline in 16K resolution (15360 x 8640). Differences between the images are sent to the AI network for learning and improvements. Nvidia's supercomputer repeatedly runs this process, on potentially tens of thousands or even millions of reference images over time, yielding a trained AI network that can reliably produce images with satisfactory quality and resolution.


With both DLSS and DLSS 2.0, after the AI network's training for the new game is complete, the NGX supercomputer sends the AI models to the Nvidia RTX graphics card through GeForce Game Ready drivers. From there, your GPU can use its Tensor Cores' AI power to run the DLSS 2.0 in real-time alongside the supported game.


Because DLSS 2.0 is a general approach rather than being trained by a single game, it also means the quality of the DLSS 2.0 algorithm can improve over time without a game needing to include updates from Nvidia. The updates reside in the drivers and can impact all games that utilize DLSS 2.0.
 

assurdum

Banned
If you're so lacking in basic reading comprehension then use the ignore button; I can't spell things out any simpler than I already have.

AMD will be fine; the CPU division will carry the day in a poetic reversal of fortune.

Your concern trolling is noted, though.
Most of your posts are just about what a mess, what a disaster, how Nvidia will destroy them... and I'm the one lacking basic reading comprehension and trolling because I pointed out it's better to stop with this childish, unnecessary argumentation? :messenger_hushed: It's not like you can predict every possible evolution this early, with so few tech details available. If I'm not wrong, DLSS 1.0 was considered a massive failure and a disaster not so many years ago.
 

DaGwaphics

Member
Honestly...Looks horrible.



There's a very obvious difference in quality in the side-by-side, which is probably why they didn't do a direct scene comparison and left it as a side-by-side: it would look much worse, and be even more noticeable, with a swipe. DLSS looks substantially better. Very underwhelmed considering how long they've been cooking it. Hopefully this is just like Nvidia's first implementation of DLSS, where the next iteration actually does what they intend.

What needs to be seen is how it looks compared to the base resolution it is running at. If it looks better than the base res and has a limited performance cost, that's a win.

I thought the demos looked fine for what was shown. I wonder when we'll see supported titles release so that the YT tech lot can give it a good once-over.
 

llien

Member
random FUD about DLSS
Exactly the same shit has been repeated before (the same stupid claims keep popping up from folks who think of technology as some sort of magic), and I grew tired of posting all the links every time.
Regardless, "random FUD", eh? This cannot get any more pathetic, can it?

I am sorry if it caused you butthurt.
 