
AMD’s Equivalent to DLSS, FidelityFX Super Resolution Is Coming to PS5 & Xbox Series Consoles

Clear

CliffyB's Cock Holster
Well, not if it's hardware accelerated. Then it's also a physical problem.

Not entirely. However the result is achieved, it's going to run on the GPU, so technically speaking it's always going to be hardware accelerated. The question is how fast you can do it without resorting to things like customization for 4/8/16/32-bit packed math. AMD may simply be building for a system where the processing can occur on older hardware that has some packed-math support, but at a performance penalty, in addition to something more on par with Nvidia's offering on RDNA2-level hardware.

It does make sense to go this way, as basically any sort of upscaling technique is intended as a performance-saver versus native-resolution rendering, and as such, the lower-end the hardware, the greater the need for the boost.
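
To put the performance-saver point in rough numbers, here's a minimal, illustrative Python sketch (not anyone's actual code) of how the shaded pixel count falls as the internal resolution drops; real gains depend on how much of the frame is resolution-bound and on the upscaler's own cost:

# Illustrative only: rough pixel-count savings from rendering below native
# resolution and then upscaling.
def pixel_savings(native, scale):
    w, h = native
    internal = (int(w * scale), int(h * scale))
    saved = 1.0 - (internal[0] * internal[1]) / (w * h)
    return internal, saved

for scale in (0.50, 0.67, 0.75):
    internal, saved = pixel_savings((3840, 2160), scale)
    print(f"{scale:.2f}x -> {internal[0]}x{internal[1]}, ~{saved:.0%} fewer pixels shaded")

At a 0.50x per-axis scale that's 75% fewer pixels shaded per frame, which is exactly why the boost matters most on lower-end hardware.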
 

MonarchJT

Banned
Yep. It's the perfect summary of your posts. And you called it a personal attack when a post raised some doubt about the ML stuff on Series X by legitimately asking for more evidence? Your sense of loyalty to MS is admirable.
 
I don't know if it's a "Sony implementation" at all. Guerrilla's engine uses its own software solution for checkerboarding; it's not tied to hardware. And in any case, calling a 4K checkerboard comparable to 1800p native sharpness "ridiculous" is a bit overdramatic.
Yep, exactly. On Pro they don't even use the custom hardware (ID Buffer). I think the best CBR techniques are only seen on Pro with games that use the ID Buffer, like Days Gone, God of War, and maybe Detroit.
 
Last edited:

Ascend

Member
DLSS is no joke. That shit is black magic as far as I'm concerned. I hope for AMD's sake this is good.
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.
 
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.


Honestly, in some games the results I get are excellent, close to what I get at native resolution. Don't take CP2077 as representative of this technology.
 

hlm666

Member
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.


Not sure the comments in that AMD subreddit thread proved whatever you were trying to prove; they were arguing over him not using Quality DLSS because Auto selected a lower DLSS setting. But anyway, Cyberpunk suffers the same problem as Nioh 2 and a few other DLSS-supported games (like CoD), where they are using the wrong LOD bias.



And a reddit thread

 

Bo_Hazem

Banned
Speaking to Sebastian of Linus Tech Tips, Linus confirmed the reasoning as to why AMD's FidelityFX Super Resolution has yet to release:



Nvidia may have a clear advantage over AMD thanks to its DLSS, but it seems the company is holding back: it has been stated that they are launching their FidelityFX Super Resolution feature only when it's ready. This includes cross-support for the latest next-gen consoles, the PS5 and Xbox Series.

"Apparently rather than rushing it [FidelityFX Super Resolution] out the door on only one new top-end card, they want it to be cross-platform in every sense of the word. So they actually want it running on all of their GPUs, including the ones inside consoles, before they pull the trigger."




It would’ve been nice to have it ready by the time the cards launched, but as we saw with Nvidia’s DLSS 1.0 versus DLSS 2.0, it could be for the best.



This is great news, hope it's as good as advertised.
 

regawdless

Banned
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.


I think Cyberpunk has some specific issue with lost sharpness when turning on DLSS. It does appear blurrier, but I don't think it's just because of DLSS, because it's way better in other games. It's not a big deal though. I'm using a very light ReShade sharpening pass and it looks at least on par with native.

Judge for yourself, native vs DLSS with light ReShade. I don't think it's that big of a deal, and it's very acceptable given the huge fps boost.

u9E0MZa.jpg

Swloant.jpg


Of course one should not be forced to use reshade. But in this case, I think it's an oversight from the devs.
 

TonyK

Member
It's not out. So on what basis are you saying that it's a sharpening filter? Seriously, do you people have brains or not? Just posting whatever without thinking or waiting to see how it will work when it's out.



Bunch of armchair experts here lol
I thought it was AMD's FidelityFX CAS (Contrast Adaptive Sharpening), which you can see for example in Shadow of the Tomb Raider. If that's not the case, I'll wait to see the results.
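
For context, here's a minimal Python sketch of a plain post-upscale sharpening pass. This is not AMD's actual CAS code; real CAS additionally adapts its strength per pixel from local contrast, while this only shows the general "sharpen after upscale" idea:

# Illustrative only: a basic unsharp-mask style pass, NOT AMD's CAS.
import numpy as np

def simple_sharpen(img, amount=0.2):
    """img: float32 array (H, W) in [0, 1]; returns a sharpened copy."""
    # Cheap blur from the four axis neighbours (wrap-around edges kept for brevity).
    blurred = (
        np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) +
        np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) + img
    ) / 5.0
    # Add back a fraction of the high-frequency detail.
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)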
 
It's you who's obsessed with the war, dude; CBR isn't exclusively PlayStation stuff. Anyway, we haven't seen a single thing running ML on Series X and you already think it will be superior, blah blah, because whatever MS tells you is the holy bible. Prepare for a rude awakening, because ML is not an infinite resource, and if they use it they'll have to sacrifice other graphics work. I'm calling it.
Just stay calm, my friend, all will be well in the morning.
 
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.

He says he's playing at 2K, which means that at his DLSS settings the image is getting reconstructed from a 1153x626 internal resolution. I don't know what he expected lmao.
 

IntentionalPun

Ask me about my wife's perfect butthole
I like DLSS, but I feel like I notice little glitches when I turn it on: flickering in the corners of the screen, that sort of thing.

The very first scene in Cyberpunk had it, and I turned it off real quick lol
 
Last edited:

TheContact

Member
I really did not expect RDNA2 to support this; that's really awesome. I didn't think it would come out until RDNA3. This is awesome for console owners!
 

Ascend

Member
I think Cyberpunk has some specific issue with lost sharpness when turning on DLSS. It does appear blurrier, but I don't think it's just because of DLSS, because it's way better in other games. It's not a big deal though. I'm using a very light ReShade sharpening pass and it looks at least on par with native.

Judge for yourself, native vs DLSS with light ReShade. I don't think it's that big of a deal, and it's very acceptable given the huge fps boost.

u9E0MZa.jpg

Swloant.jpg


Of course one should not be forced to use reshade. But in this case, I think it's an oversight from the devs.
Too bad the images are .jpg rather than .png, but it's indeed hard to tell the difference without going pixel by pixel. Most of the artifacts are seen in motion, though, so in a way screenshots look better than what it actually looks like. One of the issues is ghosting. Some people would notice, some would not. If you can live with it, by all means use it.




And then there's this:



Look at how it looks in motion, with all that shimmering, and then when he freezes the footage, suddenly everything looks a lot better for DLSS. He goes on to compare the screenshots and say Quality DLSS looks better than native. But it didn't look better in motion, because of the additional shimmering. And what kills me is that he never mentions it, and only talks about the positives the whole video.

He says he's playing at 2K, which means that at his DLSS settings the image is getting reconstructed from a 1153x626 internal resolution. I don't know what he expected lmao.
Yeah. The lower your output resolution, the worse it will perform, obviously. That's why it's best used for 4K.
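
To make that concrete, here's a quick illustrative Python sketch using the commonly cited (approximate) per-axis scale factors for the DLSS 2.x presets; the lower the output resolution, the less source data the reconstruction has to work with:

# Approximate, commonly cited per-axis scale factors; treat as illustrative.
presets = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50, "Ultra Perf.": 0.333}
outputs = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

for name, (w, h) in outputs.items():
    internals = ", ".join(f"{p}: {int(w * s)}x{int(h * s)}" for p, s in presets.items())
    print(f"{name} output -> {internals}")

At 4K Performance the input is still a full 1920x1080 frame, while at 1080p Performance the reconstruction starts from just 960x540.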
 

regawdless

Banned
Too bad the images are .jpg rather than .png, but it's indeed hard to tell the difference without going pixel by pixel. Most of the artifacts are seen in motion, though, so in a way screenshots look better than what it actually looks like. One of the issues is ghosting. Some people would notice, some would not. If you can live with it, by all means use it.




And then there's this:



Look at how it looks in motion, with all that shimmering, and then when he freezes the footage, suddenly everything looks a lot better for DLSS. He goes on to compare the screenshots and say Quality DLSS looks better than native. But it didn't look better in motion, because of the additional shimmering. And what kills me is that he never mentions it, and only talks about the positives the whole video.


Yeah. The lower your output resolution, the worse it will perform, obviously. That's why it's best used for 4K.


True. There's shimmering, and especially on very thin lines of light, for example, it's very visible.

Regarding that ghosting, Cyberpunk is a weird game. A friend of mine plays on a 1080 Ti, so obviously no DLSS. He plays at medium settings for high fps, and he complained heavily about ghosting. So I don't know what's up with that.
 

Zathalus

Member
True. There's shimmering, and especially on very thin lines of light, for example, it's very visible.

Regarding that ghosting, Cyberpunk is a weird game. A friend of mine plays on a 1080 Ti, so obviously no DLSS. He plays at medium settings for high fps, and he complained heavily about ghosting. So I don't know what's up with that.
Really bad TAA implementation.
 

reksveks

Member
Don't expect Super Resolution to be as good. I don't think it's being trained on individual games, so it probably won't be, but that should mean its application is more universal, which in theory means it can be applied more easily. There might be some licensing/legal work that needs to be done between the platform holders and the game publishers/devs (especially in the case of consoles), but that's a possibility.
 

Zathalus

Member
Don't expect Super Resolution to be as good. I don't think it's being trained on individual games, so it probably won't be, but that should mean its application is more universal, which in theory means it can be applied more easily. There might be some licensing/legal work that needs to be done between the platform holders and the game publishers/devs (especially in the case of consoles), but that's a possibility.
DLSS 2.0 is no longer trained on individual games either. DLSS 1.0 was, but that was terrible.
 

MonarchJT

Banned
"Let's discuss about tech stuff on gaf"

"You know ML be better and superior of CBR, yoh pal I see some panels dude, it's not because is MS behind it bro but serious tech argumentation lolz your leg shake right because it's just on xbox? Go away warrior, I have a serious argumentation here"
" Cool Xbox broh high fives"
Yes, yes, you know I'm right. Stomp your feet for as long as you want.
 

IntentionalPun

Ask me about my wife's perfect butthole
Except it's basically TAA with some NN post-processing on top. With all the strengths and weaknesses of... (tadaa)... surely you've guessed it: of TAA.

Not entirely accurate. It is similar to TAAU, but is not TAAU + something on top of it. It performs the final steps differently, and IIRC uses more samples during the upscaling part too.
 
Last edited:

Clear

CliffyB's Cock Holster
Not entirely accurate. It is similar to TAAU, but is not TAAU + something on top of it. It performs the final steps differently, and IIRC uses more samples during the upscaling part too.
Is it more than nominally based on machine learning any more? I've been wondering whether the actual yield of the machine-learning phase isn't some sort of blockchain-esque set of magic numbers that produces the most accurate transform results for a given pixel grid across frames.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Is it more than nominally based on machine learning any more? I've been wondering if the actual yield of the machine learning phase wasn't some sort of blockchain-esque set of magic numbers that produce the most accurate transform results for a given pixel grid across frames?
You can see an explanation here:



About 28 minutes in, he talks about the issues with TAAU.

At 38 minutes in, he details how DLSS 2.0 differs.

It does all the same sampling of multiple frames at lower resolutions, but the reconstruction (the last step) is totally different.

DLSS 1.0 I think used more DL/ML? It was trained on every individual game, and did ML on single low-res samples of frames. It took longer to run because of its heavier use of ML.
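
For anyone unfamiliar with what "sampling multiple frames at lower resolution" means in practice, here's a conceptual Python sketch of generic temporal accumulation, the part DLSS 2.0 shares with TAA-style upscalers. This is not NVIDIA's actual reconstruction step; DLSS replaces the hand-tuned blend/reject logic with a learned model:

# Conceptual sketch of generic temporal accumulation, NOT DLSS internals.
import numpy as np

def accumulate(history, current, motion, alpha=0.1):
    """history/current: (H, W) float frames; motion: (H, W, 2) per-pixel offsets."""
    h, w = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch where each pixel was last frame (nearest-neighbour for brevity).
    prev_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    prev_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Exponentially blend the reprojected history with the new frame's samples.
    return (1 - alpha) * reprojected + alpha * current

Each output pixel ends up as a weighted sum of many jittered low-resolution samples over time, which is where the extra detail comes from.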
 
Last edited:
I'd be very surprised if it comes to consoles.
Mainstream loves consoles. Mainstream $$$ guides technology.

A DLSS-enabled Switch 2 is apparently coming this year, and if Nintendo can build a 720p docked version of Zelda that upscales to 4K with DLSS? Then it's an absolute guarantee that some software iteration of this WILL hit PS5 and XSX/S. I give it 18 months.
 
Last edited:
Damn, Nvidia people really love their brand, don't they?

I'm interested to see how this turns out. Though on the console front, a bunch of games already use checkerboard rendering, which helps with image quality.
 

Buggy Loop

Member
Mainstream loves consoles. Mainstream $$$ guides technology.

A DLSS-enabled Switch 2 is apparently coming this year, and if Nintendo can build a 720p docked version of Zelda that upscales to 4K with DLSS? Then it's an absolute guarantee that some software iteration of this WILL hit PS5 and XSX/S. I give it 18 months.
I really doubt AMD's solution will be AI/ML based (rumours have been saying this for a while now). I'm expecting a refinement of checkerboarding and sharpening.

To begin with, RDNA 2 doesn't have dedicated cores for ML. It has the capability to do 4-bit and 8-bit int math in the rasterization pipeline, but again, it's a battle over how many resources you dedicate to getting a DLSS-style pass with minimal millisecond impact (very important) without sacrificing your rasterization because, in AMD's case, you dedicated too many CUs to it.

Microsoft is looking into AI upscaling for Xbox regardless of AMD's solution, I believe, but it's not going to be a magic switch flip to good image quality and a sudden huge gain in performance if they have to eat away processing time from rasterization. Without tensor cores, that's quite a lot of raw math-crunching performance gone.
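
That "millisecond impact" point can be put in back-of-envelope terms. The numbers below are placeholders, not measured figures for any GPU or model; the sketch just shows how inference cost on shared CUs eats directly into the frame budget:

# Back-of-envelope sketch; all figures are hypothetical placeholders.
def inference_cost_ms(model_ops, effective_tops, utilisation=0.5):
    """model_ops: int ops per upscaled frame; effective_tops: usable int8 throughput."""
    return model_ops / (effective_tops * 1e12 * utilisation) * 1e3

frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
cost = inference_cost_ms(model_ops=5e9, effective_tops=50)  # made-up values
print(f"~{cost:.2f} ms of a {frame_budget_ms:.1f} ms frame spent on upscaling")

Whatever the real numbers turn out to be, every millisecond the upscaler takes on shared hardware is a millisecond that rasterization no longer gets.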
 

Clear

CliffyB's Cock Holster
You can see an explanation here:



About 28 minutes in, he talks about the issues with TAAU.

At 38 minutes in, he details how DLSS 2.0 differs.

It does all the same sampling of multiple frames at lower resolutions, but the reconstruction (the last step) is totally different.

DLSS 1.0 I think used more DL/ML? It was trained on every individual game, and did ML on single low-res samples of frames. It took longer to run because of its heavier use of ML.


Seems like I was basically correct, in that it uses the results of deep learning as data for a selectively correct strategy for sample discards/integrations.
Basically, the gist of the section you indicated is that with a fixed heuristic approach you get artifacting and image degradation, so by using a more selective approach you side-step the issues that multi-frame reconstruction typically causes. On top of that, they simplify some of the TAA post-process work but run it against the reconstructed full-res image instead, so the denoising is more effective.
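
For reference, here's an illustrative Python sketch of the sort of fixed heuristic being replaced: classic TAA-style neighbourhood clamping, which rejects history that strays outside the current frame's local colour range. It suppresses ghosting but also throws away valid detail, which is the artifacting/degradation trade-off described above. It's a generic example, not DLSS code:

# Illustrative fixed heuristic (generic TAA-style history clamping), NOT DLSS.
import numpy as np

def clamp_history(history, current):
    """history/current: (H, W) float frames; clamp history to the local 3x3 range."""
    pad = np.pad(current, 1, mode="edge")
    h, w = current.shape
    neighbours = np.stack([pad[dy:dy + h, dx:dx + w]
                           for dy in range(3) for dx in range(3)])
    return np.clip(history, neighbours.min(axis=0), neighbours.max(axis=0))

A learned model instead decides per pixel how much of the accumulated history to trust, which is the "selectively correct strategy" you described.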
 
Last edited:

99Luffy

Banned
I really doubt AMD's solution will be AI/ML based (rumours have been saying this for a while now). I'm expecting a refinement of checkerboarding and sharpening.
I guess it all depends on what you would consider AI/ML.
Remember that DLSS 1.0 used tensor cores, but then Nvidia went all shaders for DLSS 1.9, then back to tensor cores plus shaders for DLSS 2.0.
Was DLSS 1.9 an AI/ML-based upscaling solution? I would think so, if Nvidia still called it "deep learning super sampling."
It should also be noted that 1.9 had better performance than 2.0.
 
Last edited:

MrSec84

Member
For those who are interested, RDNA1's whitepaper mentions integer math below FP16, down to 8-bit and 4-bit, across some of the stream processors. Microsoft's comments about Series X featuring machine learning read as essentially identical to the RDNA1 whitepaper.


"Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows."

Any RDNA GPU should be able to use ML to handle this FidelityFX Super Resolution technique; it seems that rather than having dedicated processing hardware do it, it just requires the GPU to slot smaller integer calculations into its existing ALUs.
It could even be possible for Vulkan to expose a software-based version for older GPUs; it just depends on how the API developers write new versions of Vulkan or modify iterations of the API.
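
To unpack the whitepaper's "8-bit integer dot4" wording, here's a plain Python reference of the arithmetic (four int8 multiplies summed into a 32-bit accumulator), which is the basic building block for ML inference on these ALUs. It only illustrates the math; it isn't GPU code or any vendor's intrinsic:

# Reference-level illustration of an int8 dot4 with a 32-bit accumulator.
def dot4_i8_acc32(a, b, acc):
    """a, b: sequences of four int8 values; acc: running 32-bit accumulator."""
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127
        acc += x * y  # hardware keeps this sum in a 32-bit register to avoid overflow
    return acc

print(dot4_i8_acc32([127, -128, 5, 7], [2, 3, -4, 1], acc=0))  # -143

Packing four 8-bit operands into each 32-bit lane is where the extra inference throughput comes from on hardware that supports these modes.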
 

MonarchJT

Banned
1.0 was a true ML-based solution.
Lots of investment.
Years of development.
Outcome? Failed miserably.

Fast forward to 2.0 and oh, so cool.
Except it's basically TAA with some NN post-processing on top. With all the strengths and weaknesses of... (tadaa)... surely you've guessed it: of TAA.
Tadaa, no, it's a mix of both.
 
Last edited:
It's advertised as black magic more than it really is. And make sure to read the comments:



DLSS is useful if you want to play at 4K with a weak card. It is not better than native. It does not make RT more useful, and just like pretty much all other proprietary nVidia tech, it will end up either dead, or being replaced by an open source alternative.

In CP2077, DLSS is blurry because the internal DLSS sharpening is not being used.
Try:

[DLSS]
EnableCustomMipBias = true
Sharpness = 0.2

Also I use negative LOD bias in NV Inspector.
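
On the LOD bias point: the usual guidance for upscalers (not specific to this ini tweak) is to bias texture mip selection by roughly log2(render_width / display_width), so textures keep the detail the final output resolution can actually show. A quick illustrative Python check:

# Rule-of-thumb negative mip/LOD bias for upscaled rendering; values illustrative.
from math import log2

def mip_bias(render_width, display_width):
    return log2(render_width / display_width)

print(f"{mip_bias(2560, 3840):+.2f}")  # e.g. 2560 internal -> 3840 output: about -0.58
print(f"{mip_bias(1920, 3840):+.2f}")  # e.g. 1920 internal -> 3840 output: -1.00

A game that leaves the bias at 0 while rendering internally at a lower resolution will pick blurrier mips than the output resolution warrants, which matches the blurriness people describe.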
 
I guess it all depends on what you would consider AI/ML.
Remember that DLSS 1.0 used tensor cores, but then Nvidia went all shaders for DLSS 1.9, then back to tensor cores plus shaders for DLSS 2.0.
Was DLSS 1.9 an AI/ML-based upscaling solution? I would think so, if Nvidia still called it "deep learning super sampling."
It should also be noted that 1.9 had better performance than 2.0.
I heard that 1.9 stuff ages ago, but I struggle to find it nowadays. Got a source?
 
I really doubt AMD's solution will be AI/ML based (rumours have been saying this for a while now). I'm expecting a refinement of checkerboarding and sharpening.

To begin with, RDNA 2 doesn't have dedicated cores for ML. It has the capability to do 4-bit and 8-bit int math in the rasterization pipeline, but again, it's a battle over how many resources you dedicate to getting a DLSS-style pass with minimal millisecond impact (very important) without sacrificing your rasterization because, in AMD's case, you dedicated too many CUs to it.

Microsoft is looking into AI upscaling for Xbox regardless of AMD's solution, I believe, but it's not going to be a magic switch flip to good image quality and a sudden huge gain in performance if they have to eat away processing time from rasterization. Without tensor cores, that's quite a lot of raw math-crunching performance gone.
God of War used an impressive checkerboard upscale, but the technology feels old now. The Red Dead 2 checkerboard was atrocious. Refinement is necessary. Expected now, even.

"Sharpening" reminds me of the old cathode days when sharpening made everything worse than when you started, so I just get nervous whenever I see references.

That being said, DLSS 2.0 has minor issues with ghosting, but it's otherwise pretty astonishing. But it isn't coming to PS5 or XSX. Ever. I get that. Dedicating the entirety of the CPU to an alternative is counter intuitive. I get that too. There's got to be a half decent parity option in the pipeline though.
 

assurdum

Banned
Yes, yes, you know I'm right. Stomp your feet for as long as you want.
I know you think you're right. The real issue is trying to have a conversation with an echo chamber of MS PR, because beyond repeating their marketing spin, I can't get much else out of it, unfortunately.
 
Last edited: