Ghosting can't be solved; it's in the nature of the technology (and of every temporal AA solution that will ever exist). If you don't understand that, then you don't understand what the parameters of DLSS are. Go watch this a couple of times so you know:
Moreover, even if that weren't the case: since DLSS 2.0 the ML model is a generalised one rather than trained for each game individually, so there's even less of a chance you'd hit that magic combination of getting all the weights & heuristics right.
AMD tech is new and still not complete. It took Nvidia years to make DLSS good. It was not this good at launch.
AMD's solution just launched.
No, it hasn't. It reduced ghosting in some games while exacerbating other artefacts, and in other games it didn't even do that. And if we look at the latest version of it in RDR 2, it's a complete mess. So don't be so eagerly premature in your celebration of "solved".

That is 2.0. Ghosting has been solved with 2.6 and upwards.
EDIT: Sorry 2.2 and upwards.
DLSS 2.2 Reported to Reduce Ghosting in Games
The new version addresses DLSS-related graphical anomalies in popular games like Cyberpunk 2077, Death Stranding, and Metro Exodus. (player.one)
Yes, given infinite computing resources (and likely time) that's theoretically possible, at least to a visually subjective lossless degree. Just like if we had infinite resources for curing cancer, fusion energy, intergalactic travel and world peace. Meanwhile, in the real world...

The model being generalized is why it can solve things like ghosting. Ghosting is a repeating pattern that the AI can learn to recognize and eliminate during reconstruction, by recognizing particularly spaced clusters of pixels matching specific criteria for opacity and underlying motion data. Since it no longer tries to learn each individual game, this kind of general problem, which arises in very particular circumstances regardless of what game it's in, becomes eminently solvable.
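Nvidia hasn't published DLSS's internals, so purely as an illustration of the idea: hand-written TAA commonly fights ghosting with a heuristic like the one below, rejecting accumulated history colors that fall outside the current frame's local neighborhood. This is a toy numpy sketch of that classic technique, not anything from DLSS itself; all names are made up for the example.

```python
import numpy as np

def clamp_history(history, current, radius=1):
    """Toy anti-ghosting step from hand-written TAA: clamp each pixel's
    accumulated history color to the min/max of the current frame's local
    neighborhood, so stale (ghosted) colors get rejected."""
    h, w = current.shape
    clamped = np.empty_like(history)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = current[y0:y1, x0:x1]
            clamped[y, x] = min(max(history[y, x], window.min()), window.max())
    return clamped

stale = np.full((4, 4), 1.0)   # bright ghost trail left in the history buffer
frame = np.zeros((4, 4))       # current frame: the object has moved away
print(clamp_history(stale, frame).max())  # -> 0.0, the ghost color is rejected
```

The trade-off is exactly what the thread describes: clamp too aggressively and you re-introduce flicker and lose legitimate detail, clamp too little and trails survive, which is why "solved" is always a matter of degree.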
As a side note, it might not be a great idea to base your idea of a technology solely on an Nvidia-sponsored outlet like DF.

Really?
If you're playing at a lower resolution, you're upscaling anyway. FSR gives better results than just letting your monitor upscale the image. It's getting silly at this point.

I'd rather just play with a lower native resolution than try to upscale, if those are the results. DLSS 2.2 is just other-worldly good at this point.
This is crazy. Somebody needs to explain how DLSS works; how can it look better than native res?
Also, the good thing is that FSR does not need to be used at all, because it produces quality you can achieve cheaply by just dropping the res.

The good thing about FSR is that it can be implemented on consoles.
As long as I can play games running at 60fps with FSR Quality, DLSS doesn't matter.
They look the same.
—
Sent from my Nokia 3310
There is this one version (2.2.6?) that Nvidia put in R6 that solves almost all issues with DLSS. Why there are "newer" versions in games that work like previous DLSS builds is beyond me, but WE HAVE this one near-perfect version that can be implemented in all DLSS 2.0 games; they just need to CTRL+C.
LOL, this is quite a common pattern for many "tech experts" here: watching comparisons/screenshots on lower-resolution displays and saying:
DLSS isn't nearly as good in RDR 2. I think it's more the engine, though.
Better to go with unbiased Youtubers like Hardware Unboxed and Morons Law is Dead

Hahahahaha
General public won’t see it ..
It's just the first release. Avengers looks bad, but Necromunda does not, so it has potential, I guess. Hopefully being open source will make FSR improve quickly.
I believe it does not need to be exactly as good as or better than DLSS. If it starts producing great results and can be used more easily, better TAA will be developed with FSR in mind, and new engines might factor this in during development. If it gets to, say, 80% of the IQ of DLSS at the same framerates, I'd say it's good enough, if that means any game can launch with FSR support on any platform without much tinkering, unlike DLSS, unless I got it wrong. (btw, I love what Nvidia is doing with DLSS! Black magic stuff. I just want both to succeed!)

It depends on how well TAA is implemented in the game. In games where it's not so great, like Control, Death Stranding and Avengers, DLSS can produce better results than TAA even with a lower internal resolution. FSR operates on an image with TAA already applied, so in games where the TAA is already not so good at native resolution, FSR will just make everything look worse.
Necromunda uses UE4, which has a quite good TAA implementation.
Yeah, except FSR is software based vs DLSS which is hardware based, so don't expect FSR to catch up.

DLSS is mature; the results are spectacular after several versions.
FSR is only at its first iteration; AMD will improve it for sure, so wait and see.
Red Dead Redemption 2 DLSS 2.2.10.0 Benchmarks
Yesterday, Rockstar released the highly anticipated DLSS patch for Red Dead Redemption 2, so we've decided to benchmark it. (dsogaming.com)
"All in all, we are really disappointed by the DLSS implementation in Red Dead Redemption 2. Contrary to other games, DLSS does not bring a big performance boost in Red Dead Redemption 2. And even though it uses the latest 2.2.10.0 version, it brings a lot of aliasing at both 1080p and 1440p. Therefore, we strongly recommend avoiding it at these low resolutions. As for 4K, we can only recommend DLSS Quality (and certainly not the other modes) to those that have performance issues but do not want to lower their in-game settings. However, and if you can hit 60fps at all times, you should simply avoid using DLSS!"
Is RDR2's DLSS really that bad??
I've read it's a RAGE engine issue with transparency, so it can't be "fixed" by DLSS.
The hair is what shocked me the most. It's literally the total opposite example of the Avengers quality:

It's less impressive than in other games, but I've been enjoying playing it at 4K in Performance mode. Looks and runs better than what I was doing earlier (AA + reduced internal rendering resolution).
Left: TAA Right: DLSS
Or a cheap method that doesn't require specialized hardware.

Deep Learning looks better than a sharpening filter, shocking.
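For context on what a "sharpening filter" can and can't do: FSR 1.0's second pass (RCAS) is a contrast-adaptive relative of the classic unsharp mask sketched below (this is not AMD's actual code, just the family of technique). Spatial sharpening amplifies detail already in the frame; it cannot reconstruct detail that was never rendered, which is the gap deep-learning reconstruction tries to fill.

```python
import numpy as np

def unsharp_mask(signal, amount=0.5):
    """Classic 1D unsharp mask: boost each sample's difference from its
    local average. Amplifies existing edges; invents no new detail."""
    blurred = np.convolve(signal, np.ones(3) / 3, mode="same")
    return signal + amount * (signal - blurred)

edge = np.array([0.0, 0.0, 1.0, 1.0])  # a simple brightness edge
sharp = unsharp_mask(edge)
# Edge contrast increases, with the characteristic overshoot (halo)
# on either side that over-sharpened images show.
```

The overshoot in the output is the same mechanism behind the halo artefacts people report when sharpening is pushed too hard.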
FSR is a lazy effort by AMD
Based on those pics, the only difference I can notice is the bridge cables.

DLSS being better than native is crazy to me.
Have they fixed all artifacts/bugs yet?
That makes no sense. Lowering resolution is not going to improve picture quality. The whole point is to increase resolution and still maintain high FPS.

People linking Hardware Unboxed. Can't wait for the time this source is going to be banned here. Massive Radeon fanboy, no objectivity.
It's really amazing AMD Radeon engineering, truly groundbreaking.
DLSS destroys a lot more detail than FSR & native, but the fanboy brigade is never eager to point that out, instead only showing wires. Don't forget that FSR, unlike DLSS, is not a form of AA, which means you can pick and choose your poison. Like I said, there are pros & cons to all these methods.
...and since when does FSR improve picture quality?
But that is scientifically worse - because the reference image is what they are supposed to exactly look like.

It's even bitch slapping native rendering.
You guys can keep your image reconstruction. I've never had a great experience with it in any form and will continue to use native, even if I have to knock down some settings.
Yes, they're speakers.

Good post.
Whereas the bridge images show DLSS being better, your pics show DLSS looking like shit. What are those black boxes? Speakers? DLSS is a flat surface! lol
Actually, no. The "reference image" in this case is not the native render. The "reference" in this case does not exist; it's a theoretical image rendered at infinite quality from the 3D scene presented by the game. DLSS works towards that theoretical "reference", using machine-learned 'guesses' to plug gaps and build on the native render, while FSR just works to not lose details from the native render.
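A toy numpy illustration of that distinction (a deliberately simplified model, not either vendor's actual algorithm): treat a high-res signal as the "theoretical reference", sample it at quarter resolution with a different sub-sample offset each "frame", then compare using only one low-res frame against combining samples across frames.

```python
import numpy as np

# High-res "ground truth" signal standing in for the theoretical reference image.
truth = np.sin(np.linspace(0, 8 * np.pi, 64))

# Four jittered low-res "frames": each samples the truth at a different offset,
# the way a temporally jittered renderer samples sub-pixel positions over time.
frames = [truth[offset::4] for offset in range(4)]

# Spatial-only view (FSR-like): one frame is all you have, stretched to full size.
spatial = np.repeat(frames[0], 4)

# Temporal view (DLSS-like): samples from several frames are combined, recovering
# detail no single low-res frame contains.
temporal = np.empty(64)
for offset, frame in enumerate(frames):
    temporal[offset::4] = frame

err_spatial = np.abs(spatial - truth).mean()
err_temporal = np.abs(temporal - truth).mean()
```

In this idealized static case the combined samples reproduce the reference exactly; the hard part in a real game is motion, which is where the motion vectors, heuristics, and the ghosting debate above come in.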
From the female face picture set, FSR 1.0 is a much better quality reproduction of the native image. I prefer the DLSS 2.2 image, but if you changed the context of the things it is altering compared to native, then important detail would be lost by comparison with FSR 1.0.
Those speakers look like hot trash native or FSR, and actually look like a speaker grille in DLSS. The DLSS is way, way above in that speaker pic; it's no contest at all. Native and FSR, the grille is just sparse and hot trash.
I just checked Nvidia's blurb, and it does seem like they are doing more than their original DLSS - not least by no longer training the algorithm per game, and now using the same domain-problem neural network to do all the inference for all games.
Where previously they were doing per-game DLSS, the one-neural-net-fits-all situation seems to place DLSS into more extreme results at best and worst - going by the speaker pics and the RDR2 DLSS advice in another comment.
The native render and the FSR are actually the result of artifacting. Real speakers with mesh or cloth grilles usually look like the DLSS render, unless the grille is really sparse, really thin, or has light shining directly into it.
I understand if you've never seen such (the 'open' style of speakers is pretty popular nowadays), but they exist, and they do look like that.
edit: actually looking closer, the mesh on those speakers is indeed pretty sparse, but my overall point still stands. To me the image presented by DLSS looks neater and more realistic than the native-render one. However, it's possible that that result is unintended, so I'll admit it could be an issue that should be fixed.
It also requires a measurement metric. Compared to the native render as a point of reference, and using the things you listed - higher fidelity resembling supersampled rendering and reduced noise in details - as the metric, the DLSS render is better.

the term "better" requires a point of reference
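To make "a measurement metric" concrete, here is one common objective metric, PSNR, which scores an image against whatever you pick as the reference. This is a minimal sketch; serious comparisons also use perceptual metrics such as SSIM, because PSNR alone misses much of what the eye cares about.

```python
import numpy as np

def psnr(image, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB against a chosen reference image.
    Higher means closer to the reference; identical images score infinity."""
    mse = np.mean((np.asarray(image) - np.asarray(reference)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

reference = np.ones((8, 8))           # stand-in "ground truth" frame
candidate = reference - 0.1           # render uniformly off by 0.1
print(round(psnr(candidate, reference), 1))  # -> 20.0
```

Note that the whole argument in this thread is about which reference to plug in: the native render, or the theoretical "infinite quality" image.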