
NVIDIA announces DLSS 3.5 with Ray Reconstruction, launches this fall.

PaintTinJr

Member
PSNR isn't a particularly good way to assess differences. Video encoding has shown for years how gameable it is. You can, for example, encode a video in H.264/HEVC in PSNR mode, which should surely be the most accurate, and yet it can look perceptually worse than the encoder applying its own optimizations.

The whole point of models like PSNR and SSIM is also to try to provide a way of predicting human perception. That comes first and foremost, not the other way around.




In other words, how accurate these models are is judged against human beings' opinions. The models aren't meant to be used as a refutation of human judgement.

So yeah, I think people's opinions are definitely a very valid way of assessing the image performance of DLSS vs FSR, particularly if you build a consensus around it.
I agree, but most of these comparisons are done with still images. You can still use PSNR in a meaningful way when it's measured across successive frames, which is what the internal algorithms in video codecs do: they sacrifice total PSNR for a signal that preserves frame transitions in preference to individual frame PSNR.
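To make that concrete, here is a minimal sketch, purely my own illustration and not any codec's actual implementation, of the difference between scoring each frame on its own and scoring the frame-to-frame transitions. It uses plain NumPy; the function names and the choice of peak value for the difference signal are assumptions made for the example.

```
import numpy as np

def frame_psnr(ref, test, peak=255.0):
    """Per-frame PSNR in dB: higher means the frame is numerically closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def transition_psnr(ref_frames, test_frames, peak=255.0):
    """PSNR of the frame-to-frame differences: rewards preserving how the image
    changes over time rather than matching each still frame in isolation."""
    scores = []
    for i in range(1, len(ref_frames)):
        d_ref = ref_frames[i].astype(np.float64) - ref_frames[i - 1].astype(np.float64)
        d_test = test_frames[i].astype(np.float64) - test_frames[i - 1].astype(np.float64)
        mse = np.mean((d_ref - d_test) ** 2)
        # Differences span [-peak, peak], so use 2*peak as the nominal peak here (an assumption).
        scores.append(float("inf") if mse == 0 else 10.0 * np.log10((2 * peak) ** 2 / mse))
    return scores
```

An upscaler (or encoder) can trade a little per-frame score for a better transition score, which is roughly the temporal trade-off described above.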

The problem with the current opinions is that Nvidia dwarfs AMD in marketing and market share, these opinions aren't being formed under controlled conditions, and divergence from native isn't being treated as noise when the viewer finds the image pleasing.
 

amigastar

Member
😁 My 4090 is ready. I can’t wait to see what this does for CP2077 in person. It already looks insane with path tracing but now the reflections are gonna be improved even more which is crazy when you think about it.
I'm curious, though: how many watts does your power supply have for the 4090?
 

DonkeyPunchJr

World’s Biggest Weeb
Has there even been a case of a developer/artist complaining of DLSS/FSR producing a “phony image that’s not consistent with their vision” or something like that? Or that FSR is somehow more faithful to native than DLSS even though everyone unanimously agrees it looks worse than DLSS?

What a weird position to fall back to.
 

SF Kosmo

Al Jazeera Special Reporter
Too bad Nvidia doesn't have an x86 licence... I bet they'd be able to make something really good for next-gen consoles too.

Switching to ARM might be the only way for that to happen. I'm sick of AMD at this point.
I don't think it's x86 that's the issue but x64, which is owned by AMD and cross-licensed with Intel. I think it would be pretty hard for Nvidia to get around that unless they partnered with Intel or something.
 
 

Killer8

Member
The problem with the current opinions is that Nvidia dwarfs AMD in marketing and market share, these opinions aren't being formed under controlled conditions,

I agree it would be ideal to do a proper study on it. I think there are already plenty of comparisons out there for people to come to their own conclusions though and the majority appear to have decided that DLSS, in most circumstances, is better than FSR.

Now you could say that that's Nvidia marketing at work, or Nvidia owners being defensive of their favorite company. I guess that is a fair concern when deciding whether a consensus formed online is fair. But I don't think those factors are so influential as to fully discredit DLSS being viewed as superior by most. We'd basically be arguing over something that isn't provable at that point, though.

Personally speaking, as an Nvidia user I think DLSS is the better of the two based on all of the comparisons I've seen. Given that a lot of games now ship with a choice of DLSS, FSR or XeSS, it's in my best interest to know which is superior, because that is the one I'd want to use. If FSR were better, it would make no sense for me to stick with an inferior algorithm out of brand loyalty.

Fanboyism would only make some semblance of sense if the algorithms were locked to each vendor's cards: DLSS to Nvidia, FSR to AMD and XeSS to Intel. But FSR is platform agnostic and XeSS can be used via DP4a instructions. The Nvidia user therefore has the least reason to stay invested in one over the others, because they have access to all three. The way I see it, if anything it's the AMD user who has more at stake in proving FSR's superiority, because they don't have the option of using DLSS instead.
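For context, DP4a is just a packed 8-bit dot product with a 32-bit accumulate, which recent GPUs from all three vendors expose in some form (CUDA has it as the __dp4a intrinsic), and that is what lets XeSS's fallback path run on non-Intel hardware. Below is a rough Python reference for what such an instruction computes, written only as an illustration; the lane order and signed interpretation are my own assumptions, and real hardware has both signed and unsigned variants.

```
def dp4a(a_packed: int, b_packed: int, acc: int) -> int:
    """Reference behaviour of a DP4a-style op: treat each 32-bit word as four
    signed 8-bit lanes, multiply the lanes pairwise, and add the sum to the accumulator."""
    def lanes(word):
        out = []
        for i in range(4):
            byte = (word >> (8 * i)) & 0xFF
            out.append(byte - 256 if byte > 127 else byte)  # reinterpret as signed int8
        return out
    return acc + sum(x * y for x, y in zip(lanes(a_packed), lanes(b_packed)))

# Example: (1*5 + 2*6 + 3*7 + 4*8) + 0 = 70
print(dp4a(0x04030201, 0x08070605, 0))
```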

and divergence from native isn't being treated as noise when the viewer finds the image pleasing.

In terms of comparisons, a spanner thrown into the works for PSNR is that the mission of these algorithms is not just image reconstruction but also image enhancement. In that regard an image could be considered to diverge more from the native 'ground truth', but if it does so in a more pleasing way, how could that not be considered an advantage?

The latest DLSS versions, for example, aim to reduce things regarded as artifacts, such as ghosting and the breakup of fine visual elements like chain-link fencing. If a native 4K image with TAA applied suddenly had none of the infamous TAA ghosting, and some parts of the screen were resolving more akin to a supersampled image, it could technically get a worse PSNR score, but would that really be fair if it's now a less flawed image? I think what the viewer deems more pleasing is a very valid factor in determining superiority.
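As a toy illustration of that point, entirely my own construction and nothing to do with how DLSS or FSR actually work: if the "reference" is a native frame with TAA smearing, a reconstruction that resolves detail the reference has already lost can score a worse PSNR than one that faithfully reproduces the smear.

```
import numpy as np

def psnr(a, b, peak=1.0):
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)

# Toy 1-D "scanline": a hard edge stands in for fine detail such as chain-link fencing.
ideal = np.repeat([0.0, 1.0], 64)                              # what supersampling would resolve
taa_native = np.convolve(ideal, np.ones(9) / 9, mode="same")   # "native + TAA": edge smeared out

faithful = taa_native + rng.normal(0, 0.01, ideal.size)   # reconstruction that copies the smear
enhanced = ideal + rng.normal(0, 0.01, ideal.size)        # reconstruction that resolves the edge

print(psnr(faithful, taa_native), psnr(enhanced, taa_native))  # "faithful" wins vs the TAA reference
print(psnr(faithful, ideal), psnr(enhanced, ideal))            # "enhanced" wins vs the ideal signal
```

Judged purely against the TAA'd native frame, the cleaner image loses; judged against what the scene "should" resolve to, it wins, which is why the PSNR number alone doesn't settle the argument.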

FSR 2.0 itself looks not just equivalent to native but better in some comparisons. In that regard it's also diverging, but in a pleasing way.
 

PaintTinJr

Member
(FYI, I've exclusively bought Nvidia desktop GPUs since the ATI Radeon 9800 Pro)

Let's say it wasn't video games using upscaling but TV sets: would it be acceptable for a TV reviewer to say that an 8K source on an 8K TV was inferior to a 1080p source being upscaled by that TV's upscaler when it diverged from native?

I don't think anyone would accept that as having merit, because the source material, unlike games, isn't at the mercy of being bandwidth-limited at the creation stage: it is capturing reality. The reviewer and the review would just look odd, IMO, so under no circumstances would I accept that an upscaled image can be superior to its target, even if it fixes undesired issues in the target. For me the obvious reason is that the upscaler has to have a "target", and second-guessing what consumers collectively may or may not find better is not a firm "target".

People might point to TV scalers doing exactly that, but the difference is that they still have a firm target. When Sony's X1 Extreme or XR scalers decompose an image to identify objects and then upscale based on inference about that object (e.g. a tiger), the target is still a higher-bandwidth version of the captured reality in the video, but they are then limited to staying below reality because the inference has to blend with everything else, which is why the algorithm has manual controls. By contrast, games just can't have better than native, because native is the ground truth (the highest-bandwidth representation, even with artefacts). You can't even blindly tessellation-enhance a game without the developer leaving guidance in the renderer, because tessellating a faceted object to be smooth in an old game could be the exact opposite of the creator's intention, or might unbalance the image in a way that makes it worse.
 