The problem with the current opinions is that Nvidia dwarfs AMD in marketing and market share, and these comparisons aren't being made under controlled conditions.

I agree it would be ideal to do a proper study on it. That said, there are already plenty of comparisons out there for people to come to their own conclusions, and the majority appear to have decided that DLSS, in most circumstances, is better than FSR.
Now you could say that's Nvidia marketing at work, or Nvidia owners being defensive of their favorite company. That's a fair concern when deciding whether a consensus formed online can be trusted, but I don't think those factors are so influential as to fully discredit the widespread view that DLSS is superior. At that point we'd basically be arguing over something that isn't provable anyway.
Personally speaking, as an Nvidia user I think DLSS is the better of the two based on all of the comparisons I've seen. Given that a lot of games now ship with a choice of DLSS, FSR or XeSS, it's in my best interest to know which is superior, because that's the one I'd want to use. If FSR were better, it would make no sense for me to stick to an inferior algorithm out of brand loyalty.
Fanboyism would only make some semblance of sense if the algorithms were locked to each vendor's cards: DLSS to Nvidia, FSR to AMD, and XeSS to Intel. But FSR is platform agnostic, and XeSS can run on other vendors' hardware via its DP4a path. The Nvidia user therefore has the least reason to stay invested in one over the others, because they have access to all three. The way I see it, if anything it's the AMD user who has more at stake in proving FSR's superiority, because they don't have the option of using DLSS instead.
As for divergence from native not being counted as noise when the viewer finds the image pleasing: a spanner thrown into the works of PSNR-style comparisons is that the mission of these algorithms is not just image reconstruction but also image enhancement. An image could diverge more from the native 'ground truth', but if it's doing so in a more pleasing way, how could that not be considered an advantage?
The latest DLSS versions, for example, aim to reduce things regarded as artifacts, such as ghosting and the breakup of fine visual elements like chain-link fencing. If, compared to a native 4K image with TAA applied, the upscaled output had none of the infamous TAA ghosting and some parts of the screen resolved more like a supersampled image, it could technically get a worse PSNR score, but would that really be fair if it's now a less flawed image? I think what the viewer deems more pleasing is a very valid factor in determining superiority.
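To make the PSNR point concrete, here's a toy sketch (the "scene", the reference, and the ghosting-like artifact are all synthetic, not taken from any real upscaler). If the native image used as ground truth contains an artifact, an upscaler that removes that artifact scores lower PSNR than one that reproduces it faithfully:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
# The "true" clean scene (what a supersampled render might look like).
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# Simulated "native TAA" reference: the clean scene plus a ghosting-like artifact.
artifact = rng.normal(0.0, 10.0, size=(64, 64))
reference = np.clip(clean + artifact, 0, 255)

# Upscaler A reproduces the flawed reference exactly, artifact included.
faithful = reference.copy()
# Upscaler B recovers the clean scene, i.e. it "fixes" the artifact.
enhanced = clean

print(psnr(reference, faithful))  # infinite: pixel-identical to the reference
print(psnr(reference, enhanced))  # finite and lower, despite being the cleaner image
```

So the metric rewards matching the reference, flaws and all; it has no notion of the reference itself being imperfect.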
FSR 2.0 itself looks not just equivalent to native but better in some comparisons. In that regard it too diverges, but in a pleasing way.