
NVIDIA announces DLSS 3.5 with Ray Reconstruction, launches this fall.

timothet

Member
Now I'm waiting for them to implement the neural radiance cache (NRC) to reduce path-traced noise even further before the denoiser runs:
[Attached screenshot illustrating NRC]



Here is a link to the blog post if anybody is curious about it:
https://developer.nvidia.com/blog/n...on-with-one-tiny-neural-network-in-real-time/
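For anyone wondering what the cache actually buys you: below is a tiny toy sketch of the idea as I understand it from the blog - my own stand-in functions and a random-weight MLP, not NVIDIA's code. The path gets cut short after a couple of bounces and the cache is asked for the radiance the untraced tail would have contributed, which is why the signal going into the denoiser is much less noisy.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a real renderer's hooks: pretend to trace one bounce
# and return the next hit position/direction, surface throughput and direct light.
def sample_bounce(pos, dirn):
    new_pos = pos + dirn
    new_dir = rng.standard_normal(3)
    new_dir /= np.linalg.norm(new_dir)
    throughput = 0.7          # assumed albedo
    direct = np.full(3, 0.1)  # assumed direct light sampled at the hit
    return new_pos, new_dir, throughput, direct

class RadianceCache:
    """Tiny MLP standing in for the cache (trained online in the real thing;
    the weights here are just random)."""
    def __init__(self, n_in=6, n_hidden=32):
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.w2 = rng.standard_normal((n_hidden, 3)) * 0.1
    def query(self, pos, dirn):
        x = np.concatenate([pos, dirn])
        h = np.maximum(x @ self.w1, 0.0)      # ReLU
        return np.maximum(h @ self.w2, 0.0)   # predicted outgoing radiance

def shade(pos, dirn, cache, max_bounces=2):
    radiance, weight = np.zeros(3), 1.0
    for _ in range(max_bounces):
        pos, dirn, throughput, direct = sample_bounce(pos, dirn)
        radiance += weight * direct
        weight *= throughput
    # Short path: the cache supplies what the untraced tail would have contributed,
    # so the denoiser sees far less noise than with hard path termination.
    radiance += weight * cache.query(pos, dirn)
    return radiance

print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), RadianceCache()))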
 
Clearly it's all about RT and the performance the tensor cores give you with Nvidia's suite of features.

I used to champion Intel over AMD on the CPU side, but that was just down to raw performance. Nvidia has performance and features that you cannot beat.

Until we see something different, Nvidia is the place to be for graphics if you are an enthusiast pushing for all the features, and not necessarily the highest-end graphics but overall performance.
Agreed and 80% of the market agrees as well. It's a shame but AMD has fumbled way too many times in the last 17 years.
 

Arsic

Loves his juicy stink trail scent
Wish I could go back in time and tell my dumbass to not buy a 3080 and get a 4090.

3000 series got shafted so fucking hard in anything worthwhile.
 

Arsic

Loves his juicy stink trail scent
Every RTX card supports DLSS 3.5

[Image: NVIDIA chart of DLSS 3.5 feature support across RTX GPUs]
I’m confused. I thought this was just an upgrade to the CP2077 frame generation stuff that lets you run a game like that on max settings and get 100+ fps.

Will this allow me to finally play CP2077 maxed, 4K, RT on, on a 3080 at 60fps?
 

hlm666

Member
I’m confused. I thought this was just an upgrade to the CP2077 frame generation stuff that lets you run a game like that on max settings and get 100+ fps.

Will this allow me to finally play CP2077 maxed, 4K, RT on, on a 3080 at 60fps?
It's not really about performance. There's a small bump, but that's probably because denoising is faster with ray reconstruction now that it runs on the tensor cores.
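For anyone curious what "denoising" means here: a hand-tuned real-time denoiser is basically an edge-aware weighted average over neighbouring noisy samples, using G-buffer features to decide which neighbours to trust. Ray reconstruction, as I read the announcement, swaps that hand-tuned weighting for a trained network running on the tensor cores. A toy sketch of the hand-tuned flavour, with a made-up heuristic of my own (not NVIDIA's or any shipping denoiser):

import numpy as np

def denoise_pixel(noisy, normals, albedo, x, y, radius=2, sigma=0.3):
    """Edge-aware average of noisy radiance around pixel (x, y)."""
    h, w, _ = noisy.shape
    acc, wsum = np.zeros(3), 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx = min(max(x + dx, 0), w - 1)
            ny = min(max(y + dy, 0), h - 1)
            # Hand-tuned heuristic: trust neighbours whose normal/albedo look similar.
            d = (np.sum((normals[y, x] - normals[ny, nx]) ** 2)
                 + np.sum((albedo[y, x] - albedo[ny, nx]) ** 2))
            wgt = np.exp(-d / (2 * sigma ** 2))
            acc += wgt * noisy[ny, nx]
            wsum += wgt
    return acc / wsum

rng = np.random.default_rng(0)
noisy = rng.random((64, 64, 3))                            # 1-sample-per-pixel style noise
normals = np.tile(np.array([0.0, 0.0, 1.0]), (64, 64, 1))  # flat wall facing the camera
albedo = np.full((64, 64, 3), 0.5)
print(denoise_pixel(noisy, normals, albedo, 32, 32))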

 

winjer

Gold Member

However, as DLSS 3.5 comes with improved upscaling algorithms, some gamers have already swapped out libraries and reported notable improvements in image quality, along with a reduction in ghosting in games such as Last of Us or Need for Speed Unbound. Nevertheless, it’s essential to bear in mind that merely swapping the DLL won’t activate ray reconstruction. Developers must take deliberate steps to incorporate this feature into their games.
 

Skifi28

Gold Member
That's very unfortunate, as there seems to be a general indifference toward updating older games with new versions of DLSS/FSR despite the large improvements they can offer. Hell, I believe even some new games ship with old versions for whatever reason.
 
Last edited:

CGNoire

Member
Damn, I thought the slow reactivity of GI and reflections in ray tracing was going to be something that wasn't solved for a long time. That Cyberpunk example with the colored signage spilling into the alleyway was a huge leap in quality.
Same. Super pleased with the results they're showing. It was my biggest complaint by far.

More and more, my decision to wait to upgrade my PC, and thus delay playing this game, is looking like a great idea. The game looks so much better now than it did at release, by a huge margin.
 
Last edited:

Buggy Loop

Member
That's very unfortunate, as there seems to be a general indifference toward updating older games with new versions of DLSS/FSR despite the large improvements they can offer. Hell, I believe even some new games ship with old versions for whatever reason.

The upscaler part still benefits from swaps. Ray reconstruction of course probably needs new inputs from the game that aren't present unless the dev enables it. Games that ship with ray reconstruction will likely support DLL swaps for future iterations of DLSS 3.5.

I'm curious about improvements to the upscaler, though.
 

raul3d

Member
So, does any of this get dev support for older titles and what will it do for users of 2060 going forward?
It will do the same for 2060 owners as for all other RTX card owners: Better raytracing quality for similar or slightly better performance.

Support for old titles is up to the developers. Unfortunately, I do not see this as being easy to mod into existing games with DLSS/raytracing support.
 

winjer

Gold Member
Something cool from the new DLSS 3.5:

"DLSS 3.5 also adds Auto Scene Change Detection to Frame Generation. This feature aims to automatically prevent Frame Generation from producing difficult-to-create frames between a substantial scene change. It does this by analyzing the in-game camera orientation on every DLSS Frame Generation frame pair.

Auto Scene Change Detection eases integration of new DLSS 3 titles, is backwards compatible with all DLSS 3 integrations, and supports all rendering platforms. In SDK build variants, the scene change detector provides onscreen aids to indicate when a scene change is detected so the developer can pass in the reset flag."
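Reading between the lines of that quote, the detector sounds like a threshold on how far the camera rotated between the two source frames of a generated pair: if the jump is implausibly large, treat it as a cut and pass the reset flag instead of interpolating across it. A rough sketch of that heuristic - the quaternion math and the 45-degree threshold are my own guesses, not anything from the SDK:

import numpy as np

def camera_jump(q_prev, q_curr, max_degrees=45.0):
    """True if the rotation between two unit quaternions is larger than a normal
    frame-to-frame camera move could plausibly produce."""
    dot = abs(float(np.dot(q_prev, q_curr)))          # |cos(theta / 2)|
    angle = 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))
    return angle > max_degrees

# Per generated frame pair: on a detected cut, the engine would pass the reset
# flag rather than letting frame generation interpolate across the scene change.
q_a = np.array([1.0, 0.0, 0.0, 0.0])        # identity orientation
q_b = np.array([0.7071, 0.0, 0.7071, 0.0])  # ~90 degree swing in one frame
print("reset frame generation:", camera_jump(q_a, q_b))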
 

PaintTinJr

Member
Frame Generation is not arbitrarily blocked from the 3000 series. There have been mods to enable it on the 3000 series and it turned out terrible. The white paper makes it clear that the new OFA capabilities of Ada make it possible with very little image quality loss.

Do you think DLSS can run on the 1000 series as well?
Well, that's sort of ironic IMO, because the generalised compute setup of RDNA - which is superior to Nvidia's setup - means it is probably arbitrarily blocked from AMD's GPUs. Which is why I continue to believe Nvidia's strategy is old hat and wrong, and still completely pointless compared to native, or to a good-enough open solution - which FSR is becoming - that runs on a much, much wider variety of GPUs from all suppliers.
 

SABRE220

Member
Agreed and 80% of the market agrees as well. It's a shame but AMD has fumbled way too many times in the last 17 years.
Well, the thing is, back in the early PS4 era they actually had better offerings quite often and were easily the best value, but Nvidia's brand pull was just too much. Even when Nvidia released terrible cards that were inferior in every way, people still bought Nvidia. We voted with our wallets and AMD basically gave up; they're content with just surviving at this point. They sure as hell haven't been serious about competing in software or hardware this gen.
 

Zathalus

Member
Well, that's sort of ironic IMO, because the generalised compute setup of RDNA - which is superior to Nvidia's setup - means it is probably arbitrarily blocked from AMD's GPUs. Which is why I continue to believe Nvidia's strategy is old hat and wrong, and still completely pointless compared to native, or to a good-enough open solution - which FSR is becoming - that runs on a much, much wider variety of GPUs from all suppliers.
As long as FSR continues to have worse quality and performance DLSS will never be pointless.

The day FSR matches DLSS in both is the day Nvidia can either abandon DLSS or make it open source. But nothing AMD has demonstrated yet shows that to be the case.

As a temporal scaler FSR is worse; as an RT denoiser it is worse (it totally lacks the capability); and as a frame generator it is likely worse as well (no ML model to enhance the image, and it relies on taking async compute away from games to achieve the performance instead of using an OFA or something similar).

Consoles can continue to use it, but Nvidia users will continue to use DLSS and the hilarious bit is that PC AMD users shouldn't use FSR either, as XeSS is superior.
 
Last edited:
Well, the thing is, back in the early PS4 era they actually had better offerings quite often and were easily the best value, but Nvidia's brand pull was just too much. Even when Nvidia released terrible cards that were inferior in every way, people still bought Nvidia. We voted with our wallets and AMD basically gave up; they're content with just surviving at this point. They sure as hell haven't been serious about competing in software or hardware this gen.
No, what happened is AMD released Bulldozer and it was a massive flop; they lost most of their enterprise market share and were an inch away from bankruptcy. AMD was sustained by its GPU business because Bulldozer made no money, so they took all the GPU money and invested it in the CPU division to build Zen. The GPU division was starved of R&D budget, so they stopped releasing full next-gen GPU lines, instead opting for one tweaked chip like the Hawaii/Polaris lines and refreshes of the HD 7000 series under new names. Radeon has not recovered since.


What's important to understand is that AMD was, and is, a CPU company through and through. They bought ATI, sure, but they treated Radeon like a second cousin, and this led to them falling further and further behind Nvidia. Now Intel may soon eclipse AMD; I wouldn't be surprised if Intel's Druid is all-around better than RDNA 6.
 

LiquidMetal14

hide your water-based mammals
ELI5. Does any of this matter if all the PC games that keep coming out run like shit even on a 4090?
This may shock you but, yes, it does matter.

This is about how devs are using the resources they have and, in some cases, misusing the resources or simply brute forcing performance. Then we start getting instabilities or unoptimized games that stutter or simply crash more.

There's more at play, but DLSS has been a godsend given it's been better than native 4K. That's amazing.
 

PaintTinJr

Member
As long as FSR continues to have worse quality and performance DLSS will never be pointless.

The day FSR matches DLSS in both is the day Nvidia can either abandon DLSS or make it open source. But nothing AMD has demonstrated yet shows that to be the case.

As a temporal scaler FSR is worse; as an RT denoiser it is worse (it totally lacks the capability); and as a frame generator it is likely worse as well (no ML model to enhance the image, and it relies on taking async compute away from games to achieve the performance instead of using an OFA or something similar).

Consoles can continue to use it, but Nvidia users will continue to use DLSS and the hilarious bit is that PC AMD users shouldn't use FSR either, as XeSS is superior.
But comparatively, at each product tier it has significantly better utilisation of the silicon in a card, because its async isn't taping together separate GPU parts, so proportionally each card has more async compute to give to the task.

People who can replace their GPU every time Nvidia partitions off a niche new update of their tech can run those games at native - which is superior for lag and IQ - and typically do so, because why have fake frames when you are paying that much for top-tier performance? All while FSR is being useful on a handheld console three gens old in performance terms, along with every other console and newer GPU than that.

When Nvidia can build a console platform or a Steam-sized marketplace and drop an algorithm update that can be used by even 40M gamers running AA-AAA games, then it becomes something that isn't pointless, IMO.
 

DonkeyPunchJr

World’s Biggest Weeb
But comparatively, at each product tier it has significantly better utilisation of the silicon in a card, because its async isn't taping together separate GPU parts, so proportionally each card has more async compute to give to the task.

People who can replace their GPU every time Nvidia partitions off a niche new update of their tech can run those games at native - which is superior for lag and IQ - and typically do so, because why have fake frames when you are paying that much for top-tier performance? All while FSR is being useful on a handheld console three gens old in performance terms, along with every other console and newer GPU than that.
What kind of ridiculous logic is this? There are many, many games that even a 4090 can’t max out at 4K 120FPS native. And there are some games that are so demanding (e.g. Cyberpunk with RT Overdrive) they aren’t even playable at native.

I have no idea where you got this idea that people who buy high end Nvidia cards are only interested in native resolution/framerate. Seems like bullshit you made up. DLSS is very relevant even at the highest end, and is a huge reason why someone would buy Nvidia over a similarly priced Radeon card.
When Nvidia can build a console platform or a Steam-sized marketplace and drop an algorithm update that can be used by even 40M gamers running AA-AAA games, then it becomes something that isn't pointless, IMO.
I’m glad there are options that are vendor agnostic and support older GPUs. Nobody is saying those shouldn’t exist as well. Doesn’t make DLSS pointless.

Just ask AMD how pointless DLSS is. They’re the ones getting their asses kicked in discrete GPU sales in large part because DLSS is 1-2 years ahead.
 

Zathalus

Member
But comparatively, at each product tier it has significantly better utilisation of the silicon in a card, because its async isn't taping together separate GPU parts, so proportionally each card has more async compute to give to the task.

People who can replace their GPU every time Nvidia partitions off a niche new update of their tech can run those games at native - which is superior for lag and IQ - and typically do so, because why have fake frames when you are paying that much for top-tier performance? All while FSR is being useful on a handheld console three gens old in performance terms, along with every other console and newer GPU than that.

When Nvidia can build a console platform or a Steam-sized marketplace and drop an algorithm update that can be used by even 40M gamers running AA-AAA games, then it becomes something that isn't pointless, IMO.
I'm not sure what your point about async is. AMD says it uses async compute to drive FSR 3, but that in doing so it doesn't perform as well in games that rely heavily on async. Those games will thus perform better with DLSS frame generation.

Every feature bar frame generation can currently be used by almost 50% of the GPUs on Steam. That's almost 70 million GPUs. Since DLSS was created almost 5 years ago, only a single feature (frame generation) has required the latest generation of GPUs. DLSS -> DLSS 2 -> DLSS 3.5 all work fine on GPUs released back in 2018. Updating games to use the latest version of DLSS is so simple that all it requires is a .dll swap.
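For anyone who hasn't done it: the swap really is just dropping a newer copy of the DLSS library over the one the game shipped with. A rough sketch of what tools like DLSS Swapper automate - the paths are made up, nvngx_dlss.dll is the usual filename but check your own install, and keep the backup it writes:

import shutil
from pathlib import Path

def swap_dlss(game_dir: str, new_dll: str, name: str = "nvngx_dlss.dll") -> None:
    root = Path(game_dir)
    for old in root.rglob(name):                             # find the DLL the game ships
        shutil.copy2(old, old.with_name(old.name + ".bak"))  # keep a backup copy
        shutil.copy2(new_dll, old)                           # drop in the newer version
        print(f"replaced {old}")

# Example call (hypothetical paths):
# swap_dlss(r"C:\Games\SomeGame", r"C:\Downloads\dlss_3.5\nvngx_dlss.dll")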

Nobody who has a Turing or better GPU has any reason to use FSR. Nobody who owns an Ada GPU or better has any reason to use FSR 3 either.

It's great that FSR is there as an option, but saying it makes DLSS pointless when it is clearly the inferior solution makes no sense. You don't stop using the better product just because the competition is open source.
 
Last edited:

PaintTinJr

Member
What kind of ridiculous logic is this? There are many, many games that even a 4090 can’t max out at 4K 120FPS native. And there are some games that are so demanding (e.g. Cyberpunk with RT Overdrive) they aren’t even playable at native.

I have no idea where you got this idea that people who buy high end Nvidia cards are only interested in native resolution/framerate. Seems like bullshit you made up. DLSS is very relevant even at the highest end, and is a huge reason why someone would buy Nvidia over a similarly priced Radeon card.

I’m glad there are options that are vendor agnostic and support older GPUs. Nobody is saying those shouldn’t exist as well. Doesn’t make DLSS pointless.

Just ask AMD how pointless DLSS is. They’re the ones getting their asses kicked in discrete GPU sales in large part because DLSS is 1-2 years ahead.
The logic is that image-quality snobs don't want fake IQ at all, and everyone else has no issue with FSR being used if it means less latency in popular shooters.

DLSS is in the same place AA and anisotropic filtering were years ago. It isn't a game changer on console, and the uptake on PC is limited versus other options on finite hardware, and certainly not to the tune of regular PC gamers dropping another £500 every two years on a new GPU to use an incremental improvement.

Nvidia don't even state the signal-to-noise ratio of these techniques, probably because it would just show that the picture isn't the reference image, and because when you compare versions of DLSS to each other and to FSR the numerical improvements are small - or maybe not numerical improvements at all, just preferred visuals with more noise versus native.

People keep projecting that DLSS is important, yet are even 10M gamers on Steam actively relying on it for the majority of their AAA gaming on PC? I have my doubts.

Also, why arbitrarily choose 4K as the target resolution for native?
 

PaintTinJr

Member
I'm not sure what your point about async is. AMD says it uses async compute to drive FSR 3, but that in doing so it doesn't perform as well in games that rely heavily on async. Those games will thus perform better with DLSS frame generation.

Every feature bar frame generation can currently be used by almost 50% of the GPUs on Steam. That's almost 70 million GPUs. Since DLSS was created almost 5 years ago, only a single feature (frame generation) has required the latest generation of GPUs. DLSS -> DLSS 2 -> DLSS 3.5 all work fine on GPUs released back in 2018. Updating games to use the latest version of DLSS is so simple that all it requires is a .dll swap.

Nobody who has a Turing or better GPU has any reason to use FSR. Nobody who owns an Ada GPU or better has any reason to use FSR 3 either.

It's great that FSR is there as an option, but saying it makes DLSS pointless when it is clearly the inferior solution makes no sense. You don't stop using the better product just because the competition is open source.
You keep saying it is inferior, yet never provide PSNR numbers to support that statement.
 

DonkeyPunchJr

World’s Biggest Weeb
The logic is that image-quality snobs don't want fake IQ at all, and everyone else has no issue with FSR being used if it means less latency in popular shooters.

That’s not logic. That’s you arbitrarily dividing gamers up into two imaginary groups in order to fit your narrative. As if you must either be an IQ snob who can tolerate nothing but native res/framerate, or else FSR ought to be good enough for you.

Give me a break. If you are willing to “settle” for upscaling/frame generation, why WOULDN’T you prefer the tech that gives you the biggest performance boost in exchange for the smallest hit to IQ? Can you not imagine that some gamers do care about IQ but are willing to trade a small hit in IQ for a big boost to performance?

“B-b-but Nvidia doesn’t even publish their S/N ratio!!!” Dude who gives a shit, what does that have to do with anything? You can look at it and see with your own eyes that DLSS looks better than FSR.
 

PaintTinJr

Member
Reputable tech sites and thousands of images and YouTube videos exist. DLSS is superior.

You can use your own eyes:

[Attached DLSS vs FSR comparison screenshots, including a Chernobylite test image]



And without scientific measurement, at what point do you infer that the inference of DLSS - even when it guesses wrong - is superior to native, if we take your suggestion to its full conclusion?

We had an entire PS3/360 generation where DF couldn't identify a broken sRGB gamma conversion on the 360, or missing exponential fog, and had everyone believing the opposite of scientific measurement. So lots of people parroting something with no scientific basis to back it up is nothing new in gamer "trust me bro" land.

The reason empirical measurement is necessary is that the size of the problem domain is beyond definition, and despite the thousands of images and opinions, we have no way of knowing how it will infer the next situation. Say it started inferring genitals on characters when it was only supposed to be an outline - a little like my first-hand experience ripping a Trolls film soundtrack, where the inference recovered the mixed-out f### word in a track multiple times, despite the album being from a kids' film.

Without measuring and sampling the error rate you cannot say that DLSS will perform better in all new games based on your visual preference for how it "looks" in games you've experienced so far. That needs to be numerically backed and justified.
 
Last edited:

Bojji

Member
And without scientific measurement, at what point do you infer that the inference of DLSS - even when it guesses wrong - is superior to native, if we take your suggestion to its full conclusion?

We had an entire PS3/360 generation where DF couldn't identify a broken sRGB gamma conversion on the 360, or missing exponential fog, and had everyone believing the opposite of scientific measurement. So lots of people parroting something with no scientific basis to back it up is nothing new in gamer "trust me bro" land.

The reason empirical measurement is necessary is that the size of the problem domain is beyond definition, and despite the thousands of images and opinions, we have no way of knowing how it will infer the next situation. Say it started inferring genitals on characters when it was only supposed to be an outline - a little like my first-hand experience ripping a Trolls film soundtrack, where the inference recovered the mixed-out f### word in a track multiple times, despite the album being from a kids' film.

Without measuring and sampling the error rate you cannot say that DLSS will perform better in all new games based on your visual preference for how it "looks" in games you've experienced so far. That needs to be numerically backed and justified.

I can just test it myself in any game that has FSR 2 and DLSS 2. Result? DLSS looks much better most of the time; the only exception is RDR2, where they broke the implementation and FSR 2 (added much later) somehow looks better.

Everyone with an Nvidia GPU can see the difference; I don't know what you're trying to do here.
 

Mister Wolf

Gold Member
I can just test it myself in any game that has FSR 2 and DLSS 2. Result? DLSS looks much better most of the time; the only exception is RDR2, where they broke the implementation and FSR 2 (added much later) somehow looks better.

Everyone with an Nvidia GPU can see the difference; I don't know what you're trying to do here.

It's really simple. FSR falls apart in motion. DLSS does not.
 

Zathalus

Member
And without scientific measurement, at what point do you infer that the inference of DLSS - even when it guesses wrong - is superior to native, if we take your suggestion to its full conclusion?

We had an entire PS3/360 generation where DF couldn't identify a broken sRGB gamma conversion on the 360, or missing exponential fog, and had everyone believing the opposite of scientific measurement. So lots of people parroting something with no scientific basis to back it up is nothing new in gamer "trust me bro" land.

The reason empirical measurement is necessary is that the size of the problem domain is beyond definition, and despite the thousands of images and opinions, we have no way of knowing how it will infer the next situation. Say it started inferring genitals on characters when it was only supposed to be an outline - a little like my first-hand experience ripping a Trolls film soundtrack, where the inference recovered the mixed-out f### word in a track multiple times, despite the album being from a kids' film.

Without measuring and sampling the error rate you cannot say that DLSS will perform better in all new games based on your visual preference for how it "looks" in games you've experienced so far. That needs to be numerically backed and justified.
But you can clearly see that DLSS is superior to FSR. I'm making no claims as to DLSS vs native, I'm comparing FSR to DLSS and we have dozens of games where DLSS is better and none where FSR is. And I'm not the only one making such claims, very nearly everyone is.

Here are some further gifs that demonstrate the stark difference between the two:

[Embedded FSR vs DLSS comparison gifs]

Asking for scientific evidence to prove that DLSS is better than FSR is the same as asking for scientific evidence that water is wet. Sure you can request it, but it's obvious that it is.
 

PaintTinJr

Member
But you can clearly see that DLSS is superior to FSR. I'm making no claims as to DLSS vs native, I'm comparing FSR to DLSS and we have dozens of games where DLSS is better and none where FSR is. And I'm not the only one making such claims, very nearly everyone is.

Here are some further gifs that demonstrate the stark difference between the two:

[Embedded FSR vs DLSS comparison gifs]

Asking for scientific evidence to prove that DLSS is better than FSR is the same as asking for scientific evidence that water is wet. Sure you can request it, but it's obvious that it is.

Difference does not mean better when the point of reference is native. Graphics are supposed to be deterministic to the developer's vision, not some AI guess that decides to change things, and none of those further shots look artistically great as examples of good-looking game graphics - even Spider-Man in the previous shots looks old and tired. And the so-called ghosting, on an LCD/LED with less motion resolution, will actually be providing a Sony MotionFlow-style interpolation that enhances the viewer's ability to interpolate motion in their brain, just like the subtle ghosting of a tennis ball travelling at 100 miles per hour across a 25-metre court diagonal on a 120Hz panel helps the viewer.

The problem with comparing DLSS, etc. - other than all the Nvidia cheerleading - is that it is being done unscientifically, by people who don't understand rudimentary physics well enough to provide good insight, and who give a guessing-machine algorithm a free pass because they prefer the pseudo-image to native.

Both Elden Ring and Zelda have sold more than 10m copies on consoles using FSR - along with hundreds of millions of other games sold using FSR - and I'm yet to see an example in either of those games where the image is so bad that they needed the superhero DLSS guess machine to get the image massively closer to native.
 
Last edited:

Buggy Loop

Member
And without scientific measurement, at what point do you infer that the inference of DLSS - even when it guesses wrong - is superior to native, if we take your suggestion to its full conclusion?

We had an entire PS3/360 generation where DF couldn't identify a broken sRGB gamma conversion on the 360, or missing exponential fog, and had everyone believing the opposite of scientific measurement. So lots of people parroting something with no scientific basis to back it up is nothing new in gamer "trust me bro" land.

The reason empirical measurement is necessary is that the size of the problem domain is beyond definition, and despite the thousands of images and opinions, we have no way of knowing how it will infer the next situation. Say it started inferring genitals on characters when it was only supposed to be an outline - a little like my first-hand experience ripping a Trolls film soundtrack, where the inference recovered the mixed-out f### word in a track multiple times, despite the album being from a kids' film.

Without measuring and sampling the error rate you cannot say that DLSS will perform better in all new games based on your visual preference for how it "looks" in games you've experienced so far. That needs to be numerically backed and justified.

Are you using chatGPT? You read like a caricature of an autistic Sheldon. By your same conclusion, FSR can never be proven to perform better than XeSS or DLSS, no matter the iteration. Bravo.

Meanwhile



Manages to be 50/50 better than native according to AMD Unboxed, errr, Hardware Unboxed.

That's with the game's version of DLSS implementation. The DLSS swapper can basically make it 100% in favor of DLSS over Native.

All your mumbling about architectural superiority on AMD's side is bullcrap. The RT blocks are simplified on AMD's side, but they do not participate in computation while idle. Nvidia has made huge strides since Turing to actually make the tensor cores / RT cores asynchronous with the shader pipeline. AMD's hybrid approach is, exactly as per their patent, supposed to simplify and save on silicon area for more rasterization.

Seems like a fine engineering decision right? Right? I'm an engineer, and that would fully make sense if it were brought up as an alternative.

Yet here we are: a 4080 at 379 mm^2 competes with a 7900 XTX whose GCD is 306 mm^2
  • With 20~25% of its silicon dedicated to RT/ML
  • Without taking into account the memory controllers (how much do you want to take out? 150~190 mm^2?)
  • Without taking into account the huge cache upgrades Ada got. How much area, who knows, but cache is typically not space-efficient.

I removed the MCDs on RDNA 3, which include the cache, just to showcase how stupid this architecture is. You're left with nearly a raw GCD chip of 306 mm^2 using pure hybrid RT/ML to optimize the area towards more rasterization, as per the patent.

Yet we're talking about a 2~4% RASTERIZATION performance difference, with nearly 60W more power consumption on AMD's side. When you go heavy RT, it goes to shit on AMD's side.

AMD's RT solution is already super heavy on cache, latency and queue ordering. Sprinkling ML on top of that plus the shader pipeline is a big no-no as of now.

Everything has to be scheduled inline on AMD's side. Anything chaotic like path tracing is a big nope with that architecture. Anything with too many dynamic shaders and inline tracing craps the bed too and becomes slower than DXR 1.0. RDNA 2's synchronization barrier prevents the GPU from using async work to keep execution units busy while the BVH is in the compute queue. This gives very poor occupancy when it happens and is latency-bound. L2 latency is likely to be a limiting factor when building the BVH, because when occupancy is poor, L0/L1 cache hit rates are also poor. This has been micro-benchmarked.

[Chart: RDNA 2 longest RT call stats]


Our testing also shows that the RX 6900 XT’s L2 cache has a load-to-use latency of just above 80 ns. While that’s good for a multi-megabyte GPU cache, it’s close to memory latency for CPUs. In the end, the 6900 XT was averaging 28.7 billion box tests and 3.21 billion triangle tests per second during that call, despite being underclocked to 1800 MHz. AMD says each CU can perform four box tests or one triangle test per cycle, so RT core utilization could be anywhere from 7.2% to 29% depending on whether the counters increment for every intersection test, or every node.
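Quick sanity check on that 7.2% to 29% range, using the numbers in the quote plus a couple of assumptions of my own (80 CUs on the 6900 XT, BVH nodes holding 4 boxes, leaves holding 4 triangles):

cus, clock = 80, 1.8e9                  # underclocked to 1800 MHz as in the quote
box_tests, tri_tests = 28.7e9, 3.21e9   # measured tests per second
node_slots = cus * clock                # one node operation per CU per cycle

# If the counters count individual tests: a 4-box node is one cycle of work.
low = (box_tests / 4 + tri_tests) / node_slots
# If the counters count nodes: a 4-triangle leaf needs 4 cycles at 1 tri test/cycle.
high = (box_tests + tri_tests * 4) / node_slots
print(f"RT core utilization between {low:.1%} and {high:.1%}")  # roughly 7.2% and 28.8%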


Ultimately, I don't even need to pull up graphs to explain how you're wrong about the superiority of AMD's architecture, because we can go back to basics: HOW IN THE WORLD is AMD not crushing Nvidia into dust in rasterization with their hybrid approach? Huh? Before you run off to chatGPT, you have to answer that basic question.

What Nvidia managed with 379 mm^2 of silicon vs 529 mm^2 is a monumental schooling on architecture. The ASIC nature of the RT is apparently not a problem: for all the surface area it gobbles up, it still leaves rasterization room to compete against AMD, and when fully utilized in heavy ray tracing it simply bitchslaps the hybrid approach. Your excitement about architecture optimization falls flat on its face if the optimization path taken didn't give a crushing victory. It's basic engineering. They have to go back to the drawing board.

Back to topic,

Speaking of the DLSS swapper, the upscaler for 3.5.0 looks nuts



That's the upscaler only, nothing to do with ray reconstruction yet. They made the upscaler, which was already the best, even better by a huge margin. Mind blowing.

Asking for scientific evidence to prove that DLSS is better than FSR is the same as asking for scientific evidence that water is wet. Sure you can request it, but it's obvious that it is.

BuT WhAt aBoUt tHe sIgNaL To nOiSe rAtIo? i lEaRnEd a fEw tErMiNoLoGiEs oF SiGnAlS At sChOoL, lOoK At mE MaMmA

Mocking Spongebob Squarepants GIF
 
Last edited:

DonkeyPunchJr

World’s Biggest Weeb
Difference does not mean better when the point of reference is native. Graphics are supposed to be deterministic to the developer's vision, not some AI guess that decides to change things, and none of those further shots look artistically great as examples of good-looking game graphics - even Spider-Man in the previous shots looks old and tired. And the so-called ghosting, on an LCD/LED with less motion resolution, will actually be providing a Sony MotionFlow-style interpolation that enhances the viewer's ability to interpolate motion in their brain, just like the subtle ghosting of a tennis ball travelling at 100 miles per hour across a 25-metre court diagonal on a 120Hz panel helps the viewer.

The problem with comparing DLSS, etc. - other than all the Nvidia cheerleading - is that it is being done unscientifically, by people who don't understand rudimentary physics well enough to provide good insight, and who give a guessing-machine algorithm a free pass because they prefer the pseudo-image to native.

Both Elden Ring and Zelda have sold more than 10m copies on consoles using FSR - along with hundreds of millions of other games sold using FSR - and I'm yet to see an example in either of those games where the image is so bad that they needed the superhero DLSS guess machine to get the image massively closer to native.
Seriously? That’s what you’re falling back to now?

“Literally every head to head comparison concluded DLSS looks better than FSR but we don’t REALLY know until someone does a scientific quantitative analysis….oh yeah and some games that use FSR sold really well!!”

Are you getting paid by AMD?

Or maybe you’re getting paid by Nvidia to make AMD fanboys look like raving lunatics.
 

Buggy Loop

Member
Seriously? That’s what you’re falling back to now?

“Literally every head to head comparison concluded DLSS looks better than FSR but we don’t REALLY know until someone does a scientific quantitative analysis….oh yeah and some games that use FSR sold really well!!”

Are you getting paid by AMD?

Or maybe you’re getting paid by Nvidia to make AMD fanboys look like raving lunatics.

What reference is he even referring to? "Native" has been TAA for years now, because nothing "raw" about a 4K output looks anywhere near good enough. What about TAA's signal-to-noise ratio? (/s)

What about the TAA algorithm? Is it fully deterministic to the developer's vision? Do they fully control the algorithm and every motion vector to make sure everything looks good to their vision? (/s)

That motion ghosting artifact? Pure art, totally wanted. Looks cinematic. (/s)

He went full retard

tropic-thunder-robert-downey-jr.gif
 

Gaiff

SBI’s Resident Gaslighter
Difference does not mean better when the point of reference is native. Graphics are supposed to be deterministic to the developer's vision, not some AI guess that decides to change things, and none of those further shots look artistically great as examples of good-looking game graphics - even Spider-Man in the previous shots looks old and tired. And the so-called ghosting, on an LCD/LED with less motion resolution, will actually be providing a Sony MotionFlow-style interpolation that enhances the viewer's ability to interpolate motion in their brain, just like the subtle ghosting of a tennis ball travelling at 100 miles per hour across a 25-metre court diagonal on a 120Hz panel helps the viewer.

The problem with comparing DLSS, etc. - other than all the Nvidia cheerleading - is that it is being done unscientifically, by people who don't understand rudimentary physics well enough to provide good insight, and who give a guessing-machine algorithm a free pass because they prefer the pseudo-image to native.

Both Elden Ring and Zelda have sold more than 10m copies on consoles using FSR - along with hundreds of millions of other games sold using FSR - and I'm yet to see an example in either of those games where the image is so bad that they needed the superhero DLSS guess machine to get the image massively closer to native.
That's just being disingenuous.

I also don't think Elden Ring uses FSR or any image upscaling technique. As for Zelda, it could certainly use better image quality but what sells games is their quality. Resolution, graphics, and even performance are secondary to gameplay and fun.
 
Last edited:

DonkeyPunchJr

World’s Biggest Weeb
What reference is he even referring to? "Native" has been TAA for years now, because nothing "raw" about a 4K output looks anywhere near good enough. What about TAA's signal-to-noise ratio? (/s)

What about the TAA algorithm? Is it fully deterministic to the developer's vision? Do they fully control the algorithm and every motion vector to make sure everything looks good to their vision? (/s)

That motion ghosting artifact? Pure art, totally wanted. Looks cinematic. (/s)

He went full retard

tropic-thunder-robert-downey-jr.gif
Yup, DLSS sucks because it’s not deterministic to the developer’s vision, but also FSR is just wonderful because look how many copies Zelda TotK sold!
 

Zathalus

Member
Difference does not mean better when the point of reference is native. Graphics are supposed to be deterministic to the developer's vision, not some AI guess that decides to change things, and none of those further shots look artistically great as examples of good-looking game graphics - even Spider-Man in the previous shots looks old and tired. And the so-called ghosting, on an LCD/LED with less motion resolution, will actually be providing a Sony MotionFlow-style interpolation that enhances the viewer's ability to interpolate motion in their brain, just like the subtle ghosting of a tennis ball travelling at 100 miles per hour across a 25-metre court diagonal on a 120Hz panel helps the viewer.

The problem with comparing DLSS, etc. - other than all the Nvidia cheerleading - is that it is being done unscientifically, by people who don't understand rudimentary physics well enough to provide good insight, and who give a guessing-machine algorithm a free pass because they prefer the pseudo-image to native.

Both Elden Ring and Zelda have sold more than 10m copies on consoles using FSR - along with hundreds of millions of other games sold using FSR - and I'm yet to see an example in either of those games where the image is so bad that they needed the superhero DLSS guess machine to get the image massively closer to native.
Did you just blame DLSS for making Spider-Man look old and tired? Or was that just a jibe at the game's art direction? And ghosting is a good thing because of TV motion interpolation? Just no, absolutely not. Per-object motion blur is indeed a good thing (which a video game would use on a ball going across the screen), but that is completely unrelated to ghosting, which has always been a bad thing.

The physics point is also rather moot. You can claim it's a problem all you want, but the kicker is you have zero evidence to actually back up your claim. Go on, prove to me that DLSS's neural model has made things worse in games compared to FSR. Some images or video would be nice.

Lastly, you come out of nowhere with the claim that the best-selling titles use FSR. Elden Ring doesn't use FSR, and Zelda uses FSR 1.0, which is basically just a sharpening filter. But yes, both games look great, although they would have looked even better with DLSS and DLAA.

Also, more games have been sold with DLSS than with FSR 2, just to point it out.
 

PaintTinJr

Member
That's just being disingenuous.

I also don't think Elden Ring uses FSR or any image upscaling technique. As for Zelda, it could certainly use better image quality but what sells games is their quality. Resolution, graphics, and even performance are secondary to gameplay and fun.
Exactly, which is why the whole DLSS-cheerleading, "FSR is garbage" narrative is so disingenuous.

Those saying they'll skip a game or wait for DLSS support, when the difference is splitting hairs relative to the overall game composition, are pushing a narrative being driven by someone.

The added lag doesn't help the average young PC gamer wanting low latency, high frame rates and high native resolution in Fortnite, CoD, Overwatch, etc., where AMD's RX 6650 XT provides good entry-level value on the 1080p screen most Steam users have - which really puts the tech completely in perspective, IMO.
 

Buggy Loop

Member
Exactly, which is why the whole DLSS-cheerleading, "FSR is garbage" narrative is so disingenuous.

Those saying they'll skip a game or wait for DLSS support, when the difference is splitting hairs relative to the overall game composition, are pushing a narrative being driven by someone.

The added lag doesn't help the average young PC gamer wanting low latency, high frame rates and high native resolution in Fortnite, CoD, Overwatch, etc., where AMD's RX 6650 XT provides good entry-level value on the 1080p screen most Steam users have - which really puts the tech completely in perspective, IMO.

plz-stop-post.jpg


Don't even go into gamers wanting low latency and picking AMD, LOL

Fortnite without reflex :

[Chart: Fortnite latency comparison]


Overwatch 2 with reflex/anti-lag, fps

[Chart: Overwatch 2 training range FPS comparison, DX11 Ultra, Reflex/Anti-Lag on]


Resulting latency

[Chart: Overwatch 2 training range latency comparison, DX11 Ultra, Reflex/Anti-Lag on]


With raw Reflex vs Anti-Lag, the 3070 Ti at 4K and 109 fps beats the 6700 XT at 1080p and 211 fps in latency.

That insufferable ~12ms of added latency from enabling DLSS 3 frame gen will look like witchcraft once people bench FSR 3. Native AMD latency is inherently higher than Nvidia's without Reflex. Thanks to AMD's superior async architecture :messenger_tears_of_joy:
 
Last edited:

LiquidMetal14

hide your water-based mammals
Difference does not mean better when the point of reference is native. Graphics are supposed to be deterministic to the developer's vision, not some AI guess that decides to change things, and none of those further shots look artistically great as examples of good-looking game graphics - even Spider-Man in the previous shots looks old and tired. And the so-called ghosting, on an LCD/LED with less motion resolution, will actually be providing a Sony MotionFlow-style interpolation that enhances the viewer's ability to interpolate motion in their brain, just like the subtle ghosting of a tennis ball travelling at 100 miles per hour across a 25-metre court diagonal on a 120Hz panel helps the viewer.

The problem with comparing DLSS, etc. - other than all the Nvidia cheerleading - is that it is being done unscientifically, by people who don't understand rudimentary physics well enough to provide good insight, and who give a guessing-machine algorithm a free pass because they prefer the pseudo-image to native.

Both Elden Ring and Zelda have sold more than 10m copies on consoles using FSR - along with hundreds of millions of other games sold using FSR - and I'm yet to see an example in either of those games where the image is so bad that they needed the superhero DLSS guess machine to get the image massively closer to native.
Stop sniffing paint.
 

PaintTinJr

Member
Did you just blame DLSS for making Spider-Man look old and tired? Or was that just a jibe at the game's art direction? And ghosting is a good thing because of TV motion interpolation? Just no, absolutely not. Per-object motion blur is indeed a good thing (which a video game would use on a ball going across the screen), but that is completely unrelated to ghosting, which has always been a bad thing.

The physics point is also rather moot. You can claim it's a problem all you want, but the kicker is you have zero evidence to actually back up your claim. Go on, prove to me that DLSS's neural model has made things worse in games compared to FSR. Some images or video would be nice.

Lastly, you come out of nowhere with the claim that the best-selling titles use FSR. Elden Ring doesn't use FSR, and Zelda uses FSR 1.0, which is basically just a sharpening filter. But yes, both games look great, although they would have looked even better with DLSS and DLAA.

Also, more games have been sold with DLSS than with FSR 2, just to point it out.
Where would per-object motion blur live in the final viewport image, where the image is a composite? How would you know which is which if the ghosting is inferred in the direction of travel and adds back data that was lost to under-sampling? How is that not beneficial for motion clarity, in the way MotionFlow works?

The only reason I have zero evidence is that the whole sales pitch is "trust me bro". It is Nvidia's tech; it is their responsibility to state an average PSNR for DLSS versus the native image it is trying to scale up to.

No, not a jibe at the art direction of Spidey - it is an old, last-gen game where most of the city is boxes you can't enter. The point is that DLSS is being given high praise for polishing games designed for last gen, where I feel neither the FSR nor the DLSS image even warranted a comparison, and certainly not to sell a newer version of DLSS tied to the next £500 mid-range Nvidia card.
 

Killer8

Member
PSNR isn't a particularly good way to assess differences. Video encoding has shown for years how gameable it is. You can, for example, encode a video in H.264/HEVC with PSNR-tuned settings, which should surely be the most accurate, and yet it can look perceptually worse than letting the encoder apply its own optimizations.

The whole point of models like PSNR and SSIM is also to try to provide a way of predicting human perception. That comes first and foremost, not the other way around.


The mean opinion score (MOS) of human subjects for each video viewed at each condition is computed and used as the ground-truth to test the performance of SSIMplus and other objective video quality models.

In other words, how accurate these models are is judged against human beings' opinions. The models aren't meant to be used as a refutation of human judgement.

So yeah, I think people's opinions are definitely a very valid way of assessing the image performance of DLSS vs FSR, particularly if you build a consensus around it.
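And for what it's worth, anyone who wants actual numbers can get PSNR and SSIM out of frame captures with a few lines of scikit-image (recent versions); the synthetic frames below are only stand-ins for real native vs upscaled captures of the same scene and camera:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(native: np.ndarray, upscaled: np.ndarray) -> tuple[float, float]:
    """Both inputs are HxWx3 uint8 frames of the same scene and camera cut."""
    psnr = peak_signal_noise_ratio(native, upscaled, data_range=255)
    ssim = structural_similarity(native, upscaled, channel_axis=-1, data_range=255)
    return psnr, ssim

# Toy demo with synthetic frames (real captures would replace these):
ref = (np.random.default_rng(1).random((720, 1280, 3)) * 255).astype(np.uint8)
noisy = np.clip(ref.astype(int) + np.random.default_rng(2).integers(-8, 9, ref.shape),
                0, 255).astype(np.uint8)
print("PSNR %.1f dB, SSIM %.3f" % compare(ref, noisy))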
 
Last edited:
😁 My 4090 is ready. I can’t wait to see what this does for CP2077 in person. It already looks insane with path tracing but now the reflections are gonna be improved even more which is crazy when you think about it.
 