
Cyberpunk 2077 with NVIDIA DLSS 3 & Ray Tracing: Overdrive

Mister Wolf

Member
A while ago I had a similar discussion about ray tracing. One guy was adamant about not seeing a difference between SSR and ray-traced reflections. Also, one of his arguments for RT being a useless gimmick was "We had mirrors in games before RT".

Some people just choose to stay ignorant and no amount of evidence will change their mind.

This is all it is:

[image]
 

Quezacolt

Member
Haven't Nvidia been trying to push this RT BS for the past 2 generations, now about to be a 3rd? You'd expect them to have nailed it by now.
RT is extremely demanding, no matter what, which is why some games only use a few of its features instead of the entire toolkit.
You can do smart optimizations, depending on what you want to use, but overall, if you want to push RT as far as it can go, the only way is with more and more power.
The more demanding a game already is without RT, the more demanding it becomes when you start to add those features.
RT global illumination alone can make a game look drastically different, much better than before, but it has an immense cost in performance; when you add reflections, shadows, refraction and ambient occlusion on top, well, you rarely see all of that in new games.
One of the few path-traced games is Minecraft with ray tracing, and that is only possible because Minecraft is not a very demanding game.
 

winjer

Gold Member
Do you have a source on that? Genuinely interested.

DLSS 3 is just frame interpolation, like what most TVs have.
It does have some AI to reduce artifacting, but it's still bad.

[image]


If this was just some other feature added for the sake of it, it wouldn't be an issue.
But this crap is used by NVidia to justify this huge price increase gen over gen. Claiming up to 4 times performance improvement is just misleading for consumers.
 

Filben

Member
Claiming up to 4 times performance improvement is just misleading for consumers.
I see, so their performance improvement claim is based on using DLSS. And the raw performance gain? Sorry, I haven't looked it up, because the price is so out of touch that they could have ten times the performance and I still wouldn't buy it. But if it's only the usual 20-40% gain over the last generation, you could really accuse them of anti-consumer practices: locking DLSS 3 to new GPUs that have the "traditional" performance increase but cost double or even triple the traditional price.

For instance, when I upgraded from a GTX 1070, which cost me 400 EUR back when it came out, to a 2070 Super, I got a 40% performance gain for roughly 40% more money (540 EUR). But now? It's like 100% more than the previous cost...
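A quick sanity check on those numbers (using the post's own figures; 540 EUR over 400 EUR is closer to 35% than 40%, but the point stands):

```python
# Price-per-performance check with the figures above (EUR; the ~40% gain is the poster's estimate).
gtx_1070_price = 400
rtx_2070s_price = 540
perf_gain = 1.40                                    # ~40% faster, per the post
price_increase = rtx_2070s_price / gtx_1070_price   # ~1.35x
print(perf_gain / price_increase)                   # ~1.04: performance per euro stayed roughly flat
```

Double the price for the usual 20-40% generational gain would push that ratio well below 1, which is the complaint here.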
 

FireFly

Member
DLSS 3 is just frame interpolation, like what most TVs have.
It does have some AI to reduce artifacting, but it's still bad.

[image]


If this was just some other feature added for the sake of it, it wouldn't be an issue.
But this crap is used by NVidia to justify this huge price increase gen over gen. Claiming up to 4 times performance improvement is just misleading for consumers.
Cryio said that without DLSS 3.0, RTX 4000 is barely faster than RTX 3000. However, DLSS 3.0 only seems to provide a ~50% boost, so for 3x or 4x improvements you would need 2x-2.66x improvements when using DLSS 2.0. And that points to roughly a 2x gain in native RT performance.
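Rough arithmetic behind that estimate (illustrative only, using the figures mentioned above):

```python
# Back-of-the-envelope check of the "up to 3x-4x" claims (illustrative, not benchmarks).
claimed_total_speedups = [3.0, 4.0]   # Nvidia's marketing figures
frame_gen_boost = 1.5                 # assumed ~50% extra from DLSS 3 frame generation

for total in claimed_total_speedups:
    needed_with_dlss2 = total / frame_gen_boost
    print(f"{total}x total -> ~{needed_with_dlss2:.2f}x needed from DLSS 2 + raw gains")
# Prints ~2.00x and ~2.67x, i.e. the "2x-2.66x" range in the post above.
```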
 
DLSS 3 is just frame interpolation, like what most TVs have.
It does have some AI to reduce artifacting, but it's still bad.

[image]


If this was just some other feature added for the sake of it, it wouldn't be an issue.
But this crap is used by NVidia to justify this huge price increase gen over gen. Claiming up to 4 times performance improvement is just misleading for consumers.
But since NVidia is hyping and promoting it now, we must all pretend frame interpolation is something we always wanted.
 

bbeach123

Member
Interpolating 60 fps to 120 was fine, but not needed.

Can DLSS 3.0 interpolate 30 fps to 60 fps with low input lag?


 
Last edited:

Reizo Ryuu

Member
Haven't Nvidia been trying to push this RT BS for the past 2 generations, now about to be a 3rd? You'd expect them to have nailed it by now.
this is such a bad take smh.
RT is the holy grail of lighting, it's the entire reason CGI can look hyper-realistic, yet you expect Nvidia to "nail" a single-point hardware solution to drive RT, even though it takes entire datacenters filled with computers to do it for movies, and it has only recently become somewhat feasible (at very low precision) in games? c'mon bruv, there's no magic solution for hardware.
It might not ever be "nailed", because as hardware improves, games will also push that hardware with other things that are not RT; you're always going to have to sacrifice something.
 

winjer

Gold Member
Interpolating 60 fps to 120 was fine, but not needed.

Can DLSS 3.0 interpolate 30 fps to 60 fps with low input lag?




No, it will always have the same input lag as the original frame rate, plus a few more milliseconds from the overhead of calculating the fake frames for DLSS 3.
But it will look smoother, although with artifacts in every other frame.
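A rough sketch of that point with made-up numbers (the overhead figure is an assumption for illustration, not a measurement):

```python
# Toy numbers showing why interpolated frames don't reduce input lag
# (illustrative model, ignoring Reflex, render queue and display timing).
base_fps = 30
base_frame_time_ms = 1000 / base_fps      # ~33.3 ms: how often the game actually samples input

displayed_fps = base_fps * 2              # interpolation doubles what the screen shows
generation_overhead_ms = 3                # assumed cost of producing each fake frame

# Input is only read for the real frames, so responsiveness is still tied
# to the 33.3 ms cadence, plus the generation overhead:
effective_input_latency_ms = base_frame_time_ms + generation_overhead_ms
print(f"{displayed_fps} fps on screen, ~{effective_input_latency_ms:.1f} ms input cadence")
```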
 

zephiross

Member
Haven't Nvidia been trying to push this RT BS for the past 2 generations, now about to be a 3rd? You'd expect them to have nailed it by now.
If you don't understand anything about it...

RT isn't an on-or-off thing. You have different types of workloads adapted to different usages.

On one hand you have the relatively light RT workloads performed in real time in video games, but on the other hand you have much more complicated RT as used in movies such as Pixar's, where it takes several hours to render ONE frame on render farms exponentially more powerful than an RTX 4090. Thus you will always be able to cripple any new GPU by increasing the workload (read: performing more complex calculations of light bounces) to get better RT. It all scales with the power at your disposal.

What this Overdrive mode promises is miles beyond the "simple" RT reflections you can find in most games released in the past couple of years. And even in ten years you could update the game to make it run at 30 fps on an RTX 9090 with much more demanding, close-to-photorealistic RT.
 
Last edited:

artsi

Member
Haven't Nvidia been trying to push this RT BS for the past 2 generations, now about to be a 3rd? You'd expect them to have nailed it by now.

They're nailing it by providing technology that can push RT at good framerates despite being the ultimate processing challenge.

It's amazing we even have real-time ray tracing. 10 years ago it seemed like an impossible task, considering that rendering a single frame in a 3D program took hours.
 

lukilladog

Member
They're nailing it by providing technology that can push RT at good framerates despite being the ultimate processing challenge.

It's amazing we even have real-time ray tracing. 10 years ago it seemed like an impossible task, considering that rendering a single frame in a 3D program took hours.
They could have added a dedicated chip for RT shadows and reflections 10 or 20 years ago; fast RT hardware is nothing new. Furthermore, I'm certain that they could do it today and it would be much faster than having the GPU do a huge chunk of the RT calculations. It's all about making and selling GPUs at premium prices.
 

01011001

Banned
They could have added a dedicated chip for RT shadows and reflections 10 or 20 years ago; fast RT hardware is nothing new. Furthermore, I'm certain that they could do it today and it would be much faster than having the GPU do a huge chunk of the RT calculations. It's all about making and selling GPUs at premium prices.

Intel's and Nvidia's RT hardware already does that tho.
The CUDA cores of Nvidia cards don't really do much that has to do with ray tracing, and neither do Intel's raster cores.
 
Last edited:

Hoddi

Member
They could have added a dedicated chip for RT shadows and reflections 10 or 20 years ago; fast RT hardware is nothing new. Furthermore, I'm certain that they could do it today and it would be much faster than having the GPU do a huge chunk of the RT calculations. It's all about making and selling GPUs at premium prices.
How would that even work? You can't have ray-traced reflections without access to the GPU's data, and ray tracing is highly bandwidth-intensive. A dedicated chip doesn't magically solve these hurdles, never mind over those old PCIe links.
 

Hoddi

Member
So DLSS 3 is basically what my shitty Samsung TVs were doing 14 years ago with motion smoothing, plus AI to minimize interpolation artifacts... and Nvidia Reflex bolted on to minimize input lag. Am I right?
Pretty much. It's also not the first time as LucasArts experimented with it briefly.

I still can't help remembering the fallout from this thread. Poor Assurdum must be spinning in his virtual grave.
 

lukilladog

Member
Intel's and Nvidia's RT hardware already does that tho.
The CUDA cores of Nvidia cards don't really do much that has to do with ray tracing, and neither do Intel's raster cores.

Nvidia uses a lot of shader work between the ray generation and the traversal and intersection stages:

[diagram]



"Four key components make up our ray tracing API:

  • Acceleration Structures
  • New shader domains for ray tracing
  • Shader Binding Table
  • Ray tracing pipeline objects"
https://developer.nvidia.com/blog/vulkan-raytracing/

How would that even work? You can't have ray-traced reflections without access to the GPU's data, and ray tracing is highly bandwidth-intensive. A dedicated chip doesn't magically solve these hurdles, never mind over those old PCIe links.

Through a high bandwidth bus.
 
Last edited:

01011001

Banned
Nvidia uses a lot of shader work between the ray generation and the traversal and intersection stages:

[diagram]



"Four key components make up our ray tracing API:

  • Acceleration Structures
  • New shader domains for ray tracing
  • Shader Binding Table
  • Ray tracing pipeline objects"
https://developer.nvidia.com/blog/vulkan-raytracing/



Through a high bandwidth bus.

I mean... you can't shade anything with RT hardware...

RT cores exist exclusively to calculate, well, the rays.
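To make that division of labour concrete, here's a toy sketch (illustrative Python, nothing like real driver or hardware code): ray generation and shading are ordinary shader-style work, while the intersection test in the middle is the step dedicated RT cores accelerate in fixed function.

```python
import math

# Toy sketch of the hybrid split: ray generation and shading are ordinary
# shader-style work; the intersection step in the middle is what RT cores
# accelerate. Purely illustrative; names and structure are made up.

SPHERE = {"center": (0.0, 0.0, -3.0), "radius": 1.0, "albedo": (1.0, 0.2, 0.2)}

def generate_ray(x, y, width, height):
    """'Ray generation shader': one primary ray per pixel."""
    u = (x + 0.5) / width * 2 - 1
    v = 1 - (y + 0.5) / height * 2
    return (0.0, 0.0, 0.0), (u, v, -1.0)

def intersect_sphere(origin, direction, sphere):
    """Traversal/intersection: the part dedicated RT hardware does in fixed function."""
    ox, oy, oz = (origin[i] - sphere["center"][i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - sphere["radius"] ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def shade(sphere, miss):
    """Hit/miss 'shaders': back on the shader cores, next to the material data in GPU memory."""
    return (0.1, 0.1, 0.3) if miss else sphere["albedo"]

def render(width=4, height=4):
    image = []
    for y in range(height):
        for x in range(width):
            origin, direction = generate_ray(x, y, width, height)
            t = intersect_sphere(origin, direction, SPHERE)
            image.append(shade(SPHERE, miss=(t is None)))
    return image

print(render())
```

Shading needs the hit object's materials and textures, which already live in GPU memory next to the shader cores; that's the practical argument in this thread against hanging the middle step off a separate chip behind a slower bus.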
 

Buggy Loop

Member
They could have added a dedicated chip for RT shadows and reflections 10 or 20 years ago; fast RT hardware is nothing new. Furthermore, I'm certain that they could do it today and it would be much faster than having the GPU do a huge chunk of the RT calculations. It's all about making and selling GPUs at premium prices.
Through a high bandwidth bus.

Fast RT hardware is nothing new? The maths behind RT are decades old, but hardware accelerating it to the point it's real-time? Please show me how it's nothing new, as in, existing hardware.

Dedicated chip externally? That was called a CPU, as it's basically maths, and it was never fast to the point of being real-time except maybe for the most simple scenes, and it never found its way into gaming. To ACCELERATE this they put the computation on a GPU, which is the king of parallelization, and cheated the typical "Hollywood quality RT" algorithms by relying on rasterization. It's the hybrid approach, and it uses the GPU's FP32 shader units to reach those real-time speeds.

Each object's surfaces are shaded BASED on material properties and the light bounces. So you want to rip the polygon object from the GPU, send it over a slower pipeline (than whatever is within the monolithic block), create the BVH object, calculate the intersections, then send it back to the GPU to shade correctly. Outsourcing the BVH, which sits so close to the shader pipeline, would probably force the algorithm to update all shader parameters every frame for every object, which is super inefficient, whereas Nvidia right now only updates the shaders that changed, since everything is stored locally in buffers and loops in the same pipeline.
Let's add a denoising budget of 1 ms @ 1080p that has to be ghosting-free and temporally stable in games (you know... unpredictable movement), and where is that denoising done? In the shader pipeline, with a temporal solution.
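For a sense of how tight that budget is, some simple arithmetic (illustrative only):

```python
# "1 ms at 1080p" denoising budget, expressed per pixel (illustrative arithmetic).
pixels = 1920 * 1080        # ~2.07 million pixels
budget_ns = 1.0 * 1e6       # 1 ms in nanoseconds
print(budget_ns / pixels)   # ~0.48 ns per pixel: only workable because thousands
                            # of shader cores chew through it in parallel
```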

So, to sum up:

You think back in the days (2002), when games looked like this:

[game screenshots]


That the industry just wanted to hold back on RT, in a time when freaking cube maps were barely used, baked lighting was often neglected because even that required a lot of time to render, and we were at the dawn of programmable shaders? Let's not even get into the appearance of temporal vectors, which is years and years beyond the tech we had in those days.

Microsoft, who created the DXR consortium, got trolled by Nvidia? AMD, who was part of the consortium, somehow collaborated in a scheme of holding back RT technology? In 20 years of Sony/Nintendo/Microsoft consoles, not a single engineer said "Very easy real-time RT, let's implement it"?

Here comes lukilladog though with these vibes:

[image]


Saying that multiple billion-dollar companies just sat on the tech for... "selling GPUs at premium prices". Companies that work with universities around the world on computer graphics advancements and publish peer-reviewed papers that are looked over by thousands of experts in the field. Everyone, such as lukilladog here on NeoGAF, knew that they were holding back for decades to milk GPU prices, and nobody said a word, nor did anyone take advantage of implementing it to gain an edge over the competition, like, oh... say ATI, who almost went bankrupt, or AMD, who followed suit and barely survives with 1.5-1.8% market share.

The same AMD that was part of the DXR consortium 5 years ago, but somehow managed a worse RT-per-CU performance ratio than Turing, even though apparently it's all very easy to make.

Man, what a gold mine you are. Please go publish your ideas in papers. I want you to bring down the GPU monopolies and make them remove their dirty hands around the neck of progress. Can't wait to read.

50 cent laughing GIF
 
Last edited:

01011001

Banned
That the industry just wanted to hold back on RT, in a time when freaking cube maps were barely used, baked lighting was often neglected because even that required a lot of time to render, and we were at the dawn of programmable shaders?

Yeah, imagine having super-high-tech ray tracing cores in a PC back then, that could calculate light rays in astonishing detail and at astonishing speed... but then all that data gets sent to the graphics card to shade it all, a card that could barely hold a steady framerate with stencil shadows on screen, let alone soft-shaded and contact-hardened shadows plus soft-shaded AO :pie_roffles:
 
Last edited:

Roni

Gold Member
I probably won't use the RT Overdrive mode, but it will be nice being able to set the original lighting to Psycho. Hopefully I can snag a 4090.
You can get away with that on a 3080 Ti... No need to dish out those eddies there!
 
Last edited:

lukilladog

Member
Which high bandwidth bus? The whole point of the problem is that there isn't one.

It's like saying that the solution to low performance is to have more performance. Like nobody ever thought of that.

What do you mean, that there isn't one, or that there can't be one?
 

lukilladog

Member
Fast RT hardware is nothing new? The maths behind RT are decades old, but hardware accelerating it to the point it's real-time? Please show me how it's nothing new, as in, existing hardware.

Dedicated chip externally? That was called a CPU, as it's basically maths, and it was never fast to the point of being real-time except maybe for the most simple scenes, and it never found its way into gaming. To ACCELERATE this they put the computation on a GPU, which is the king of parallelization, and cheated the typical "Hollywood quality RT" algorithms by relying on rasterization. It's the hybrid approach, and it uses the GPU's FP32 shader units to reach those real-time speeds.

Each object's surfaces are shaded BASED on material properties and the light bounces. So you want to rip the polygon object from the GPU, send it over a slower pipeline (than whatever is within the monolithic block), create the BVH object, calculate the intersections, then send it back to the GPU to shade correctly. Outsourcing the BVH, which sits so close to the shader pipeline, would probably force the algorithm to update all shader parameters every frame for every object, which is super inefficient, whereas Nvidia right now only updates the shaders that changed, since everything is stored locally in buffers and loops in the same pipeline.
Let's add a denoising budget of 1 ms @ 1080p that has to be ghosting-free and temporally stable in games (you know... unpredictable movement), and where is that denoising done? In the shader pipeline, with a temporal solution.

So, to sum up:

You think back in the days (2002), when games looked like this:

[game screenshots]


That the industry just wanted to hold back on RT, in a time when freaking cube maps were barely used, baked lighting was often neglected because even that required a lot of time to render, and we were at the dawn of programmable shaders? Let's not even get into the appearance of temporal vectors, which is years and years beyond the tech we had in those days.

Microsoft, who created the DXR consortium, got trolled by Nvidia? AMD, who was part of the consortium, somehow collaborated in a scheme of holding back RT technology? In 20 years of Sony/Nintendo/Microsoft consoles, not a single engineer said "Very easy real-time RT, let's implement it"?

Here comes lukilladog though with these vibes:

[image]


Saying that multiple billion-dollar companies just sat on the tech for... "selling GPUs at premium prices". Companies that work with universities around the world on computer graphics advancements and publish peer-reviewed papers that are looked over by thousands of experts in the field. Everyone, such as lukilladog here on NeoGAF, knew that they were holding back for decades to milk GPU prices, and nobody said a word, nor did anyone take advantage of implementing it to gain an edge over the competition, like, oh... say ATI, who almost went bankrupt, or AMD, who followed suit and barely survives with 1.5-1.8% market share.

The same AMD that was part of the DXR consortium 5 years ago, but somehow managed a worse RT-per-CU performance ratio than Turing, even though apparently it's all very easy to make.

Man, what a gold mine you are. Please go publish your ideas in papers. I want you to bring down the GPU monopolies and make them remove their dirty hands around the neck of progress. Can't wait to read.

50 cent laughing GIF

  • (1996) Researchers at Princeton university proposed using DSPs to build a hardware unit for ray tracing acceleration, named "TigerSHARK".[6]
  • Implementations of volume rendering using ray tracing algorithms on custom hardware were carried out in 1999 by Hanspeter Pfister[7] and researchers at Mitsubishi Electric Research Laboratories.[8] with the vg500 / VolumePro ASIC based system and in 2002 with FPGAs by researchers at the University of Tübingen with VIZARD II[9]
  • (2002) The computer graphics laboratory at Saarland University headed by Dr.-Ing. Philipp Slusallek has produced prototype ray tracing hardware including the FPGA based fixed function data driven SaarCOR (Saarbrücken's Coherence Optimized Ray Tracer) chip[10][11][12] and a more advanced programmable (2005) processor, the Ray Processing Unit (RPU)[13]
  • (2002–2009) ART VPS company (founded 2002[14]), situated in the UK, sold ray tracing hardware for off-line rendering. The hardware used multiple specialized processors that accelerated ray-triangle intersection tests. Software provided integration with Autodesk Maya and Max data formats, and utilized the Renderman scene description language for sending data to the processors (the .RIB or Renderman Interface Bytestream file format).[15] As of 2010, ARTVPS no longer produces ray tracing hardware but continues to produce rendering software.[14]
  • (2009 - 2010) Intel[16] showcased their prototype "Larrabee" GPU and Knights Ferry MIC at the Intel Developer Forum in 2009 with a demonstration of real-time ray-tracing.
  • Caustic Graphics[17] produced a plug in card, the "CausticOne" (2009),[18] that accelerated global illumination and other ray based rendering processes when coupled to a PC CPU and GPU. The hardware is designed to organize scattered rays (typically produced by global illumination problems) into more coherent sets (lower spatial or angular spread) for further processing by an external processor.[19]
  • Siliconarts[20] developed a dedicated real-time ray tracing hardware (2010). RayCore (2011), which is the world's first real-time ray tracing semiconductor IP, was announced.
  • Imagination Technologies, after acquiring Caustic Graphics, produced the Caustic Professional's R2500 and R2100 plug in cards containing RT2 ray trace units (RTUs). Each RTU was capable of calculating up to 50 million incoherent rays per second.[21]
https://en.wikipedia.org/wiki/Ray-tracing_hardware

[mind-blown GIF]
 
Last edited:

01011001

Banned
  • (1996) Researchers at Princeton university proposed using DSPs to build a hardware unit for ray tracing acceleration, named "TigerSHARK".[6]
  • Implementations of volume rendering using ray tracing algorithms on custom hardware were carried out in 1999 by Hanspeter Pfister[7] and researchers at Mitsubishi Electric Research Laboratories.[8] with the vg500 / VolumePro ASIC based system and in 2002 with FPGAs by researchers at the University of Tübingen with VIZARD II[9]
  • (2002) The computer graphics laboratory at Saarland University headed by Dr.-Ing. Philipp Slusallek has produced prototype ray tracing hardware including the FPGA based fixed function data driven SaarCOR (Saarbrücken's Coherence Optimized Ray Tracer) chip[10][11][12] and a more advanced programmable (2005) processor, the Ray Processing Unit (RPU)[13]
  • (2002–2009) ART VPS company (founded 2002[14]), situated in the UK, sold ray tracing hardware for off-line rendering. The hardware used multiple specialized processors that accelerated ray-triangle intersection tests. Software provided integration with Autodesk Maya and Max data formats, and utilized the Renderman scene description language for sending data to the processors (the .RIB or Renderman Interface Bytestream file format).[15] As of 2010, ARTVPS no longer produces ray tracing hardware but continues to produce rendering software.[14]
  • (2009 - 2010) Intel[16] showcased their prototype "Larrabee" GPU and Knights Ferry MIC at the Intel Developer Forum in 2009 with a demonstration of real-time ray-tracing.
  • Caustic Graphics[17] produced a plug in card, the "CausticOne" (2009),[18] that accelerated global illumination and other ray based rendering processes when coupled to a PC CPU and GPU. The hardware is designed to organize scattered rays (typically produced by global illumination problems) into more coherent sets (lower spatial or angular spread) for further processing by an external processor.[19]
  • Siliconarts[20] developed a dedicated real-time ray tracing hardware (2010). RayCore (2011), which is the world's first real-time ray tracing semiconductor IP, was announced.
  • Imagination Technologies, after acquiring Caustic Graphics, produced the Caustic Professional's R2500 and R2100 plug in cards containing RT2 ray trace units (RTUs). Each RTU was capable of calculating up to 50 million incoherent rays per second.[21]
https://en.wikipedia.org/wiki/Ray-tracing_hardware

So the first real-time RT hardware you've got here came into play in 2009... and that was a concept.
The first "product announcement" for a real-time RT graphics card was in 2010 (RayCore), with a release date estimated back then at 2021... are you for real?
 
Last edited:

lukilladog

Member
So the only real-time RT hardware you've got here came into play in 2009... and that was a concept.
The first "product announcement" for a real-time RT graphics card was in 2010 (RayCore), with a release date estimated back then at 2021... are you for real?

Oh please, stop putting words in my mouth.
 

Madflavor

Member
This honestly frustrates me. I've been PC gaming my whole life but I've always cruised by on budget PCs. I never really had the luxury to game the best looking shit on the highest settings at a good framerate. I finally decided to save save save and save to build a high end PC. Somehow and someway I was actually able to get an RTX 3080 last year. It felt good. It felt good that for once I got to be on top for a change. I know it wasn't the absolutely best GPU you could have, but it was almost the best you could have, and that was more than enough for me. Get that "PC Master Race" ego brewing. But you know what, I earned it.

Anyway that feeling lasted around 8 months give or take before Nvidia announced the 4xxx series. Bear in mind I never really kept up with PC tech until early 2020, so I don't know if this is just par for the course, but I couldn't believe it. It really felt like the 3xxx series just came out, with that magnum sized big dick energy. But now it's like "The RTX 3080? Pffff. That's sooooo 2020." And apparently I can't run DLSS 3.0 with it, for reasons.

But judging by literally everyone else's reactions, I'm not the only one who thinks it's bullshit. Not sure if it's for the same reasons.
 

OZ9000

Banned
This honestly frustrates me. I've been PC gaming my whole life but I've always cruised by on budget PCs. I never really had the luxury to game the best looking shit on the highest settings at a good framerate. I finally decided to save save save and save to build a high end PC. Somehow and someway I was actually able to get an RTX 3080 last year. It felt good. It felt good that for once I got to be on top for a change. I know it wasn't the absolutely best GPU you could have, but it was almost the best you could have, and that was more than enough for me. Get that "PC Master Race" ego brewing. But you know what, I earned it.

Anyway that feeling lasted around 8 months give or take before Nvidia announced the 4xxx series. Bear in mind I never really kept up with PC tech until early 2020, so I don't know if this is just par for the course, but I couldn't believe it. It really felt like the 3xxx series just came out, with that magnum sized big dick energy. But now it's like "The RTX 3080? Pffff. That's sooooo 2020." And apparently I can't run DLSS 3.0 with it, for reasons.

But judging by literally everyone else's reactions, I'm not the only one who thinks it's bullshit. Not sure if it's for the same reasons.
Wait until DLSS 4 is exclusive to the 5000 series lol
Nvidia gonna Nvidia
 
Last edited:

DeaDPo0L84

Member
This honestly frustrates me. I've been PC gaming my whole life but I've always cruised by on budget PCs. I never really had the luxury to game the best looking shit on the highest settings at a good framerate. I finally decided to save save save and save to build a high end PC. Somehow and someway I was actually able to get an RTX 3080 last year. It felt good. It felt good that for once I got to be on top for a change. I know it wasn't the absolutely best GPU you could have, but it was almost the best you could have, and that was more than enough for me. Get that "PC Master Race" ego brewing. But you know what, I earned it.

Anyway that feeling lasted around 8 months give or take before Nvidia announced the 4xxx series. Bear in mind I never really kept up with PC tech until early 2020, so I don't know if this is just par for the course, but I couldn't believe it. It really felt like the 3xxx series just came out, with that magnum sized big dick energy. But now it's like "The RTX 3080? Pffff. That's sooooo 2020." And apparently I can't run DLSS 3.0 with it, for reasons.

But judging by literally everyone else's reactions, I'm not the only one who thinks it's bullshit. Not sure if it's for the same reasons.
You should buy a console; you have big dick energy in the console space for 4-5 years until they do a mid-gen refresh. PC hardware is all about constantly improving, but as a consumer you're not forced to buy into the latest tech. Your 3080 is still a great card and, depending on varying factors, will carry you forward for years to come.

The PC space isn't for people who feel left behind the second something new gets announced because their current hardware is "old". When you buy or build a PC it's like buying a new car: as soon as you drive off the lot it depreciates in value. This is, again, because there is always something new coming around the corner.

I for one want companies to constantly innovate and release new tech. Even if I don't buy into the 4000 series, it gives me a heads-up on what they're probably working on for the 5000 series. Just enjoy the rig you put together, don't stress about how others spend their money, and upgrade when it makes sense; don't let hype be the motivating factor.
 

Skifi28

Member
But since NVidia is hyping and promoting it now, we must all pretend frame interpolation is something we always wanted.
It seemed to work for Digital Foundry; the artifacts were waved away as a minor issue, and the doubling of latency was still "fine", apparently.

Just look at the big 300% and 400% numbers and get hyped to spend spend spend!
 
Last edited:

01011001

Banned
It seemed to work for Digital Foundry; the artifacts were waved away as a minor issue, and the doubling of latency was still "fine", apparently.

Just look at the big 300% and 400% numbers and get hyped to spend spend spend!

I mean, if the latency numbers seen in DF's video are too high for you, you should never play anything on console, because these were still lower than in the average console game.
 

Reallink

Member
DLSS 3 is just frame interpolation, like what most TVs have.
It does have some AI to reduce artifacting, but it's still bad.

[image]


If this was just some other feature added for the sake of it, it wouldn't be an issue.
But this crap is used by NVidia to justify this huge price increase gen over gen. Claiming up to 4 times performance improvement is just misleading for consumers.

If you'd watched the rest of the video, you'd have heard that these artifacts are rare and brief enough to not be noticeable in practice to the naked eye, and they're questioning how or even if they're going to cover it in future analyses because it's only really visible in freeze frame slow-mo. If every game works that well, the only legitimate downside and caveat to this feature is that you don't get the input lag reduction of native high frame rate, yet it's still lower than what you'd get with raw rasterization at that resolution (i.e. faster than no DLSS at the same resolution). So at worst it's a win-draw situation, effectively perfect doubling of the visible frame rate, but only slightly faster input lag than you'd get trying to brute force render the same resolution.
 
Last edited:

winjer

Gold Member
If you'd watched the rest of the video, you'd have heard that these artifacts are rare and brief enough to not be noticeable in practice to the naked eye, and they're questioning how or even if they're going to cover it in future analyses because it's only really visible in freeze frame slow-mo. If every game works that well, the only legitimate downside and caveat to this feature is that you don't get the input lag reduction of native high frame rate, but it's still lower than what you'd get with raw rasterization at that resolution (i.e. faster than no DLSS at the same resolution).

And you really believe Alex?
Dude is the biggest NVidia shill ever to grace youtube.
Why do you think DF got a 3080 before every other tech review outlet in 2020? And why do you think they now have a 4090, and access to drivers and tech, before everyone else?
I can name you plenty of tech sites that have been doing this for much longer than DF, with much greater expertise, but they still don't get review samples as soon as DF does.
And the reason is because Alex is doing PR for NVidia.

FFS, NVidia is asking thousands of dollars for frame interpolation.
 
Last edited:

Reallink

Member
And you really believe Alex?
Dude is the biggest NVidia shill ever to grace youtube.
Why do you think DF got a 3080 before every other tech review outlet in 2020? And why do you think they now have a 4090, and access to drivers and tech, before everyone else?
I can name you plenty of tech sites that have been doing this for much longer than DF, with much greater expertise, but they still don't get review samples as soon as DF does.
And the reason is because Alex is doing PR for NVidia.

FFS, NVidia is asking thousands of dollars for frame interpolation.

I appreciate the skepticism, but unless they doctored the video, you can watch it for yourself. Even with their 120Hz capture slowed down 50% to fit YouTube's 60Hz output, none of these things were visible until he paused and pointed them out. The raw 120Hz video would make them twice as difficult to pick out. As pointed out, you're talking about a <8 ms artifact that is both preceded and followed by non-artifact frames.
 
Last edited:

bbeach123

Member
I believe if AMD had somehow come up with frame interpolation first, Digital Foundry would probably spend a whole 30-minute video talking about every artifact they could detect. :messenger_tears_of_joy:
 
Last edited:

winjer

Gold Member
I appreciate the skepticism, but unless they doctored the video, you can watch it for yourself. Even with their 120Hz capture slowed down 50% to fit YouTube's 60Hz output, none of these things were visible until he paused and pointed them out. The raw 120Hz video would make them twice as difficult to pick out.

I still remember when Alex was claiming that DLSS 2.0 on Control was better than native. And he showed that in the video.
Then I played the game and realized he was full of it. He is constantly lying to make NVidia look better than it really is.
And it's paying off for him and DF.
 

Skifi28

Member
the only legitimate downside and caveat to this feature is that you don't get the input lag reduction of native high frame rate, yet it's still lower than what you'd get with raw rasterization at that resolution

In 2 of the 3 games tested, the latency nearly doubled with DLSS 3 vs DLSS 2, and that seems like a huge drawback. You can compare it to the game without any DLSS, but that could be running at 30-40 fps, so having better latency than that is hardly a revelation.

I don't know how I feel about gaming at 100+ fps with worse latency than at 60 fps. To me it seems like Nvidia is rushing something that isn't ready in order to justify the huge price increase of the new GPUs.
 

GymWolf

Member
This is all it is:

[image]
Not really, many people have RTX-capable cards and they still ditch RTX in favor of resolution or framerate or raw detail.

Let's not start this sterile discussion once again: the Demon's Souls remake and FW are top-5 best graphics in games and use zero RTX, same for Ratchet even if you disable the RTX reflections.

Some people just have different priorities, dude.
 

Turk1993

GAFs #1 source for car graphic comparisons
Fuck Nvidia for increasing the prices of those GPUs and naming the 4070 the 4080 12GB. But some people have already started hating on DLSS 3.0 just like they did with DLSS 1. Just give it some time, I'm sure it will get much, much better later on and it's gonna be a game changer.

Not really, many people have RTX-capable cards and they still ditch RTX in favor of resolution or framerate or raw detail.

Let's not start this sterile discussion once again: the Demon's Souls remake and FW are top-5 best graphics in games and use zero RTX, same for Ratchet even if you disable the RTX reflections.

Some people just have different priorities, dude.
Depends on the game actually; games like CP2077 are games where I just want to max out RT, which makes it look so much better. There are also other games like Metro, Dying Light... But that's the nice thing about PC gaming: options!

Also, this is ridiculous, it's the same as or even more expensive than a 3090 was during COVID and mining times here in Europe.
[image: European RTX 4090 retail prices]
 

winjer

Gold Member
Fuck Nvidia for increasing the prices of those GPUs and naming the 4070 the 4080 12GB. But some people have already started hating on DLSS 3.0 just like they did with DLSS 1. Just give it some time, I'm sure it will get much, much better later on and it's gonna be a game changer.

But DLSS 1 was, and still is, complete crap. The issue was the tech itself, a spatial upscaler with ML trained specifically for one game.
The saving grace for DLSS was its second incarnation, which is very different, as it uses temporal data and ML trained generically across all games.

DLSS 3 has issues that can't be solved just by tech. The latency is one of them. A game using DLSS 3 to run at 120 fps will have input latency similar to a native game running at 60 fps, plus a bit more because of the overhead.
Reflex can improve latency a bit, but it can be used with DLSS 3, DLSS 2 and native resolution alike.
Artifacts are another issue. Maybe one day someone will be able to solve this. But that is not today.
And NVidia is using the "X times the performance" claims with DLSS 3 to justify scalping its own GPUs.
 