
Sony Patent Suggests DLSS-Like Machine Learning Could Be Used On PS5 Games

xStoyax

posts out of context tweets for attention
Oct 29, 2017
602
2,486
740
A new Sony patent suggests that a technique similar to Nvidia's DLSS could be used on PS5 games to improve resolution using machine learning.

The patent, which was spotted by a Reddit user, describes the technique as the following:


An information processing device for acquiring a plurality of reference images obtained by imaging an object that is to be reproduced, acquiring a plurality of converted images obtained by enlarging or shrinking each of the plurality of reference images, executing machine learning using a plurality of images to be learned, as teaching data, that include the plurality of converted images, and generating pre-learned data that is used for generating a reproduction image that represents the appearance of the object.
DLSS stands for Deep Learning Super Sampling; it is an AI-powered technique that can give lower-resolution visuals the impression of a higher resolution. In many cases DLSS 2.0 output looks close to, and arguably sometimes better than, native 4K, and the Unreal Engine 5 demo used similar upsampling techniques to present a higher apparent resolution than the 1440p it was actually rendering at.

This makes a lot of sense for the PS5: it would free up processing power for other tasks while still improving a game's visuals. On top of that, it would make the PS5 more efficient, letting it work smarter and spend its power in better ways rather than simply being the more powerful machine.

DLSS is an Nvidia-owned technology, which suggests that Sony and AMD may have collaborated to create their own version of the technique, one able to run on the PS5's hardware.
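Read literally, the workflow the patent claims (reference images in, enlarged or shrunk copies as teaching data, "pre-learned data" out) could be sketched roughly like this. This is just one reading of the claim language; the framework, model and file names below are placeholders, not anything from the patent:

```python
# Rough sketch of the claimed pipeline, assuming PyTorch and a generic
# image-to-image model; none of these names come from the patent itself.
import torch
from torch import nn
from torchvision import transforms
from torchvision.transforms import functional as TF
from PIL import Image


def build_teaching_data(reference_paths, scales=(0.5, 1.0, 2.0)):
    """Acquire reference images of the object, then generate 'converted
    images' by enlarging or shrinking each one."""
    to_tensor = transforms.ToTensor()
    converted = []
    for path in reference_paths:
        img = Image.open(path).convert("RGB")
        for s in scales:
            size = [max(1, int(img.height * s)), max(1, int(img.width * s))]
            converted.append(to_tensor(TF.resize(img, size)))
    return converted


def train_prelearned_data(teaching_data, model, epochs=10):
    """Run machine learning over the converted images and save the weights,
    i.e. the 'pre-learned data' later used to generate a reproduction image
    of the object's appearance."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for img in teaching_data:
            batch = img.unsqueeze(0)       # add a batch dimension
            opt.zero_grad()
            recon = model(batch)           # model learns to reproduce the appearance
            loss = loss_fn(recon, batch)
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "pre_learned_data.pt")
```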

 

MH3M3D

Member
Feb 27, 2013
251
479
640
So Sony improved their checkerboarding tech for PS5.

Shocking.

Nope, checkerboarding and DLSS are technically completely different methods of achieving the same thing.

DLSS invents new pixels based on prior training, so it knows what's missing and fills it in. Checkerboard doesn't invent new pixels; when it has low confidence, it interpolates based on nearby pixels.
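Very loosely, the difference could be sketched like this (a toy illustration, not either vendor's actual pipeline; all the names are made up):

```python
import numpy as np


def checkerboard_fill(sparse_frame, rendered_mask):
    """Toy checkerboard-style reconstruction: each missing pixel is filled by
    averaging its rendered neighbours. No learning involved."""
    out = sparse_frame.copy()
    h, w = rendered_mask.shape
    for y in range(h):
        for x in range(w):
            if not rendered_mask[y, x]:
                neighbours = [
                    sparse_frame[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and rendered_mask[ny, nx]
                ]
                if neighbours:
                    out[y, x] = np.mean(neighbours, axis=0)
    return out


def ml_reconstruct(low_res_frame, model):
    """Toy DLSS-style reconstruction: a model trained offline on high-resolution
    references predicts ('invents') the missing detail instead of averaging.
    `model` is assumed to be any trained image-to-image callable."""
    return model(low_res_frame)
```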
 
Last edited:

Dnice1

Member
Mar 31, 2020
200
579
330
The patent describes a device.
An information processing device for acquiring a plurality of reference images obtained by imaging an object that is to be reproduced, acquiring a plurality of converted images obtained by enlarging or shrinking each of the plurality of reference images

Skimming through the PDF, there are some pictures of a camera-like device looking at a person. Looks like motion capture or something, because there is also a picture of a skeletal frame model.
 
  • Like
Reactions: chilichote

IntentionalPun

Ask me about my wife's perfect butthole
Aug 28, 2019
8,657
16,010
660
I don't think this has anything to do with playstation... other than just being some sort of motion capture/3d modeling generation technique.
 

wordslaughter

Banned
Apr 17, 2019
1,377
3,954
445
There is a massive difference in quality between checkerboard rendering and DLSS 2.0, largely because the latter requires dedicated hardware.

I'm not expecting some new checkerboard technique to be anywhere near as good.

Prove me wrong Sony....
 
  • Like
Reactions: TurboSpeed

mckmas8808

Ah. Peace and quiet. #ADayWithoutAWoman
May 24, 2005
47,569
14,591
2,000
Nice, please move on from native 4K.

Imagine Death Stranding 2 running internally at 1080p, but using ML to look 95% as good as native 4K!!!! It'll look like this....



 

IntentionalPun

Ask me about my wife's perfect butthole
Aug 28, 2019
8,657
16,010
660
They probably can; even PS4 and Xbone can do AI compute, if I'm not mistaken. The question is at what performance gain.
You'd have to use the entire Xbox Series X GPU to reach 1/4 of the ML power in the specialized tensor cores on some of the nVidia cards out there running DLSS 2.0 on PC.

I was hopeful for DLSS-like features for next-gen, but it seems unlikely to really be possible unless I'm missing something (this was pointed out to me earlier in another thread; I didn't realize how much the tensor cores matter for Nvidia's DLSS).
 
  • Thoughtful
Reactions: DonJuanSchlong

A.Romero

Member
Feb 23, 2009
3,621
2,375
1,250
Mexico
www.lavejota.com
They probably can; even PS4 and Xbone can do AI compute, if I'm not mistaken. The question is at what performance level.

Yes, any modern CPU can do AI but results won't be comparable to specialized hardware like the tensor cores.

If it was as simple as that, there wouldn't be any reason for specialized hardware to be running in data centers. For example, check out AWS offerings (which is tech I work with): AWS Tensor Cores

As I said, any improvement is welcomed but let's keep expectations in check. Console hardware in this aspect is nothing out of this world.
 
Jun 16, 2018
4,272
3,928
550
prettycoolgraphics.blogspot.com
You'd have to use the entire Xbox Series X GPU to reach 1/4 of the ML power in the specialized tensor cores on some of the nVidia cards out there running DLSS 2.0 on PC.

I was hopeful for DLSS like features for next-gen but it seems unlikely it will be really possible unless I'm missing something (this was pointed out to me earlier in another thread, didn't really realize how much the tensor cores matter for nVidia DLSS.)

Even the RTX 2060 can do DLSS from 1080p to 4K, if I'm not mistaken. And its tensor cores might not necessarily be fully utilized by DLSS 2.0. If, say, hypothetically 10% of the tensor cores were being used by DLSS, a few CUs would be enough to do it, and there would be benefit to doing so.

Also, given that CUs are universal and can do pixel shading, vertex shading, etc., it is likely the AI compute workload can be done in between other workloads, as Sony has been doing with async compute on PS4.

Yes, any modern CPU can do AI but results won't be comparable to specialized hardware like the tensor cores.

If it was as simple as that, there wouldn't be any reason for specialized hardware to be running in data centers. For example, check out AWS offerings (which is tech I work with): AWS Tensor Cores

As I said, any improvement is welcomed but let's keep expectations in check. Console hardware in this aspect is nothing out of this world.
The benefit of dedicated hardware is higher performance. But what matters is the minimum required performance, not the maximum theoretical performance. The same quality can be matched, but performance will depend on the minimum required operations.
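To put toy numbers on that "minimum required operations" point (every figure below is hypothetical, purely to show the logic, since Nvidia doesn't publish per-frame op counts as far as I know):

```python
# Hypothetical figures only, to illustrate "required vs. theoretical" performance.
ops_per_output_pixel = 5_000            # assumed cost of an upscaling network per pixel
output_pixels = 3840 * 2160             # 4K target
target_fps = 60

required_tflops = ops_per_output_pixel * output_pixels * target_fps / 1e12
available_tflops = 24                   # rough 16-bit figure quoted for XSX in this thread

print(f"required: {required_tflops:.1f} TFLOPS of {available_tflops} available")
print(f"fraction of GPU needed: {required_tflops / available_tflops:.0%}")
```

If the required op count really were that low, a handful of CUs (or idle time between other workloads) could cover it; if it is much higher, dedicated hardware starts to matter.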
 
Last edited:

A.Romero

Member
Feb 23, 2009
3,621
2,375
1,250
Mexico
www.lavejota.com
Even the RTX 2060 can do DLSS from 1080p to 4K, if I'm not mistaken. And its tensor cores might not necessarily be fully utilized by DLSS 2.0. If, say, hypothetically 10% of the tensor cores were being used by DLSS, a few CUs would be enough to do it, and there would be benefit to doing so.

Also, given that CUs are universal and can do pixel shading, vertex shading, etc., it is likely the compute workload can be done in between other workloads, as Sony has been doing with async compute on PS4.


The benefit of dedicated hardware is higher performance. But what matters is the minimum required performance, not the maximum theoretical performance.

You are right, but maximum theoretical performance for these kinds of operations is really low compared to specialized hardware like the tensor cores.

If they tried to replicate the same thing, it wouldn't be even close to what tensor cores do right now in the PC space, which is what enables DLSS in Death Stranding.
 

GODbody

Member
Jun 8, 2020
96
230
210
An information processing device for acquiring a plurality of reference images obtained by imaging an object that is to be reproduced, acquiring a plurality of converted images obtained by enlarging or shrinking each of the plurality of reference images, executing machine learning using a plurality of images to be learned, as teaching data, that include the plurality of converted images, and generating pre-learned data that is used for generating a reproduction image that represents the appearance of the object.

Unless the PS5 is capable of capturing photos, this is most certainly a patent for a camera. (Check out AA in picture)


 
Last edited:
Jun 16, 2018
4,272
3,928
550
prettycoolgraphics.blogspot.com
You are right, but maximum theoretical performance for these kinds of operations is really low compared to specialized hardware like the tensor cores.

If they tried to replicate the same thing, it wouldn't be even close to what tensor cores do right now in the PC space, which is what enables DLSS in Death Stranding.
Again, it depends on what percentage of the tensor cores is being used for DLSS. If the tensor cores are being heavily utilized, yes, that would be a problem. But if the tensor cores are barely being utilized, then similar performance is attainable without tensor cores by dedicating several CUs. But you probably wouldn't have to dedicate CUs if there's enough spare time between pixel shading and vertex shading work to do the work there.
 

IntentionalPun

Ask me about my wife's perfect butthole
Aug 28, 2019
8,657
16,010
660
Even the RTX 2060 can do DLSS from 1080p to 4K, if I'm not mistaken. And its tensor cores might not necessarily be fully utilized by DLSS 2.0. If, say, hypothetically 10% of the tensor cores were being used by DLSS, a few CUs would be enough to do it, and there would be benefit to doing so.

Also, given that CUs are universal and can do pixel shading, vertex shading, etc., it is likely the compute workload can be done in between other workloads, as Sony has been doing with async compute on PS4.

Well, that card has over twice the ML power (52 TFLOPS vs. 24 TFLOPS) of the entire Xbox Series X GPU in its tensor cores alone (without touching the main GPU cores).

If it only takes a fraction of those tensor cores to actually run DLSS 2.0 on a game I'd find that a little odd.

But even if it's a quarter of the RTX 2060's 52 TF of 16-bit integer ops, that's still over half of the entire XSX GPU's power.
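Spelling that comparison out with the figures quoted in this thread (rough numbers, not official spec sheets):

```python
# Figures as quoted earlier in the thread; rough values, not official spec sheets.
rtx2060_tensor_tflops = 52    # 16-bit throughput attributed to the 2060's tensor cores
xsx_gpu_tflops = 24           # 16-bit throughput attributed to the whole XSX GPU

quarter_of_tensor = rtx2060_tensor_tflops / 4   # 13.0
half_of_xsx = xsx_gpu_tflops / 2                # 12.0
print(quarter_of_tensor > half_of_xsx)          # True: a quarter of the 2060's tensor
                                                # throughput still exceeds half the XSX GPU
```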

I've been actually searching for "how many tensor cores are actually used by DLSS 2.0?" all day but haven't found anything lol
 
Last edited:
Jun 16, 2018
4,272
3,928
550
prettycoolgraphics.blogspot.com
Well, that card has twice the ML power of the entire Xbox Series X GPU in its tensor cores alone (without touching the main GPU cores).

If it only takes a fraction of those tensor cores to actually run DLSS 2.0 on a game I'd find that a little odd.

But even if it's a quarter of the RTX 2060's 52 TF of 16-bit integer ops, that's still over half of the entire XSX GPU's power.

I've been actually searching for "how many tensor cores are actually used by DLSS 2.0?" all day but haven't found anything lol
The 2080 Ti's tensor cores are likely not fully utilized by DLSS, as there might be a limit to how far image processing from 1080p to 4K can be parallelized. At least the 2060 can do 1080p to 4K, if I'm not mistaken.

The question is: is there ample performance left for developers to use tensor cores in games besides DLSS, or does DLSS take up most of the tensor cores, or a significant portion of them?


I think I'd heard that Control had DLSS 1.9, which could run on shaders, without tensor cores, in-game and even boost performance.
DLSS 1.9 ran just fine on shaders (though obviously dedicated hardware is better)

If true, the question is what the difference in required performance is between quality mode and DLSS 1.9. What is being done? Two passes? Something else?
 
Last edited:
  • Thoughtful
Reactions: polybius80

wordslaughter

Banned
Apr 17, 2019
1,377
3,954
445
This patent pretty obviously has nothing to do with the PS5.

Sony also makes cameras. This patent is about image processing, likely specifically for smart phone cameras.

Could this same technology be used to improve the image quality for PS5 games?

I honestly can't answer that, but it seems clear that the article, in comparing this patent to DLSS, was massively reaching to begin with.
 

A.Romero

Member
Feb 23, 2009
3,621
2,375
1,250
Mexico
www.lavejota.com
Again, it depends on what percentage of the tensor cores is being used for DLSS. If the tensor cores are being heavily utilized, yes, that would be a problem. But if the tensor cores are barely being utilized, then similar performance is attainable without tensor cores by dedicating several CUs. But you probably wouldn't have to dedicate CUs if there's enough spare time between pixel shading and vertex shading work to do the work there.

If the tensor cores weren't used to their fullest, what would be stopping Nvidia from doing so? Of course the tensor cores are maxed out. That's why the 2060 doesn't have the same performance as the 2070 for DLSS, and so on. They have a different number of tensor cores.

Believe me, I'd be more than happy to have something like that in a $500 console, but it is simply unlikely.

Same for RT. Anyone expecting the same performance for RT as what GPUs on PC are offering right now is misinformed about how that technology currently works.

Could it change in the future? Yes. But there is no indication that anything is coming that could make standard CPUs match tensor cores for these kinds of tasks.
 

wordslaughter

Banned
Apr 17, 2019
1,377
3,954
445
If only the PS5 had the hardware for DLSS 2.0 (it wouldn't be called that, of course) and the XSX didn't, that would be a complete game changer. Likely even more so than ray tracing or a faster SSD, IMO.

It's likely that a main advantage of the XSX over the PS5 will be higher resolution games on average. Gonna be a lot more native 4K games on XSX than on PS5 I think.

Looking at what DLSS 2.0 can do with a 1440p image, it wouldn't be such an advantage anymore if the XSX had native 4K while the PS5 had 1440p + DLSS 2.0.

But ironically it's not a console that has this feature... it's currently a PC exclusive.

Can consoles improve checkerboard rendering? Sure.

Can they improve to the level of tensor cores? I doubt it. In fact it's likely that the gap will widen rather than close IMO. Can't wait to see what DLSS 3.0 can do :messenger_sunglasses:
 
Jun 16, 2018
4,272
3,928
550
prettycoolgraphics.blogspot.com
If the tensor cores weren't used to their fullest, what would be stopping Nvidia from doing so? Of course the tensor cores are maxed out. That's why the 2060 doesn't have the same performance as the 2070 for DLSS, and so on. They have a different number of tensor cores.
The 2070 doesn't only have more tensor cores; it also has more rendering performance, more CUDA cores. It is conceivable the RTX 2060's tensor cores could deliver significantly higher performance if paired with more CUDA cores, without needing additional tensor cores.

IIRC, Nvidia's DLSS 1.9 could even run on shaders (CUDA cores), without the use of tensor cores, all while boosting gaming performance. So it could run on AMD cards too. If true, this suggests DLSS 1.9 likely utilized only a very small fraction of tensor performance when running on tensor cores; unless DLSS 2.0 heavily utilizes the tensor cores, it too might only use a small fraction of tensor performance.
 
Last edited:
  • Like
Reactions: shubhang

A.Romero

Member
Feb 23, 2009
3,621
2,375
1,250
Mexico
www.lavejota.com
The 2070 doesn't only have more tensor cores; it also has more rendering performance, more CUDA cores. It is conceivable the RTX 2060's tensor cores could deliver significantly higher performance if paired with more CUDA cores, without needing additional tensor cores.

IIRC, Nvidia's DLSS 1.9 could even run on shaders (CUDA cores), without the use of tensor cores, all while boosting gaming performance. So it could run on AMD cards too. If true, this suggests DLSS 1.9 likely utilized only a very small fraction of tensor performance when running on tensor cores; unless DLSS 2.0 heavily utilizes the tensor cores, it too might only use a small fraction of tensor performance.

If the 2060 couldn't take full advantage of its tensor cores because it bottlenecks somewhere else, the same number of tensor cores might match what the 2070 does, so there would be no need for the increase.

I understand your speculation but, believe me, there is no way CPUs in consoles can match the performance of tensor cores for the tasks the tensor cores are specialized for.
 
  • Thoughtful
Reactions: DonJuanSchlong

LordOfChaos

Member
Mar 31, 2014
12,011
7,517
985
Sony, bro. If this is in there, start talking about it.

Patents may or may not apply to current products. I'd expect the next gen of checkerboard rendering to make it in, but we have heard no such thing yet.
 

Portugeezer

Member
Dec 11, 2008
20,996
5,026
1,455
London
abload.de
I don't think that is PS5 related, although I guess PS5 could be the device and rendered frames are the images used for teaching/reproduction... But the wording doesn't click.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Aug 28, 2019
8,657
16,010
660
The 2080 Ti's tensor cores are likely not fully utilized by DLSS, as there might be a limit to how far image processing from 1080p to 4K can be parallelized. At least the 2060 can do 1080p to 4K, if I'm not mistaken.

The question is: is there ample performance left for developers to use tensor cores in games besides DLSS, or does DLSS take up most of the tensor cores, or a significant portion of them?


I think I'd heard that Control had DLSS 1.9, which could run on shaders, without tensor cores, in-game and even boost performance.


If true, the question is what the difference in required performance is between quality mode and DLSS 1.9. What is being done? Two passes? Something else?
Interesting stuff, thanks.

Looks like that convo is based off of a now-deleted Russian YouTube video. Hope it's true; I can't find any real info on that video, like benchmarks or anything other than a vague description.