
Xbox Series X DirectML: A Next-Generation Game Changer

Vasto

Member
This sounds like a really good feature that Series X will have. Would love to see Microsoft go into detail about DirectML and Velocity at the June event, then in July see all of this come together in real time when Microsoft shows off the first-party games.




“One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it”

“Deep Learning Super Sampling (DLSS) uses artificial intelligence and machine learning to produce an image that looks like a higher-resolution image, without the rendering overhead. The AI algorithm learns from tens of thousands of rendered sequences of images that were created using a supercomputer. That trains the algorithm to be able to produce similarly beautiful images, but without requiring the graphics card to work as hard to do it.”

This means that developers will have the option to render a game at 1080p or 1440p, then upsample it to 4K with little loss in image quality. Rendering at 1080p also lets devs spend the spare GPU resources on better graphics: richer effects, ray tracing, and/or a higher frame rate. How developers use it is up to them, and I'm sure it will depend on the type of game they make. DLSS on PC is usually used in games with ray tracing, allowing them to run at a high frame rate and with high image quality: upscaled 1440p or 4K.
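The "spare GPU resources" point is easy to sanity-check with raw pixel counts; this is plain arithmetic with no assumptions about any particular GPU:

```python
# Pixel counts for the resolutions mentioned above.
def pixels(w, h):
    return w * h

native_4k = pixels(3840, 2160)   # 8,294,400 pixels
p1440     = pixels(2560, 1440)   # 3,686,400 pixels
p1080     = pixels(1920, 1080)   # 2,073,600 pixels

print(native_4k / p1080)   # 4.0  -> 1080p shades 1/4 the pixels of native 4K
print(native_4k / p1440)   # 2.25 -> 1440p shades ~44% of 4K's pixels
```

So even before counting the cost of the upscale itself, a 1080p base resolution leaves roughly three quarters of the per-pixel shading work on the table.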
 

martino

Member
When will AMD talk about the RDNA2 architecture changes?
I'm really curious how it will take shape with AMD's approach;
Nvidia uses dedicated silicon for each part to get its results.
 

oldergamer

Member
This sounds like a really good feature that Series X will have. Would love to see Microsoft go into detail about DirectML and Velocity at the June event, then in July see all of this come together in real time when Microsoft shows off the first-party games.




“One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it”
Oh wow, I was wondering if they would try this with textures. Essentially they could use textures that are 4 times smaller (reducing bandwidth) and then upscale once in system memory.

They could even ship the offline training results for the textures, so the hardware knows how to handle specific images (in theory).
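Rough numbers for that idea, assuming a BC7-style compressed format (16-byte blocks of 4x4 texels, i.e. about 1 byte per texel); halving the resolution in each dimension gives the "4 times smaller" saving mentioned above:

```python
# Memory/bandwidth for shipping a half-resolution texture and
# upscaling it after load, assuming a BC7-style block format.
BYTES_PER_TEXEL = 1  # BC7: 16-byte blocks covering 4x4 texels

def texture_bytes(side):
    return side * side * BYTES_PER_TEXEL

full = texture_bytes(4096)   # full-res 4K texture
half = texture_bytes(2048)   # shipped half-res version

print(full // 2**20, half // 2**20, full / half)  # 16 4 4.0 (MiB, MiB, ratio)
```

That's 12 MiB of I/O and storage saved per 4K texture, at the cost of running the upscaler once per texture load.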
 

Ascend

Member
Interesting. So... Is this the hardware texture filter they were talking about? Maybe that is how they achieve the 2x-3x bandwidth savings in combination with XVA. With DirectML they generate the higher-quality texture from a lower-quality one without having to actually load it at all.
 

ZywyPL

Banned
That's a bunch of old articles put together, nothing new sadly. But what is overlooked is that AI computations on XSX are done on the CUs, which means the devs will have to balance the workload between typical GPGPU tasks and AI calculations. Those 97 TOPS are for all 52 CUs combined, which would leave absolutely no processing power left to draw even a single polygon. So the question is how few CUs are needed and how good a result they can produce.
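A back-of-envelope sketch of that CU-budget question. The 97 TOPS and 52 CU figures come from the thread; the 50-Gops-per-frame network cost is a purely hypothetical placeholder (real upscaler costs aren't public), and this ignores scheduling overhead and how compute and graphics actually share the CUs:

```python
# Hypothetical CU-budget estimate for ML upscaling on Series X.
TOTAL_INT4_TOPS = 97.0    # quoted peak int4 throughput
NUM_CUS = 52
tops_per_cu = TOTAL_INT4_TOPS / NUM_CUS          # ~1.87 TOPS per CU

model_ops_per_frame = 50e9                       # invented placeholder cost
fps = 60
ops_per_second = model_ops_per_frame * fps       # 3e12 ops/s sustained

cus_needed = ops_per_second / (tops_per_cu * 1e12)
print(round(tops_per_cu, 2), round(cus_needed, 1))  # 1.87 1.6
```

Under those (made-up) numbers a modest network would occupy the equivalent of only a couple of CUs, not the whole GPU; the real answer depends entirely on how big a model is needed for acceptable quality.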
 
We will see what happens. If Lockhart is true and the only downgrade is the GPU, then both the XSX and Lockhart will punch above their weight graphically.
 

RaySoft

Member
This sounds like a really good feature that Series X will have. Would love to see Microsoft go into detail about DirectML and Velocity at the June event, then in July see all of this come together in real time when Microsoft shows off the first-party games.




“One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it”



This means that developers will have the option to render a game at 1080p or 1440p, then upsample it to 4K with little loss in image quality. Rendering at 1080p also lets devs spend the spare GPU resources on better graphics: richer effects, ray tracing, and/or a higher frame rate. How developers use it is up to them, and I'm sure it will depend on the type of game they make. DLSS on PC is usually used in games with ray tracing, allowing them to run at a high frame rate and with high image quality: upscaled 1440p or 4K.
I'm getting the feeling this is something the UE5 engine does as well. AI, with proper training, is very good at getting the details right.
So when you import your 8K textures and raw models and assets in general into the engine, it scales them according to the set target.
In the past humans had to do it, because you'll always need some touch-up to the assets after downscaling. We already know UE5 does this automatically, but what about artifacts and such? I think this is where the ML comes in and does all the labour of fixing up the artifacts and noise left after downscaling, to the point you'd be hard pressed to even spot the difference from the source asset.
People tend to think that making next-gen games with all the fidelity that brings will take longer than before, but with all the tools available today, you'll probably end up producing stuff faster than before. Next-gen will be interesting indeed :)
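As a toy illustration of what the ML would have to reconstruct (not UE5's actual pipeline, just NumPy): box-downscale a stand-in texture, upscale it back naively, and look at the residual. That lost high-frequency detail is exactly what a trained model learns to predict:

```python
import numpy as np

# Toy sketch: what gets lost in a 2x downscale/upscale round trip.
rng = np.random.default_rng(0)
tex = rng.random((8, 8)).astype(np.float32)           # stand-in "texture"

down = tex.reshape(4, 2, 4, 2).mean(axis=(1, 3))      # 2x box downscale
up   = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)  # naive upscale

residual = tex - up                                   # what ML must predict
print(round(float(np.abs(residual).mean()), 3))       # nonzero detail loss
```

By construction the residual averages to zero within each 2x2 block, so everything the naive upscale misses is high-frequency detail; a learned upscaler earns its keep by hallucinating that back plausibly.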
 
Last edited:

M1chl

Currently Gif and Meme Champion
For those who doubt, shameless plug:
I'VE SEEN MOTHERFUCKING LIGHT

DLSS (60 FPS, VSYNC, >70% GPU utilisation)

[screenshot: utGeHkX.jpg]

[screenshot: pK4nIG2.png]


NATIVE (circa 40 FPS)

[screenshot: qDDxD6h.jpg]


2080Ti

Just HOW?
 

martino

Member
People tend to think that making games next-gen with all the fidelity it brings will take longer than before, but with all the tools available today, you'll probably end up producing stuff faster than before. Next-gen wil be interesting indeed:)
You will make and use the same stuff faster (one asset),
but you will have to make a lot more of it, so overall it will take longer (one to X).
That is, if you want to add lots of different details.
 
Last edited:

RaySoft

Member
You will make and use the same stuff faster (one asset),
but you will have to make a lot more of it, so overall it will take longer (one to X).
That is, if you want to add lots of different details.
In the past (and now as well) an artist had to draw everything by hand, or at least spend a lot of time applying touch-ups to the assets after downscaling from the source object.
Now (and I know it's been used for a while) you just 3D-scan everything in from real objects. But where they then had to manually touch up EVERY downscaled object (as in, different fidelity of objects depending on the target system; the Switch will use a lower version of the same asset than the XSX, for instance), which adds up to a lot of hours of manual labour on a multiplat title, now you can offload all this work to AI.
If you're not satisfied with a result, for instance, you just apply more training and cycle the job again :)

Time saved with all the new tech will outweigh the time needed to make more assets. Remember, a lot more can be automated compared with all the manual labour previously.
This way devs can just focus on making ONE asset (the source one); the engine does the rest...
 

Shin

Banned
They are on their own; there's no, or not as much, ass kissing going on for Xbox, so they'd be wise to spend time highlighting all the features.
If they leave it as is, which I highly doubt, they would have successfully killed their own momentum. They've let everything out - now is a good time to discuss it.
 

martino

Member
In the past (and now as well) an artist had to draw everything by hand, or at least spend a lot of time applying touch-ups to the assets after downscaling from the source object.
Now (and I know it's been used for a while) you just 3D-scan everything in from real objects. But where they then had to manually touch up EVERY downscaled object (as in, different fidelity of objects depending on the target system; the Switch will use a lower version of the same asset than the XSX, for instance), which adds up to a lot of hours of manual labour on a multiplat title, now you can offload all this work to AI.
If you're not satisfied with a result, for instance, you just apply more training and cycle the job again :)

Time saved with all the new tech will outweigh the time needed to make more assets. Remember, a lot more can be automated compared with all the manual labour previously.
This way devs can just focus on making ONE asset (the source one); the engine does the rest...

Let's agree to disagree; you're assuming too much gain from benefits that are already there, imo.
The change is more that those gains become more affordable.
 
Last edited:

Dory16

Banned
This sounds like a really good feature that Series X will have. Would love to see Microsoft go into detail about DirectML and Velocity at the June event, then in July see all of this come together in real time when Microsoft shows off the first-party games.




“One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it”



This means that developers will have the option to render a game at 1080p or 1440p, then upsample it to 4K with little loss in image quality. Rendering at 1080p also lets devs spend the spare GPU resources on better graphics: richer effects, ray tracing, and/or a higher frame rate. How developers use it is up to them, and I'm sure it will depend on the type of game they make. DLSS on PC is usually used in games with ray tracing, allowing them to run at a high frame rate and with high image quality: upscaled 1440p or 4K.
I really think the focus should be on games now. As impressive as DirectML sounds, the time for touting hardware features has passed. It would be a mistake to have a hardware deep-dive event just after Sony had the UE5 demo and an all-games event back to back. MS would find themselves on the back foot again, playing catch-up after having had a six-month head start in revealing their console.
It's time for them to deliver on the one thing that is universally considered their Achilles' heel: THE GAMES. And the May 20/20 event didn't quite do the job of selling the XSX.
By all means talk about DirectML, but don't end the event without presenting at least one or two very impressive games.
 

Whitecrow

Banned
Man, I'm still expecting MS to give us some info about their most recent API features,
like DirectPR and DirectBS.
jk, i just had to do it ok?
 
Last edited:

Hendrick's

If only my penis was as big as my GamerScore!
They are on their own; there's no, or not as much, ass kissing going on for Xbox, so they'd be wise to spend time highlighting all the features.
If they leave it as is, which I highly doubt, they would have successfully killed their own momentum. They've let everything out - now is a good time to discuss it.
The June event coming up is supposed to cover the hardware. Hoping for a deeper dive on some of these features.
 
Last edited:

RaySoft

Member
Let's agree to disagree; you're assuming too much gain from benefits that are already there, imo.
The change is more that those gains become more affordable.
I've obviously taken the liberty of bypassing the natural timespan it takes for devs to adopt the new tech, which takes time of course.
 
I thought fauxK was something to poke fun at here; seems to be quite the turnaround on acceptance lately.

Personally looking forward to seeing how it all pans out, after trying the likes of checkerboard rendering and Radeon Image Sharpening to claw back performance in games without much loss to quality.
 
Last edited:
I thought fauxK was something to poke fun at here; seems to be quite the turnaround on acceptance lately.

Personally looking forward to seeing how it all pans out, after trying the likes of checkerboard rendering and Radeon Image Sharpening to claw back performance in games without much loss to quality.
It's all about 60-120 fps on consoles now, and I agree with that. 30 fps is last gen, and I hope fewer games will go under 60 fps. Doesn't mean I wouldn't enjoy a 30 fps game though.
 

M1chl

Currently Gif and Meme Champion
Man, I'm still expecting MS to give us some info about their most recent API features,
like DirectPR and DirectBS.
jk, i just had to do it ok?
If you do it, do it properly. ML, as in Machine Learning, is something that sounds less mysterious than Deep Learning.

If the jaggies are gone forever, it's fine by me.
In Control, it doesn't even let you apply AA on top of that; it's basically a 16K image downsampled, well, with some neural network fuckery.
 
Last edited:
That's a bunch of old articles put together, nothing new sadly. But what is overlooked is that AI computations on XSX are done on the CUs, which means the devs will have to balance the workload between typical GPGPU tasks and AI calculations. Those 97 TOPS are for all 52 CUs combined, which would leave absolutely no processing power left to draw even a single polygon. So the question is how few CUs are needed and how good a result they can produce.

This is a valid point, but I'd answer your question with the fact that Control's early DLSS implementation (the pre-2.0, compute-shader version) was done without the tensor cores on RTX (using only normal GPU compute), and the results are great.
 
Last edited:
So now lower res and upscaling are good for a high end console 🤭



OT this is indeed pretty neat and with it we'll be able to get even more performance out of this box.
 
A lot of people have been talking about DirectML as kind of a hidden-gem feature heading into next gen. DLSS 2.0 on PC has shown how AI upscaling can really be a game changer, especially when utilizing resource-expensive ray tracing effects.

How each console handles trying to emulate DLSS 2.0 is one of the things I have been most curious to see with my own eyes.
 

nosseman

Member
So now lower res and upscaling are good for a high end console 🤭



OT this is indeed pretty neat and with it we'll be able to get even more performance out of this box.

4K at 60 FPS is sweet, but if you could have almost exactly the same image quality while rendering at 1080p or 1440p, you get a whole lot of GPU power to spare that could be used for ray tracing, 120 fps, more effects, and so on.
 

IntentionalPun

Ask me about my wife's perfect butthole
That's a bunch of old articles put together, nothing new sadly. But what is overlooked is that AI computations on XSX are done on the CUs, which means the devs will have to balance the workload between typical GPGPU tasks and AI calculations. Those 97 TOPS are for all 52 CUs combined, which would leave absolutely no processing power left to draw even a single polygon. So the question is how few CUs are needed and how good a result they can produce.
Yeah, nVidia does it with tensor cores that IIRC take up around 20% of the die space.

But I'm not sure how hard the tensor cores are hit by DLSS either.

No clue how that translates into CUs for DirectML on XSX.
 
Last edited:

martino

Member
Yeah, nVidia does it with tensor cores that IIRC take up around 20% of the die space.

But I'm not sure how hard the tensor cores are hit by DLSS either.

No clue how that translates into CUs for DirectML on XSX.

I tried to find info on that and failed.
 
Last edited:

Tarkus98

Member
Seems like some sweet tech here. Things are getting really interesting for next gen and I cannot wait to get my hands on these new consoles.
 

IntentionalPun

Ask me about my wife's perfect butthole
I tried to find info on that and failed.
DLSS works on the RTX 2060, which has the fewest tensor cores of the nVidia cards. But those cores are specialized and supposed to provide more computational power for that type of computing than the GPU cores ever could; hence the specialized cores.

So yeah, really tough to say how well the XSX can use that kind of tech on AMD CUs.
 

Thirty7ven

Banned
Yeah, nVidia does it with tensor cores that IIRC take up around 20% of the die space.

But I'm not sure how hard the tensor cores are hit by DLSS either.

No clue how that translates into CUs for DirectML on XSX.


With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."
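The quoted figures check out against the published Series X GPU specs (52 CUs, 64 shader lanes per CU, 2 ops per FMA, 1.825 GHz), assuming simple 2x/4x/8x packing for FP16/INT8/INT4:

```python
# Cross-checking the quoted TOPS numbers against the Series X GPU specs.
fp32_tflops = 52 * 64 * 2 * 1.825e9 / 1e12   # ~12.15 TFLOPS FP32

fp16 = fp32_tflops * 2   # rapid-packed math: 2x -> ~24.3 TFLOPS
int8 = fp32_tflops * 4   # 4x packing         -> ~48.6 TOPS (quoted: 49)
int4 = fp32_tflops * 8   # 8x packing         -> ~97.2 TOPS (quoted: 97)

print(round(fp32_tflops, 2), round(int8, 1), round(int4, 1))  # 12.15 48.6 97.2
```

So the 49/97 TOPS numbers aren't extra silicon; they're the same shader ALUs counted at lower precision, which is why using them for ML eats into graphics throughput.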
 

Journey

Banned
I really think the focus should be on games now. As impressive as DirectML sounds, the time for touting hardware features has passed. It would be a mistake to have a hardware deep-dive event just after Sony had the UE5 demo and an all-games event back to back. MS would find themselves on the back foot again, playing catch-up after having had a six-month head start in revealing their console.
It's time for them to deliver on the one thing that is universally considered their Achilles' heel: THE GAMES. And the May 20/20 event didn't quite do the job of selling the XSX.
By all means talk about DirectML, but don't end the event without presenting at least one or two very impressive games.


This is a technical topic, not a "What should MS do?" poll.
 
Last edited:

Neo_game

Member
IMO anything over 1440p is good enough as long as it scales well on the TV. Native 4K is a waste of resources that could be used for more detail instead.
 

martino

Member
DLSS works on the RTX 2060, which has the fewest tensor cores of the nVidia cards. But those cores are specialized and supposed to provide more computational power for that type of computing than the GPU cores ever could; hence the specialized cores.

So yeah, really tough to say how well the XSX can use that kind of tech on AMD CUs.

Outside of hard numbers, I looked for 2060 benchmarks using DLSS targeting 4K.
The card is not made for that, so unsurprisingly I found nothing there.
So can we be sure in this case?
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Outside of hard numbers, I looked for 2060 benchmarks using DLSS targeting 4K.
The card is not made for that, so unsurprisingly I found nothing there.
So can we be sure in this case?

The RTX 2060 is an accomplished performer at 1440p resolution and that's the base resolution used for DLSS rendering at 4K. As the numbers on this page suggest, DLSS with 2060 effectively delivers similar performance to RTX 2080 and GTX 1080 Ti without DLSS. That's pretty impressive.


Thirty7ven: Interesting stuff... so the AMD cards are designed to do that kind of work efficiently on the CUs, hence no specialized hardware. So hopefully these techniques don't take many CUs and can be used fairly easily on XSX, particularly due to its "wide" architecture.
 
Last edited:

Diablos

Member
DLSS is fucking amazing. Square Enix should patch all of their PS1 JRPGs (FF, Chrono Cross, etc.) with this.
 

martino

Member




Thirty7ven: Interesting stuff... so the AMD cards are designed to do that kind of work efficiently on the CUs, hence no specialized hardware. So hopefully these techniques don't take many CUs and can be used fairly easily on XSX, particularly due to its "wide" architecture.
I didn't search hard enough :D Thanks!
 

Ascend

Member




Thirty7ven: Interesting stuff... so the AMD cards are designed to do that kind of work efficiently on the CUs, hence no specialized hardware. So hopefully these techniques don't take many CUs and can be used fairly easily on XSX, particularly due to its "wide" architecture.
AMD is generally smart about using their CUs. With GCN they had the issue that many of them were idling most of the time. That's why you see literally zero difference between a Vega 56 and a Vega 64 at the same GPU and memory clocks, despite the Vega 64 having more CUs. They tried to improve this with async compute, but the CUs were still mostly underutilized. It's one of the reasons going beyond 64 CUs was a waste of die space for them.

RDNA improved CU efficiency quite a bit (which is likely why we will see an 80 CU RDNA card soon), but most likely there are still unused CUs at any given time that can be given some work. That's probably the reason they will do ML on the CUs. Even AMD's RT implementation relies partially on the CUs.

We'll have to wait and see how their approaches work in practice. nVidia's sacrifice is die size and thus cost, while AMD's approach most likely sacrifices some performance to save die space. It might even out? Who knows.
 
Last edited:

Thirty7ven

Banned




Thirty7ven: Interesting stuff... so the AMD cards are designed to do that kind of work efficiently on the CUs, hence no specialized hardware. So hopefully these techniques don't take many CUs and can be used fairly easily on XSX, particularly due to its "wide" architecture.

I don’t know how it compares to Nvidia’s specialized silicon approach.
 

Dory16

Banned
This is a technical topic, not a "What should MS do?" poll.
Someone mentioned that DirectML would be part of the June 20/20 event and I gave my opinion. And where is your topic-police badge?
You are free to treat the thread as strictly technical. I am free to speak about what place DirectML should have in the next communication.
 

Great Hair

Banned
To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time.

Wait a second. After years of mocking Sony for using PS2 models of GT cars in GT5 and GT6... Microsoft is actually mirroring that now, and hopes that with DirectML it will improve assets by 1000% by adding extra polygons to make them look better, like in GT6 in 2013?

"Adaptive Tessellation in Gran Turismo 6 (2013)"


Microsoft (evolutionary step):
high-res polygon assets downgraded to low-res polygon assets, LOD in place (LOD0 to LOD11)... does DirectML not interfere with UE5? Or will it even add more polygons to the 33-million-polygon statue in the tech demo? ;)

Sony (revolutionary step):
high-res polygon assets, direct import, no LOD?

Is this only meant for textures?
 
Last edited:

Journey

Banned
Someone mentioned that DirectML would be part of the June 20/20 event and I gave my opinion. And where is your topic-police badge?
You are free to treat the thread as strictly technical. I am free to speak about what place DirectML should have in the next communication.


So you don't think that going off-topic to criticize MS would be frowned upon?

If I went into a Sony thread about The Last of Us 2 and, instead of discussing the game, complained and asked "When is Sony going to put in more powerful hardware so I can play their exclusives at 60 fps instead of 30?", there would be nothing constructive contributed to the topic, just like your comment doesn't contribute anything towards DirectML and its use.
 

Journey

Banned
Wait a second. After years of mocking Sony for using PS2 models of GT cars in GT5 and GT6... Microsoft is actually mirroring that now, and hopes that with DirectML it will improve assets by 1000% by adding extra polygons to make them look better, like in GT6 in 2013?

"Adaptive Tessellation in Gran Turismo 6 (2013)"


Microsoft (evolutionary step):
high-res polygon assets downgraded to low-res polygon assets, LOD in place (LOD0 to LOD11)... does DirectML not interfere with UE5? Or will it even add more polygons to the 33-million-polygon statue in the tech demo? ;)

Sony (revolutionary step):
high-res polygon assets, direct import, no LOD?

Is this only meant for textures?




No, this is more like DLSS from nVidia (Deep Learning Super Sampling); Microsoft calls theirs machine learning.


DLSS is pretty freaking amazing. Forget the politics between MS and Sony for a minute and take a look at this video.

 