
Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores, Says Quantic Dream

J_Gamer.exe

Member
Should see some huge performance advantages when this and all the RDNA2 features come to fruition.
First it was 'Series X is a monster, will crush PS5, just look at the specs, wait for Digital Foundry'... DF and others came along and exposed that narrative, highlighting Xbox as completely overhyped and clearly bottlenecked in multiple different scenarios now.

Now it's kick the can down the road...

"Just you wait guys, you'll see, when the Xbox RDNA2 features come in, just you keep waiting"

 

Riky

$MSFT
First it was 'Series X is a monster, will crush PS5, just look at the specs, wait for Digital Foundry'... DF and others came along and exposed that narrative, highlighting Xbox as completely overhyped and clearly bottlenecked in multiple different scenarios now.

Now it's kick the can down the road...

"Just you wait guys, you'll see, when the Xbox RDNA2 features come in, just you keep waiting"


We've already seen games with a constant 44% resolution advantage and higher settings, and that doesn't even include this and the RDNA2 hardware support. The only bottleneck is having fewer compute units than a last-gen X1X, and that will haunt PS5 forever.
 

J_Gamer.exe

Member
We've already seen games with a constant 44% resolution advantage and higher settings, and that doesn't even include this and the RDNA2 hardware support. The only bottleneck is having fewer compute units than a last-gen X1X, and that will haunt PS5 forever.
One game, where PS5 clearly could have been higher as it didn't have a single drop.

Put PS5 at 4K and the drops would likely be similar to SX. There would be nothing in it, as the areas SX did drop were a similar % difference in performance to the res difference.

It's literally what devs told us: closest consoles ever, each with its own advantages. That's it; they will be so close always probably.
 
One game, where PS5 clearly could have been higher as it didn't have a single drop.

Put PS5 at 4K and the drops would likely be similar to SX. There would be nothing in it, as the areas SX did drop were a similar % difference in performance to the res difference.

It's literally what devs told us: closest consoles ever, each with its own advantages. That's it; they will be so close always probably.
"always probably".

I can see you know what you're talking about.... probably.
 
It does drop; there is a screenshot where it hits 37fps, check the DF comparison.
It's the silly opinion resurfacing that the PS5 version could have run at 4K but the developer decided to deliberately hold it back 🤣 "We can easily do 4K but 1800p will do guys, also drop the shadow quality as well just to annoy PS5 owners and Sony"

It's absolute madness, but that's what some think.
 

yurinka

Member
On XBO they were also supposed to do great things with the 'power of the cloud', Kinect tracking fingers or objects for games, the AI for Milo and so on, but we never saw that implemented.

So maybe; let's see if they find the tools. Let's see what they really implement, because as of now most real-world next-gen native games look pretty similar on both consoles and in many cases have advantages on PS5 in many areas, because each console has its pros and cons compared with the other one. Series X has advantages in some areas while PS5 has advantages in others, and what we have seen until now are pretty tied real-world results.

Let's see if there is some game in the future that gets a big advantage on Series X; I need to see it to believe it.
 
Last edited:

MonarchJT

Banned
Nice try, Microsoft. Welcome to the technology graveyard for failed consoles and forgotten dreams; it is suitable for your kind. Now move aside and let the gaming continue without you, terrible buffoon. You have done great damage to gaming and some may choose to forgive that horrible sin :(
It's incredible how this community continues to be held down by people like you. You already know what I'm going to do.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
1) Machine learning has been used to upscale games and save GPU power... for years now
2) It's a second-party dev that is saying this, not MS
3) It's a console hardware feature... the 'power of the cloud' was very different
They are riding on the popularity of DLSS 2.0, which has about 5x the ML performance available in dedicated Tensor Cores, while XSX and PS5 are re-using shader cores and thus stealing resources away from graphics and compute shaders.
 
Last edited:

J_Gamer.exe

Member
"always probably".

I can see you know what you're talking about.... probably.
Well, they most likely will be close all gen; no one can say with 100% certainty.

The results so far show this and are likely to continue. Both have different advantages and will differ in different scenarios, but overall they will be so close the average gamer wouldn't ever notice.
 

MonarchJT

Banned
They are riding on the popularity of DLSS 2.0, which has about 5x the ML performance available in dedicated Tensor Cores, while XSX and PS5 are re-using shader cores and thus stealing resources away from graphics and compute shaders.
Everyone will use ML or other things to upscale games. Do you know exactly how they modified the CUs to accelerate things? Is it standard RDNA2? Do you have some paper, or are you just imagining? An Xbox engineer is saying they customized the HW... Quantic Dream (which is not related to MS in any way, but much more to Sony) says the same thing. Other devs have said it too (saying that there are differences between the two consoles). Then there are a lot of the usuals saying it's just PR without any proof, apparently.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Everyone will use ML or other things to upscale games. Do you know exactly how they modified the CUs to accelerate things? Is it standard RDNA2? Do you have some paper, or are you just imagining? An Xbox engineer is saying they customized the HW... Quantic Dream (which is not related to MS in any way, but much more to Sony) says the same thing. Other devs have said it too (saying that there are differences between the two consoles). Then there are a lot of the usuals saying it's just PR without any proof, apparently.

Not saying they have not done anything, either of them, but people would have spotted extra units in the published die shots. The evidence we have is that they have added support for executing 4/8-bit INT operations at 8x/4x the rate of 32-bit ones, like they did for FP16 ops (RPM, 2x the rate of FP32 ones), as well as new mixed-precision operations. You are still using the same CUs that are normally running graphics or compute shaders, unlike RTX cards with their Tensor Cores.
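As a back-of-the-envelope check of those packed-math rates (a minimal sketch, assuming only the public ~12.15 TFLOPS FP32 figure and the 2x/4x/8x packing described above):

Code:
# Peak-rate scaling from packed math: one FP32 lane can instead process
# 2x FP16, 4x INT8 or 8x INT4 values per cycle (rates from the post above).
FP32_TFLOPS = 12.15  # XSX public peak FP32 figure

rates = {"FP32": 1, "FP16 (RPM)": 2, "INT8": 4, "INT4": 8}
for precision, factor in rates.items():
    unit = "TFLOPS" if precision.startswith("FP") else "TOPS"
    print(f"{precision}: {FP32_TFLOPS * factor:g} {unit}")

# FP32: 12.15 TFLOPS, FP16 (RPM): 24.3 TFLOPS,
# INT8: 48.6 TOPS, INT4: 97.2 TOPS -- i.e. the ~49/~97 TOPS quoted below.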


These are the official statements by MS and by AMD:
"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations.

[Attached: slides with the official MS and AMD statements]


Vs.

Turing (nVIDIA)
[Attached: NVIDIA Turing Tensor Core slide]


97 TOPS shared (XSX) vs 500 TOPS dedicated (RTX)...
 
Last edited:

Omni_Manhatten

Neo Member
The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning." Andrew Goosen > Online Warriors.
- EUROGAMER

Again, MS added HW. What they did is not an RDNA2 standard feature. Sony can't just turn this on; MS had to add it if they wanted to use it, and Sony would also have had to add it. Then it takes an algorithm multitudes of millions of training pictures before it is even close to an effective ML super resolution. MS has spent a lot of time and money on ML, and MLSS is just one benefit they will have from it this gen. Sony went to production first. They used custom tools. AMD coordinated with MS on the desktop version of RDNA2, and even Nvidia still partners with them and uses their tech like DirectStorage. Yes, ML could be an advantage for Xbox, if they don't make it too difficult for devs to implement.
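For what it's worth, the "inference only needs low-precision integer weights" idea is easy to see with a toy symmetric INT8 quantization (illustrative sketch only; hypothetical values, no relation to either console's actual tooling):

Code:
import numpy as np

# Hypothetical FP32 weights from a trained layer
w = np.array([0.42, -1.37, 0.05, 2.11, -0.88], dtype=np.float32)

# Symmetric quantization: map the largest magnitude onto 127
scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)

# The heavy math can now run as INT8 ops (4x the packed FP32 rate on
# hardware that supports it); dequantize to inspect the rounding error
w_back = w_int8.astype(np.float32) * scale
print(w_int8)                      # [ 25 -82   3 127 -53]
print(np.abs(w - w_back).max())    # small, bounded by ~scale/2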
 

Panajev2001a

GAF's Pleasant Genius
The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning." Andrew Goosen > Online Warriors.
- EUROGAMER

Again, MS added HW. What they did is not an RDNA2 standard feature. Sony can't just turn this on; MS had to add it if they wanted to use it, and Sony would also have had to add it. Then it takes an algorithm multitudes of millions of training pictures before it is even close to an effective ML super resolution. MS has spent a lot of time and money on ML, and MLSS is just one benefit they will have from it this gen. Sony went to production first. They used custom tools. AMD coordinated with MS on the desktop version of RDNA2, and even Nvidia still partners with them and uses their tech like DirectStorage. Yes, ML could be an advantage for Xbox, if they don't make it too difficult for devs to implement.
Sony could have taken up this optional RDNA1 extension, see above.
 

Omni_Manhatten

Neo Member
Yes... yes it is.

Mixed-precision packed math for 4- and 8-bit INT ops is standard RDNA2. It was an option for RDNA1 too. Very standard fare.

You and others seem to be reading too much into the MS PR statement Kool-Aid.
It's not even 5 or 6 posts up that it shows the RDNA2 standard CU is FP16? Also, why do you internet warriors think you can say something and it is more credible than the guys who built the system? Like your knowledge defeats the system architects, who literally said they did this specific thing? You even have official AMD releases saying the standard is FP16, and you are still pushing that it's FP8/4? With official AMD docs showing FP16?
 
Last edited:
It's not even 5 or 6 posts up that it shows the RDNA2 standard CU is FP16? Also, why do you internet warriors think you can say something and it is more credible than the guys who built the system? Like your knowledge defeats the system architects, who literally said they did this specific thing? You even have official AMD releases saying the standard is FP16, and you are still pushing that it's FP8/4? With official AMD docs showing FP16?

You don't even understand this stuff.

FP16 is half-precision Floating Point. Andrew Goossen is talking about INT ops. RDNA CUs can do both. You should really educate yourself first before calling anyone else internet warriors. At least then you might have a chance at correctly interpreting what the XSX system's architect is actually saying.
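Since the two formats keep getting conflated in this thread, here is a tiny illustration of the difference (nothing console-specific, just the number formats):

Code:
import numpy as np

x = 0.1376

# FP16 = half-precision FLOATING point: sign + exponent + mantissa,
# so it stores an approximation of x directly, over a wide dynamic range.
as_fp16 = np.float16(x)

# INT8 = plain 8-bit INTEGER in [-128, 127]; to represent real values
# you pair it with a scale factor (that is what quantization does).
scale = 1.0 / 127.0
as_int8 = np.int8(round(x / scale))   # -> 17

print(as_fp16)             # ~0.1376, stored as a float
print(as_int8 * scale)     # 17 * (1/127) ~= 0.1339, integer + scale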
 

Omni_Manhatten

Neo Member
You don't even understand this stuff.

FP16 is half-precision Floating Point. Andrew Goossen is talking about INT ops. RDNA CUs can do both. You should really educate yourself first before calling anyone else internet warriors. At least then you might have a chance at correctly interpreting what the XSX system's architect is actually saying.
I absolutely do. You don't. The CU shader support had to be implemented for FP8/4. RDNA can do both; hell, GCN could do FP calcs, but not at integers of 8/4. Again, you can say it's standard RDNA2 but can't show one sign of evidence that it is. Then you say the shader support for INT8/4 is standard. It's ridiculous and wrong.
 

Riky

$MSFT
Indeed it's in the joint AMD/MS statement at the RDNA2 reveal that Series consoles go beyond just RDNA2 features,

"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."
 

J_Gamer.exe

Member
It does drop; there is a screenshot where it hits 37fps, check the DF comparison.
That's not in gameplay, that's that silly DF cutscene benchmark on one frame.

Try again.

Series X drops in gameplay, PS5 doesn't, suggesting it could easily raise res, probably to 4K, and have the same drops as Xbox.

You'd say the same if PS5 ran 4K with drops of 40% below 60fps or whatever it is and the Series X ran flawlessly with 40% less res.

To me that's a draw. It's dev choice. But Series X gets the nod, as the drops are infrequent, so the higher res on Xbox is worth it. But in terms of performance PS5 could have gone the 4K route with drops. I bet there would be hardly anything in it.
 

Riky

$MSFT
That's not in gameplay, that's that silly DF cutscene benchmark on one frame.

Try again.

Series X drops in gameplay, PS5 doesn't, suggesting it could easily raise res, probably to 4K, and have the same drops as Xbox.

You'd say the same if PS5 ran 4K with drops of 40% below 60fps or whatever it is and the Series X ran flawlessly with 40% less res.

To me that's a draw. It's dev choice. But Series X gets the nod, as the drops are infrequent, so the higher res on Xbox is worth it. But in terms of performance PS5 could have gone the 4K route with drops. I bet there would be hardly anything in it.

You've only got to start one of the Hitman 2 levels to see the PS5 drop in gameplay; it's in the same video.

Try again.
 

Hendrick's

If only my penis was as big as my GamerScore!
They are riding on the popularity of DLSS 2.0, which has about 5x the ML performance available in dedicated Tensor Cores, while XSX and PS5 are re-using shader cores and thus stealing resources away from graphics and compute shaders.
So they are stealing resources to free up resources? Makes sense.
 

Panajev2001a

GAF's Pleasant Genius
It's not even 5 or 6 posts up that it shows the RDNA2 standard CU is FP16? Also, why do you internet warriors think you can say something and it is more credible than the guys who built the system? Like your knowledge defeats the system architects, who literally said they did this specific thing? You even have official AMD releases saying the standard is FP16, and you are still pushing that it's FP8/4? With official AMD docs showing FP16?

The official docs show INT4/8, and I quoted the RDNA architecture PDF, AMD's official RDNA2 CU improvements explanations (see slide), as well as the full context from both the DF interview and Hot Chips.

See:

MS quotes ~12.15 TFLOPS at FP32 or ~97 TOPS at INT4 precision... 12.15 * 8 = 97.2...

Anyway, you are thinking of RTX's DLSS 2.0 and comparing it to the XSX GPU core that has, by MS's official numbers, about 1/5th of the INT4/INT8 performance for ML operations compared to RTX cards... and MS made it quite clear you are using the shader ALUs to do either compute, graphics, or ML.
 

BeardGawd

Banned
Indeed it's in the joint AMD/MS statement at the RDNA2 reveal that Series consoles go beyond just RDNA2 features,

"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."
Unless Sony themselves admit their own weaknesses people will continue to spread disinformation.
 

Panajev2001a

GAF's Pleasant Genius
SO they are stealing resources to free up resources? Makes sense.
:LOL:, call it leaving it to developers to allocate a finite number of compute units to perform work based on developer-decided priorities.

Call it stealing, reserving units, etc... it does not change things. You do not have 97 TOPS peak of ML power + 12.15 TFLOPS dedicated to graphics processing and general compute.
 

Panajev2001a

GAF's Pleasant Genius
Indeed it's in the joint AMD/MS statement at the RDNA2 reveal that Series consoles go beyond just RDNA2 features,

"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."

That has been covered; you keep posting a single marketing PR statement and refusing to interpret it with any data made available to you. Also, the point still stands that you are looking at DLSS 2.0, and those cards have about 5x the performance dedicated to ML ops alone (excluding the other shader ALUs).

You made fun of the post-RDNA2 enhancements Sony fans suggested based on the Sony statements released, and now you are conjuring hidden Tensor Cores that apparently add tons of extra performance on top of graphics and yet do not appear on any die shot?

This is Xbox One hidden dGPU all over again, isn’t it?
 
Last edited:

BeardGawd

Banned
:LOL:, call it leave it to developers to allocate a finite number of compute units to perform work based on developer decided work allocation.

Call it stealing, reserving units, etc... it does not change things. You do not have 97 TOPS peak of ML power + 12.15 TFLOPS dedicated to graphics processing and general compute.
The ML upscale happens at a different stage in the pipeline. So while, yes, the CUs are being used for this, it does not impact the available teraflops as much as you are implying.

It is true they only have about an RTX 2060's worth of machine learning to work with. But no one knows how taxing DLSS is on the Tensor Cores in the first place. There may be tons of headroom, making that point useless.
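Nobody outside the vendors has published the real numbers, but the headroom question is just frame-budget arithmetic; a sketch with entirely made-up inputs to show the shape of it:

Code:
# All inputs hypothetical -- no real figures have been published.
available_tops = 48.6        # XSX INT8 peak from the MS statement
efficiency = 0.5             # assume 50% of peak achievable in practice
network_gops = 40.0          # assumed cost of an upscaling net, GOPs per frame

upscale_ms = network_gops / (available_tops * 1000 * efficiency) * 1000
budget_ms = 1000 / 60        # 16.7 ms per frame at 60 fps

print(f"{upscale_ms:.2f} ms -> {upscale_ms / budget_ms:.0%} of a 60fps frame")
# With these made-up numbers: ~1.65 ms, ~10% of the frame. Whether that is
# "tons of headroom" depends entirely on the real per-frame cost.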
 

Panajev2001a

GAF's Pleasant Genius
I absolutely do. You don't. The CU shader support had to be implemented for FP8/4. RDNA can do both; hell, GCN could do FP calcs, but not at integers of 8/4. Again, you can say it's standard RDNA2 but can't show one sign of evidence that it is. Then you say the shader support for INT8/4 is standard. It's ridiculous and wrong.
RDNA1 docs from AMD (see INT8/INT4 support):
[Attached: AMD RDNA whitepaper excerpt showing INT8/INT4 support]


RDNA2 CU improvements detailed by AMD:
[Attached: AMD RDNA2 CU improvements slide]


See the rest here and the AMD documentation here:
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The ML upscale happens at a different stage in the pipeline. So while, yes, the CUs are being used for this, it does not impact the available teraflops as much as you are implying.
Not hugely, but it is neither uncommon nor undesirable to have the GPU start working on the next frame rather than stall until you are done. A stall is a stall.

It is true they only have about an RTX 2060's worth of machine learning to work with. But no one knows how taxing DLSS is on the Tensor Cores in the first place. There may be tons of headroom, making that point useless.
Whether it's DLSS or what MS suggested doing with textures, there is not really a fixed limit on what you may want to do with on-chip HW acceleration for ML. We are free to fly with PR and imagine the sky; this is not a personal attack or diss on XSX.
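The stall point is easiest to see as a toy timing model (hypothetical milliseconds; the assumption is that dedicated units can overlap ML for frame N with shading for frame N+1, while shared CUs cannot):

Code:
render_ms = 10.0   # hypothetical graphics + compute work per frame
upscale_ms = 2.0   # hypothetical ML super-resolution pass

# Shared CUs: the upscale occupies the same ALUs, so the two serialize
shared = render_ms + upscale_ms

# Dedicated units: upscale of frame N overlaps shading of frame N+1,
# so steady-state frame time is the longer of the two stages
dedicated = max(render_ms, upscale_ms)

print(f"shared CUs: {1000 / shared:.0f} fps")      # ~83 fps
print(f"dedicated:  {1000 / dedicated:.0f} fps")   # 100 fps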
 

Andodalf

Banned
Meanwhile we have a direct quote from a PlayStation principal engineer specifically stating the PS5 does not have the machine learning that XSX does.

I love how Sony fanboys say that PS engineers are the best ever and invented RDNA 4.0 eight years early, and then immediately say that PS engineers are super dumb and don't know anything about the PS5 and what it has.
 
I absolutely do. You don’t. The CU shader support had to be implemented for FP8/4. RDNA can do both he’ll GCN could do FP calcs but not at integers of 8/4. Again you can say it’s standard RDNA2 but can’t show 1 sign of evidence it is. Then say the shader support for Int8/4 is standard. It’s ridiculous and wrong.

The evidence is up there, posted by Panajev2001a. You simply chose to ignore it because it doesn't fit your narrative.

Even the RDNA1 white paper talks about allowing an option for mixed precision packed math for INT8/4 for ML ops.

Don't ask anyone to prove their claims with evidence when you've shown all of nothing to support your BS arguments, all the while continuing to mix up FP and INT in your posts (demonstrating your complete lack of understanding of the difference between the two). I'm not wasting my precious time digging up info to correct your misinformation simply because you can't be arsed to educate yourself.
 
Last edited:

MonarchJT

Banned
Sony could have taken up this optional RDNA1 extension, see above.
My fucking god, it's all over again like in the other thread.
We have official MS statements,
ex-Sony-exclusive devs' statements on XSX,
Hot Chips information,
rumors.
And some go around saying "it's nothing, they can't do ML..."
We have nothing, zero, nada, not a single thing about the PS5,
and the same usuals say "Sony could add the same... Sony has ML HW".

Ooookay... RDNA3, guys, RDNA3. Or it will be hidden inside the geometry engine like everything else, because they don't have to advertise it since it is not standard RDNA2 or DX12 compatible, no? I don't understand how one can have constructive discussions if one side continues to deny the evidence of the facts and official information... downplaying it constantly. And instead we have to drink in any fictional speculation created by fans as fact. Some people should accept that without official information, the more likely case is that THERE ARE NO implementations of these features; otherwise anyone in their right mind would have advertised them.
 
Last edited:

Omni_Manhatten

Neo Member
Indeed it's in the joint AMD/MS statement at the RDNA2 reveal that Series consoles go beyond just RDNA2 features,

"Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution."
The PS5 has accelerated RT like the Xbox. That we know. But even when DF kept asking Cerny whether, outside the RT structure, PS5 has support for INT8/4, Cerny said he was saving that for a later deep dive they were planning.
The evidence is up there, posted by Panajev2001a. You simply chose to ignore it because it doesn't fit your narrative.

Even the RDNA1 white paper talks about allowing an option for mixed precision packed math for INT8/4 for ML ops.

Don't ask anyone to prove their claims with evidence when you've shown all of nothing to support your BS arguments, all the while continuing to mix up FP and INT in your posts (demonstrating your complete lack of understanding of the difference between the two). I'm not wasting my precious time digging up info to correct your misinformation simply because you can't be arsed to educate yourself.
I love it when people use this but don't want to post the very next line of info that came with it: "To ensure compatibility with the older GCN instruction set, the RDNA SIMDs in Navi support mixed-precision compute. This makes the new Navi GPUs suitable for not only gaming workloads (FP32), but also for scientific (FP64) and AI (FP16) applications. The RDNA SIMD improves latency by 2x in wave32 mode and by 44% in wave64 mode." I mean, how many more times is this stuff going to be argued on the internet?
Like I said, show me the actual evidence. I even shared the stuff that his argument comes from. Did they indeed go as far as MS on the shaders to support it? DF has asked. The dev who made the article of this topic isn't just talking out his butt. It's been an actual question for a while. Even Cerny said he would answer it later.

 

Panajev2001a

GAF's Pleasant Genius
That was clearly damage control. I respect the original quote more as Sony didn't dictate to him what he should say:
[Attached: screenshot of the engineer's original quote]

Sure, let's cherry-pick and complain about others cherry-picking. So the other statement he made mirrors Cerny's Road to PS5 presentation, but it is invalid "because reasons", in a thread where some are arguing for essentially a hidden AI accelerator, like the hidden dGPU in the Xbox One, because that would make the marketing PR more believable. C'mon...

Take both or take neither.
[Attached: screenshot of the engineer's follow-up statement]
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The PS5 has accelerated RT like the Xbox. That we know. But even when DF kept asking Cerny whether, outside the RT structure, PS5 has support for INT8/4, Cerny said he was saving that for a later deep dive they were planning.

I love it when people use this but don't want to post the very next line of info that came with it: "To ensure compatibility with the older GCN instruction set, the RDNA SIMDs in Navi support mixed-precision compute. This makes the new Navi GPUs suitable for not only gaming workloads (FP32), but also for scientific (FP64) and AI (FP16) applications. The RDNA SIMD improves latency by 2x in wave32 mode and by 44% in wave64 mode." I mean, how many more times is this stuff going to be argued on the internet?
Like I said, show me the actual evidence. I even shared the stuff that his argument comes from. Did they indeed go as far as MS on the shaders to support it? DF has asked. The dev who made the article of this topic isn't just talking out his butt. It's been an actual question for a while. Even Cerny said he would answer it later.


We shared actual evidence and you keep ignoring it, not really bothering to read it, answering with the same portion of the PR statement of your choice or cherry-picking the DF interview, and reading out of the quotes what you want to read into them.
 

MonarchJT

Banned
That was clearly damage control. I respect the original quote more as Sony didn't dictate to him what he should say:
[Attached: screenshot of the engineer's original quote]
He said (two times) that it is Navi-based (not Big Navi), and FOR THIS REASON it doesn't have ML... it's very clear English (not like mine, ahaha). I hope that the day we find out whether PS5 does or doesn't have HW support for INT4, it will be clear evidence of which architecture it is based on. That day a lot of people will be called out, being right or completely rotten wrong.

And if that day I hear as an excuse that they just deliberately turned off support for INT4... I'll laugh hard.
 
Last edited: