
Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores, Says Quantic Dream

assurdum

Banned
Also... You do realize Computer Scientists invented the very term Machine Learning - don't you? But you simply scoff at the idea of Machine Learning being a prime subject in Computer Science?

Yes it is in fact the main subject of nearly 5 books in totality spanning in total 7 books.

It's important people realize Machine Learning is not just about propagating data with more statistical efficiency; it is about programming software that is actually able to code and create software far superior to what humans are capable of. And furthermore, applying Machine Learning to words, architectures, material science - quite literally every single physical entity will eventually be bolstered by Machine Learning and improved upon. Software optimization due to Machine Learning is merely topical and the tip of the iceberg here. According to Computer Science, the era before Machine Learning is implemented in full (roughly January 3rd 2021) should be considered literally the Dark Ages before Machine Learning became common.
And again, what the hell does that have to do with how it will work on Series X? There is no dedicated hardware for machine learning. If it uses the CUs too much for machine learning, those CUs can't be used for other graphical tasks. Now, I'm not saying we won't see any benefit, but let's not pretend we'll see a whole new world of performance just because of that.
 
Last edited:
I find it unbelievable that you continue to claim such libraries can deliver such an enormous boost in performance, and that you keep citing monumental books and theories which, seriously, offer no concrete proof of how any of it will work on Series X and its limited hardware (from the perspective of machine learning).
And the fan comes out, you are obviously anti-graphics.

I am sorry to inform you, Machine Learning - be it Microsoft's or its competitors' - is scheduled to increase hardware performance this era by 300%. The fact that I reference DirectML in regards to DX12Ultimate only proves that one of the two manufacturers in question has discussed that it is fully implemented in their API.

Are you implying, without a doubt, that Xbox Series X is not a DirectX12Ultimate part? Or that it is an inferior DirectX12Ultimate part? Or that, wrongly, DX12Ultimate does not have Machine Learning in the form of DirectML?

You are also very seriously implying there is no direct correlation between Machine Learning and Computer Science? That these are theories?

No, they are proven and factual propagators of disruptive performance gains, as has been addressed as FACT by Computer Science for decades.

Say what you will, 303% and upwards of performance through software optimization is on the way due to Machine Learning, as is stated by Computer Science.


by Michael Hochster, PhD in Statistics from Stanford; Director of Research at Pandora, on Quora:
I don't think it makes sense to partition machine learning into computer science and statistics. Computer scientists invented the name machine learning, and it's part of computer science, so in that sense it's 100% computer science.

 

assurdum

Banned
Now you try to dismiss anything that you don't like as "PR stuff".

A veteran Sony dev doing "PR stuff" for Microsoft. The impressive coping logic of the Sony crowd.
He's not a Sony dev, to be fair. And again, I'm not dismissing it; I'm saying I don't expect miracles from ML, considering there's no dedicated hardware for it in Series X.
 

assurdum

Banned
And the fan comes out, you are obviously anti-graphics.

I am sorry to inform you, Machine Learning - be it Microsoft's or its competitors' - is scheduled to increase hardware performance this era by 300%. The fact that I reference DirectML in regards to DX12Ultimate only proves that one of the two manufacturers in question has discussed that it is fully implemented in their API.

Are you implying, without a doubt, that Xbox Series X is not a DirectX12Ultimate part? Or that it is an inferior DirectX12Ultimate part? Or that, wrongly, DX12Ultimate does not have Machine Learning in the form of DirectML?

You are also very seriously implying there is no direct correlation between Machine Learning and Computer Science? That these are theories?

No, they are proven and factual propagators of disruptive performance gains, as has been addressed as FACT by Computer Science for decades.

Say what you will, 303% and upwards of performance through software optimization is on the way due to Machine Learning, as is stated by Computer Science.


https://www.quora.com/How-much-of-m...ience-vs-statistics-1/answer/Michael-Hochster
A simple question: have you seen how machine learning works on Series X? Do you have a benchmark to show us? Concrete data about the benefit, not general chatter that just repeats again and again what machine learning is in the science? You can't even know how it concretely works on Series X.
 
Last edited:
And again, what the hell does that have to do with how it will work on Series X? There is no dedicated hardware for machine learning. If it uses the CUs too much for machine learning, those CUs can't be used for other graphical tasks. Now, I'm not saying we won't see any benefit, but let's not pretend we'll see a whole new world of performance just because of that.
Again, a CU that has been programmed to work as a standard Compute Unit, if reprogrammed to work as an ML Compute Unit, will maintain the same efficiency but use a different subset of FP operations. In fact, once applied, CU efficiency will only improve. You are trying to infer that this will hinder standard CU performance, and that is plainly wrong - there is simply no reason to reprogram a CU only to hinder performance. And you are also inferring that this is in fact the case and will possibly become practice.
 

assurdum

Banned
Again, a CU that has been programmed to work as a standard Compute Unit, if reprogrammed to work as an ML Compute Unit, will maintain the same efficiency but use a different subset of FP operations. In fact, once applied, CU efficiency will only improve. You are trying to infer that this will hinder standard CU performance, and that is plainly wrong - there is simply no reason to reprogram a CU only to hinder performance. And you are also inferring that this is in fact the case and will possibly become practice.
You do realize a CU needs to do other things besides ML instructions, right? How can it maintain the same efficiency for its "typical" operations if you also need to use it for ML data? That's why dedicated hardware is fundamental.
 
Last edited:

mrmeh

Member
A simple question: have you seen how machine learning works on Series X? Do you have a benchmark to show us? Concrete data about the benefit, not general chatter that just repeats again and again what machine learning is in the science? You can't even know how it concretely works on Series X.

I'm hoping it can auto-map Bono's face onto every in-game enemy for me to slaughter mercilessly
 
A simple question: have you seen how machine learning works on Series X? Do you have a benchmark to show us? Concrete data about the benefit, not general chatter that just repeats again and again what machine learning is in the science? You can't even know how it concretely works on Series X.
The only example of DirectML in use is not hardware-specific, and I'm not even sure it is actually DirectML though it is cited as such - but I imagine even when faced with concrete evidence you will continue grasping at air. The fact remains: ML is due to deliver disruptive performance gains of up to 300%. Period.


"DirectML - The API that could bring DLSS-like features to everyone

While DirectML hasn't been confirmed as a next-generation console feature, you can be sure that Microsoft has been considering the option heavily. Work on DirectML has been happening, at least publically, for as long as DXR has, making it likely that AMD is working on hardware DirectML support for its next-generation graphics cards.

Microsoft has already showcased the potential of machine learning in gaming applications, with the image below showcasing what happens when Machine Learning is used to upscale an image to four times its original resolution (basically from 1080p to 4K) to generate a sharper final image and reduced aliasing. The image below is a comparison between ML Super Sampling and bilinear upsampling.

This technique has also been showcased during one of Microsoft's SIGGRAPH 2018 tech talks. This talk, which is entitled "Deep Learning for Real-Time Rendering: Accelerating GPU Inferencing with DirectML and DirectX 12" showcases Nvidia hardware upscaling Playground Games' Forza Horizon 3 from 1080p to 4K using DirectML in real-time. DirectML has the potential to improve the graphical fidelity of future console and PC games. "
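For anyone wondering what the "ML super sampling vs. bilinear upsampling" comparison in the quoted article actually involves, here is a minimal sketch, assuming the onnxruntime-directml Python package plus a hypothetical ONNX super-resolution model ("sr_model.onnx") and input frame ("frame_1080p.png"); it is not the code from the SIGGRAPH talk, just an illustration of the two paths.

```python
# Minimal sketch: ML super-sampling via DirectML vs. plain bilinear upscaling.
# Assumes the onnxruntime-directml package plus a hypothetical ONNX
# super-resolution model ("sr_model.onnx") that takes NCHW float32 1080p frames.
import cv2
import numpy as np
import onnxruntime as ort


def bilinear_upscale(frame_bgr: np.ndarray) -> np.ndarray:
    """Baseline path: classic bilinear filter, 1920x1080 -> 3840x2160."""
    return cv2.resize(frame_bgr, (3840, 2160), interpolation=cv2.INTER_LINEAR)


def ml_upscale(frame_bgr: np.ndarray, session: ort.InferenceSession) -> np.ndarray:
    """ML path: run a super-resolution network on the GPU via DirectML."""
    # HWC uint8 -> NCHW float32 in [0, 1], the layout most SR models expect.
    x = frame_bgr.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]
    y = session.run(None, {session.get_inputs()[0].name: x})[0]
    # NCHW float32 -> HWC uint8 for display.
    y = np.clip(np.transpose(y[0], (1, 2, 0)) * 255.0, 0, 255)
    return y.astype(np.uint8)


if __name__ == "__main__":
    # DmlExecutionProvider routes the network onto the GPU's compute units;
    # CPU is listed only as a fallback if DirectML is unavailable.
    sess = ort.InferenceSession(
        "sr_model.onnx",
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],
    )
    frame = cv2.imread("frame_1080p.png")  # hypothetical 1080p input frame
    cv2.imwrite("out_bilinear.png", bilinear_upscale(frame))
    cv2.imwrite("out_ml.png", ml_upscale(frame, sess))
```

The point of the article's comparison is simply that the learned path can reconstruct detail the bilinear filter cannot, at the price of running a network on the GPU every frame.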


 

assurdum

Banned
The only example of DirectML in use is not hardware-specific, and I'm not even sure it is actually DirectML though it is cited as such - but I imagine even when faced with concrete evidence you will continue grasping at air. The fact remains: ML is due to deliver disruptive performance gains of up to 300%. Period.


"DirectML - The API that could bring DLSS-like features to everyone

While DirectML hasn't been confirmed as a next-generation console feature, you can be sure that Microsoft has been considering the option heavily. Work on DirectML has been happening, at least publically, for as long as DXR has, making it likely that AMD is working on hardware DirectML support for its next-generation graphics cards.

Microsoft has already showcased the potential of machine learning in gaming applications, with the image below showcasing what happens when Machine Learning is used to upscale an image to four times its original resolution (basically from 1080p to 4K) to generate a sharper final image and reduced aliasing. The image below is a comparison between ML Super Sampling and bilinear upsampling.

This technique has also been showcased during one of Microsoft's SIGGRAPH 2018 tech talks. This talk, which is entitled "Deep Learning for Real-Time Rendering: Accelerating GPU Inferencing with DirectML and DirectX 12" showcases Nvidia hardware upscaling Playground Games' Forza Horizon 3 from 1080p to 4K using DirectML in real-time. DirectML has the potential to improve the graphical fidelity of future console and PC games. "



The funny thing is that it's you who continues to grasp at air, not me, without providing concrete data about the use of the CUs or the performance gain on Series X, because obviously it doesn't exist yet. Showing an upscaled picture doesn't tell you how much it costs the GPU, or whether we need to sacrifice other stuff to achieve such reconstruction. That's my whole point. Again, I'm not saying it's useless or that it gives no benefit. But we have to remember that no dedicated hardware means sacrificing the hardware while other stuff could be done with it; it's not a free-cost practice.
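To put that "not a free cost" argument in concrete terms, here is a back-of-the-envelope sketch. Only the ~49 TOPS int8 figure comes from Microsoft's own disclosures; the network size and utilization numbers are purely hypothetical placeholders, since nothing like this has been published for Series X. The structure of the argument is just arithmetic: inference running on the same CUs consumes part of the same frame budget.

```python
# Back-of-the-envelope: what ML inference on shared CUs costs per frame.
# Only the ~49 TOPS int8 peak comes from Microsoft's own disclosures;
# the network size and achieved-utilization numbers below are made up
# purely to illustrate the shape of the trade-off.

INT8_TOPS = 49e12            # Series X peak int8 throughput on its CUs
FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps


def inference_cost_ms(ops_per_frame: float, utilization: float) -> float:
    """Milliseconds of GPU time an inference pass takes at a given efficiency."""
    return ops_per_frame / (INT8_TOPS * utilization) * 1000


# Hypothetical upscaling network: 100 giga-ops per frame at 50% utilization.
cost = inference_cost_ms(ops_per_frame=100e9, utilization=0.50)
print(f"ML pass: {cost:.1f} ms of a {FRAME_BUDGET_MS:.1f} ms frame "
      f"({100 * cost / FRAME_BUDGET_MS:.0f}% of the budget)")
# -> ~4 ms, roughly a quarter of a 60 fps frame; that GPU time is
#    not available for shading, which is exactly the contention above.
```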
 
Last edited:
The funny thing is that it's you who continues to grasp at air, not me, without providing concrete data about the use of the CUs or the performance gain on Series X, because obviously it doesn't exist yet. Showing an upscaled picture doesn't tell you how much it costs the GPU, or whether we need to sacrifice other stuff to achieve such reconstruction. That's my whole point. Again, I'm not saying it's useless or that it gives no benefit. But we have to remember that no dedicated hardware means sacrificing the hardware for other stuff.
Here - a video supposedly showing DirectML in action, though I doubt it will embed here properly. However, it shows a DirectML variant and its performance/resolution gains - a whole 1-hour seminar.


Next you will tell me, so what - that's not on Series X. And to this I will say: Computer Science states it doesn't need to be shown on Series X - a piece of hardware that does not yet officially exist to the consumer - to plainly see that ML, in this case DirectML, provides and will continue to provide disruptive performance gains.

And on that note I leave you with this: so what - you're some nameless person on a forum claiming Microsoft is outright lying - but Computer Science says ML will be able to provide a 300% jump in performance on this hardware merely by utilizing better software - and it will. The End.
 

assurdum

Banned
Here - a video supposedly showing DirectML in action, though I doubt it will embed here properly. However, it shows a DirectML variant and its performance/resolution gains - a whole 1-hour seminar.


Next you will tell me, so what - that's not on Series X. And to this I will say: Computer Science states it doesn't need to be shown on Series X - a piece of hardware that does not yet officially exist to the consumer - to plainly see that ML, in this case DirectML, provides and will continue to provide disruptive performance gains.

And on that note I leave you with this: so what - you're some nameless person on a forum claiming Microsoft is outright lying - but Computer Science says ML will be able to provide a 300% jump in performance on this hardware merely by utilizing better software - and it will. The End.
I give up. I don't know if you are just obtuse or if you want to deliberately ignore the fact that when ML has no dedicated hardware at all, the percentage of benefit changes drastically, because it costs precious resources. That's all from me.
 
Last edited:

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
I know I get all my console info from a non-technical CEO who doesn't know either APU design or machine learning. He must be repeating something an MS rep told him.
 

longdi

Banned
I wonder how close XSX is to the 6800.

The 6800 has the same game clocks. The 6800 has 15% more shader cores. The raw memory bandwidth is probably about the same. But with Infinity Cache, the 6800 can have up to twice the bandwidth.

But at 1440p, let's say Series X is about 25% slower than the 6800; since it comes with RDNA2 efficiency features, it may run at about 2080 Ti levels in BF5? :messenger_open_mouth:

 
Last edited:
I give up. I don't know if you are just obtuse or if you want to deliberately ignore the fact that when ML has no dedicated hardware at all, the percentage of benefit changes drastically, because it costs precious resources. That's all from me.

I have plainly stated: ML is not something that needs dedicated hardware - you have ignored this countless times now. Even without directly dedicated DirectML parts, ML will bolster this generation of hardware by 300% through software optimization.

But as not just myself but countless others have pointed out, your message is "Please ignore Machine Learning gains as those disruptive examples have not been officially shown off on the Series X."

However, I have substantiated the use of DirectML - a feature the Xbox Series X clearly utilizes - in the above video. You are insistent that Microsoft is possibly lying about how well its fully compliant DirectX12Ultimate console handles DirectML.

All I've ever said is that Computer Science states a 303% increase in performance - based solely on software optimization - is quickly on the horizon, no matter which variant of ML is in question.

You don't seem to believe Microsoft DirectML needs dedicated DirectX12Ultimate hardware, and/or that the Series X is fully DirectX12Ultimate compliant.
 
Last edited:

Dodkrake

Banned
Here - a video supposedly showing DirectML in action though I doubt it will post here specifically. However it show's a DirectML variant and it's performance/resolution gains - a whole 1 hour seminar.


Next you will tell me, so what - that's not on series X and to this I will say - Computer Science states it doesn't need to be shown on Series X - a piece of hardware that does not yet officially exist to the consumer - to plainly see that ML in this case, DIrectML - provides and will continue to provide disruptive performance gains.

And on that note I leave you with this, so what - you're some nameless person on a forum claiming Microsoft are out rightly lying - but Computer Science says ML will be able to provide a 300% jump in performance on this hardware through merely utilizing better software - and it will. The End.

What computer science?
 

assurdum

Banned
I have plainly stated: ML is not something that needs dedicated hardware - you have ignored this countless times now. Even without directly dedicated DirectML parts, ML will bolster this generation of hardware by 300% through software optimization.

But as not just myself but countless others have pointed out, your message is "Please ignore Machine Learning gains as those disruptive examples have not been officially shown off on the Series X."

However, I have substantiated the use of DirectML - a feature the Xbox Series X clearly utilizes - in the above video. You are insistent that Microsoft is possibly lying about how well its fully compliant DirectX12Ultimate console handles DirectML.

All I've ever said is that Computer Science states a 303% increase in performance - based solely on software optimization - is quickly on the horizon, no matter which variant of ML is in question.

You don't seem to believe Microsoft DirectML needs dedicated DirectX12Ultimate hardware, and/or that the Series X is fully DirectX12Ultimate compliant.
No, I don't believe it's free of cost on the Series X; that's why for the third time I repeat that dedicated hardware is fundamental if it's to give a 303% increase in performance, lol. Straight and simple. But of course such a 303% could refer to other stuff like coding efficiency and so on, which isn't directly involved in real graphics performance; MS is not new to playing with words.
 
Last edited:

inflation

Member
I have plainly stated: ML is not something that needs dedicated hardware - you have ignored this countless times now. Even without directly dedicated DirectML parts, ML will bolster this generation of hardware by 300% through software optimization.

But as not just myself but countless others have pointed out, your message is "Please ignore Machine Learning gains as those disruptive examples have not been officially shown off on the Series X."

However, I have substantiated the use of DirectML - a feature the Xbox Series X clearly utilizes - in the above video. You are insistent that Microsoft is possibly lying about how well its fully compliant DirectX12Ultimate console handles DirectML.

All I've ever said is that Computer Science states a 303% increase in performance - based solely on software optimization - is quickly on the horizon, no matter which variant of ML is in question.

You don't seem to believe Microsoft DirectML needs dedicated DirectX12Ultimate hardware, and/or that the Series X is fully DirectX12Ultimate compliant.
I'm just curious: all those numbers, "303%", where do they come from? Do you have any sources? I would love to see that. The last time I read papers about image reconstruction was back in 2018, when SRGAN was a hot topic.
 

assurdum

Banned
I'm honestly trying to find where people are pulling this 303% improvement from.
Machine learning books. Please don't push him to post that stuff again. It's quite annoying and pointless, if I may say so. Mostly because we don't know yet how much hardware resource ML requires on Series X, the real benefit in the big scheme of things, the cost vs. the pros, etc.
 
Last edited:

Lysandros

Member
The validity of this text falls here



This is pretty much all wrong or misleading

  • Effective bandwidth
    • (10 × 560 + 6 × 336) / 16 = 476 GB/s
  • Bandwidth difference
    • |448 − 476| / [(448 + 476) / 2] × 100 = 6.06%
  • SSD transfer speed
    • 5.5 GB/s / 2.4 GB/s = 2.29×

So, the appropriate quote would be
As I mentioned earlier, I find Cage's comparison extremely simplistic for a developer.
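The arithmetic above is easy to reproduce. A quick sketch using only the figures already quoted in this thread (Series X's split 560/336 GB/s memory pools, the uniform 448 GB/s pool it is being compared against, and the 5.5 vs. 2.4 GB/s raw SSD speeds):

```python
# Reproduces the three figures above from the publicly quoted specs.

# Series X memory: 10 GB at 560 GB/s + 6 GB at 336 GB/s (16 GB total).
xsx_effective_bw = (10 * 560 + 6 * 336) / 16  # weighted average, GB/s
# The uniform 448 GB/s pool it is being compared against (PS5).
other_bw = 448

# Percentage difference, taken relative to the mean of the two figures.
bw_diff_pct = abs(other_bw - xsx_effective_bw) / ((other_bw + xsx_effective_bw) / 2) * 100

# Raw SSD transfer speeds: 5.5 GB/s vs. 2.4 GB/s.
ssd_ratio = 5.5 / 2.4

print(f"Effective bandwidth:  {xsx_effective_bw:.0f} GB/s")  # 476 GB/s
print(f"Bandwidth difference: {bw_diff_pct:.2f}%")           # 6.06%
print(f"SSD speed ratio:      {ssd_ratio:.2f}x")             # 2.29x
```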
 
Last edited:
I am going to restate that RDNA2 hardware is most likely reprogrammable - as all CU units typically are - and if so, that means yes... Series X has dedicated DirectML hardware if Microsoft chooses to reprogram those CUs as ML Compute Units.

This should only deliver higher performance gains, otherwise there would be no reason to reprogram those CUs; reprogramming a CU to provide ML datasets should not reduce the performance of that CU.
 

longdi

Banned
I wonder how close XSX is to the 6800.

The 6800 has the same game clocks. The 6800 has 15% more shader cores. The raw memory bandwidth is probably about the same. But with Infinity Cache, the 6800 can have up to twice the bandwidth.

But at 1440p, let's say Series X is about 25% slower than the 6800; since it comes with RDNA2 efficiency features, it may run at about 2080 Ti levels in BF5? :messenger_open_mouth:


Continuing along this line of thought with regard to AMD's newly released graphs...

Remember when we were surprised by this: somehow Series X can run Gears 5 as fast as a 2080 Ti.


So now that we've seen AMD RDNA2 in action, AMD has released numbers which place the 6800 as clearly faster than the 2080 Ti in Gears 5 (~71 fps).


Extrapolating, Series X performing as well as a 2080 Ti is still in line (60 fps).
But at that, Series X is only around 20% slower than the 6800!

Series X: 52 RDNA2 CUs at 1825 MHz (sustained), with 10GB at 560 GB/s and 6GB at 336 GB/s
6800: 60 RDNA2 CUs at 1815 MHz (game clock), with 16GB at 512 GB/s + 128MB Infinity Cache with up to a >2X bandwidth multiplier

Series X has about 13% fewer CUs than the 6800 (the 6800 has ~15% more) and no Infinity Cache, so translating to a 20% in-game deficit is imo reasonable.

$499 for a full gaming system that's only 20% slower
vs
$578 for a 6800 alone.

Santa Phil!
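For what it's worth, the raw-compute side of that estimate falls straight out of the CU counts and clocks quoted above. A quick sketch, assuming the standard RDNA2 figure of 64 lanes per CU at 2 FP32 ops per clock:

```python
# Raw FP32 throughput from CU count and clock.
# RDNA2: 64 lanes per CU, 2 ops per clock (FMA); clocks in GHz give GFLOPS.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000


xsx = tflops(52, 1.825)     # Series X: 52 CUs at a sustained 1825 MHz
rx6800 = tflops(60, 1.815)  # RX 6800: 60 CUs at the 1815 MHz game clock

print(f"Series X: {xsx:.1f} TFLOPS")                        # ~12.1
print(f"RX 6800:  {rx6800:.1f} TFLOPS")                     # ~13.9
print(f"Raw compute gap: {(rx6800 / xsx - 1) * 100:.0f}%")  # ~15%
```

The remaining few points of the 20% in-game estimate would then have to come from the bandwidth/Infinity Cache difference, which is harder to reason about on paper.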
 
ML is obviously something MS intends to push this gen. The DirectML API, together with additional hardware alterations to allow 4-bit and 8-bit integer ops, shows a deliberate direction, one they expect to deliver a result.
MS also has massive supercomputer banks to do any offline training as well.

In the Hot Chips presentation MS claimed a 3-10x gain from ML.

One thing about this generation, from MS's point of view anyway, is the push for efficiencies. VRS, mesh shading, ML, SFS, etc. are all about maximizing performance.
I think there should be more legs in the XSX compared to last gen.
 
An ex-Sony dev states the shader cores are ML shader cores, Microsoft states it's a fully compliant DirectX12Ultimate part, and Computer Science says that if a CU is reprogrammable it can essentially be programmed to work as a dedicated ML Compute Unit at no loss in performance.

There is no reason reprogramming a CU would cause less efficiency - it seems like the Series X certainly has the capability to utilize its resources as dedicated compute units if it intends and/or wants to.
 

Tschumi

Member
I want someone, somewhere, to give me a concrete footage example, right now, of the XSX enjoying any kind of visual advantage over any peer platform. Forget about possible advantages - I mean show it happening. As far as I can tell, the top 3-5 games we've seen next-gen gameplay for have been PS titles, or PC. My point here is: big words don't bake a chocolate cake. I just made that up.
 

rnlval

Member
I wonder how close XSX is to the 6800.

The 6800 has the same game clocks. The 6800 has 15% more shader cores. The raw memory bandwidth is probably about the same. But with Infinity Cache, the 6800 can have up to twice the bandwidth.

But at 1440p, let's say Series X is about 25% slower than the 6800; since it comes with RDNA2 efficiency features, it may run at about 2080 Ti levels in BF5? :messenger_open_mouth:


In terms of raw DCU compute power, the reference RX 6800 has up to 33% more at its 2.1 GHz boost clock when compared to the XSX GPU's 12.1 TFLOPS. The RX 6800's gap over the XSX can be wider still with AIB factory OCs, e.g. the ASUS ROG Strix variants.
 
Last edited:
I want someone, somewhere, to give me a concrete footage example, right now, of the XSX enjoying any kind of visual advantage over any peer platform. Forget about possible advantages - I mean show it happening. As far as I can tell, the top 3-5 games we've seen next-gen gameplay for have been PS titles, or PC. My point here is: big words don't bake a chocolate cake. I just made that up.
There are going to be some multi plats at launch, or around launch, where we will start to see what is what.
 

longdi

Banned
In terms of raw DCU compute power, the reference RX 6800 has up to 33% more at its 2.1 GHz boost clock when compared to the XSX GPU's 12.1 TFLOPS. The RX 6800's gap over the XSX can be wider still with AIB factory OCs, e.g. the ASUS ROG Strix variants.

Imo let's take 1815 MHz, because that is the average sustainable game clock at 250W that we will get to see.

I'm not sure we can get a sustained 2.1 GHz out of the 6800 even with AIB OC cards. 2 GHz is probably a fairer, more conservative figure, and easier at 300W.

Still between 15-30% faster, say midpointing to 25%, which ain't half bad a showing. :messenger_open_mouth:
 
Last edited:

Dogman

Member
And again, what the hell does that have to do with how it will work on Series X? There is no dedicated hardware for machine learning. If it uses the CUs too much for machine learning, those CUs can't be used for other graphical tasks. Now, I'm not saying we won't see any benefit, but let's not pretend we'll see a whole new world of performance just because of that.
Buddy, you've been getting trolled for like 3 pages now... How do you read all this BS about 13 books and 7 books of the "Computer Science Curriculum", and him talking about a quadrillion-percent leap in compute, and still think he's serious? He's been playing you like a fiddle this whole time.
 

rnlval

Member
Imo let's take 1815 MHz, because that is the average sustainable game clock at 250W that we will get to see.

I'm not sure we can get a sustained 2.1 GHz out of the 6800 even with AIB OC cards. 2 GHz is probably a fairer, more conservative figure, and easier at 300W.

Still between 15-30% faster, say midpointing to 25%, which ain't half bad a showing. :messenger_open_mouth:
From https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/34.html
Reference RX 5700 XT's average clock speed is 1887 Mhz which is close to max boost's 1905 Mhz paper spec.

1887 Mhz average clock speed is 99% of 1905 Mhz max boost paper specs.

ASUS ROG Strix RX 5700 XT has 2 GHz average clock speeds.

Both of my cards, an MSI RTX 2080 Ti Gaming X Trio (1st gaming PC / Blender3D hardware RT rig) and an ASUS Dual RTX 2080 EVO OC (2nd gaming PC rig), run beyond their reference clock speed specs.

I plan to upgrade towards RTX 3080 Ti (MSI brand, Gaming X SKU type) and perhaps RX 6800 XT (ASUS brand) in the future.
 

Dunnas

Member
Buddy, you've been getting trolled for like 3 pages now... How do you read all this BS about 13 books and 7 books of the "Computer Science Curriculum", and him talking about a quadrillion-percent leap in compute, and still think he's serious? He's been playing you like a fiddle this whole time.
Seriously, it was going on so long I actually started wondering if I'm the idiot and was missing something, and maybe he wasn't actually just talking complete nonsense.
 

longdi

Banned
From https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/34.html
Reference RX 5700 XT's average clock speed is 1887 Mhz which is close to max boost's 1905 Mhz paper spec.

1887 Mhz average clock speed is 99% of 1905 Mhz max boost paper specs.

ASUS ROG Strix RX 5700 XT has 2 GHz average clock speeds.

Both of my cards, an MSI RTX 2080 Ti Gaming X Trio (1st gaming PC / Blender3D hardware RT rig) and an ASUS Dual RTX 2080 EVO OC (2nd gaming PC rig), run beyond their reference clock speed specs.

I plan to upgrade towards RTX 3080 Ti (MSI brand, Gaming X SKU type) and perhaps RX 6800 XT (ASUS brand) in the future.

Nvidia's advertised clocks are more conservative than AMD's, so that much we know. My 1080 Ti can play most games at higher clocks.

That said, AT's 5700 XT sample got about 1800 MHz in a suite of current games.
Imo that's the critical thing: how sure can we be that the 5700 XT runs above game clock in the next wave of game engines? The Division 2 is already hovering at game clock.

That's why AMD has game clocks: to cater for future, tougher titles and cover their asses.

At any rate, even with TDP and cooling keeping the 5700 XT more down to earth, the card is still able to hit high clockspeeds. More than half of the games in our benchmark suite average clockspeeds of 1800MHz or better, and a few get to 1900MHz. Even The Division 2, which appears to be the single most punishing game in this year's suite in terms of clockspeeds, holds the line at 1760MHz, right above AMD's official game clock.

 
Last edited:

rnlval

Member
Nvidia's advertised clocks are more conservative than AMD's, so that much we know. My 1080 Ti can play most games at higher clocks.

That said, AT's 5700 XT sample got about 1800 MHz in a suite of current games.
Imo that's the critical thing: how sure can we be that the 5700 XT runs above game clock in the next wave of game engines? The Division 2 is already hovering at game clock.

That's why AMD has game clocks: to cater for future, tougher titles and cover their asses.



FYI, Anandtech's 9-game sample is smaller than Techpowerup's 21-game sample.
 

longdi

Banned
FYI, Anandtech's 9-game sample is smaller than Techpowerup's 21-game sample.

Yes, but it takes just one new-ish game engine like Division 2 to show why AMD has game clocks and defines them as such:

‘Game Frequency’ is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary. GD-147
 

Lethal01

Member
Machine learning books. Please don't push him to post that stuff again. It's quite annoying and pointless, if I may say so. Mostly because we don't know yet how much hardware resource ML requires on Series X, the real benefit in the big scheme of things, the cost vs. the pros, etc.

I think the point he is making is that ML will be used in the design phase of software - basically a development tool rather than something in the system. In that case, it's important to remember that those types of optimization from ML can be applied to both systems.

Edit: nvm, it's just stupidity and you should give up.

ML hardware may be extremely helpful this gen; gotta wait and see.
 
Last edited:
Am I the weird one? I usually am, but I'm taking his comments as "hardware", and software as "the suite the console maker has created to help the game maker make games". There seems to be a section of folks who think he's talking about "software" in terms of games rather than the surrounding software that helps game makers make the game... am I the wrong one here? Or what? Yeah, no shit, good games have always been what makes consoles successful... but if the software that helps developers make the games is not good, then it does matter, right? The software around the PS3 was SHIT! Game makers had to relearn how to code games 🤦‍♂️ what a shit show that was.
 

Md Ray

Member
Continuing along this line of thought with regard to AMD's newly released graphs...

Remember when we were surprised by this: somehow Series X can run Gears 5 as fast as a 2080 Ti.


So now that we've seen AMD RDNA2 in action, AMD has released numbers which place the 6800 as clearly faster than the 2080 Ti in Gears 5 (~71 fps).


Extrapolating, Series X performing as well as a 2080 Ti is still in line (60 fps).
But at that, Series X is only around 20% slower than the 6800!

Series X: 52 RDNA2 CUs at 1825 MHz (sustained), with 10GB at 560 GB/s and 6GB at 336 GB/s
6800: 60 RDNA2 CUs at 1815 MHz (game clock), with 16GB at 512 GB/s + 128MB Infinity Cache with up to a >2X bandwidth multiplier

Series X has about 13% fewer CUs than the 6800 (the 6800 has ~15% more) and no Infinity Cache, so translating to a 20% in-game deficit is imo reasonable.

$499 for a full gaming system that's only 20% slower
vs
$578 for a 6800 alone.

Santa Phil!
RX 6800: Ultra, 4K (fixed)
Series X: Ultra, 4K (dynamic) dropping down to even 1080p.

Santa Phil, for sure. :messenger_grinning:
 

geordiemp

Member
Well, last gen we had "the power of the cloud" that would transform all games. Name one game, anyone? Crackdown 3?

This gen it's maybe, possibly, ML for games (not deep learning or the usual ML applications - GAMES, with a 16 ms frame time).

Can anyone name any games enhanced by ML? (Other than DLSS upscaling from Nvidia using dedicated Tensor cores.)

Nope. Well, maybe next year...
 
Last edited:
We know this from the Digital Foundry spec reveal, they customised the hardware.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."
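Those TOPS numbers are consistent with int8/int4 work simply being packed onto the existing shader ALUs at 4x and 8x the FP32 rate, rather than coming from separate tensor hardware, which is the crux of this whole thread. A quick sanity check against the ~12.15 TFLOPS FP32 figure:

```python
# Sanity check: the quoted TOPS figures as packed-math multiples of FP32 rate.
CUS, LANES, OPS_PER_CLOCK, CLOCK_GHZ = 52, 64, 2, 1.825

fp32_tflops = CUS * LANES * OPS_PER_CLOCK * CLOCK_GHZ / 1000  # ~12.15
int8_tops = fp32_tflops * 4   # 4 int8 ops packed per FP32 lane-op
int4_tops = fp32_tflops * 8   # 8 int4 ops packed per FP32 lane-op

print(f"FP32: {fp32_tflops:.2f} TFLOPS")  # 12.15
print(f"INT8: {int8_tops:.0f} TOPS")      # 49, matching Goossen's figure
print(f"INT4: {int4_tops:.0f} TOPS")      # 97, matching Goossen's figure
```

So the ML throughput is real, but it comes out of the same 52 CUs that do the shading.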
 