
Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores, Says Quantic Dream

geordiemp

Member
Yes P3MM is also scheduled for massive gains solely due to machine learning. I'll leave the other 2 I spotted here a mystery in case other users want a go.

I have not read much about deep ML applications, not my thing. That IPC gain was from the recent AMD white paper on shared L1 caches: when some applications need to go hunting for data that is not private to their L1, a mesh-like sharing of L1 would reduce cache misses.

The main benefit for GAMING was BVH; a lot of the deep learning stuff is for other applications and not directly exciting for gaming with 16 ms frame times.

So it is unclear if the shared L1 is part of Infinity Cache or if it is more of a DL application and won't be in gaming GPUs. The AMD paper has a separate section for deep ML.

We shall see as that level of detail was not in the AMD presentation.
 
Last edited:
I have not read much about deep ML applications, not my thing. That IPC gain was from the recent AMD white paper on shared L1 caches: when some applications need to go hunting for data that is not private to their L1, a mesh-like sharing of L1 would reduce cache misses.

The main benefit was BVH; a lot of the deep learning stuff is for other applications and not directly exciting for gaming with 16 ms frame times, so it is unclear if the shared L1 is part of Infinity Cache or if it is more of a DL application and won't be in gaming GPUs.

The paper has a separate section for deep ML.

We shall see as that level of detail was not in the AMD presentation.
I just updated my post to touch on caching before seeing this.
 

longdi

Banned
Yes P3MM is also scheduled for massive gains solely due to machine learning, but everyone expects caching will see large performance boosts; not everyone believes ML software optimizations will yield large performance gains.

I'll leave the other 2 I spotted here a mystery in case other users want a go.

Interesting you think we can get up to 65% boost for XSX dML. Do you mean by doing super resolution scaling?

How many CUs do you think it needs?
XSX has 52 CUs; can you do ML concurrently in each CU, or do you need to dedicate a few CUs for it?
Seems MS made a smart choice to have hardware ML. :messenger_ok:
 

TGO

Hype Train conductor. Works harder than it steams.
Wow, a PlayStation-centric developer being honest. It's actually quite refreshing to see.
I was gonna say, can he be trusted? Because usually we get the "oh, they are a PlayStation-centric developer" etc, etc.
But yes, when they say what you wanna hear, it seems they can 😁
 

MarkMe2525

Gold Member
I remember speculating that MS might introduce the ML hardware they put into the Surface Pro X into the Series systems. I guess it makes more sense to integrate the feature set into the APU vs a discrete chip. I speculate all these feature sets, while not being transformative on their own, will pay dividends starting with 2nd-wave games and onward.
 

longdi

Banned
I remember speculating that MS might introduce the ML hardware they put into the Surface Pro X into the Series systems. I guess it makes more sense to integrate the feature set into the APU vs a discrete chip. I speculate all these feature sets, while not being transformative on their own, will pay dividends starting with 2nd-wave games and onward.

Yup, the 2nd wave of XSXS games should be rather spectacular, when the engines start making maximum use of these next-gen features.
That untapped potential may be the surprise for us. :messenger_ok:
 

bender

What time is it?
Microsoft buying Quantic Dream confirmed.

David Cage right now.

 

rnlval

Member
Keep double counting things (higher bandwidth —> more MCs —> more L2), adding charts, etc... Beyond agreeing that it has more memory bandwidth and also a higher TFLOPS rating to feed, I am not sure; we can exchange platitudes and talk over each other all day. You know what is also a thing? Memory contention... one system is feeding more units off of the same L1 cache than the other.

You can of course fall back on the bigger and shared L2, and if you have enough running threads on the chip you can hide the higher latency that the increased L1 cache misses will cause... but you are still giving a latency advantage to the GPU feeding a smaller number of CUs from the same L1 pool.
Wrong
1. Each advantage reduces render time consumption.
2. Each stage in the graphics pipeline may have a dependency on the previous stage.

Ultimately, being memory bandwidth bound is a PITA.
 

Neo_game

Member
Being memory bandwidth bound can be a PITA.

BiG NAVI's very-fast 128 MB Infinity Cache being based on Zen 2's L3 cache IP is interesting.

Of course having more BW always helps. They do have something called a cache scrubber, though. Also, the SX has just 8 fewer CUs than the 6800 yet there is a massive difference in BW, so it is a bottleneck for the SX as well.
 

rnlval

Member
Of course having more BW always helps. They do have something called a cache scrubber, though. Also, the SX has just 8 fewer CUs than the 6800 yet there is a massive difference in BW, so it is a bottleneck for the SX as well.
The RX 6800 has three Shader Engines, 96 ROPs backed by a very fast 128 MB Infinity Cache based on Zen 2's Level 3 cache IP, and a 2105 MHz boost frequency.

The relationship between the RX 6800 and XSX is like that between the Titan RTX and RTX 2080, respectively.

Unlike the XBO's 32 MB eSRAM, BiG NAVI's 128 MB Infinity Cache has delta color compression, now in its 4th generation of improvements.
 
Last edited:

BeardGawd

Banned
Phil said he had an XSX in his house last December - so how does that tie into the above?
I'm starting to think this was all bait and switch for Sony/press. Make Sony and the press believe they were further along than they were.
 
Last edited:
Overall, I think that the pure analysis of the hardware shows an advantage for Microsoft, but experience tells us that hardware is only part of the equation: Sony showed in the past that their consoles could deliver the best-looking games because their architecture and software were usually very consistent and efficient.

And this will repeat in the new gen.
We will see.
 

Panajev2001a

GAF's Pleasant Genius
Wrong
1. Each advantage reduces render time consumption.
2. Each stage in the graphics pipeline may have a dependency on the previous stage.

Ultimately, being memory bandwidth bound is a PITA.

Nobody is saying being memory bandwidth bound is helpful, but you are overestimating the advantage of one and underestimating the other; as time goes by we will learn more. They have similar bandwidth profiles, especially once you count the lower TFLOPS target and the XSX's “dual” memory pool setup, not to mention likely higher pressure on the L2 and external memory systems due to higher L1 misses (each L1 section in the Shader Array serving 4 more CUs).

Still... we will disagree. You are still double counting (or reusing a structure meant to support a higher TFLOPS target and a less efficient L1 caching system, and to cover for the dual main memory speed split, also as an additional multiplier... lack of cache scrubbers for more unnecessary flushes, etc...) more than showing a cumulative pipeline effect: most of your argument so far has been that you are saying the equivalent of “I have two CPU cores so I can process two threads in parallel AND I have 100% more registers than you have, which increases my advantage further”.
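A rough back-of-the-envelope on that "4 more CUs per L1 section" point, assuming the commonly reported layout of 4 shader arrays per GPU with one graphics L1 slice per array, and 52/36 active CUs (those counts are assumptions from public spec sheets, not from this thread):

# Sketch of the CU-per-shared-L1 contention argument above.
# Assumed topology: 4 shader arrays per GPU, one graphics L1 slice per array.
# CU counts are the commonly reported active counts (assumption).
ARRAYS = 4
xsx_cus, ps5_cus = 52, 36

xsx_per_array = xsx_cus / ARRAYS   # 13 CUs sharing one L1 slice
ps5_per_array = ps5_cus / ARRAYS   # 9 CUs sharing one L1 slice

print(f"XSX: {xsx_per_array:.0f} CUs per L1 slice, PS5: {ps5_per_array:.0f}")
print(f"Delta: {xsx_per_array - ps5_per_array:.0f} more CUs contending per slice")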
 
Last edited:

geordiemp

Member
I'm starting to think this was all bait and switch for Sony/press. Make Sony and the press believe they were further along than they were.

No, he was not lying; the hardware was done, the software was not. That was my point, actually. Puts an end to all the "we waited for this and that"...
 
Last edited:

Astral Dog

Member
And that’s why MS is building first-party studios 😉 it’s a multi-year plan for both companies.
I think Sony's first party will look a little better when all is said and done, if only because the Series S exists. It won't be a really striking difference, but MS can't go crazy with XSX specs as long as they are making games for 2 systems while Sony only has the PS5 to worry about, so GOW, Naughty Dog, etc. will still be seen as the highest-quality exclusives this gen.
 
Interesting you think we can get up to 65% boost for XSX dML. Do you mean by doing super resolution scaling?

How many CUs do you think it needs?
XSX has 52 CUs; can you do ML concurrently in each CU, or do you need to dedicate a few CUs for it?
Seems MS made a smart choice to have hardware ML. :messenger_ok:
I think they will dedicate up to 61% of the CUs as reprogrammed ML compute units once those functions are implemented.

We are entering an era where ML is supposed to increase performance through software optimization by up to 300%. ML will also increase pure hardware performance, once applied to all architectures, by up to and over 105 quadrillion times, as is taught in Computer Science literature.
 
Last edited:

longdi

Banned
I think they will dedicate up to 61% of the CUs as reprogrammed ML compute units once those functions are implemented.

We are entering an era where ML is supposed to increase performance through software optimization by up to 300%. ML will also increase pure hardware performance, once applied to all architectures, by up to and over 105 quadrillion times, as is taught in Computer Science literature.

Wow, but only 40% of the CUs for traditional graphicky work?
What is the ML supposed to do besides training with image quality upscaling? 🤔
 

reksveks

Member
If you want to take the opinion of someone smarter than me on it, please see below.

(tried to paste an image of the thread but failed)

I personally just want to know if the image upscaling algo based on DirectML is a global model or a game-specific one.
 
Last edited:
Wow, but only 40% of the CUs for traditional graphicky work?
What is the ML supposed to do besides training with image quality upscaling? 🤔
Yes, leaving about 39% for traditional rendering methods. The difference would essentially be not just upscaling methods as we know them with Nvidia DLSS 2.0, but the possibility of physics/polygons/AI/animations all being recompiled with ML-functional CU units on hand. Essentially all traditional methods of rendering could become supercharged ML rendering techniques, if Microsoft applies enough research and development this generation to do so.
 

KRYPT83

Member
Overall, I think that the pure analysis of the hardware shows an advantage for Microsoft, but experience tells us that hardware is only part of the equation: Sony showed in the past that their consoles could deliver the best-looking games because their architecture and software were usually very consistent and efficient.

And this will repeat in the new gen.
No, Sony first-party devs cut a lot of corners and hide a lot of stuff with their artists; that's why Sony 1st-party games employ like 200 artists compared to 20-30 or whatever the average norm is.
 

martino

Member
Yes, leaving about 39% for traditional rendering methods. The difference would essentially be not just upscaling methods as we know them with Nvidia DLSS 2.0, but the possibility of physics/polygons/AI/animations all being recompiled with ML-functional CU units on hand. Essentially all traditional methods of rendering could become supercharged ML rendering techniques, if Microsoft applies enough research and development this generation to do so.
That reads like what async compute was this gen (+50% performance! for everybody!).
You will see it... but expecting it to be common practice this gen (and the numbers announced)... I don't believe it for one second.
 
Last edited:

longdi

Banned
Yes, leaving about 39% for traditional rendering methods. The difference would essentially be not just upscaling methods as we know them with Nvidia DLSS 2.0, but the possibility of physics/polygons/AI/animations all being recompiled with ML-functional CU units on hand. Essentially all traditional methods of rendering could become supercharged ML rendering techniques, if Microsoft applies enough research and development this generation to do so.

Does this mean developers' jobs are easier, as those possibilities have been trained and accumulated by automation?
Instead of ploughing through boring code, dev teams can use the freed time to creatively improve their games.

So XSXS games may exhibit beautiful graphics and more innovative gameplay...
MS sure made a beastly console of exciting potential. :messenger_open_mouth:

Can't wait for the 2022 games, but I'm going with Game Pass on PC.
 

reksveks

Member
No, Sony first-party devs cut a lot of corners and hide a lot of stuff with their artists; that's why Sony 1st-party games employ like 200 artists compared to 20-30 or whatever the average norm is.
I don't know if I would say 'cut a lot of corners', but they definitely spend a lot more time on getting the most out of the PS4, and they also don't prioritise resolution and fps over graphical fidelity, which I kinda hope Microsoft starts doing for certain genres of games.
 

longdi

Banned
I wonder how a next-gen game built around dML would look and play?

This is a good curveball; I thought it was only about image upscaling. But we may see games unlike today's with hardware ML?
 

MrFunSocks

Banned
I was gonna say, can he be trusted? Because usually we get the "oh, they are a PlayStation-centric developer" etc, etc.
But yes, when they say what you wanna hear, it seems they can 😁
I don’t trust a word David Cage says, but machine learning has already proven itself in many ways. DLSS has been talked about in here as almost the holy grail for gaming - 4K resolution at 1080p processing power. If MS can get something like that, it could definitely be a game changer. It could mean the Series X gets full ray tracing at near 4K/60 FPS quality on everything.
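For context on the "4K resolution at 1080p processing power" framing, a quick pixel-count sanity check (plain arithmetic, not a claim about any specific upscaler):

# Pixel-count arithmetic behind DLSS-style reconstruction claims.
# Rendering internally at 1080p and reconstructing to 4K means the GPU
# shades roughly a quarter of the pixels per frame, minus the cost of
# the upscaling pass itself (not modeled here).
native_4k = 3840 * 2160       # 8,294,400 pixels
internal_1080p = 1920 * 1080  # 2,073,600 pixels
print(native_4k / internal_1080p)  # 4.0x fewer pixels shaded per frame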
 
Last edited:

martino

Member
No, Sony first-party devs cut a lot of corners and hide a lot of stuff with their artists; that's why Sony 1st-party games employ like 200 artists compared to 20-30 or whatever the average norm is.

Simple, people: whether they like or don't like what they see implies how much tech is in the stuff.
 
Last edited:

betrayal

Banned
Back when Microsoft announced the Xbox One, they promised it would provide "unprecedented graphics and worlds" powered by the cloud.

So does anyone still remember how you couldn't help but be amazed when you saw the Xbox One and the cloud together in action?

Yes, me neither.
 
Last edited:
Principal dev lead Shawn Hargreaves:

"When gamers purchase PC graphics hardware with the DX12 Ultimate logo or an Xbox Series X, they can do so with the confidence that their hardware is guaranteed to support all next generation graphics hardware features "

The truth is, DX12 Ultimate, announced before the console was fully revealed, has a complete Machine Learning API on top. Why would Microsoft then go on to release a machine that does not fully utilize DX12 Ultimate? Microsoft assures everyone that "if you need Machine Learning for your game, DirectML will meet your Machine Learning needs."

Machine Learning in computer science - when applied to graphics/compute workloads - sounds like some kind of far-fetched solution that won't be possible for another hundred years. But ML saturation across all platforms and software is just around the corner according to Computer Science. And on such subjects, Computer Science never falters.

And within the DirectML docs, they insinuate ML can be applied to shaders - not just a specific group of shaders but ALL shaders - shaders are standard rendering fare - which means the DX12U implementation of DirectML will support and boost all rendering techniques eventually. According to Computer Science, those gains are no less than exactly, precisely - 3 months away.



"If you're counting milliseconds, and squeezing frame times, then DirectML will meet your machine learning needs.

Why does DirectML perform so well?

There's a good reason why you shouldn't just write your own convolution operator (for example) as HLSL in a compute shader. The advantage of using DirectML is that—apart from saving you the effort of homebrewing your own solution—it has the capability of giving you much better performance than you could achieve with a hand-written, general-purpose compute shader for something like convolution, or lstm.

DirectML achieves this in part due to the Direct3D 12 metacommands feature. Metacommands expose a black box of functionality up to DirectML, which allows hardware vendors to provide DirectML access to vendor hardware-specific and architecture-specific optimizations. Multiple operators—for example, convolution followed by activation—can be fused together into a single metacommand. Because of these factors, DirectML has the capability to exceed the performance of even a very well-written hand-tuned compute shader written to run on a breadth of hardware.

Metacommands are part of the Direct3D 12 API, although they're loosely coupled to it. A metacommand is identified by a fixed GUID, while almost everything else about it (from its behavior and semantics to its signature and name) are not strictly part of the Direct3D 12 API. Rather, a metacommand is specified between its author and the driver that implements it. In this case, the author is DirectML. Metacommands are Direct3D 12 primitives (just like Draws and Dispatches), so they can be recorded into a command list and scheduled for execution together.

DirectML accelerates your machine learning workloads using an entire suite of machine learning metacommands. Consequently, you don't need to write vendor-specific code paths to achieve hardware acceleration for your inferencing. If you happen to run on an AI-accelerated chip, then DirectML uses that hardware to greatly accelerate operations such as convolution. You can take the same code that you wrote, without modifying it, run it on a chip that's not AI-accelerated (perhaps the integrated GPU in your laptop), and still get great GPU hardware acceleration. And if no GPU is available, then DirectML falls back to the CPU."


"Why DirectML
Many new real-time inferencing scenarios have been introduced to the developer community over the last few years through cutting edge machine learning research. Some examples of these are super resolution, denoising, style transfer, game testing, and tools for animation and art. These models are computationally expensive but in many cases are required to run in real-time. DirectML enables these to run with high-performance by providing a wide set of optimized operators without the overhead of traditional inferencing engines.

To further enhance performance on the operators that customers need most, we work directly with hardware vendors, like Intel, AMD, and NVIDIA, to directly to provide architecture-specific optimizations, called metacommands. Newer hardware provides advances in ML performance through the use of FP16 precision and designated ML space on chips. DirectML’s metacommands provide vendors a way of exposing those advantages through their drivers to a common interface. Developers save the effort of hand tuning for individual hardware but get the benefits of these innovations.

DirectML is already providing some of these performance advantages by being the underlying foundation of WinML, our high-level inferencing engine that powers applications outside of gaming, like Adobe, Photos, Office, and Intelligent Ink. The API flexes its muscles by enabling applications to run on millions of Windows devices today."
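For anyone curious what "DirectML will meet your machine learning needs" looks like from the application side, here is a minimal sketch of running an inference model (say, a super-resolution network) through ONNX Runtime's DirectML execution provider on Windows; the model file name is a placeholder, not a real asset:

# Minimal sketch: ONNX model inference on the GPU via DirectML, using the
# onnxruntime-directml package. "upscaler.onnx" is a hypothetical model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # CPU fallback if no DML device
)

# Dummy 1080p RGB frame in NCHW float32 layout, as many models expect.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
input_name = session.get_inputs()[0].name

upscaled = session.run(None, {input_name: frame})[0]
print(upscaled.shape)  # e.g. (1, 3, 2160, 3840) for a 2x upscaler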
 
Last edited:

JCK75

Member
David Cage, CEO and founder of Quantic Dream, highlighted the Xbox Series X's shader cores as more suitable for machine learning tasks, which could allow the console to perform a DLSS-like performance-enhancing image reconstruction technique.

All I know is that owning an Nvidia Shield (2019) and watching what it does to my 1080p videos is nothing short of magical, and the idea of this sort of AI technology being put towards gaming is hella exciting, so I hope there is truth to this.
 

reksveks

Member
All I know is that owning an Nvidia Shield (2019) and watching what it does to my 1080p videos is nothing short of magical, and the idea of this sort of AI technology being put towards gaming is hella exciting, so I hope there is truth to this.

So tempted to get the 'new' Shield; I've got the old one. I think Nvidia has started to do local upscaling on GeForce Now, so I'm interested to see how well that works.
 

longdi

Banned
Principal dev lead Shawn Hargreaves:



The truth is, DX12 Ultimate, announced before the console was fully revealed, has a complete Machine Learning API on top. Why would Microsoft then go on to release a machine that does not fully utilize DX12 Ultimate? Microsoft assures everyone that "if you need Machine Learning for your game, DirectML will meet your Machine Learning needs."

Machine Learning in computer science - when applied to graphics/compute workloads - sounds like some kind of far-fetched solution that won't be possible for another hundred years. But ML saturation across all platforms and software is just around the corner according to Computer Science. And on such subjects, Computer Science never falters.

And within the DirectML docs, they insinuate ML can be applied to shaders - not just a specific group of shaders but ALL shaders - shaders are standard rendering fare - which means the DX12U implementation of DirectML will support and boost all rendering techniques eventually. According to Computer Science, those gains are no less than exactly, precisely - 3 months away.



"If you're counting milliseconds, and squeezing frame times, then DirectML will meet your machine learning needs.

Why does DirectML perform so well?

There's a good reason why you shouldn't just write your own convolution operator (for example) as HLSL in a compute shader. The advantage of using DirectML is that—apart from saving you the effort of homebrewing your own solution—it has the capability of giving you much better performance than you could achieve with a hand-written, general-purpose compute shader for something like convolution, or lstm.

DirectML achieves this in part due to the Direct3D 12 metacommands feature. Metacommands expose a black box of functionality up to DirectML, which allows hardware vendors to provide DirectML access to vendor hardware-specific and architecture-specific optimizations. Multiple operators—for example, convolution followed by activation—can be fused together into a single metacommand. Because of these factors, DirectML has the capability to exceed the performance of even a very well-written hand-tuned compute shader written to run on a breadth of hardware.

Metacommands are part of the Direct3D 12 API, although they're loosely coupled to it. A metacommand is identified by a fixed GUID, while almost everything else about it (from its behavior and semantics to its signature and name) are not strictly part of the Direct3D 12 API. Rather, a metacommand is specified between its author and the driver that implements it. In this case, the author is DirectML. Metacommands are Direct3D 12 primitives (just like Draws and Dispatches), so they can be recorded into a command list and scheduled for execution together.

DirectML accelerates your machine learning workloads using an entire suite of machine learning metacommands. Consequently, you don't need to write vendor-specific code paths to achieve hardware acceleration for your inferencing. If you happen to run on an AI-accelerated chip, then DirectML uses that hardware to greatly accelerate operations such as convolution. You can take the same code that you wrote, without modifying it, run it on a chip that's not AI-accelerated (perhaps the integrated GPU in your laptop), and still get great GPU hardware acceleration. And if no GPU is available, then DirectML falls back to the CPU."


"Why DirectML
Many new real-time inferencing scenarios have been introduced to the developer community over the last few years through cutting edge machine learning research. Some examples of these are super resolution, denoising, style transfer, game testing, and tools for animation and art. These models are computationally expensive but in many cases are required to run in real-time. DirectML enables these to run with high-performance by providing a wide set of optimized operators without the overhead of traditional inferencing engines.

To further enhance performance on the operators that customers need most, we work directly with hardware vendors, like Intel, AMD, and NVIDIA, to directly to provide architecture-specific optimizations, called metacommands. Newer hardware provides advances in ML performance through the use of FP16 precision and designated ML space on chips. DirectML’s metacommands provide vendors a way of exposing those advantages through their drivers to a common interface. Developers save the effort of hand tuning for individual hardware but get the benefits of these innovations.

DirectML is already providing some of these performance advantages by being the underlying foundation of WinML, our high-level inferencing engine that powers applications outside of gaming, like Adobe, Photos, Office, and Intelligent Ink. The API flexes its muscles by enabling applications to run on millions of Windows devices today."

Wow... didn't see that coming. No wonder David Cage was so bullish about the XSX hardware advantage.
Do you think XSXS can tap into the power of the cloud, since the xBlades have the same DML hardware as the consoles?
IIRC Nvidia DLSS taps into the cloud too.

With Xbox, the userbase is much wider than Nvidia's GPU line, so the nodes and connections are much wider, sorta like Bitcoin: every XSXS can share the power of dML over the cloud!
Can't wait for the 2022 games to showcase all the exciting DX12U features. :messenger_open_mouth:

Imo people are underestimating Xbox's design strengths, despite MS having openly talked about it with confidence.
Guess Phil has given up and will just let the games show it. 🤷‍♀️
 
Last edited:

assurdum

Banned
Principal dev lead Shawn Hargreaves:



The truth is, DX12 Ultimate, announced before the console was fully revealed, has a complete Machine Learning API on top. Why would Microsoft then go on to release a machine that does not fully utilize DX12 Ultimate? Microsoft assures everyone that "if you need Machine Learning for your game, DirectML will meet your Machine Learning needs."

Machine Learning in computer science - when applied to graphics/compute workloads - sounds like some kind of far-fetched solution that won't be possible for another hundred years. But ML saturation across all platforms and software is just around the corner according to Computer Science. And on such subjects, Computer Science never falters.

And within the DirectML docs, they insinuate ML can be applied to shaders - not just a specific group of shaders but ALL shaders - shaders are standard rendering fare - which means the DX12U implementation of DirectML will support and boost all rendering techniques eventually. According to Computer Science, those gains are no less than exactly, precisely - 3 months away.



"If you're counting milliseconds, and squeezing frame times, then DirectML will meet your machine learning needs.

Why does DirectML perform so well?

There's a good reason why you shouldn't just write your own convolution operator (for example) as HLSL in a compute shader. The advantage of using DirectML is that—apart from saving you the effort of homebrewing your own solution—it has the capability of giving you much better performance than you could achieve with a hand-written, general-purpose compute shader for something like convolution, or lstm.

DirectML achieves this in part due to the Direct3D 12 metacommands feature. Metacommands expose a black box of functionality up to DirectML, which allows hardware vendors to provide DirectML access to vendor hardware-specific and architecture-specific optimizations. Multiple operators—for example, convolution followed by activation—can be fused together into a single metacommand. Because of these factors, DirectML has the capability to exceed the performance of even a very well-written hand-tuned compute shader written to run on a breadth of hardware.

Metacommands are part of the Direct3D 12 API, although they're loosely coupled to it. A metacommand is identified by a fixed GUID, while almost everything else about it (from its behavior and semantics to its signature and name) are not strictly part of the Direct3D 12 API. Rather, a metacommand is specified between its author and the driver that implements it. In this case, the author is DirectML. Metacommands are Direct3D 12 primitives (just like Draws and Dispatches), so they can be recorded into a command list and scheduled for execution together.

DirectML accelerates your machine learning workloads using an entire suite of machine learning metacommands. Consequently, you don't need to write vendor-specific code paths to achieve hardware acceleration for your inferencing. If you happen to run on an AI-accelerated chip, then DirectML uses that hardware to greatly accelerate operations such as convolution. You can take the same code that you wrote, without modifying it, run it on a chip that's not AI-accelerated (perhaps the integrated GPU in your laptop), and still get great GPU hardware acceleration. And if no GPU is available, then DirectML falls back to the CPU."


"Why DirectML
Many new real-time inferencing scenarios have been introduced to the developer community over the last few years through cutting edge machine learning research. Some examples of these are super resolution, denoising, style transfer, game testing, and tools for animation and art. These models are computationally expensive but in many cases are required to run in real-time. DirectML enables these to run with high-performance by providing a wide set of optimized operators without the overhead of traditional inferencing engines.

To further enhance performance on the operators that customers need most, we work directly with hardware vendors, like Intel, AMD, and NVIDIA, to directly to provide architecture-specific optimizations, called metacommands. Newer hardware provides advances in ML performance through the use of FP16 precision and designated ML space on chips. DirectML’s metacommands provide vendors a way of exposing those advantages through their drivers to a common interface. Developers save the effort of hand tuning for individual hardware but get the benefits of these innovations.

DirectML is already providing some of these performance advantages by being the underlying foundation of WinML, our high-level inferencing engine that powers applications outside of gaming, like Adobe, Photos, Office, and Intelligent Ink. The API flexes its muscles by enabling applications to run on millions of Windows devices today."
And he says not to put the emphasis on MS's words, but that it's all about facts reported in computer science across 7 books.
 
Last edited:

Dodkrake

Banned
The validity of this text falls apart here:

The CPU of the two consoles uses the same processor (slightly faster on Xbox Series X), the GPU of the Xbox also seems more powerful, as it is 16% faster than the PS5 GPU, with a bandwidth that is 25% faster. The transfer speed from the SSD is twice as fast on PS5.

This is pretty much all wrong or misleading

  • Effective Bandwidth
    • ((10 × 560) + (6 × 336)) / 16 = 476 GB/s
  • Bandwidth difference
    • |448 − 476| / [(448 + 476) / 2] × 100 = 6.06%
  • SSD transfer speed
    • 5.5 GB/s / 2.4 GB/s = 2.29×
So, the appropriate quote would be

The CPU of the two consoles uses the same processor (slightly faster on Xbox Series X), and the GPU of the Xbox also seems more powerful, with an effective bandwidth advantage of around 6.1%. The PS5's GPU is running at faster clock speeds and its SSD transfer speed is over twice as fast.
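A quick script version of the arithmetic above, using the same 560/336/448 GB/s and 5.5/2.4 GB/s figures quoted in the post:

# Reproducing the post's numbers.
# XSX: 10 GB addressable at 560 GB/s, 6 GB at 336 GB/s; PS5: 448 GB/s uniform.
xsx_effective = (10 * 560 + 6 * 336) / 16                      # capacity-weighted average
ps5 = 448
diff_pct = abs(ps5 - xsx_effective) / ((ps5 + xsx_effective) / 2) * 100
ssd_ratio = 5.5 / 2.4                                          # PS5 vs XSX raw SSD throughput
print(xsx_effective, round(diff_pct, 2), round(ssd_ratio, 2))  # 476.0 6.06 2.29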
 

longdi

Banned
The validity of this text falls apart here:



This is pretty much all wrong or misleading

  • Effective Bandwidth
    • ((10 × 560) + (6 × 336)) / 16 = 476 GB/s
  • Bandwidth difference
    • |448 − 476| / [(448 + 476) / 2] × 100 = 6.06%
  • SSD transfer speed
    • 5.5 GB/s / 2.4 GB/s = 2.29×

So, the appropriate quote would be

:messenger_open_mouth: the lengths people go to to downplay the XSX design?

560/448 = 25% better for graphics tasks.

The 6 GB is for less bandwidth-intensive tasks.
The PS5 is not going to have all 16 GB for graphics only; it even has a 4K UI.
 

Jon Neu

Banned
Machine learning is on PS5 too; Cerny mentioned it in the Road to PS5. But I doubt you ever cared to know, when all you heard about PS5 was the SSD, Tempest, or the controller... Neither machine will offer something significant in that sense. There aren't exclusive hardware parts dedicated to it as with Nvidia.

How cute.

An actual dev highlights how MS can exploit a superior part of the hardware over the PS5 and suddenly the armchair developers of GAF (who, by mere coincidence, favour Sony) try to downplay it.

All at the same time that we have like 500 threads about the magical properties of Sony's SSD.
 
And he says not to put the emphasis on MS's words, but that it's all about facts reported in computer science across 7 books.
I said specifically that I was not deriving the 303% gains in software performance and the 105 quadrillion X improvement in hardware performance from words on machine learning from Microsoft or any Series X hype - which I didn't. You are inferring it is now unfair to cite direct references to DX12 Ultimate - and, I suppose, insinuating that the new XSX is not fully DX12 Ultimate compliant.

Either way, citing material about DirectML stemming directly from DX12 Ultimate docs and quotes is relevant. And of particular importance if you believe Microsoft when they claim the Xbox Series X is fully DX12 Ultimate compliant.
 
Last edited:

longdi

Banned
How cute.

An actual dev highlights how MS can exploit a superior part of the hardware over the PS5 and suddenly the armchair developers of GAF (who, by mere coincidence, favour Sony) try to downplay it.

All at the same time that we have like 500 threads about the magical properties of Sony's SSD.

Imo their new directive/target is FUD about the XSX being 'RDNA 1.5' and having 'slow' clocks.

Can't believe some of them are getting away with the FUD. 🤷‍♀️
 

assurdum

Banned
How cute.

An actual dev highlights how MS can exploit a superior part of the hardware over the PS5 and suddenly the armchair developers of GAF (who, by mere coincidence, favour Sony) try to downplay it.

All at the same time that we have like 500 threads about the magical properties of Sony's SSD.
In what way have I downplayed it? Do you know the measure of how such a hardware feature lands in real-world performance? Please enlighten this poor peasant.
 
Last edited:

assurdum

Banned
Imo their new directive/target is FUD about the XSX being 'RDNA 1.5' and having 'slow' clocks.

Can't believe some of them are getting away with the FUD. 🤷‍♀️
The fact is, some of you don't seem to really know what we are talking about and don't care at all. They heard David Cage (now David Cage is suddenly a developer expert) claim generic (and obvious) stuff and, gotcha, you see, superior hardware, the others just downplayed it. The reality? It's not exactly clear what advantage we will see. Is the Series X more powerful? Sure, but how much more is all still to be seen; it's more relative than absolute. I'm not saying it has no advantage, but I'm a bit skeptical we will see any revolutionary stuff not possible on PS5.
 
Last edited:
And he says not to put the emphasis on MS's words, but that it's all about facts reported in computer science across 7 books.

Also... you do realize Computer Scientists invented the very term Machine Learning - don't you? But you simply scoff at the idea of Machine Learning being a prime subject in Computer Science?

Yes, it is in fact the main subject of nearly 5 of the 7 books in total.

It's important people realize Machine Learning is not just about propagating data with more statistical efficiency; it is about programming software that is actually able to code and create software far superior to what humans are capable of. And then, furthermore, applying machine learning to words, architectures, material science - quite literally every single physical entity will eventually be bolstered by Machine Learning and improved upon. Software optimization due to Machine Learning is merely topical and the tip of the iceberg here. According to Computer Science, the era before Machine Learning is implemented in full (roughly January 3rd 2021) should be considered literally as the Dark Ages before Machine Learning became common.
 

Jon Neu

Banned
In what way have I downplayed it? Do you know the measure of how such a hardware feature lands in real-world performance? Please enlighten us poor peasants.

Well, we know that an actual dev - a historical Sony dev, no less - has highlighted that feature and has said it can make a difference for the Xbox Series X against the PS5.

And then we have a Sony fan on GAF saying "meh, these consoles will not do anything with it, so it doesn't matter anyway. Disperse, people, disperse!".

If MS's better-suited console for ML can bring something even remotely similar to DLSS, then that's obviously an incredible boost of resources. No matter how hard you try to dismiss it.
 

assurdum

Banned
I said specifically that I was not deriving the 303% gains in software performance and the 105 quadrillion X improvement in hardware performance from words on machine learning from Microsoft or any Series X hype - which I didn't. You are inferring it is now unfair to cite direct references to DX12 Ultimate - and, I suppose, insinuating that the new XSX is not fully DX12 Ultimate compliant.

Either way, citing material about DirectML stemming directly from DX12 Ultimate docs and quotes is relevant. And of particular importance if you believe Microsoft when they claim the Xbox Series X is fully DX12 Ultimate compliant.
I find it unbelievable that you continue to maintain that such graphics libraries can land such an enormous boost in performance, and you continue to cite monumental books and theories which, seriously, have nothing to do with how it will work concretely on Series X and its limited hardware (from the perspective of machine learning).
 
Last edited:

assurdum

Banned
Well, we know that an actual dev - a historical Sony dev, no less - has highlighted that feature and has said it can make a difference for the Xbox Series X against the PS5.

And then we have a Sony fan on GAF saying "meh, these consoles will not do anything with it, so it doesn't matter anyway. Disperse, people, disperse!".

If MS's better-suited console for ML can bring something even remotely similar to DLSS, then that's obviously an incredible boost of resources. No matter how hard you try to dismiss it.
I'm not trying to dismiss anything. I can tell PR stuff from real-world performance. If Sony said something similar, you would laugh straight in their face.
 