
Will ML be a differentiator between the consoles?

Panajev2001a

GAF's Pleasant Genius
RGT took you guys for a ride. I wish we could see the difference in his follower/subscription number from before he became an "insider" till now. Took advantage of people's thirst for info on the consoles.

No confirmation on enhancements PS5 = PS5 does not have any of them.

vs.

PCI-E 3 SSD slotted in XSX = we do not know how many enhancements they did to it, oh so many, you do not know!

Kinda summarises the attitude quite well.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Unless machines learn to make great exclusives, only games matter.

It is not a graphics race anymore; even the current gen has good enough graphics for the majority of gamers

The majority of gamers are actually catered to by third parties.

Nice graphics are a good selling point, though... show a pretty-looking game and you are bound to get a few impulse buys.
 

kungfuian

Member
Let me get this straight. Microsoft has some high end feature that Sony doesn't and it will eventually lead to a crazy advantage. Sound familiar?

These 'tech' threads all read like the power of the cloud 2.0 which can mean only one thing; Crackdown 4 is coming.
 
No confirmation on enhancements PS5 = PS5 does not have any of them.

vs.

PCI-E 3 SSD slotted in XSX = we do not know how many enhancements they did to it, oh so many, you do not know!

Kinda summarises the attitude quite well.
You would think that if Sony had some very important and specific features they would talk about them, but not a peep, and the console comes out in 2 days. I'm sorry, but it just didn't happen; not even in Mark Cerny's 45 minutes of straight technical talk was there an inkling of a mention of ML of any kind.

It's a fairly easy situation to deduce the outcome from.
 
Let me get this straight. Microsoft has some high end feature that Sony doesn't and it will eventually lead to a crazy advantage. Sound familiar?

These 'tech' threads all read like the power of the cloud 2.0 which can mean only one thing; Crackdown 4 is coming.
I wouldn't start talking shit about the cloud; they eventually did deliver on that, at a scale few of us could have imagined.


 

Concern

Member
No confirmation on enhancements PS5 = PS5 does not have any of them.

vs.

PCI-E 3 SSD slotted in XSX = we do not know how many enhancements they did to it, oh so many, you do not know!

Kinda summarises the attitude quite well.


Bringing up Xbox in a Playstation thread. Summarizes the insecurity quite well 😉.

Keep fighting the good fight warrior ✊
 

longdi

Banned
You would think that if Sony had some very important and specific features they would talk about them, but not a peep, and the console comes out in 2 days. I'm sorry, but it just didn't happen; not even in Mark Cerny's 45 minutes of straight technical talk was there an inkling of a mention of ML of any kind.

It's a fairly easy situation to deduce the outcome from.

Yeah, that's the question. Shu pretty much tweeted this when Sony has the advantage: https://www.neogaf.com/threads/game...nscends-on-paper-comparisons-to-xbox.1575879/

Just because the PS4 Pro had some form of reconstruction doesn't mean it is as good as ML. Or Sony could still drop the feature, because they may feel it is not needed and want to save some silicon area. The PS4 Pro needed checkerboarding on the cusp of 4K TV growth.

So far, we have two close third parties saying similar things about the PS5's lack of ML hardware: the Italian SCEE dude and the Cage guy.
 
Last edited:

Bojanglez

The Amiga Brotherhood
I wouldn't start talking shit about the cloud; they eventually did deliver on that, at a scale few of us could have imagined.



So we have learned that MS hype something up nearly a decade before it is ready, and it will then only work on PC and/or the following generation of consoles.

Noted 👍
 
Last edited:

longdi

Banned
No confirmation on enhancements PS5 = PS5 does not have any of them.

vs.

PCI-E 3 SSD slotted in XSX = we do not know how many enhancements they did to it, oh so many, you do not know!

Kinda summarises the attitude quite well.


But the PCIe 3 SSD, if true, still doesn't mean anything for the XSX. MS's rated sustained speeds don't even need PCIe 3 x4.
 
Yeah, that's the question. Shu pretty much tweeted this when Sony has the advantage: https://www.neogaf.com/threads/game...nscends-on-paper-comparisons-to-xbox.1575879/

Just because the PS4 Pro had some form of reconstruction doesn't mean it is as good as ML. Or Sony could still drop the feature, because they may feel it is not needed and want to save some silicon area. The PS4 Pro needed checkerboarding on the cusp of 4K TV growth.

So far, we have two close third parties saying similar things about the PS5's lack of ML hardware: the Italian SCEE dude and the Cage guy.
Another problem for them as well is that, in the event their system can do these things (doubtful), they're stuck with a bunch of custom crap that developers have to familiarize themselves with, which leads not only to more development time but also to a lack of interest in even bothering, or in doing it properly.

On the other side of the equation you have two consoles which both support the PC standard, so you have a trio of implementations already in the development pipeline for your game, versus an oddball implementation which needs additional work.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
You would think that if Sony had some very important and specific features they would talk about them, but not a peep, and the console comes out in 2 days. I'm sorry, but it just didn't happen; not even in Mark Cerny's 45 minutes of straight technical talk was there an inkling of a mention of ML of any kind.

It's a fairly easy situation to deduce the outcome from.

They said almost nothing about BC, and what they said caused the usual concern in some groups, yet people seem to be impressed. I guess they dropped FP16 RPM, since they did not mention it much, if at all, in that video 🤔.
 
They said almost nothing about BC, and what they said caused the usual concern in some groups, yet people seem to be impressed. I guess they dropped FP16 RPM, since they did not mention it much, if at all, in that video 🤔.
And yet they talked up RPM for the Pro to no end but in line with the same spectrum of architectural hierarchy they're silent about this.

The system doesn't have these things, it's quite clear at this point. If you really look at the PlayStation 5 as a whole it's kind of half baked. It's a fine system but it seems like it's not ready, there's a lot of patchwork to do, really odd missing things which on the Xbox for whatever reason or another were just common sense things to have ready.

It is what it is.
 
DirectML, short for Direct Machine Learning, is AMD's equivalent answer to Nvidia's DLSS (Deep Learning Super Sampling), enabling higher performance by using machine learning to upscale images to higher resolutions without visual downgrades.
AMD has an answer to Nvidia's DLSS, and that answer is DirectML-powered Super Resolution. AMD plans to utilise Machine Learning to improve the visual quality of games, & AMD's solution will have the backing of Microsoft.
U9fvcRd.jpg

Microsoft confirmed that both of their next-generation consoles would support machine learning for games with DirectML, through their collaboration with AMD when creating the new consoles.
A component of Microsoft's DirectX feature set, DirectML isn't a Radeon-only technology, and its applications extend far beyond super-resolution functions. Over the coming years, future PC and Xbox games will bring machine learning into games in several new and innovative ways, impacting all gamers with supported hardware.
BTt8e9g.jpg

Microsoft has already showcased the potential of machine learning in gaming, with the image below showing what happens when machine learning is used to upscale an image to four times its original resolution (basically from 1080p to 4K), generating a sharper final image with reduced aliasing. The image below compares ML super sampling against bilinear upsampling.
siNpX8v.png
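For context on the comparison above: "four times the resolution" means pixel count, i.e. 2x per axis. The sketch below (illustrative only; this is the plain bilinear baseline Microsoft compares against, not their ML model) shows what bilinear upsampling actually does: it only interpolates between neighbouring pixels, which is why an ML model that predicts plausible detail can look sharper.

```python
# Toy bilinear 2x upscale on a tiny grayscale grid. A 1080p -> 4K upscale is
# the same operation at scale: 2x per axis, 4x the pixel count.
def bilinear_upscale_2x(img):
    """Upscale a 2-D grid of floats by 2x per axis with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * 2, w * 2
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back to (fractional) source coordinates.
            sy = min(y / 2.0, h - 1)
            sx = min(x / 2.0, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

frame_low = [[0.0, 1.0], [1.0, 0.0]]        # stand-in for a low-res frame
frame_high = bilinear_upscale_2x(frame_low)  # 2x per axis = 4x the pixels
print(len(frame_high) * len(frame_high[0]))  # 16 pixels vs the original 4
```

Every output pixel is a weighted average of at most four inputs, so no new detail is ever created; that is the gap ML super sampling tries to close.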

The PS5 is confirmed to not use DirectX features, so even though AMD made the CPU and GPU for the PS5, it looks like the PS5 will not have DirectML.
So will Sony come up with an alternative method to compensate for not having this feature? Because machine learning is the future of gaming.
While DirectML hasn't received as much attention as DirectX raytracing, you can be sure that developers are looking at the new API closely. As screen manufacturers are starting to push beyond 4K, AI upscaling technologies like Machine Learning will continue to increase in popularity. 4K gaming is already a challenge for modern gaming hardware, and 8K is going to prove to be even more problematic for game makers and hardware vendors.
Technologies like DirectML will become vital for future games, both on PC and on consoles. The application of machine learning will allow developers to deliver higher levels of graphical fidelity without the insane hardware costs of traditional computational methods.
DirectML is Microsoft's API. The PS5 has its own API. An API can expose hardware-accelerated features. Do you think Sony isn't investing in a solution for "enhancing image resolution"? Does it need to be AI? Wasn't it a Sony developer who started this image reconstruction thing (Killzone Shadow Fall multiplayer?!)
 

reptilex

Banned
Why is it that EVERY Xbox feature is countered with unsubstantiated "Sony will have this too!" even when SONY themselves haven't claimed as such?

DLSS equivalent: "They're both using AMD of course PS5 will have this too!"

Sampler Feedback Streaming: "They're both RDNA2 of course PS5 will have this too!"

Variable Rate Shading: "They're both RDNA2 of course PS5 will have this too!"

Multiple game resumes: "Oh that's easy! They both have SSDs of course PS5 will have this too!"

I mean seriously the amount of delusion is worrisome.

I'm actually working with socio-psychology students at a local (French) university to find out whether this delusion is actually a pathological bias, what some have always called the "fanboy syndrome".

It's always been interesting from a forward-looking view, since we know that in the past, any brand or product line that started attracting deluded "fanboy syndrome" followers was showing a surefire sign of its incoming failure.
 

Panajev2001a

GAF's Pleasant Genius
Another problem for them as well is that, in the event their system can do these things (doubtful), they're stuck with a bunch of custom crap that developers have to familiarize themselves with, which leads not only to more development time but also to a lack of interest in even bothering, or in doing it properly.

On the other side of the equation you have two consoles which both support the PC standard, so you have a trio of implementations already in the development pipeline for your game, versus an oddball implementation which needs additional work.

This I can agree with. This is the double-edged sword of doing (or not doing) away with the more abstract PC APIs: you do get more bang for your buck efficiency-wise, but you lose a bit of economy of scale and add problems to solve when dealing with BC (albeit I think they seem to be solving them quite well and building solutions for them into their tools).

With the PS4 Pro they added quite a few custom HW features, and they overestimated how much developers would use them (given how mid-generation upgrade consoles were going to be adopted by developers it was worth trying, but it showed a limitation of going with custom features that require extra work to adopt). I think a new generation gives them a good chance to set a new baseline, and it would be a mistake to use how the mid-generation consoles were adopted to judge HW customisation choices for any of the new consoles.

Then again, if you run the same abstraction/cover a lot of HW with the same code, you have other levels of complexity to deal with if you want to properly support all of it.
 

Cato

Banned
My source is AMD. DirectML uses a DirectX feature, which the PS5 does not use since it is Microsoft technology. Did you even read the post? Now show me where it shows that the PS5 has AMD DirectML?

Er, No.
DirectX is an API, not a technology.
A technology can be surfaced through multiple different APIs. Just because a technology is surfaced via one wrapper API does not mean it cannot be surfaced via another wrapper API too.

Just like NTFS and ZFS are two different filesystems, they can still both surface the same kind of technology, like dedup, via their different APIs.
 

Panajev2001a

GAF's Pleasant Genius
And yet they talked up RPM for the Pro to no end but in line
One or two paragraphs in a multi-page interview == "talked it up to no end"... sure... :rolleyes:.

with the same spectrum of architectural hierarchy they're silent about this.

The system doesn't have these things, it's quite clear at this point.
So, did they then drop FP16, the proof being that they did not talk about it at all (they did not call it out) ;)? What kind of point are you trying to make, really?

You are the same one going on in the XSX thread about the SSD, claiming that it is surely customised in ways we do not even know, even though nothing was mentioned about it specifically; yet for some mysterious reason, if Sony does not explicitly mention something, they have jack? Almost forgot that as far as MS is concerned, "hope springs eternal".

If you really look at the PlayStation 5 as a whole it's kind of half baked.
Oh yeah, this is the point you were building up to :LOL:.
 
Last edited:
No


Usual poster mistake: just because Microsoft talks about its API interface for DX12, you get confused.

Yes, like how raytracing is supported on cards without acceleration; it would just kill the performance. The PS5, from what I understand, doesn't have hardware support for 8/4-bit integers for lower-precision math. That doesn't mean it can't have different solutions, however.
 

Elias

Member
If the consoles use a DLSS equivalent like Super Resolution, then it will, since the Series X/S has silicon dedicated to ML and the PS5 doesn't.
 

geordiemp

Member
Yes, like how raytracing is supported on cards without acceleration; it would just kill the performance. The PS5, from what I understand, doesn't have hardware support for 8/4-bit integers for lower-precision math. That doesn't mean it can't have different solutions, however.

No

Int8 and int4: you know you can do this on standard hardware cores, right?

Having dedicated int4 and int8 units vs. using larger cores and splitting them does not mean software.

And where does it say anywhere that super sampling from AMD needs a lot of dedicated int capability? It's likely more temporal anyway.

It's called reaching.
 
Last edited:

MastaKiiLA

Member
Maybe. I'm not sure how much the PS5 will be using it, if results like Miles Morales' performance mode can be expected. The DF comparison between fidelity and performance mode was satisfying enough that I don't know how much better they'd get with MLSS. Then again, the results might vary with source resolution. I'm hoping that whatever MM is using is hardcoded into the hardware, so that teams at least have the option of hardware-based upscaling and can apply MLSS on a case-by-case basis.
 
PS5 will not have DirectML
Obviously, DirectML is a marketing name for an API that helps code GPU-accelerated machine learning algorithms... but the hardware is there, and Sony is bound to implement some version of a similar API.
PS5 is confirmed to not use DirectX
That's the DX12 BS again; we went through the magic DX support back in 2013. Sure, Sony doesn't use MS's tech, and this is a benefit, not a weakness.

You know what other Sony consoles don't use DX:
PS1, PS2, PS3, PS4
AI upscaling technologies like Machine Learning
Machine learning is an AI tool; neither is directly related to upscaling... Upscaling is probably its most boring use, to be honest.
 
Last edited:

llien

Member
DirectML, short for Direct Machine Learning, is AMD's equivalent answer to Nvidia's DLSS (Deep Learning Super Sampling),
No, it's not.

DirectML is Microsoft's approach to speed up neural network inference.



AMD has an answer to Nvidia's DLSS, and that answer is DirectML-powered Super Resolution.
OK, that thing was quite impressive ML-based upscaling, which on a 2080 Ti took 16 milliseconds for 5xx => 1080 upscaling.
(Note that they used NV's model and weights.)

So 4K upscaling ("super sampling") of a single frame would take 60 ms on a 2080 Ti.

Unlike DLSS 2, it is not a TAA derivative but a straight-to-NN approach.
It does look pretty good, but it's way too slow to be used:

carcompare.png
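The ~60 ms estimate above follows from simple scaling arithmetic. A quick sketch (assuming inference cost grows linearly with output pixel count, which is roughly true for per-pixel networks):

```python
# 16 ms was the measured cost for a 1080p output on a 2080 Ti (per the test
# cited above). 4K has exactly 4x the pixels of 1080p, so a linear-cost
# model gives the per-frame figure quoted in the post.
ms_for_1080p_output = 16.0
scale = (3840 * 2160) / (1920 * 1080)   # = 4.0
ms_for_4k_output = ms_for_1080p_output * scale
print(ms_for_4k_output)                  # 64.0 ms, in line with the ~60 ms estimate
```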


XSeX reference to having dedicated hardware for inference is V E R Y intriguing.
 
Last edited:

geordiemp

Member
No, it's not.

DirectML is Microsoft's approach to speed up neural network inference.




OK, that thing was quite impressive ML-based upscaling, which on a 2080 Ti took 16 milliseconds for 5xx => 1080 upscaling.
(Note that they used NV's model and weights.)

So 4K upscaling ("super sampling") of a single frame would take 60 ms on a 2080 Ti.

Unlike DLSS 2, it is not a TAA derivative but a straight-to-NN approach.
It does look pretty good, but it's way too slow to be used:

carcompare.png


XSeX reference to having dedicated hardware for inference is V E R Y intriguing.

You realise nobody read the actual blog and what MS were actually doing with the 2080 Ti, and people now think XSX games are going to be doing this next year - LOL.

AMD's super sampling will be different from a pure NN approach for sure, likely another temporal solution with maybe a sprinkle of ML, and APIs will be developed by most interested parties, not just Microsoft.
 
Last edited:
Sony's own engineer states the PS5 doesn't have ML. He also said the PS5 wasn't full RDNA 2, which MS also claimed.

ML on the XSX isn't something that will show up for a while I think. It is going to take time to get devs to adopt it, but it should give the XSX an advantage.
How much of an advantage? Who knows.
 

geordiemp

Member
Sony's own engineer states the PS5 doesn't have ML. He also said the PS5 wasn't full RDNA 2, which MS also claimed.

ML on the XSX isn't something that will show up for a while I think. It is going to take time to get devs to adopt it, but it should give the XSX an advantage.
How much of an advantage? Who knows.

The PS5 is not Microsoft's definition of RDNA2, as it has its own APIs.

The PS5 and XSX are not AMD's true definition of RDNA2 either, as there is no Infinity Cache, but they are not giving full details for now.

At the moment RDNA2 is a marketing thing, until AMD releases what an RDNA2 CU really looks like from this slide. Then you will know.

The XSX does not have pervasive fine-grain clock gating, or either of the other 2 points below fine-grain clock gating for that matter.

Let the RDNA2 blurb and misunderstandings continue for now; it will make the disappointment even more fun when AMD explains this slide properly :messenger_beaming:.


mdsmnrB.png
 
Last edited:

azertydu91

Hard to Kill
Future proof is a pointless ideology when you think about technology. There is always going to be something better.
Except for the basis of the base... Like, technically, electricity has been really future-proof... Other than that... Yeah, future proof doesn't exist.
 

llien

Member
You realise nobody read the actual blog and what MS were actually doing with the 2080 Ti, and people now think XSX games are going to be doing this next year - LOL.

AMD's super sampling will be different from a pure NN approach for sure, likely another temporal solution with maybe a sprinkle of ML, and APIs will be developed by most interested parties, not just Microsoft.

The first statement in the MS slide is very interesting:

xbox_series_x_tricks.jpg


A 10x improvement over the 2080 Ti would bring the frame time for 1080p-to-4K upscaling (crazy) down to 6 ms, so that's up to 166 frames per second with 6 ms of lag; uh oh, I want to believe that is doable on consoles... :)
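Taking the slide's "up to 10x" at face value (a hypothetical: the slide does not say the speedup applies to the whole upscaling pass), the frame-budget arithmetic works out as stated:

```python
# ~60 ms per 4K NN upscale on a 2080 Ti was estimated earlier in the thread;
# a flat 10x inference speedup would cut that to 6 ms per frame, and the fps
# ceiling below assumes upscaling were the ONLY per-frame cost.
ms_per_4k_upscale = 60.0
ms_with_speedup = ms_per_4k_upscale / 10    # 6.0 ms
fps_ceiling = int(1000 / ms_with_speedup)   # 166 fps upper bound
print(ms_with_speedup, fps_ceiling)
```

In practice the 6 ms would come out of a 16.7 ms (60 fps) or 33.3 ms (30 fps) frame budget shared with rendering, so the real question is whether the remaining budget suffices.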
 

rnlval

Member
4/8-bit RPM is an RDNA1 feature btw; it just seems to be added to all RDNA2 cards now, and the XSX/XSS, with no confirmation for or against the PS5.
FYI, 8x-rate INT4 (4-bit dot8) and 4x-rate INT8 (8-bit integer dot4) are optional features in RDNA 1.

The Instruction Set manual for RDNA 1, page 49 of 289, refers to Rapid Packed Math only for the 16-bit data type.
I quote:

Packed math is a form of operation which accelerates arithmetic on two values packed into the same VGPR. It performs operations on two 16-bit values within a DWORD as if they were separate threads.

----------------
What's needed is hardware support for 8-bit integer dot4 operations and 4-bit integer dot8 operations.


lpoow2c.png



"NAVI FMA Ops" is the baseline feature for NAVI.
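For clarity on what these dot-product instructions actually compute, the sketch below emulates an 8-bit integer dot4 in plain Python. Illustrative only: on hardware with the optional feature this is a single instruction on packed 32-bit registers (the RDNA ISA names it V_DOT4_I32_I8); the point here is the operation, not the speed.

```python
# Emulate a dot4 over signed 8-bit lanes packed into 32-bit words, with a
# 32-bit accumulator: the basic low-precision ML inference primitive.
def dot4_i32_i8(a_packed, b_packed, acc=0):
    """Dot product of four signed 8-bit lanes of a_packed and b_packed,
    added to the accumulator acc."""
    total = acc
    for shift in (0, 8, 16, 24):
        # Extract each byte and sign-extend it.
        a = (a_packed >> shift) & 0xFF
        b = (b_packed >> shift) & 0xFF
        if a >= 128:
            a -= 256
        if b >= 128:
            b -= 256
        total += a * b
    return total

# Pack the lanes [1, 2, 3, -4] and [5, 6, 7, 8]:
a = 1 | (2 << 8) | (3 << 16) | ((256 - 4) << 24)
b = 5 | (6 << 8) | (7 << 16) | (8 << 24)
print(dot4_i32_i8(a, b))  # 1*5 + 2*6 + 3*7 + (-4)*8 = 6
```

This is why "4x rate" is the headline: one instruction does four multiplies plus the accumulation, versus one multiply per instruction at full precision.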
 
Last edited:

rnlval

Member
Ps5 is not Microsofts definition of RDNA2 as it has its own apis

Ps5 and XSX are not AMDs true definition of RDNA2 as no infinity cache, but they are not giving full details for now.

At the moment RDAN2 is a marketing thing until AMD release what a RDNA2 CU really looks like from this slide. Then you will know.

XSX does not have passive fine grain clock gating or any other of the other 2 points below fine glock gating for that matter.

Let the RDNA2 blurb and mis understandings continue for now, it will make the dissapointment even more fun when AMD explains this slide properly :messenger_beaming:.


mdsmnrB.png


5AZnV9w.jpg



For PS5, you're asking for premium features from a mainstream 40 CU NAVI 10 replacement?

XSX GPU's 56 CU is the second-largest scaled RDNA 2 implementation after "BiG NAVI" 80 CU ASIC design.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
FYI, 8X rate INT4 (4-bit dot8) and 4X rate INT8 (8-bit integer dot4 ) are optional features in RDNA 1.

Yeah, that is what I said :).

For PS5, you're asking for premium features from a mainstream 40 CU NAVI 10 replacement?

Yup, but you can continue trying to put a 40 CU RDNA2 based semi-custom design down hehe 😉.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The first statement in MS slide is very intereting:

xbox_series_x_tricks.jpg


10 times improvement over 2080Ti would bring frame rendering time from 1080p to 4k (crazy) down to 6ms, so that's up to 166 frames per second with 6ms lag, uh oh, I want to belive that is doable on consoles... :)

That is a speed-up on part of the workload (ML inference), not on the whole rendering workload.
 

geordiemp

Member


5AZnV9w.jpg



For PS5, you're asking for premium features from a mainstream 40 CU NAVI 10 replacement?

XSX GPU's 56 CU is the second-largest scaled RDNA 2 implementation after "BiG NAVI" 80 CU ASIC design.

The XSX does not have RDNA2 CUs; if you understand the 3 points, the first one is dead easy...

fR97g0o.png
 

Thirty7ven

Banned
It will be the same. The advantage won’t get any bigger throughout the course of the generation, for either side.

And by 2024 there will be mid gen refreshes anyway.

I’m sure the dumb dumbs will latch on to any piece of hope they can though. That’s what they live for.
 
Last edited:

rnlval

Member
Yeah, that is what I said :).

Yup, but you can continue trying to put a 40 CU RDNA2 based semi-custom design down hehe 😉.
It comes down to cost, i.e.:
the PS4 Pro is about RX 470-level results;
the PS4 is about R7 265-level results, based on Pitcairn.

The X1X has an R9 390X-sized Hawaii GCN with 44 CUs, which is the second-largest scaled GCN below the Fury X's/Vega 64's 64 CU scale.

For XSX, MS repeated the second-largest scaled GPU selection from AMD.
 

rnlval

Member
The XSX does not have RDNA2 CUs; if you understand the 3 points, the first one is dead easy...

fR97g0o.png
Sounds like a similar PR spiel to the X1X's, e.g. lower-latency graphics pipeline, reduced graphics pipeline bottlenecks, and 'etc'. Lower latency = good.

MS officially claims "full RDNA 2" support.

BiG NAVI's 128 MB Infinity Cache is a workaround for the 256-bit bus GDDR6-16000 (512 GB/s) limitation, and it was carefully selected to be four times XBO's 32 MB eSRAM, which could handle 1600x600 framebuffers without tiling and without delta color compression.

The BiG NAVI version has the delta color compression feature with the 128 MB Infinity Cache, hence it can handle 4K framebuffers. Unlike XBO's software-driven 32 MB eSRAM management, BiG NAVI's 128 MB Infinity Cache is transparent to existing PC games, since it's based on Zen 2's L3 cache design, which is a hardware design.
 

llien

Member
That is a speed up on part (ML inference) of the workload not on the whole rendering workload
What does "whole rendering workload" mean in your statement?
NN upscaling is just a post-processing step (which, as we established, is too slow even on a 2080 Ti).

It is curious that Microsoft explicitly called out "resolution scaling" in that slide.
 

geordiemp

Member
Sounds like a similar PR spiel to the X1X's, e.g. lower-latency graphics pipeline, reduced graphics pipeline bottlenecks, and 'etc'. Lower latency = good.

MS officially claims "full RDNA 2" support.

BiG NAVI's 128 MB Infinity Cache is a workaround for the 256-bit bus GDDR6-16000 (512 GB/s) limitation, and it was carefully selected to be four times XBO's 32 MB eSRAM, which could handle 1600x600 framebuffers without tiling and without delta color compression.

The BiG NAVI version has the delta color compression feature with the 128 MB Infinity Cache, hence it can handle 4K framebuffers. Unlike XBO's software-driven 32 MB eSRAM management, BiG NAVI's 128 MB Infinity Cache is transparent to existing PC games, since it's based on Zen 2's L3 cache design, which is a hardware design.

So you clearly don't understand the slide, then, which tells you what an RDNA2 CU is. Do you know what pervasive means?

What you're babbling on about, L3 (which is a sort of slower, denser fabric cache, but not quite), is not what the slide says, and is a different subject.

This is about RDNA2 COMPUTE UNITS... Read it again.


y4ZXZpe.png
 
Last edited:

geordiemp

Member
It's just general graphics pipeline improvements, expected to lower latency and reduce bottlenecks.

No. Google "pervasive", then google "fine-grain clock gating".

This will be in the RDNA2 white paper, but can you read the bullet points and deduce?
 
Last edited:

rnlval

Member
So you clearly don't understand the slide, then, which tells you what an RDNA2 CU is. Do you know what pervasive means?

What you're babbling on about, the L3 cache, is not what the slide says. Read it again.


y4ZXZpe.png
Means little without a large 128 MB L3 cache as a workaround for the 256-bit GDDR6-16000's 512 GB/s bottleneck. Hint: missing GDDR6X.
 

geordiemp

Member
Means little without a large 128 MB L3 cache as a workaround for the 256-bit GDDR6-16000's 512 GB/s bottleneck. Hint: missing GDDR6X.

No, that's not it; the subject is COMPUTE UNITS, nothing to do with compensating for memory bandwidth from the PHY to memory.

You're going to be talking about memory controllers next, lol.
 
Last edited:

rnlval

Member
No. Google "pervasive", then google "fine-grain clock gating".

This will be in the RDNA2 white paper, but can you read the bullet points and deduce?
Fine-grain clock gating = power saving. The XSX GPU delivers superior perf/watt compared to the RX 5700 XT, i.e. the console draws ~209 watts total including a ~45-watt 8-core Zen 2 CPU at 3.6 to 3.8 GHz. The XSX GPU is about 155 watts for 12 TFLOPS, while the RX 6900 XT is about 300 watts for 24 TFLOPS.
 
Last edited: