
With Hot Chips Around The Corner, Let's Fire Up Some Series X Architecture Speculation!

What area(s) of Series X (and Series S) do you think MS have made most of their customizations in?

  • GPU

    Votes: 53 66.3%
  • CPU

    Votes: 20 25.0%
  • Audio

    Votes: 9 11.3%
  • Memory

    Votes: 22 27.5%

  • Total voters
    80
  • Poll closed.
I'm sure some of you guys are aware of Dusk Golem; he's an insider mainly on Capcom stuff, and recently had some things to say about Street Fighter 6 development being pushed back (which I guess I kinda believe, though not in the sense that the game had begun full-tilt development or anything). Well, over on Reeee they're talking about Imran's comments regarding Sony securing some major 3rd-party timed and/or full exclusives (Matt has also touched on these rumors), and not that long ago Dusk Golem made this interesting post in the thread:

I'm not going to say too much, but from some murmurings I've been hearing, I'll just say I suspect this topic is going to age "interestingly" when a few more details on both platforms are revealed.

I mean this in a few more ways than one, but to give the broadest idea, I'll just say the Series X is by far more powerful than the PS5, and Microsoft are ready to lowball Sony when it comes to price. They can more easily make back profits from Game Pass than from console sales, more so than Sony can from the console price.

Now in case you don't know, Microsoft's Hot Chips presentation is a little over a week from now, where they'll be doing a deep-dive into the Series X (and I'd assume, by proxy, Lockhart?) architecture. I've been looking forward to this myself since they haven't really done a concise system architecture breakdown the way Cerny did at Road to PS5, aside from blog entries and hints at things through obscure Twitter posts from team engineers.

However, I'm curious as to what someone like Dusk has been hearing. From paper specs we can easily see how the Series X and PS5 compare to one another, and where each has advantages over the other. But Dusk seems to be insinuating that there are, in fact, custom hardware design choices in the Series X that give it a "very potent" advantage over PS5 beyond what meets the eye.

Personally, I've always suspected that MS have done their own customizations with the Series systems, obviously, but one thing I have heard a few times from some tech spots is the Series CPU having an "unusually large" cache. I believe the desktop Zen 2 processors can have around 32 MB of L3 cache, no? I don't know if Series X would have that much, but maybe their amount isn't far off? Could they even surprise us and have more than 32 MB?

Considering a small fraction of the CPU needs to handle some of the XVA operations, more cache would certainly help there, but the system's intended use in server environments (such as Azure) would also benefit from a very large CPU cache. Sticking with this train of thought, I would take these kinds of murmurs to also suggest some things regarding the GPU. I've looked at some speculation over on B3D around the recent Big Navi leaks, and some users noticed that Series X's GPU size would make it an outlier in AMD's lineup, i.e. even PS5's GPU fits a bit more neatly (in terms of size, anyway). Could this be suggesting a more unorthodox SE setup? There's also been some (albeit older) speculation about an increase in ROPs MS could've gone with. I personally suspect they may've gone with a bigger pool of L3 cache on the GPU; it would mirror the probable bump in CPU cache.

Anyways, with Hot Chips not far off, what speculation do you all have WRT possible customizations to the Series X architecture that could feed into the things people like Dusk have a good feeling about? Keep in mind he isn't the only person of some repute who's made these suggestions, either; it's been going around for quite a while now. Personally, I don't think these customizations have much to do with XVA or even anything more "mundane" like increasing the TF count or CPU clocks. Given patents I've seen myself from posters on various forums doing the legwork to find them, and knowledgeable discussions I've seen around a lot of the same spots, I'm thinking these might be things more integral and purpose-built to the design.

For example, these might be things relating to ML workload hardware support, RT, caches, the memory controller system for the "split" fast/slow memory pools, customizations for ECC memory control, hardware customization for ExecuteIndirect (remember the Indian AMD employee about half a year ago who posted their work on the Series X APU and explicitly mentioned ARM cores?), etc.

But have at it; let's speculate as to what could be going on here ahead of Hot Chips, and try to keep it civil. Also, let's try not to make this a comparison thread between Series X and PS5, because that's bound to turn into chaos, and it's really not required in order to make some educated speculation about these Series X customizations!
 
Yeah that guy doesn’t know shit about these consoles...he’s all over the place, stating nebulous things as fact, and Matt even called him out.

I saw Matt's response; it was more along the lines of just asking him why he said what he said. He was just trying to see things from Dusk's POV, it wasn't a call-out or anything to that effect.

It's also possible Dusk has access to more recent info on the systems, or just more info on them in general, than Matt does. Dunno the probability of that, but it's possible. Either that or they could have a link to folks who have more detailed info on the systems.

In any case, I don't think anything being referred to is related to a TF bump or CPU clock bump, or even anything like a RAM bump, either. These are probably things more in line with deeper customizations to the architecture that were planned a long time ago. And FWIW, he's not the only insider I've seen allude to it having a "power" advantage, though I figure others who've suggested such have been more modest and just reflecting on the obvious paper specs.
 

GODbody

Member
I personally think that the gap between top tier Series X games and top tier PS5 games is going to be very stark.

After reading through the patents they have made public, as well as deducing their custom use of AMD's SSG technology, it seems they've optimized and streamlined every aspect they could and thrown in some 'future-proofing' on top in the form of VRS, Mesh Shader support and DRX. All of which stacks together to form the architecture of the Series X.

Take the Machine Learning support they've added, for example. This patent describes a method for the Series X to use machine learning to compress textures, then decompress and upscale them from 1080p to 4K, representing a 4x reduction in data footprint. This will not only drastically reduce the size of a game on storage, it will reduce the amount of data that needs to be streamed by the game application.
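If the patent's numbers hold, the 4x comes straight from the texel counts, since a 4K texture holds four times the texels of a 1080p one. A quick sanity check (the uniform bytes-per-texel assumption is mine; the two resolutions are the only real inputs):

```python
# Footprint ratio from storing 1080p textures and ML-upscaling them to 4K.
# Assumes the same bytes-per-texel at both resolutions (a simplification).
def texel_count(width: int, height: int) -> int:
    return width * height

ratio = texel_count(3840, 2160) / texel_count(1920, 1080)
print(ratio)  # 4.0 -> the claimed "4x reduction" in data footprint
```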

This will stack with the Sampler Feedback Streaming they have added, which they have stated will further reduce the amount of texture data that needs to be streamed by the game application at runtime by a factor of 2.5x on average, allowing a game application to fetch only the required individual pages of texture data as opposed to portions of, or entire, MIPs. In essence, this allows the raw throughput of 2.4 GB/s to deliver 6 GB/s of texture data. It also has the added effect of reducing the amount of space in memory needed for those textures, effectively allowing the GPU-optimal memory to store what represents 25 GBs of texture data.
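The arithmetic behind those figures is just the 2.5x average applied to the raw numbers; treating the "on average" claim as a flat linear multiplier is my simplification:

```python
# Effective texture throughput / residency with Sampler Feedback Streaming,
# taking the stated 2.5x average multiplier at face value.
RAW_SSD_GBPS = 2.4       # raw (uncompressed) SSD throughput, GB/s
GPU_OPTIMAL_GB = 10      # GPU-optimal memory pool, GB
SFS_MULTIPLIER = 2.5     # average reduction in texture data streamed

effective_gbps = RAW_SSD_GBPS * SFS_MULTIPLIER        # 6.0 GB/s effective
effective_residency = GPU_OPTIMAL_GB * SFS_MULTIPLIER # "25 GB" equivalent
print(effective_gbps, effective_residency)
```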

On top of this, they've stated that developers will have 'instant access to 100 GBs of game data'. It's been assumed that they're using a custom combination of memory mapping and AMD's SSG technology to achieve this, due to the many mentions in the marketing of 'using the SSD as virtual RAM', as well as James Stanard speaking on using Sampler Feedback Streaming to fetch pages of texture data. SSG creates a direct link from the GPU to the storage of the system, enabling data to bypass the CPU entirely and be consumed directly by the GPU, freeing up those CPU cycles with the added benefit of reduced latency. An in-depth article about the SSG technology can be found here for those curious.
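The "SSD as virtual RAM" idea is essentially memory mapping: the application addresses game data as if it were already in memory, and pages are pulled from storage only when actually touched. A minimal sketch of that general concept using Python's mmap module (an illustration of the technique only, not the actual XVA/SSG implementation, which isn't public):

```python
import mmap
import os
import tempfile

# Create a file standing in for a game package sitting on the SSD.
path = os.path.join(tempfile.mkdtemp(), "assets.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096 + b"TEXTURE_PAGE" + b"\x00" * 4096)

# Map it: the process now addresses the file like RAM, and the OS pages
# data in from storage on demand instead of copying it all up front.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        page = m[4096:4096 + 12]  # touch only the page we actually need
        print(page)               # b'TEXTURE_PAGE'
```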

The support for Mesh Shaders and VRS becomes the proverbial icing on the cake for this architecture. Mesh shaders will dramatically increase the level of geometric detail over current gen (demo here) and the Tier 1 and 2 VRS support will help developers milk out extra performance with no discernible difference to human eyes at runtime.

And this is only the public information; I'm sure there are still a few more tricks in the box that haven't been made public, which we will hopefully learn about at the Hot Chips presentation.

I don't see this combination of tech creating anything except the bleeding edge of game asset streaming. Developers that choose to incorporate all of these technologies into their games will be able to create games that will not be possible on any other platform.
 

Jerm411

Member
I personally think that the gap between top tier Series X games and top tier PS5 games is going to be very stark.

Yeah....no....you can talk specs and all the tech you want, but if you're telling me that there will be a stark difference between ANY MS studio game and something from, say, ND or SSM, then lol ok.

Whichever side of the fence you’re on, I think most would agree on that.
 
So a quick post until I check out; I got the thread on Reeee mixed up, but I think you guys know what the actual thread is by now if you lurk. Anyway, I've been reading some of the posts there, particularly those from people like Lady Gaia, and while I respect their knowledge, I think they're missing the forest for the trees.

The only counter-point I'm seeing from many of the people in that thread is that the paper specs don't convey such a thing. And technically, they're right. They don't. But the paper specs don't tell us everything about either system to be perfectly honest, and they never did. Not the Github leaks, and not even the official specs given thus far really tell us everything on the full system architectures.

So it's a bit of a mistake, imo, that they're clinging to the paper specs as a point of refutation, because the paper specs (just as an example) don't mention anything about the PS5's cache scrubbers, now do they? Or its Geometry Engine and Primitive Shader customizations, right? And no, Cerny's talk doesn't count as paper specs, because the specific details he discussed aren't necessarily going to be quoted by gaming sites or most tech YouTube vids when summarizing the "big" details of the systems, which is what paper specs generally tend to focus on.

I don't know if those people have actually bothered to look into MS patents pertinent to the Series system(s) at all, to be perfectly honest, and maybe that's because they're under the assumption Series X is just a "PC in a console box"? Any brief look into things would show the opposite; you can also see this from what people like GODbody have posted (great summary post btw). It's just that MS is much more inclined to utilize any tech R&D from the Series consoles on PC as well, since they want to leverage PC gaming much more going forward, whereas Sony doesn't quite have that type of desire (at least for the time being).
 

GODbody

Member
Yeah....no....you can talk specs and all the tech you want, but if you're telling me that there will be a stark difference between ANY MS studio game and something from, say, ND or SSM, then lol ok.

Whichever side of the fence you’re on, I think most would agree on that.
ND and SSM make excellent games on the hardware budget that they're limited to. But what would those companies be capable of with more powerful hardware?

I'm just basing my statement off of the promise of the XVA and the fact that the Series X has quite a hardware advantage over the PS5 already. We'll see how it actually plays out over this generation, but it appears that Series X devs will have more CPU and GPU headroom to play with before even adding in the other technologies I outlined in my post.
 

CrysisFreak

Banned
I personally think that the gap between top tier Series X games and top tier PS5 games is going to be very stark.
>DRX
An API that has a counterpart on PS5, can't make comparisons yet.

>VRS
Jeez I bet PS5 lacks this RDNA2 feature cuz it's RDNA1 /s

>Mesh Shader
Geometry Engine

>instant access to 100gigs
kill me

>SSG allows data to bypass the CPU entirely
And the PS5 IO does what in contrast to that? Giggle?

If you repost this on misterxmedia you'll have a lot of friends.
 
If you repost this on misterxmedia you'll have a lot of friends.

Let them dream.
 

geordiemp

Member
I will be watching.

AMD have filed a lot of patents recently for RDNA2 - a hell of a lot in recent weeks - and I expect we will hear more in the coming months from both sides.

I am sure MS and Sony will each give different names to the APIs that use the new silicon logic, so lots of confusion will reign on GAF, that's a given.

There are so many new AMD patents for RDNA2 that hopefully we see in consoles.....

I expect a lot of this at Hot Chips.
 
Last edited:

ZywyPL

Banned
GPU is where most of the new interesting features are, so that one gets my vote; the rest of the parts are... there, that's it.
 

SirTerry-T

Member
.... let's try not making this a comparison thread between Series X and PS5, because that's bound to turn into chaos, plus it's really not required to make some educated speculation about these Series X customizations!

A noble request OP, but one that looks like it's being ignored, as per usual in these sorts of threads.
Maybe you overestimated the reading comprehension of some forum users?
 

JonnyMP3

Member
If there is a higher cache rumour on the CPU and GPU... I'm wondering if that's going to actually be freely usable, or a bandaid for the split board design and the 10/6 GB split with their different speeds.

Caching data from the faster 10 GB pool as it waits on the slower 6 GB side - that's one of the ways that's been explained to deal with the speed difference of the two RAM pools.
A second way to deal with it is the same as the PS5's: using a custom memory controller unit that bridges all three parts and automatically offloads different jobs to different places.
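For reference, Microsoft's published figures for the split are 10 GB at 560 GB/s (GPU-optimal) and 6 GB at 336 GB/s (standard). A rough sketch of what a given traffic mix would see (the simple weighted-average model is my assumption, not documented memory-controller behavior):

```python
# Blended effective bandwidth across Series X's asymmetric memory pools.
FAST_GBPS = 560.0  # 10 GB GPU-optimal pool
SLOW_GBPS = 336.0  # 6 GB standard pool

def blended_bandwidth(fast_fraction: float) -> float:
    """Naive weighted average for a given fraction of traffic hitting the fast pool."""
    return fast_fraction * FAST_GBPS + (1 - fast_fraction) * SLOW_GBPS

print(blended_bandwidth(1.0))  # 560.0 -> all traffic stays in the fast pool
print(blended_bandwidth(0.8))  # ~515.2 -> 20% of accesses hit the slow pool
```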
 
There may be some truth to this if Sony is so desperate to secure exclusives. Their system can't have the inferior version of a game if there is no other version to compare it to.

That sounds like biased fanboy logic, if the first thing you can imagine is desperation.

There are more reasons than being desperate, especially in a situation where PS5 is almost the winner before the gen even starts (being a HUGELY more popular system outside of the USA).

Funny how a small teraflops difference is still turned into some kind of selling point, while in reality the majority won't even notice/understand it.

Inferior would be the Switch vs. PS4 Witcher, not two systems that are almost the same performance-wise.

People who turn "12 vs 10.2" into "OMG HUGE INFERIOR VERSION" just make fools of themselves.
 

GODbody

Member

I was waiting for this post. It was inevitable. I specifically wrote my post in a way that made no comparisons to the PS5 hardware yet here you are.

>DRX
An API that has a counterpart on PS5, can't make comparisons yet.
I misspelled DXR, which is a feature of DirectX 12 Ultimate that gives devs better tools to implement ray tracing. DXR apparently resulted in a collaboration between Microsoft and AMD to help improve the API and improve ray tracing on the RDNA 2 architecture. Remind me again what GPU the PS5 is using?



>VRS
Jeez I bet PS5 lacks this RDNA2 feature cuz it's RDNA1 /s

Let me know when Sony confirms the support for VRS or their games show evidence of VRS. PS5 does not support this.

>Mesh Shader
Geometry Engine

The Geometry Engine supports primitive shaders. Primitive shaders are similar to, but not as performant as, Mesh Shaders. The PS5 probably won't support Mesh Shaders either, as you need to specify the support in hardware.


>SSG allows data to bypass the CPU entirely
And the PS5 IO does what in contrast to that? Giggle?

The goal of the PS5's I/O architecture is to load data into memory as fast as it can, so it can be read from there by the CPU or GPU - which is what consoles have been doing for the last couple of generations - not directly supplying the CPU or GPU with data. Here's an illustration.


[PS5 I/O diagram]


Where's the data from the SSD going? (I'll give you a hint it's where the arrow is pointing to)

Now can we please return to the topic of this thread.
 
If anyone is curious, Dusk over on Reeee gave some elaboration on his comments: the game he was speaking of (as one example, though not the only one) when making the comment I quoted in the OP is RE Village. So, a rather big AAA game, and this coming from Dusk makes sense given they're more on the Capcom side of insider rumor things.

Apparently the game's running perfectly fine on Series X, but they're having issues stabilizing performance on PS5. This could simply be down to the engine (RE Engine - I'm not 100% sure how well optimized it is, though seeing the results it's produced so far it's a banger... just don't go for the RE/DMC art style for SF6, okay Capcom?), but it's an interesting observation nonetheless.

Is RE Village cross-gen? If so, the difference could be explained by it being a mostly single-threaded game that doesn't need SMT. The Series systems have an SMT-off mode that increases the CPU clocks, so a game like RE Village would benefit a lot from the higher clocks, and a faster CPU also benefits the GPU by feeding it instructions to crunch more quickly.
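For context, Microsoft's published Series X CPU clocks are 3.8 GHz with SMT disabled versus 3.66 GHz with SMT enabled, so the SMT-off mode buys a single-threaded game roughly 4% more frequency:

```python
# Frequency uplift from the Series X's SMT-off mode (published clock specs).
SMT_OFF_GHZ = 3.8   # clock with SMT disabled
SMT_ON_GHZ = 3.66   # clock with SMT enabled

uplift_pct = (SMT_OFF_GHZ / SMT_ON_GHZ - 1) * 100
print(round(uplift_pct, 1))  # 3.8 -> ~4% more clock for single-threaded workloads
```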

Or, this could be reflective of some customization in the GPU for cutting out stall times waiting on CPU instructions, which is what ExecuteIndirect was designed for. Could the mention of ARM cores from many months back be a customization on the GPU to further enhance ExecuteIndirect-style performance? Could RE Village be utilizing such hardware, which would explain what might be going on with its performance on Series X compared to PS5 currently?

It's rather interesting, nonetheless.

[PS5 I/O diagram]


Where's the data from the SSD going? (I'll give you a hint it's where the arrow is pointing to)

Now can we please return to the topic of this thread.

Yeah, I think it should be common knowledge by now that PS5 is using the I/O block to quickly write compressed/decompressed data into system memory, with the CPU, GPU, Tempest etc. then reading it from memory. So it basically shares the memory bus with the CPU, GPU, and Tempest.

That of course means a bit more bus contention, but the speed of the I/O block should resolve a lot of that on its own, since it won't need the bus for long stretches of time. Also, having more processor components share the bus often creates latency (in fact I think it was either you or someone else who pointed this out); dunno how that will affect PS5's setup, but everything being localized to the APU, and therefore very close together, will help cut down on a lot of that potential annoyance too.
 

CrysisFreak

Banned
I was waiting for this post. It was inevitable. I specifically wrote my post in a way that made no comparisons to the PS5 hardware yet here you are.

cringe.

>the thread is not about PS5
yeah that's why it starts with a quote about PS5 being "far less powerful" lmao

>PS5 does not support VRS
Typical Xbot lies, you just state this as fact because your own delusion convinced you of this.
But yeah VRS is a game-changer considering how much it helped Halo eh?

my sides.
 

geordiemp

Member
Let me know when Sony confirms the support for VRS or their games show evidence of VRS. PS5 does not support this.

The Geometry engine supports primitive shaders. Primitive shaders are similar to, but not as performant as Mesh Shaders. The PS5 probably won't support Mesh Shaders either as you need to specify the support in hardware.


Cerny has a patent on a new type of VRS tied to the Geometry Engine that will also work with foveated rendering and wide-angle views, saving 2.5x in efficiency (page 4 of the patent).

http://images2.freshpatents.com/pdf/US20180047129A1.pdf

Try again. You can compare mesh shaders and VRS against the Cerny patent if you want to discuss this properly instead of nonsense.

There are some other patents as well.
 

supernova8

Banned
I personally think that the gap between top tier Series X games and top tier PS5 games is going to be very stark.

I can believe that the system itself may be more powerful than PS5, but so far I haven't seen much to suggest Xbox Game Studios are capable of making amazingly gorgeous looking games. The last time I remember being wowed was probably the original Gears of War, but Cliffy B isn't involved anymore. Project Gotham Racing was another zinger, but the fucking idiots closed down Bizarre Creations.

So yeah.... it's all good having a more powerful console, but pretty meaningless if the developers are not pushing the envelope. If what we hear is true about Xbox being cross-gen for a year or so, it'll be a while before we see anything using remotely close to the XSX's full power.

I'm really begging to be proven wrong, only time will tell.
 

geordiemp

Member
Apparently the game's running perfectly fine on Series X but they're having issues on PS5 to stabilize performance...

Spreading FUD, I see? Sources?

The only games we have seen struggling are PC versions of XSX games like Halo.

Maybe the game has not been fully ported from PC to PS5, who knows; they are very different setups. There was also confusion over whether it could be the PS4 Pro version...

I thought you only wanted to discuss XSX and Hot Chips, but you could not help yourself?
 
I can believe that the system itself may be more powerful than PS5 but so far I haven't seen much to suggest Xbox Games studios are capable of making amazingly gorgeous looking games.

Flight Simulator 2020 is your friend ;)

Seriously, that game is gorgeous, and in terms of "next-gen" graphical gameplay it's probably at the top of my list from what I have seen. A close second would be a tie between the recent-ish Bright Memory Infinite demo and Ratchet & Clank: Rift Apart. After that I'd put stuff like GT7, The Medium, CrossfireX etc., and after that Kena, The Gunk, The Ascent, Scorn, Returnal, and all that kind of stuff. This is just for new/next-gen games where we've seen actual gameplay, mind you.

I don't think MS owns Asobo Studio, but they really need to consider doing so ASAP; the work they've done across multiple games has been nothing short of amazing. They would add a graphical powerhouse studio to go along with the likes of Ninja Theory, Turn 10, The Coalition etc. I'd much rather MS buy a studio like Asobo than spend $4 billion on WB Games, TBQH.
 

JonnyMP3

Member
The Geometry Engine supports primitive shaders. Primitive shaders are similar to, but not as performant as, Mesh Shaders. The PS5 probably won't support Mesh Shaders either, as you need to specify the support in hardware.
I'm done. I am fucking done!

A shading technique needs to have hardware specification??

The Geometry Engine is designed to specifically work with 'Primitive Shaders',
but you can use the GPU like a normal GPU... meaning it can use any shader the developers choose, including MESH! 🤦🏻‍♂️

I'd delete your account if I was you.
 

JonnyMP3

Member
The goal of the PS5's I/O architecture is to load data into memory as fast as it can so it can be read from there by the CPU or GPU. Which is what consoles have been doing for the last couple of generations. Not directly supplying the CPU or GPU with data. Here's an illustration.
[PS5 I/O diagram]


Where's the data from the SSD going? (I'll give you a hint it's where the arrow is pointing to)

Now can we please return to the topic of this thread.
This is the one that makes me scream.
Yeah, the processed data goes to system memory...
But look at the MAIN CUSTOM CHIP!
This is an APU, meaning it's an all-in-one integrated design. It doesn't need to shove stuff into RAM if it's already there for the CPU and GPU to use!
 

GODbody

Member
Cerny has a patent on new type of VRS with Geometry engine that will also work with foveated rendering and also wide angle views saving 2.5 x efficiency (Page 4 of patent).

http://images2.freshpatents.com/pdf/US20180047129A1.pdf

I've replied to you in another thread about that. That's a patent for a foveated rendering technique. Not a patent for VRS.

Ah I didn't notice the edit. Well according to the patent's description of FIG. 2D



It's elaborated later in the patent



This is an implementation of an improved version of foveated rendering not an implementation of VRS.


cringe.

>the thread is not about PS5
yeah that's why it starts with a quote about PS5 being "far less powerful" lmao

>PS5 does not support VRS
Typical Xbot lies, you just state this as fact because your own delusion convinced you of this.
But yeah VRS is a game-changer considering how much it helped Halo eh?

my sides.
I'm pretty console neutral #PCMasterRace, I just find the Xbox architecture more interesting than the PS5 architecture. That said, Sony has not confirmed VRS support, and none of the games they've shown have displayed the use of VRS. Like I said, please show me evidence of your claims. VRS produces a 10-15% performance increase on average when used.
 
Last edited:

GODbody

Member
I'm done. I am fucking done!

A shading technique needs to have hardware specification??

The Geometry Engine is designed to specifically work with 'Primitive Shaders'
But you can use the GPU like a normal GPU... Meaning it can use any shader it likes that the developers choose, including MESH! 🤦🏻‍♂️

I'd delete your account if I was you.
Unsurprisingly, the command processor will still need to know how to process and dispatch the Mesh Shader instructions. If they were universal we'd be seeing mesh shaders in current-gen games by your logic. Instead, the only GPUs that currently support Mesh Shaders are Nvidia Turing GPUs. You clearly have no idea what you're talking about.
 

Jerm411

Member

There’s a thread on here now about it...

He’s backtracked and shifted on so much it’s hard to keep track of lol.

Basically it comes down to him trying to say RE8 runs like shit on the PS5 compared to the Series X, that was the point when he was pushed and it got down to brass tacks.
 
This is the one that makes me scream.
Yeah the processed data goes to system memory...
But look at the MAIN CUSTOM CHIP!
This is an APU. Meaning it's an all-in-one integrated chip. It doesn't need to shove stuff into RAM if it's already there for the CPU and GPU to use!

Exactly, but when it DOES need to make those types of transfers the CPU, GPU etc. have to "wait their turn" on the memory bus while the I/O block does its reads and writes. It's bus contention, and it'll be similar (but not exactly) on the Series systems since the CPU, GPU etc. there need to share the memory bus as well.

That's one of the "drawbacks" of hUMA designs and SoCs integrating everything on a single chip consisting of multiple components drawing from the same block of off-chip memory. The more processor elements you have in that kind of set-up, the more bus contention you'll have and (potentially) the more need to prioritize which elements get bus access, and when.

The "ironic" benefit with the Series design here is, if the CPU is using a fractional resource to transfer data to/from RAM, then CPU-bound logic/operations can simultaneously still access the RAM. Their "level" of access may be reduced during such operations, but that would depend on the actual volume of data being transferred from storage to RAM, as one example.
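To make the contention point concrete, here's a toy Python model of a shared memory bus (all the numbers and requestor names are invented for illustration; they're not real Series X or PS5 figures):

```python
# Toy model of shared-memory-bus contention (illustrative numbers only,
# not real Series X / PS5 figures).
def effective_bandwidth(total_gbps, requests):
    """Split a shared bus's bandwidth among concurrent requestors.

    `requests` maps a requestor name to its demanded bandwidth (GB/s).
    If total demand exceeds the bus, each requestor is scaled down
    proportionally -- a crude stand-in for real arbitration logic.
    """
    demand = sum(requests.values())
    scale = min(1.0, total_gbps / demand) if demand else 1.0
    return {name: gbps * scale for name, gbps in requests.items()}

# GPU alone nearly saturates a hypothetical 448 GB/s bus...
print(effective_bandwidth(448, {"gpu": 400}))
# ...but add CPU traffic plus an I/O block writing decompressed data,
# and every requestor gets throttled proportionally.
print(effective_bandwidth(448, {"gpu": 400, "cpu": 48, "io": 22}))
```

The point of the sketch: contention doesn't come from any one block being "bad", it comes from total demand exceeding the bus, regardless of whether the I/O transfer is driven by a dedicated block or by the CPU itself.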

There’s a thread on here now about it...

He’s backtracked and shifted on so much it’s hard to keep track of lol.

Basically it comes down to him trying to say RE8 runs like shit on the PS5 compared to the Series X, that was the point when he was pushed and it got down to brass tacks.

Your interpretation, but that's not what actually happened. He softened his language on it and also gave some conditionals on what he figured could be the reason for the differences in current build performance between the two, but he never actually rescinded his statement on RE Village in particular running better currently on Series X. And he has still stuck to his contacts having told him what they've told him.

From what's happened since, I think we can agree that with RE Village at one point being a cross-gen game that got shifted to next-gen only, the builds might not have made the switch to optimize for a multi-threaded workflow. Since the PS5 does not offer an SMT Off mode, the game (and maybe RE Engine as a whole) could be running worse there until they optimize for multi-threaded next-gen performance. Also keep in mind PS5 devkits have to run in profile modes, so with the aforementioned being a factor, it's likely the team have chosen a profile mode that tries to prioritize the GPU but in turn tones down the CPU, adding to the current performance delta of that game on the two platforms.

If that happens to be the crux of the matter, I'd expect performance to even out between the two platforms once Capcom's made the engine pipeline adjustments. However, if they still need to use the PS5's profile mode settings, that could hinder some of the performance capability of the PS5 version even with their finer-tuned engine and code. Or it could just be indicative of a limitation with the variable-frequency setup in general; perhaps the dips once the power budget is exceeded are larger than Cerny alluded to after all 🤷‍♂️.

I will be watching.

AMD have filed a lot of patents recently for RDNA2 - a hell of a lot in recent weeks - and I expect we will hear more in the coming months from both sides.

I am sure both MS and Sony will give different names to the APIs that use the new silicon logic, so lots of confusion will reign on GAF, that's a given.

There are so many new AMD patents for RDNA2 that hopefully we'll see in the consoles...





I expect a lot of this at Hot Chips.


Me too, dude. In fact, I think we all are. The time of MS and Sony keeping us in the dark about some of the more intricate parts of their hardware is almost past its due, so I'm definitely looking forward to seeing what they present to us. Good finds btw.
 

JonnyMP3

Member
Unsurprisingly, the command processor will still need to know how to process and dispatch the Mesh Shader instructions. If they were universal we'd be seeing mesh shaders in current-gen games by your logic. You clearly have no idea what you're talking about.
Mesh shaders were invented 2 years ago. Games already in development were able to adopt the new shading techniques.
It's also just a SHADING PROGRAM, so the instructions telling the GPU to use meshlets instead of primitive shaders will come from the API.
 

geordiemp

Member
I've replied to you in another thread about that. That's a patent for a foveated rendering technique. Not a patent for VRS.





I'm pretty console neutral #PCMasterRace, I just find the Xbox architecture more interesting than the PS5 architecture. That said, Sony has not confirmed VRS support, and none of the games they've shown have displayed the use of VRS. Like I said, please show me evidence of your claims. VRS produces a 10-15% performance increase on average when used.

If you can see VRS, then it has not worked as intended or is poorly implemented; it is supposed to be unnoticeable in areas you would not be looking at. Do you know whether all those "native 4K" games at the PS5 show were actually native 4K? Nobody knows, as Sony have not said how they did it.

And foveated rendering VRS in sectors is VRS, just more complex. Try reading the patent.
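To illustrate what "foveated rendering in sectors" means in practice, here's a tiny Python sketch of a sector-style shading-rate map (the radii and rate values are made-up placeholders, not numbers from the Sony patent): tiles near the gaze point get full-rate shading, and the periphery gets progressively coarser rates.

```python
import math

# Hypothetical sector-based foveated shading-rate map. Screen coordinates
# are normalized to [0, 1]; the returned value is shading samples per
# pixel: 1.0 = full rate, 0.25 = one shade per 2x2 block, 0.0625 = 4x4.
def shading_rate(tile_center, gaze, fovea_radius=0.15, mid_radius=0.35):
    """Pick a per-tile shading rate from distance to the gaze point."""
    d = math.dist(tile_center, gaze)
    if d <= fovea_radius:
        return 1.0        # foveal sector: shade every pixel
    if d <= mid_radius:
        return 0.25       # mid sector: one shade per 2x2 block
    return 0.0625         # periphery: one shade per 4x4 block

gaze = (0.5, 0.5)
print(shading_rate((0.5, 0.55), gaze))   # near the fovea -> 1.0
print(shading_rate((0.9, 0.9), gaze))    # far periphery -> 0.0625
```

Done well, the viewer never notices the coarse sectors, which is exactly the "if you can see it, it isn't working" point above.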
 

JonnyMP3

Member
Exactly, but when it DOES need to make those types of transfers the CPU, GPU etc. have to "wait their turn" on the memory bus while the I/O block does its reads and writes. It's bus contention, and it'll be similar (but not exactly) on the Series systems since the CPU, GPU etc. there need to share the memory bus as well.

That's one of the "drawbacks" of hUMA designs and SoCs integrating everything on a single chip consisting of multiple components drawing from the same block of off-chip memory. The more processor elements you have in that kind of set-up, the more bus contention you'll have and (potentially) the more need to prioritize which elements get bus access, and when.

The "ironic" benefit with the Series design here is, if the CPU is using a fractional resource to transfer data to/from RAM, then CPU-bound logic/operations can simultaneously still access the RAM. Their "level" of access may be reduced during such operations, but that would depend on the actual volume of data being transferred from storage to RAM, as one example.
I'm not 100% sure if this is one of the custom things from Sony to help with that problem but Cerny did say that one of the I/O coprocessors in the custom unit was dedicated to bypassing traditional file I/O bottlenecks when reading from the SSD.
 

GODbody

Member
If you can see VRS, then it has not worked as intended or is poorly implemented; it is supposed to be unnoticeable in areas you would not be looking at. Do you know whether all those "native 4K" games at the PS5 show were actually native 4K? Nobody knows, as Sony have not said how they did it.

And foveated rendering VRS in sectors is VRS, just more complex. Try reading the patent.
You should not be able to see VRS or notice VRS at runtime. But evidence of VRS will be shown in freeze frames and under scrutiny. Like I said, produce evidence and I will change my opinion.

Foveated rendering is not equivalent to or evidence of VRS. We already went over this in the other thread that I linked. That patent specifically refers to a VR headset display technology, and the specific part of that patent you refer to is a technique for foveated rendering. You can use VRS to replicate foveated rendering, but you cannot use foveated rendering to replicate VRS. I read that entire VR display patent you linked, and nowhere does it describe the decoupling of resolution from rasterization and shading. Here's a patent for VRS from AMD. Let me know which parts even sound remotely similar to the patent you've linked.
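To show what "decoupling resolution from shading" means, here's a minimal software mock of the VRS concept (this is a conceptual sketch, not any real graphics API): the framebuffer stays full resolution, but the pixel shader runs once per NxN block and its result is broadcast to the covered pixels.

```python
# Minimal software mock of the VRS concept: full-resolution framebuffer,
# coarse-rate shading. Not how any real GPU API exposes it.
def shade_with_vrs(width, height, block, shader):
    """Return a full-res framebuffer plus the number of shader invocations."""
    fb = [[None] * width for _ in range(height)]
    invocations = 0
    for by in range(0, height, block):
        for bx in range(0, width, block):
            color = shader(bx, by)          # one evaluation per block
            invocations += 1
            for y in range(by, min(by + block, height)):
                for x in range(bx, min(bx + block, width)):
                    fb[y][x] = color        # broadcast to all covered pixels
    return fb, invocations

fb, n = shade_with_vrs(8, 8, 2, lambda x, y: (x, y))
print(n)   # 16 invocations instead of 64: a 4x shading saving at 2x2 rate
```

Note the geometry/rasterization side is untouched here; only the shading frequency drops, which is the decoupling the AMD patent is about.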
 

Panajev2001a

GAF's Pleasant Genius
Exactly, but when it DOES need to make those types of transfers the CPU, GPU etc. have to "wait their turn" on the memory bus while the I/O block does its reads and writes. It's bus contention, and it'll be similar (but not exactly) on the Series systems since the CPU, GPU etc. there need to share the memory bus as well.

That's one of the "drawbacks" of hUMA designs and SoCs integrating everything on a single chip consisting of multiple components drawing from the same block of off-chip memory. The more processor elements you have in that kind of set-up, the more bus contention you'll have and (potentially) the more need to prioritize which elements get bus access, and when.

The "ironic" benefit with the Series design here is, if the CPU is using a fractional resource to transfer data to/from RAM, then CPU-bound logic/operations can simultaneously still access the RAM. Their "level" of access may be reduced during such operations but that would depend on the actual volume of data being transferred from storage to RAM.

I am not sure why we are blowing things out of proportion, talking about the data contention between CPU, GPU, and SSD controller as if doing transfers in software and taxing the CPU were actually a benefit (as if USB requiring the CPU to transfer data had ever been seen as a performance opportunity). The CPU and GPU often need data from the SSD itself, so they are waiting on data they could not otherwise access; and the more complex the processing needed, the more the local cache allows the CPU to keep working while another unit transfers data to RAM directly.

What even MS has repeatedly said is that transferring the volume of data these new consoles are meant to process is something that is desirable to offload from the CPU, while you are arguing that system-level arbitration and I/O DMA is the least performant option (which is not what anyone benchmarking or using, say, FireWire vs USB ever said for high-bandwidth scenarios like video streaming). A CPU manually copying data from the SSD to RAM would still "block the bus"/contend with other memory accesses made by itself for other scenarios, or by the GPU.
 
I am not sure why we are blowing things out of proportion, talking about the data contention between CPU, GPU, and SSD controller as if doing transfers in software and taxing the CPU were actually a benefit (as if USB requiring the CPU to transfer data had ever been seen as a performance opportunity). The CPU and GPU often need data from the SSD itself, so they are waiting on data they could not otherwise access; and the more complex the processing needed, the more the local cache allows the CPU to keep working while another unit transfers data to RAM directly.

What even MS has repeatedly said is that transferring the volume of data these new consoles are meant to process is something that is desirable to offload from the CPU, while you are arguing that system-level arbitration and I/O DMA is the least performant option (which is not what anyone benchmarking or using, say, FireWire vs USB ever said for high-bandwidth scenarios like video streaming). A CPU manually copying data from the SSD to RAM would still "block the bus"/contend with other memory accesses made by itself for other scenarios, or by the GPU.

Well I mean, if you read the post, you could clearly see I put quotation marks around "drawback" and "advantage" for a reason; they are not significant drawbacks nor absolutely superior advantages, BUT they do have some incidental, marginal drawbacks and advantages which may be exacerbated or minimized when looking at the fuller architecture (which we don't have all details for on either system).

Your example of the CPU and GPU needing data from the SSD "often" is too loose; certain games, even large games, don't necessarily require a ton of unique data being constantly shuffled in and out of RAM at all times. Not all data processing methods need unique raw data being consistently transferred in/out of RAM either. For example, if a game knows it can generate a one-time pass of new data from multiple pieces of data already in RAM, and expects to do this somewhat frequently, then provided the construction of the new data isn't a big performance penalty, it can simply generate that new data at runtime when actually needed. That spares RAM from keeping the resultant data permanently resident, and spares storage load calls from reading the data off the SSD (cutting down on accesses to a lower "tier" of memory, given the performance delta between NAND and GDDR6, as just one example).
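The derive-at-runtime idea above boils down to a lazy cache. Here's a small Python sketch (all names like `DerivedAssetCache` and `"nav_mesh"` are made up for illustration, not from any console SDK):

```python
# Sketch of "derive at runtime instead of streaming": if an asset can be
# rebuilt cheaply from data already in RAM, compute it on demand rather
# than keeping it resident or re-reading it from the SSD.
class DerivedAssetCache:
    def __init__(self):
        self._cache = {}
        self.builds = 0   # how many times we actually had to construct data

    def get(self, key, build_fn):
        """Return the derived asset, building it only on first use."""
        if key not in self._cache:
            self._cache[key] = build_fn()
            self.builds += 1
        return self._cache[key]

    def evict(self, key):
        """Drop a derived asset when RAM is needed; it can be rebuilt later."""
        self._cache.pop(key, None)

cache = DerivedAssetCache()
cache.get("nav_mesh", lambda: "built from geometry already in RAM")
cache.get("nav_mesh", lambda: "never called again")  # served from cache
print(cache.builds)   # 1 -- no SSD read, no permanently resident copy
```

The eviction path is the interesting part: because the asset is reproducible from RAM-resident inputs, freeing it costs only a rebuild, not an SSD round trip.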

I am not saying that I/O DMA etc. is the least performant option; never have and never will. All I said was that there is a negligible "drawback" (really, just more of a small quirk) in having 100% of that done through a separate processor block which, on a hUMA design where everything shares the bus, adds a tad to the bus contention. Conversely, there is a slight/marginal "side benefit" in letting the CPU still handle a fraction of that transfer process in moving data from storage to RAM, and I even said that benefit would be reduced, given that type of combined transaction (data I/O transfer & CPU-bound game logic RAM access) would be concurrent. But you must've read past that part or didn't see it for what it was.

I'm not 100% sure if this is one of the custom things from Sony to help with that problem but Cerny did say that one of the I/O coprocessors in the custom unit was dedicated to bypassing traditional file I/O bottlenecks when reading from the SSD.

Yes they did mention that. AFAIK there are three processor "cores" in the I/O block: The two I/O co-processors and the CCE (Cache Coherency Engine).

I think where people have things twisted is an assumption that MS have no equivalent to this; a few weeks ago I linked to a Flashmaps patent and paper some people on B3D came across, and that also explicitly mentions implementations (very likely in the Series systems) which bypass traditional file I/O bottlenecks as well.

Actually, that stuff mentioned A LOT of things, and I'm curious to see what aspects of it are mentioned in the Hot Chips presentation (assuming it's being implemented in the Series systems, which I'd consider a high probability).
 

oldergamer

Member
If XSX is so superior then why did everything at the PS5 reveal look a generation or more ahead of the Xbox shows?

Also why was everything at the Xbox shows running on PC rigs as if they had something to hide?

If the XBox is capable of so much, when will we get to see these advantages?
Nothing looked a generation ahead with PS5. Most of the games Sony showed looked like PS4 titles. It's not even close. I didn't see any games that looked remotely next-gen revealed yet from MS or Sony. At least not yet. Games in 4K don't impress me as I've already been playing some games in 4K on my One X.
 

GODbody

Member
Mesh shaders were invented 2 years ago. Games already in development were able to adopt the new shading techniques.
It's also just a SHADING PROGRAM, so the instructions telling the GPU to use meshlets instead of primitive shaders will come from the API.

Mesh Shaders still require the hardware to support them to use their full capabilities. Not all GPUs support Mesh Shaders; in fact, the only GPUs that currently support them are the RTX 20xx series of cards. Please provide evidence to the contrary if you're going to continue making unfounded claims. Here's an article on Mesh Shaders:

Alternatively, vertex cache optimization algorithms can probably be modified to produce meshlets directly. For GPUs without mesh shader support, you can concatenate all the meshlet vertex buffers together, and rapidly generate a traditional index buffer by offsetting and concatenating all the meshlet index buffers. It’s pretty easy to go back and forth.
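The fallback path that article describes (concatenate the meshlet vertex buffers, then offset-and-concatenate the meshlet index buffers) is simple enough to sketch in a few lines of Python; the data here is invented for illustration:

```python
# Sketch of the non-mesh-shader fallback from the quoted article:
# flatten meshlets back into one classic vertex buffer + index buffer.
def meshlets_to_classic_buffers(meshlets):
    """Each meshlet is (vertices, local_indices); local indices are
    rebased by the running vertex count as buffers are concatenated."""
    vertex_buffer, index_buffer = [], []
    for vertices, local_indices in meshlets:
        base = len(vertex_buffer)               # offset for this meshlet
        vertex_buffer.extend(vertices)
        index_buffer.extend(base + i for i in local_indices)
    return vertex_buffer, index_buffer

# Two tiny meshlets sharing no vertices:
meshlets = [
    ([(0, 0), (1, 0), (0, 1)], [0, 1, 2]),
    ([(1, 1), (2, 1), (1, 2)], [0, 1, 2]),
]
vb, ib = meshlets_to_classic_buffers(meshlets)
print(ib)   # [0, 1, 2, 3, 4, 5] -- second meshlet's indices rebased by 3
```

That's why "going back and forth" is cheap: the meshlet representation loses nothing a traditional index buffer needs.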

This is the one that makes me scream.
Yeah the processed data goes to system memory...
But look at the MAIN CUSTOM CHIP!
This is an APU. Meaning it's an All In One integrated board. It doesn't need to shove stuff into RAM if it's already there for the CPU and GPU to use!

Typically CPUs will trigger the memory controller to move data into RAM first, then they will move whatever data they need into on-chip cache to use. That pass-through in the image is just illustrating the data flow from the SSD into memory - the data is not actually passing through the chip.
 

Elog

Member
Really looking forward to the Monday presentation. I am mostly looking for how they have worked on their I/O and cache systems (both management and sizes). There's hardly any information on the XSX when it comes to this, which could mean anything.

I'm also a bit interested in their cooling system, even though they should probably get away with a fairly standard one given the frequencies involved.
 

Elog

Member
Mesh Shaders still require the hardware to support them to use their full capabilities. Not all GPUs support Mesh Shaders; in fact, the only GPUs that currently support them are the RTX 20xx series of cards. Please provide evidence to the contrary if you're going to continue making unfounded claims. Here's an article on Mesh Shaders

We all need to be careful when discussing shaders. Shading something is easy; prioritising what to shade is the difficult part - the various shaders differ in how they prioritise which geometries, which sections of the screen, and how dynamic the shading is.

So far the new consoles seem to take quite different approaches in how to prioritise - very hard to assess what is good/bad with the different solutions.
 