
Exploring The Complications Of Series X's Memory Configuration & Performance

How much are YOU willing to say recent Xbox multiplat perf is affected by the memory setup?

  • Very

    Votes: 18 9.8%
  • Mostly

    Votes: 32 17.5%
  • So/so

    Votes: 40 21.9%
  • Not really

    Votes: 41 22.4%
  • None

    Votes: 52 28.4%

  • Total voters
    183
  • Poll closed.
Status
Not open for further replies.

yamaci17

Member
Thanks, I appreciate your clarification. So from your description, those textures are arranged into 64 KB portions on the storage itself. The texture is still stored as a contiguous asset but since it's in a format SFS can understand, the system can access the portions needed.

I can see the advantages, for sure. But there are probably some disadvantages as well. For example, any access to data in storage still incurs orders of magnitude more latency than access to data in RAM, which in turn incurs orders of magnitude more latency than data already in the caches. If you're going to be using SFS for parts of a texture that effectively result in the whole texture being used in a given scene (at some point), you're probably better off storing the whole texture in RAM anyway, and that's where the advantage of SFS would end (for that particular texture).

If that repeats for a group of unique textures, then you're just better off keeping that group of textures in RAM anyway, rather than incurring the access latency penalty of the SSD. I can also see how, at least in theory, the concept of SFS might be at odds with how artists actually tend to make their texture assets; they aren't making them as 64 KB blocks, but at 4K-quality (or larger) sizes. So the workload on the artist's end stays the same.

Meanwhile, managing the feedback stack on the programmer's end probably involves a bit of work, and that may explain why SFS isn't being readily used in a lot of games yet. Though I'm sure at least a few games have to be using it, whether on Xbox or PC. I'm still not convinced it totally resolves some of the concerns I listed in the OP though, for reasons I mention in this specific post.



I'm pretty sure Forspoken is utilizing it, at least on PC, since it's also using DirectStorage there. But really, if no game's using these, that's no one's fault but Microsoft's. They should have had their 1P games making use of these things instead of waiting and hoping for 3P to do so.



Don't think this is actually the case. When devs make games for, say, PS5, they have to use Sony's APIs, because Sony doesn't use Microsoft's APIs or their SDKs. The high-level language that devs might write their applications in runs on all the different platforms, that's true, but for closed boxes like consoles, you're typically using specific APIs and tools designed for that box by the platform holder.

Certain things like commercial viability of the platform, versus the difficulty of the APIs and tools in extracting optimal performance in a reasonable time frame, can impact the rate of adoption, but this is Xbox we're talking about here. It's not some little add-on like the Sega CD or some obscure dead end like the Jaguar. The brand still sells better than a lot of other consoles; in fact, only Sony and Nintendo consoles have generally outsold Microsoft's. When you consider all of the platform holders that have been in the industry (Sega, NEC, Matsushita/Panasonic/3DO, Atari, FM Towns, Philips, Apple, etc.), the gap between even the best-selling consoles among some of them and the lower-selling Microsoft ones is pretty big.

If something's not being used, chances are it's because the opportunity cost isn't worth the headaches involved in getting optimal performance. What you're suggesting, I think, is misguided, because having APIs and tools that scale to a wide net of configurations actually seems to make it harder to settle on very specific approaches that target one particular hardware spec and optimize for it to get the best performance. Which is something Digital Foundry were touching on in their latest podcast.



Maybe it's too high, maybe not. It's not just about the CPU tapping the RAM for data to calculate on the game's end. If the CPU needs to help along transfers of data from storage to RAM or vice versa, that's also time spent occupying the bus. While it's doing so, the GPU won't be able to access RAM contents.

It's a bit more of a sticking point for Series systems because I don't think they have off-chip silicon with DMA to the RAM the way the PS5 does. If they did, I think we'd have learned of it at Hot Chips a few years ago. So at the very least, the CPU in Series consoles is more involved in that process.

Even so, maybe you're right that my example overstates the CPU's and audio's time on the bus, at least in terms of accessing game data to be processed (and only processed, not moved between RAM and storage). But my main intent was to look into situations where Series X would need to access more than 10 GB of data; in that case, any data in the slower pool would drag down effective bandwidth in proportion to the amount of time data in that slower pool is being accessed.

The thing is, that is going to vary wildly from game to game, and even situation to situation, moment to moment. I'm just exploring a possibility that could become more regular as the generation goes on. The worst case you bring up assumes the CPU is only occupying the bus for maybe 7.5% of a given second. But then say there's a situation where a game needs 11 GB of GPU-bound data, and the last GB needs to be accessed, say, 33% of the time. That still creates a situation where total bandwidth is dragged down, and in that case by a lot more than the CPU alone would drag it down.

You probably still get a total bandwidth higher than PS5's (I got 485.2 GB/s), but how much does that counterbalance the PS5 needing to access RAM less often because of the cache scrubbers? Or it not needing the CPU to do as much heavy lifting for cache coherency, because it has dedicated silicon that offloads the requirement? Stuff like this, I think, should be considered as well.
I wonder how this so-called SFS would interact with ray tracing, too.
 

sinnergy

Member
For now, the engines favor fast speeds over MS's set-up, but the Series consoles have a forward-looking design. As we still get cross-generation games, none are properly built with more parallel execution in mind. Until engines are adjusted, we'll keep seeing this.
 

Panajev2001a

GAF's Pleasant Genius
Thanks, I appreciate your clarification. So from your description, those textures are arranged into 64 KB portions on the storage itself. The texture is still stored as a contiguous asset but since it's in a format SFS can understand, the system can access the portions needed.

I can see the advantages, for sure. But there are probably some disadvantages as well. For example, any access to data in storage still incurs orders of magnitude more latency than access to data in RAM, which in turn incurs orders of magnitude more latency than data already in the caches.
To be fair, though, this is a problem you also get with any virtual texturing solution, and the way SFS chunks textures into memory pages is not really different from sparse textures / partially resident textures, which AMD GPUs since the early GCN models have been able to do in hardware, and which devs would do in software before that (as early as the PS2 days, where individual mip levels or portions of a mip level were uploaded to the GS eDRAM scratchpad from main RAM as the system rendered frames).
Fundamentally, what SFS does, at a small cost, is make the task of prefetching texture pages into the GPU texture caches and VRAM easier: it keeps track of which portions of which textures are located in local memory and which ones are not (residency), and it gives you data from the GPU on which texture pages you are actually using to render the scene, which can be used to predict which pages to tell the GPU to load next.

Still, texture streaming has some problems that everyone needs to work around on all systems, as you were saying. I seriously doubt all the Xbox devs that have not jumped fully onto SFS are still stuck loading whole textures into memory without using the much more established virtual texturing approaches at all. Someone saying this has a big bridge to sell ya hehe.
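
To make the residency idea concrete, here is a toy sketch in plain Python (emphatically not the D3D12 Sampler Feedback API or any console SDK, just the bookkeeping SFS accelerates): a per-texture residency map fed by GPU feedback, driving a tile request queue.

```python
# Toy sketch of feedback-driven texture streaming. NOT the D3D12 Sampler
# Feedback API; just the residency bookkeeping the hardware helps with.
from collections import deque

TILE_BYTES = 64 * 1024  # 64 KB tiles, as on the Series consoles

class StreamedTexture:
    def __init__(self, num_tiles):
        self.resident = [False] * num_tiles  # which tiles are in memory
        self.requests = deque()              # tiles sampled but not resident

    def on_feedback(self, tile_index):
        # Called with the tile indices the GPU reports it actually sampled.
        if not self.resident[tile_index]:
            self.requests.append(tile_index)

    def pump_io(self, budget_bytes):
        # Each frame, load as many missing tiles as the I/O budget allows.
        while self.requests and budget_bytes >= TILE_BYTES:
            tile = self.requests.popleft()
            # ...read the 64 KB tile from the SSD here...
            self.resident[tile] = True
            budget_bytes -= TILE_BYTES

tex = StreamedTexture(num_tiles=1024)
tex.on_feedback(42)          # GPU sampled tile 42 and missed
tex.pump_io(2 * TILE_BYTES)  # stream it in within this frame's budget
print(tex.resident[42])      # True
```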

Funny to see Riky still selling Mesh Shaders vs Primitive Shaders and the usage of each after the AMD interview that basically clarified it is the same HW underneath and Mesh Shaders are essentially the MS-specific API to use the underlying HW 😉. See: https://www.neogaf.com/threads/amd-primitive-shaders-vs-mesh-shaders.1653359/
 
Last edited:
Digital Foundry actually spoke with devs about this performance advantage and John said devs themselves were baffled by it. Some devs think it's the DirectX overhead, but there was no consensus.

Alex speculated that everyone is still using old DXR APIs. He also speculated that because the PS5 uses new RT APIs, devs had to do MORE work to get it up and running and ended up optimizing those PS5 APIs more compared to the DXR APIs. Odd reasoning, considering devs have had 5 years of experience with RT APIs on PC, and it also doesn't reconcile with the fact that multiplatform game dev is done on PC first and then ported to consoles.

What's important is what devs have told them. Which is that they don't fucking know. PS5 literally has some secret sauce in it that is making it perform better than its specs. Memory management was not brought up. In fact, John said no devs are complaining about the Xbox or the PS5.

Sometimes it just boils down to Mark Cerny am God.

Timestamped:

A rising tide lifts all boats
 

ryzen1

Member
Overall, the split memory is designed to maximize performance by allocating resources where they are needed most.
While it may impact the console's ability to handle certain tasks, it allows the console to reach a good graphics output.
 

Elog

Member
(GIF: Robin Williams, "What year is it?")


This was a topic that was discussed to death back in the day, and the answer is the same: XSX has a raw computational advantage over the PS5, and the PS5 has an I/O advantage over the XSX. The RAM configuration is part of that I/O difference. Net-net, the two machines are very close though.

The Cliffs Notes version is that games that push comparatively more data will run better on the PS5, while those that don't will run better on the XSX.
 

azertydu91

Hard to Kill
Jason Schreier was talking to developers before launch, and he told us how close these consoles were in power.

Some people listened, while others chose to put all of their faith in stuff like VRS, TFLOPs, hardware-accelerated ray tracing, mesh shaders, etc.

There was also Matt from REEE (honestly, he is probably the best person there); he is very reliable and kept telling people that these consoles are closer than they've ever been... But some people (no matter the side) would rather fall for PR and their own Dunning-Kruger experience.
 

Lysandros

Member
You probably still get a total bandwidth higher than PS5's (I got 485.2 GB/s), but how much does that counterbalance the PS5 needing to access RAM less often because of the cache scrubbers? Or it not needing the CPU to do as much heavy lifting for cache coherency, because it has dedicated silicon that offloads the requirement? Stuff like this, I think, should be considered as well.
In addition to scrubbers, I think we should also factor in the fact that within the PS5 GPU, each CU has access to ~45% more L1 cache at higher bandwidth per shader array. This should reduce RAM accesses by nature, at least for compute processes, while also increasing compute efficiency/per-CU saturation.

Edit: Thinking about the cache hierarchy again, there is of course the L2 cache along the way before RAM, which offers a (more moderate) ~15% additional amount per CU on PS5; this should also play a role in the matter of costlier RAM accesses.
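
To put rough numbers on that (back-of-envelope only, using the commonly cited figures of 128 KB of graphics L1 per shader array, four shader arrays on both consoles, and 4 MB vs 5 MB of L2; treat these as assumptions rather than official documentation):

```python
# Cache-per-CU arithmetic behind the ~45% and ~15% figures above.
ps5_cus, xsx_cus = 36, 52
l1_per_sa_kb, shader_arrays = 128, 4   # assumed RDNA2 graphics L1 layout

ps5_l1_per_cu = l1_per_sa_kb / (ps5_cus / shader_arrays)   # ~14.2 KB/CU
xsx_l1_per_cu = l1_per_sa_kb / (xsx_cus / shader_arrays)   # ~9.8 KB/CU
print(ps5_l1_per_cu / xsx_l1_per_cu)   # ~1.44 -> ~44-45% more L1 per CU

ps5_l2_per_cu = 4 * 1024 / ps5_cus     # 4 MB L2 -> ~113.8 KB/CU
xsx_l2_per_cu = 5 * 1024 / xsx_cus     # 5 MB L2 -> ~98.5 KB/CU
print(ps5_l2_per_cu / xsx_l2_per_cu)   # ~1.16 -> ~15% more L2 per CU
```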
 
Last edited:
Thanks, I appreciate your clarification. So from your description, those textures are arranged into 64 KB portions on the storage itself. The texture is still stored as a contiguous asset but since it's in a format SFS can understand, the system can access the portions needed.

I can see the advantages, for sure. But there are probably some disadvantages as well. For example, any access to data in storage still incurs orders of magnitude more latency than access to data in RAM, which in turn incurs orders of magnitude more latency than data already in the caches. If you're going to be using SFS for parts of a texture that effectively result in the whole texture being used in a given scene (at some point), you're probably better off storing the whole texture in RAM anyway, and that's where the advantage of SFS would end (for that particular texture).

If that repeats for a group of unique textures, then you're just better off keeping that group of textures in RAM anyway, rather than incurring the access latency penalty of the SSD. I can also see how, at least in theory, the concept of SFS might be at odds with how artists actually tend to make their texture assets; they aren't making them as 64 KB blocks, but at 4K-quality (or larger) sizes. So the workload on the artist's end stays the same.

Meanwhile, managing the feedback stack on the programmer's end probably involves a bit of work, and that may explain why SFS isn't being readily used in a lot of games yet. Though I'm sure at least a few games have to be using it, whether on Xbox or PC. I'm still not convinced it totally resolves some of the concerns I listed in the OP though, for reasons I mention in this specific post.



I'm pretty sure Forspoken is utilizing it, at least on PC, since it's also using DirectStorage there. But really, if no game's using these, that's no one's fault but Microsoft's. They should have had their 1P games making use of these things instead of waiting and hoping for 3P to do so.



Don't think this is actually the case. When devs make games for, say, PS5, they have to use Sony's APIs, because Sony doesn't use Microsoft's APIs or their SDKs. The high-level language that devs might write their applications in runs on all the different platforms, that's true, but for closed boxes like consoles, you're typically using specific APIs and tools designed for that box by the platform holder.

Certain things like commercial viability of the platform, versus the difficulty of the APIs and tools in extracting optimal performance in a reasonable time frame, can impact the rate of adoption, but this is Xbox we're talking about here. It's not some little add-on like the Sega CD or some obscure dead end like the Jaguar. The brand still sells better than a lot of other consoles; in fact, only Sony and Nintendo consoles have generally outsold Microsoft's. When you consider all of the platform holders that have been in the industry (Sega, NEC, Matsushita/Panasonic/3DO, Atari, FM Towns, Philips, Apple, etc.), the gap between even the best-selling consoles among some of them and the lower-selling Microsoft ones is pretty big.

If something's not being used, chances are it's because the opportunity cost isn't worth the headaches involved in getting optimal performance. What you're suggesting, I think, is misguided, because having APIs and tools that scale to a wide net of configurations actually seems to make it harder to settle on very specific approaches that target one particular hardware spec and optimize for it to get the best performance. Which is something Digital Foundry were touching on in their latest podcast.



Maybe it's too high, maybe not. It's not just about the CPU tapping the RAM for data to calculate on the game's end. If the CPU needs to help along transfers of data from storage to RAM or vice versa, that's also time spent occupying the bus. While it's doing so, the GPU won't be able to access RAM contents.

It's a bit more of a sticking point for Series systems because I don't think they have off-chip silicon with DMA to the RAM the way the PS5 does. If they did, I think we'd have learned of it at Hot Chips a few years ago. So at the very least, the CPU in Series consoles is more involved in that process.

Even so, maybe you're right that my example overstates the CPU's and audio's time on the bus, at least in terms of accessing game data to be processed (and only processed, not moved between RAM and storage). But my main intent was to look into situations where Series X would need to access more than 10 GB of data; in that case, any data in the slower pool would drag down effective bandwidth in proportion to the amount of time data in that slower pool is being accessed.

The thing is, that is going to vary wildly from game to game, and even situation to situation, moment to moment. I'm just exploring a possibility that could become more regular as the generation goes on. The worst case you bring up assumes the CPU is only occupying the bus for maybe 7.5% of a given second. But then say there's a situation where a game needs 11 GB of GPU-bound data, and the last GB needs to be accessed, say, 33% of the time. That still creates a situation where total bandwidth is dragged down, and in that case by a lot more than the CPU alone would drag it down.

You probably still get a total bandwidth higher than PS5's (I got 485.2 GB/s), but how much does that counterbalance the PS5 needing to access RAM less often because of the cache scrubbers? Or it not needing the CPU to do as much heavy lifting for cache coherency, because it has dedicated silicon that offloads the requirement? Stuff like this, I think, should be considered as well.
The worst-case scenario I am talking about is the physical limit of what the CPU can actually consume from main RAM (if accessing the slowest pool, obviously). It can't use more of that bandwidth because its bus is limited, so it can't take 33% of main bandwidth even if a dev were stupid enough to program the CPU to do so.

I don't remember the numbers, just the average bandwidth lost if all CPU bandwidth was used on the main RAM: about 40 GB/s.
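
For anyone following along, the time-weighted mixing the OP described is easy to write down. A toy model (the pool bandwidths are the public figures; the fractions are made up for illustration, not measurements):

```python
# Effective XSX bandwidth as a time-weighted average of the two pools:
# the bus serves one access at a time, so time in the slow pool dilutes it.
fast, slow = 560.0, 336.0  # GB/s: the 10 GB and 6 GB pools

def effective_bw(slow_fraction):
    # slow_fraction = share of bus time spent on the 6 GB pool
    return (1 - slow_fraction) * fast + slow_fraction * slow

print(effective_bw(0.0))    # 560.0 -> everything stays in the fast pool
print(effective_bw(1 / 3))  # ~485.3 -> roughly the OP's 485.2 GB/s case
print(effective_bw(1.0))    # 336.0 -> worst case, slow pool only
```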
 

Riky

$MSFT
Funny to see Riky still selling Mesh Shaders vs Primitive Shaders and the usage of each after the AMD interview that basically clarified it is the same HW underneath and Mesh Shaders are essentially the MS-specific API to use the underlying HW 😉. See: https://www.neogaf.com/threads/amd-primitive-shaders-vs-mesh-shaders.1653359/
Your paranoia is funnier; I haven't even mentioned Primitive Shaders.
Mesh Shaders have a shorter, more granular pipeline, so they're better for devs and are now the industry standard. That probably won't make any huge difference to performance, but it's there, and this troll thread is about Xbox hardware, so it's a valid point. Same as when some clowns claimed hardware-assisted Tier 2 VRS was just a DX12U term, until Doom Eternal came along.
Full RDNA2 is one thing, but Series consoles go beyond even that: larger grouping than RDNA2 for Mesh Shaders, SFS filters, and core adjustments for ML.
Bespoke, forward-looking hardware, as I said.
 

Pedro Motta

Member
Your paranoia is funnier; I haven't even mentioned Primitive Shaders.
Mesh Shaders have a shorter, more granular pipeline, so they're better for devs and are now the industry standard. That probably won't make any huge difference to performance, but it's there, and this troll thread is about Xbox hardware, so it's a valid point. Same as when some clowns claimed hardware-assisted Tier 2 VRS was just a DX12U term, until Doom Eternal came along.
Full RDNA2 is one thing, but Series consoles go beyond even that: larger grouping than RDNA2 for Mesh Shaders, SFS filters, and core adjustments for ML.
Bespoke, forward-looking hardware, as I said.
How is it bespoke if it's 100% AMD's approach?
 

Panajev2001a

GAF's Pleasant Genius
Your paranoia is funnier; I haven't even mentioned Primitive Shaders.
Mesh Shaders have a shorter, more granular pipeline, so they're better for devs and are now the industry standard. That probably won't make any huge difference to performance, but it's there, and this troll thread is about Xbox hardware, so it's a valid point. Same as when some clowns claimed hardware-assisted Tier 2 VRS was just a DX12U term, until Doom Eternal came along.
Full RDNA2 is one thing, but Series consoles go beyond even that: larger grouping than RDNA2 for Mesh Shaders, SFS filters, and core adjustments for ML.
Bespoke, forward-looking hardware, as I said.
Whoa, even much beyond that, while PS5 is RDNA 1.5, right? The "tools are coming" narrative will not die, three years later, hehe. Whatever man…
 

Riky

$MSFT
Whoa, even much beyond that, while PS5 is RDNA 1.5, right? The "tools are coming" narrative will not die, three years later, hehe. Whatever man…
I never said anything about RDNA 1.5, that's your paranoia once again. AMD decide what RDNA2 is, and they say all three consoles are, so that's that.
Didn't mention tools either; that's cheap from someone I thought was intelligent, which is why I still bother responding to you.
The hardware features haven't been used yet, apart from fitting Tier 2 VRS into old last-gen engines. Forza will be the first with at least plumbed-in Tier 2 VRS, and guess what, we're getting 4K 60 with in-game RT reflections. So you can be snarky and call it "tools"; I just see it as using all the features of the hardware.
 

Lysandros

Member
Who would’ve thought that the Series X “full RDNA2 feature set” had a Cell-like architecture…
Yep, I consider it the true spiritual successor to the PS2/PS3, with an architecture based partly on the Emotion Engine/VUs, partly on CELL/SPUs, with even some Saturn-flavored processors thrown onto the board without any context. The Tempest Engine is a true joke compared to XSX's SIMD capabilities.
 
Last edited:
I never said anything about RDNA 1.5, that's your paranoia once again. AMD decide what RDNA2 is, and they say all three consoles are, so that's that.
Didn't mention tools either; that's cheap from someone I thought was intelligent, which is why I still bother responding to you.
The hardware features haven't been used yet, apart from fitting Tier 2 VRS into old last-gen engines. Forza will be the first with at least plumbed-in Tier 2 VRS, and guess what, we're getting 4K 60 with in-game RT reflections. So you can be snarky and call it "tools"; I just see it as using all the features of the hardware.
Both are RDNA2, but MS is more forward-thinking? Your arguments don't make sense.
 

MikeM

Member
I never said anything about RDNA 1.5, that's your paranoia once again. AMD decide what RDNA2 is, and they say all three consoles are, so that's that.
Didn't mention tools either; that's cheap from someone I thought was intelligent, which is why I still bother responding to you.
The hardware features haven't been used yet, apart from fitting Tier 2 VRS into old last-gen engines. Forza will be the first with at least plumbed-in Tier 2 VRS, and guess what, we're getting 4K 60 with in-game RT reflections. So you can be snarky and call it "tools"; I just see it as using all the features of the hardware.
Meh, I'll believe it when I see it. Wait for launch, my man.
 

Hoddi

Member
Thanks, I appreciate your clarification. So from your description, those textures are arranged into 64 KB portions on the storage itself. The texture is still stored as a contiguous asset but since it's in a format SFS can understand, the system can access the portions needed.

I can see the advantages, for sure. But there are probably some disadvantages as well. For example, any access to data in storage still incurs orders of magnitude more latency than access to data in RAM, which in turn incurs orders of magnitude more latency than data already in the caches. If you're going to be using SFS for parts of a texture that effectively result in the whole texture being used in a given scene (at some point), you're probably better off storing the whole texture in RAM anyway, and that's where the advantage of SFS would end (for that particular texture).

If that repeats for a group of unique textures, then you're just better off keeping that group of textures in RAM anyway, rather than incurring the access latency penalty of the SSD. I can also see how, at least in theory, the concept of SFS might be at odds with how artists actually tend to make their texture assets; they aren't making them as 64 KB blocks, but at 4K-quality (or larger) sizes. So the workload on the artist's end stays the same.

Meanwhile, managing the feedback stack on the programmer's end probably involves a bit of work, and that may explain why SFS isn't being readily used in a lot of games yet. Though I'm sure at least a few games have to be using it, whether on Xbox or PC. I'm still not convinced it totally resolves some of the concerns I listed in the OP though, for reasons I mention in this specific post.
This demo still uses memory for caching data, so it's not all coming off the SSD every single frame. You can configure the heap size in the config files, and you'd normally use different sizes at different resolutions: 1080p seems fine with ~500MB, while you'd want something closer to 1.5GB at 4K, for example. And note again that disk load scales with rendering resolution, so any latency will hit harder at higher resolutions like 4K or 8K.
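
That resolution scaling makes intuitive sense if you treat feedback-driven streaming as fetching roughly the texels the screen can resolve. A toy estimate (assuming ~1 texel per screen pixel and 64 KB tiles of a 1-byte-per-texel format like BC7; it ignores multiple material layers, mip margins, and caching, which multiply the totals considerably, but the scaling with pixel count is the point):

```python
# Unique texture data "in view" if streaming fetches about what the screen
# resolves. A 64 KB tile of a 1 byte/texel format covers 256x256 texels.
def unique_tile_megabytes(width, height, texels_per_pixel=1.0):
    texels = width * height * texels_per_pixel
    tiles = texels / (256 * 256)
    return tiles * 64 * 1024 / 2**20

print(unique_tile_megabytes(1920, 1080))  # ~2 MB of tiles at 1080p
print(unique_tile_megabytes(3840, 2160))  # ~8 MB at 4K -> 4x the traffic
```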

The demo also includes a tool for converting textures into the correct format. It doesn't take any special effort and it's fairly easy to convert regular photos into textures that appear in the demo. You can then set them to appear as planets or the skybox in the .bat script.

On a side note, I don't think Forspoken is using SFS on PC. The disk load doesn't scale with resolution in any case.
 

platina

Member
Another advantage, I would say, is that the PS5 is clocked much higher, so literally every part of the architecture is running faster. That, and the fact that it doesn't have two hardware SKUs to deal with. Series S is going to be a bottleneck when current-gen games start coming out. I'd wager that Sony first-party PS5-only games will always look noticeably better.
 

Vick

Member
What's important is what devs have told them. Which is that they don't fucking know. PS5 literally has some secret sauce in it that is making it perform better than its specs.
(image: Mark Cerny)


Lamest/most obvious use of this picture ever.

It all boils down to the fact that Mark Cerny is a genius, who happens to be A Gamer, A Programmer, A Game Developer, A Consultant, An Architect. He understands the full life-cycle of gaming.
 
In addition to scrubbers, I think we should also factor in the fact that within the PS5 GPU, each CU has access to ~45% more L1 cache at higher bandwidth per shader array. This should reduce RAM accesses by nature, at least for compute processes, while also increasing compute efficiency/per-CU saturation.

Edit: Thinking about the cache hierarchy again, there is of course the L2 cache along the way before RAM, which offers a (more moderate) ~15% additional amount per CU on PS5; this should also play a role in the matter of costlier RAM accesses.
Exactly this. This is really important in some engines, as it will significantly improve the efficiency of each CU. Many times, when devs optimize a game, they are simply trying to avoid cache misses.
 

Lysandros

Member
Exactly this. This is really important in some engines, as it will significantly improve the efficiency of each CU. Many times, when devs optimize a game, they are simply trying to avoid cache misses.
Yes. And this also naturally means that the real-world compute differential cannot be 18%, since the respective machines don't have the exact same efficiency in compute. That is not the sole reason for it, but I personally see it as the main one. Even leaving the GPU fixed-function throughputs aside and looking at compute in isolation, the situation is quite a bit more complicated than it seems at a shallow glance. Just like every facet of the PS5/XSX case, really (with I/O being the exception).
 
Last edited:

Kumomeme

Member
You can disagree with my logic. Even if a dev were to give an EQUAL amount of time and care to both Xbox and PS5, the GIVEN time would still have to be DIVIDED between Xbox Series S and X. There's no magical button that scales a game back to Series S's capabilities. It simply does not exist.
b..but! they said it is as simple as pressing a button! just turn down the settings and resolution!

 

Thirty7ven

Banned
SFS is overplayed because it assumes you don't do sampler feedback on PS5. The only thing different is Xbox having HW texture filters that prevent clipping at the edges when tiles don't arrive in time.

It's a fugazi when used to demonstrate an advantage over PS5, because the PS5 will already be processing that data at a much higher speed than the Series consoles.
 
Last edited:
Is this one of those users who makes long posts pretending they have content to discuss and that they're technical, but when someone who knows about game tech, development, or coding gets involved, they flee into the woods?

A lot of this poster's speculation is based on nothing, and they are coming to conclusions that only manifest from the OP having no idea how anything works while hoping nobody else does either, so they're repeating console-war talking points you can find on YouTube from guys who still think the Windows registry is where you extend your warranty.
 

ToTTenTranz

Banned

Anyway, if there are other tech insights on the PS5 and Series systems you all would want to share to add on top of this, whether to explain what you feel could create probable performance advantages/disadvantages for PlayStation OR Xbox, feel free to share them. Just try to be as accurate as possible and, please, no FUD. There's enough of that on Twitter from influencers 😂


On one hand, the Series X would probably gain in development effectiveness by just putting 2GB GDDR6 chips on all channels, giving it the full 560GB/s across all memory.
On the other hand, during the hardware design they might have concluded that the console doesn't really need 560GB/s to start with, and they didn't predict the scale of the memory contention issues they ended up having.
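
For reference, the uneven pools fall straight out of how the chips populate the bus. A quick sketch using the public figures (ten 32-bit GDDR6 chips at 14 Gbps on a 320-bit bus, six 2 GB and four 1 GB):

```python
# Series X memory pool arithmetic from the chip population.
gbps = 14                        # per-pin data rate
full_bus = 320 * gbps / 8        # ten 32-bit chips -> 560 GB/s
print(full_bus)

# The first 1 GB of every chip interleaves across all ten chips, giving
# 10 GB at the full 560 GB/s. The upper 1 GB exists only on the six 2 GB
# chips, so that 6 GB spans a 192-bit slice of the bus.
slow_pool = 192 * gbps / 8       # -> 336 GB/s for the upper 6 GB
print(slow_pool)

# A uniform build with ten 2 GB chips would give 20 GB, all at 560 GB/s.
```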


If anything, the PS5 seems to be pretty well balanced, and it "only" uses 448GB/s. There's a common (mis)conception that memory bandwidth should be matched to compute throughput in GPUs, so a Series X with 18% higher shader throughput than the PS5 should also get higher memory bandwidth. However, IIRC shader processors aren't the most memory-bandwidth-intensive components in a GPU; those would be the ROPs, which are usually hardwired to the memory controllers in discrete GPUs. There's the notable case of the PS4 Pro being a "monster" in theoretical pixel fillrate with its 64 ROPs, but official documentation was clear that the chip couldn't reach anywhere close to that limit because of a memory bandwidth bottleneck.

The PS5 and the Series X have the same pixel rasterizer (ROP) throughput per clock but the PS5 has higher clocks, so the PS5's design might actually be more bandwidth-demanding than the Series X.

So it could be that the reason the Series X uses a 320-bit memory controller has little to do with running videogames.
The PS5 has one purpose alone: to run videogames. The Series X serves two purposes: to run videogames and to accelerate compute workloads in Azure servers. The Series X chip was co-designed by the Azure Silicon Architecture team, and that's actually the team that originally presented the solution at Hot Chips 2020. The 320-bit memory controller could be there to let the SoC access a total of 20GB (or even 40GB, if they use clamshell) of system memory in Azure server implementations.



Microsoft's dual-use design was obviously going to bring some setbacks, and the most obvious one is the fact that they needed to produce a 20% larger chip on the same process to run videogames at about the same target IQ.
As for the memory pools with uneven bandwidths and the memory contention issues they brought, it might have been something Microsoft didn't see coming, and perhaps they should have used only 8 channels / 256-bit on the gaming implementation of the Series X SoC.
Or perhaps someone did see it coming, but the technical marketing teams wanted stuff to gloat about, and developers were going to have to adapt to the uneven memory pools regardless, for the Series S.
 

Lysandros

Member
Is this one of those users who makes long posts pretending they have content to discuss and that they're technical, but when someone who knows about game tech, development, or coding gets involved, they flee into the woods?

A lot of this poster's speculation is based on nothing, and they are coming to conclusions that only manifest from the OP having no idea how anything works while hoping nobody else does either, so they're repeating console-war talking points you can find on YouTube from guys who still think the Windows registry is where you extend your warranty.
What is so wrong with trying to analyse a particularity of a console's hardware in the context of game performance, in light of the hard data available, on a gaming forum? Everyone is free to contribute without descending into baseless, childish fanboy rhetoric. What's your take on the actual subject matter, besides not liking the OP's post?
 
What is so wrong with trying to analyse a particularity of a console's hardware in the context of game performance, in light of the hard data available, on a gaming forum? Everyone is free to contribute without descending into baseless, childish fanboy rhetoric. What's your take on the actual subject matter, besides not liking the OP's post?
There is no subject matter to discuss.

There's no analysis.

As I said, the OP doesn't know what he's saying but is acting like he does, hoping people like you believe there's knowledge and technical detail in the post where there isn't.

You are talking about "hard" data that doesn't exist in the way you think, because you took the bait.
 

Lysandros

Member
There is no subject matter to discuss.

There's no analysis.

As I said, the OP doesn't know what he's saying but is acting like he does, hoping people like you believe there's knowledge and technical detail in the post where there isn't.

You are talking about "hard" data that doesn't exist in the way you think, because you took the bait.
I was referring to the thread in its entirety, not limited to the OP. I personally saw it as an opportunity to discuss the bandwidth and other technical facets with the specs and current understanding of the architectures at hand, and thus didn't need "to take the bait". No post is perfect; everyone can present arguments and counter-arguments. There isn't an engineering degree requirement for members to post on NeoGAF.
 
I was referring to the thread in its entirety, not limited to the OP. I personally saw it as an opportunity to discuss the bandwidth and other technical facets with the specs and current understanding of the architectures at hand, and thus didn't need "to take the bait". No post is perfect; everyone can present arguments and counter-arguments. There isn't an engineering degree requirement for members to post on NeoGAF.

I didn't say anything about requirements; you're being defensive while creating conflict in your own post.

If you were referring to the thread outside of the OP, you wouldn't accuse me of thinking people need an engineering degree.

I brought up the strategy in the OP, and you keep proving it works. Oh well.
 

Polygonal_Sprite

Gold Member
That makes a lot of sense.

I thought it was a PS4 Pro / Xbox One X situation, where Xbox mandated a higher resolution, which most of the time caused more framerate drops on the X than on the Pro. Even if it was only 2-3fps, it made a big difference when games were targeting 30fps.
 

Mr.Phoenix

Member
More people who have no idea what they are saying pretending to be technical, making you believe that this is informative content when it's not?

I guess it worked though.
Don't know why you are so touchy. It's informative content. And it's a fact.

That's not saying it's the reason why anything is anything... but it's still some insight into memory architectures and how they generally work.

I do not know why you have a problem with just having more information out there.
 
Don't know why you are so touchy. It's informative content. And it's a fact.

That's not saying it's the reason why anything is anything... but it's still some insight into memory architectures and how they generally work.

I do not know why you have a problem with just having more information out there.

There is no insight here.

There is no informative content.

But hey, it's your decision on what you want to believe so I'll let you go on your business.

No problem here.
 

Mr.Phoenix

Member
There is no insight here.

There is no informative content.

But hey, it's your decision on what you want to believe so I'll let you go on your business.

No problem here.
Alright, GTFO with your condescending tone.

If you are gonna discredit what someone has said, then it's on you to state otherwise and make your point. Then you can be reasoned with, and actual insight will be shared.

All you have been doing is spouting a bunch of nonsense with nothing to actually back it up, while at the same time taking digs at users in an attempt to make them look stupid.

Say something meaningful, or if you can't be bothered, then just shut up.
 
All you have been doing is spouting a bunch of nonsense with nothing to actually back it up

Which is what the OP did, and you fell right into it because you didn't think critically and check into what was said or their intentions, which were brought up earlier in the thread.

while at the same time taking digs at users in an attempt to make them look stupid.

I'm not attempting to make anyone look stupid, because I don't need to do that.

I told you, if you believe it, that's fine. I'm trying to leave you to it and step aside. Please continue your informative engagement, as you say. I'm not involved.
 
Last edited:

DenchDeckard

Moderated wildly
PlayStation sells so many consoles and has so many fans that wouldn't even think about touching an Xbox...

...but man, they love to talk about Microsoft's perceived weaknesses or anything they can, to be honest. Phil lives rent-free, the power of the 12TF monster, all the other bullcrap marketing lol

It's kinda wild...
 

PlayStation sells so many consoles and has so many fans that wouldn't even think about touching an Xbox...

...but man, they love to talk about Microsoft's perceived weaknesses or anything they can, to be honest. Phil lives rent-free, the power of the 12TF monster, all the other bullcrap marketing lol

It's kinda wild...
For a Sony forum, there is sure a lot of Xbox talk on here.

Seems like all I'm seeing recently is negative threads about Xbox/Microsoft/Phil.

Maybe if people had some games to play...
 
Last edited:

Mahavastu

Member
I voted "so so", because it is one of these "yes, but" cases.
Yes, the slow ram might hinder the Xbox SX, but then the slower RAM should still be "fast enough" in most situations. And the fast Ram is faster then the PS5 RAM, so when done correctly it should not be that big of a problem, really...

BUT: IIRC the slow RAM on the Series S is REALLY slow and might be more of a problem. I guess this is one of the reasons that in some games the Series S has more problems holding a stable framerate or reaching a stable 60fps, despite heavily reducing the resolution.

An uneducated guess: maybe the dev kit for the Series X is just a little bit faster than the retail box?
Very often you see the Xbox SX versions of games having higher resolutions than PS5 but an unstable framerate, where a lower resolution would most likely lead to a stable framerate. If the dev kit is slightly faster, it might lead the devs to overestimate the limits and make the dynamic resolution not react correctly.
But as I said, just an uneducated guess :pie_eyeroll:
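
If it helps the intuition, here's a completely made-up toy dynamic-resolution controller illustrating that guess: a heuristic tuned against slightly optimistic performance numbers cuts resolution less than it should, so the retail box keeps missing its frame budget.

```python
# Toy dynamic-resolution step. 'optimism' > 1.0 models a heuristic tuned
# on a dev kit slightly faster than the retail box.
def next_scale(scale, frame_ms, budget_ms=16.6, optimism=1.0):
    headroom = (budget_ms / frame_ms) * optimism
    return min(1.0, max(0.5, scale * headroom ** 0.5))

s_honest = s_optimistic = 1.0
for _ in range(3):  # three control steps on a retail box at 18 ms/frame
    s_honest = next_scale(s_honest, 18.0)
    s_optimistic = next_scale(s_optimistic, 18.0, optimism=1.05)

# The optimistic controller keeps resolution higher, so frames keep
# missing the 16.6 ms budget instead of stabilizing.
print(s_honest, s_optimistic)
```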
 
Last edited:

Mr.Phoenix

Member
Which is what the OP did, and you fell right into it because you didn't think critically and check into what was said or their intentions, which were brought up earlier in the thread.



I'm not attempting to make anyone look stupid, because I don't need to do that.

I told you, if you believe it, that's fine. I'm trying to leave you to it and step aside. Please continue your informative engagement, as you say. I'm not involved.
OK, let's try this again.

The first mistake you are making is that you seem to think I need the OP's thread to know what is being talked about. I don't. I have a fairly good idea of this stuff and would make threads like this if I weren't lazy/busy.

Second, and I stand by this, there are straight-up factual things in what the OP has said, from a simple technical and architectural perspective. E.g., MS's 10/6GB memory split does exist. And the bandwidth differences/complications it poses exist too. That is a fact. How relevant that fact is, and what special processes, APIs, and chipsets may be used as options to alleviate any potential issues it may cause, etc., is all another matter. But that such options exist, or that this difference from the PS5 may not even be an issue, does not make what has been said wrong.

All I am saying is that if you are going to say such and such is nonsense, then at the very least, state why. If you can't be bothered to do that, then it's better to say nothing.
 
Last edited:

Daneel Elijah

Gold Member
There is no insight here.

There is no informative content.

But hey, it's your decision on what you want to believe so I'll let you go on your business.

No problem here.
Popularization is informative, and you are arguing against the OP's motives instead of talking about what was said. If something is factually wrong or inexact, we would welcome your posts a lot more than just:
As I said, the OP doesn't know what he's saying but is acting like he does, hoping people like you believe there's knowledge and technical detail in the post where there isn't.
If he does not know what he is saying, then either you do or you do not. If you do, please make a counter-argument. If you do not, then please at least take one of his arguments and explain why you think he said it in bad faith.
If you have a better idea about why there is a PS5 advantage in some third-party games like Hogwarts Legacy or Atomic Heart... please say so.

Playstation sells so many consoles and has so many fans that wouldn't even think about touching an xbox....

...but man they love to talk about microsofts perceived weaknesses or anything they can to be honest. Phil lives rent free, the power of rhe 12tf monster, all the other bull crap marketing lol

Its kinda Wild...
Part of what makes consoles great is seeing them "punching above their weight." The PS3, with games like TLOU, and the PS4 surprised us, and they continue to do so with games like Horizon Forbidden West and GOW Ragnarok. Nothing wrong with talking about what allows those games to be so good graphically and what is in the way of Xbox doing the same.
 