
[DF] Hitman 3 PS5 vs Xbox Series X|S Comparison

Agreed. PlayStation 5 will remain stagnant. It has no room for growth. It will not excel at anything beyond what the XSX can do. PlayStation 5 was late to the party and was overclocked at the last minute in an attempt to close the gap.

Ps5 will get better optimization too. Who implied it wouldn't? Not sure why you are so hurt at the idea of Xbox pulling ahead. That's cringy.



Haven't had an Xbox since the 360, though. Nice try. If you have nothing else to bring to the discussion, you can put me on ignore. You still haven't said why GDK is bullshit.



 

sendit

Member
Ps5 will get better optimization too. Who implied it wouldn't? Not sure why you are so hurt at the idea of Xbox pulling ahead. That's cringy.



Haven't had an Xbox since the 360, though. Nice try. If you have nothing else to bring to the discussion, you can put me on ignore. You still haven't said why GDK is bullshit.




As a software developer, I am laughing at someone who thinks the XSX GDK isn't an iteration (someone who believes it is something entirely brand new). Keep up the good fight.
 
As a software developer, I am laughing at someone who thinks the XSX GDK isn't an iteration (someone who believes it is something entirely brand new). Keep up the good fight.
I never said it's new. Assumptions aren't your strong point, which you are proving time after time. The transition for Xbox is harder than it is for PS5. It's been said time and time again, yet this software developer is going to go against what several GAME developers have said. Who do I believe more: the warrior software developer on NeoGAF, or the people who make the games? Hard choice. But keep up the good fight, warrior. Good night.
 
44% more pixels is no small thing; it requires much more GPU power and memory bandwidth.
You don't know the actual difference in GPU performance in this game. To know, or even have a clue about, the performance difference between two systems in a capped 60fps game, we would need a scene where both drop frames, so we could work out the relative performance of each machine. Now, the scenario we have is:
2160p on Series X
1800p on PS5

Lower shadow resolution on the PS5

Perfect 60fps lock on PS5
Dropped frames in a few areas on Series X

What this means is we don't know how far above 60 the PS5 runs the game in the scenarios where the Series X drops, just as we don't know the framerate when both hold their 60fps target.

So people who infer a 44% power difference are just plain wrong; we don't know the motivation of the development team (why were they more sensitive to frame rate drops on the PS5 version?).

I'm pretty confident that if they dropped the Series X to 1800p it would hold a solid 60 just like the PS5, but I'm fairly sure that if they bumped the PS5 version to full 4K it would not drop below 30fps in the scenes where the Series X has its drops to 50 (which is what a 44% power difference would imply).
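For anyone who wants to sanity-check the pixel maths behind that 44% figure, a quick sketch (assuming 2160p here means 3840x2160 and 1800p means 3200x1800):

Code:
# Rough pixel-count comparison (assumes 2160p = 3840x2160 and 1800p = 3200x1800)
xsx_pixels = 3840 * 2160   # 8,294,400
ps5_pixels = 3200 * 1800   # 5,760,000

ratio = xsx_pixels / ps5_pixels
print(f"Series X renders {ratio:.2f}x the pixels of PS5 ({(ratio - 1) * 100:.0f}% more)")
# -> Series X renders 1.44x the pixels of PS5 (44% more)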
 
I never said it's new. Assumptions aren't your strong point, which you are proving time after time. The transition for Xbox is harder than it is for PS5. It's been said time and time again, yet this software developer is going to go against what several GAME developers have said. Who do I believe more: the warrior software developer on NeoGAF, or the people who make the games? Hard choice. But keep up the good fight, warrior. Good night.
Pundits have said it time and time again; developers have been much more nuanced, when they are not contradicting one another.

It's one more excuse for the disappointing Xbox track record.
 

StreetsofBeige

Gold Member
You don't know the actual difference in GPU performance in this game. To know, or even have a clue about, the performance difference between two systems in a capped 60fps game, we would need a scene where both drop frames, so we could work out the relative performance of each machine. Now, the scenario we have is:
2160p on Series X
1800p on PS5

Lower shadow resolution on the PS5

Perfect 60fps lock on PS5
Dropped frames in a few areas on Series X

What this means is we don't know how far above 60 the PS5 runs the game in the scenarios where the Series X drops, just as we don't know the framerate when both hold their 60fps target.

So people who infer a 44% power difference are just plain wrong; we don't know the motivation of the development team (why were they more sensitive to frame rate drops on the PS5 version?).

I'm pretty confident that if they dropped the Series X to 1800p it would hold a solid 60 just like the PS5, but I'm fairly sure that if they bumped the PS5 version to full 4K it would not drop below 30fps in the scenes where the Series X has its drops to 50 (which is what a 44% power difference would imply).
I didn't watch the video, but people say that in one scene the X drops to 32 and the PS5 to 37. So neither version is 100% 60fps to start with.
 
I didn't watch the video, but people say that in one scene the X drops to 32 and the PS5 to 37. So neither version is 100% 60fps to start with.
I watched the whole video; these people are full of it.

Another thing I noticed is that DF has dropped their fixation on locked frame rates. Normally I would have expected them to recommend that the Series X be run at a lower resolution because of the drops, and that the Series S should have been either a 30fps lock or 900p (going by their old logic; I think they make more sense now). Still, they could have pondered different scenarios and explained the trade-offs.
 

Concern

Member
You don't know the actual difference in GPU performance in this game. To know, or even have a clue about, the performance difference between two systems in a capped 60fps game, we would need a scene where both drop frames, so we could work out the relative performance of each machine. Now, the scenario we have is:
2160p on Series X
1800p on PS5

Lower shadow resolution on the PS5

Perfect 60fps lock on PS5
Dropped frames in a few areas on Series X

What this means is we don't know how far above 60 the PS5 runs the game in the scenarios where the Series X drops, just as we don't know the framerate when both hold their 60fps target.

So people who infer a 44% power difference are just plain wrong; we don't know the motivation of the development team (why were they more sensitive to frame rate drops on the PS5 version?).

I'm pretty confident that if they dropped the Series X to 1800p it would hold a solid 60 just like the PS5, but I'm fairly sure that if they bumped the PS5 version to full 4K it would not drop below 30fps in the scenes where the Series X has its drops to 50 (which is what a 44% power difference would imply).

You don't know either. We do know devs chose the settings they chose for a reason. Hitman has a marketing deal with Sony. I doubt they'd sabotage their game for no reason.

I watched the whole video; these people are full of it.

Another thing I noticed is that DF has dropped their fixation on locked frame rates. Normally I would have expected them to recommend that the Series X be run at a lower resolution because of the drops, and that the Series S should have been either a 30fps lock or 900p (going by their old logic; I think they make more sense now). Still, they could have pondered different scenarios and explained the trade-offs.


Ahhh yes more conspiracies

monday night raw lol GIF by WWE
 

VanEs

Member
I didn't watch the video, but people say that in one scene the X drops to 32 and the PS5 to 37. So neither version is 100% 60fps to start with.

True, but that was an (alpha-effect-heavy) part of a cutscene and not gameplay. It also dropped frames on PC with anything below a 3070 or so.

Series X drops frames in gameplay where PS5 does not (AFAIK).
 

MonarchJT

Banned
You don't know the actual difference in GPU performance in this game. To know, or even have a clue about, the performance difference between two systems in a capped 60fps game, we would need a scene where both drop frames, so we could work out the relative performance of each machine. Now, the scenario we have is:
2160p on Series X
1800p on PS5

Lower shadow resolution on the PS5

Perfect 60fps lock on PS5
Dropped frames in a few areas on Series X

What this means is we don't know how far above 60 the PS5 runs the game in the scenarios where the Series X drops, just as we don't know the framerate when both hold their 60fps target.

So people who infer a 44% power difference are just plain wrong; we don't know the motivation of the development team (why were they more sensitive to frame rate drops on the PS5 version?).

I'm pretty confident that if they dropped the Series X to 1800p it would hold a solid 60 just like the PS5, but I'm fairly sure that if they bumped the PS5 version to full 4K it would not drop below 30fps in the scenes where the Series X has its drops to 50 (which is what a 44% power difference would imply).
lol no, no man, no. Are you basically saying that IO purposely castrated their game on PS5? Hahaha. Listen to me: the game runs at a stable 60fps on PS5 precisely because they lowered the resolution, and not only that! It is evident that lowering only the resolution would not have made the performance acceptable (consider that the devs themselves regarded the few fps drops in the XSX version as acceptable, so they were open to compromises), which is why they were also forced to lower the quality of the shadows, an effect that impacts the frame rate a lot. So basically no, what you said is completely out of the question.
 
I watched the whole video; these people are full of it.

Another thing I noticed is that DF has dropped their fixation on locked frame rates. Normally I would have expected them to recommend that the Series X be run at a lower resolution because of the drops, and that the Series S should have been either a 30fps lock or 900p (going by their old logic; I think they make more sense now). Still, they could have pondered different scenarios and explained the trade-offs.
The 32 vs 37 he is referring to is in a separate video in which Alex from DF compares the PS5 version to PC. Those people are not full of it.
 

Md Ray

Member
Back on topic: Xbox has better potential for graphics, which is what devs are finally starting to exploit as they become familiar with the new set of tools. Not sure why this is so hard to understand.
You're acting like PS5 doesn't have a single GPU advantage over XSX. While XSX may have higher compute throughput, the PS5's GPU has higher pixel and rasterization throughput than XSX's GPU. So technically the PS5 also has better potential for graphics than XSX. What this means is that different games will show each console GPU's strengths and weaknesses. Games that are rasterization- or pixel-fillrate-bound will run faster on PS5 (i.e. higher fps, or higher sustained resolution if dynamic res is used), and games that favor compute or texture throughput will pull ahead on XSX.
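Rough illustration of that point, using rounded spec-sheet peaks and completely made-up per-frame costs (purely hypothetical, just to show how the bottleneck decides the winner):

Code:
# Toy "what are you bound by" model: rounded public peak figures, invented workloads.
specs = {
    "XSX": {"tflops": 12.2, "gpix_s": 116.8},   # 52 CUs @ 1.825 GHz, 64 ROPs
    "PS5": {"tflops": 10.3, "gpix_s": 142.9},   # 36 CUs @ 2.23 GHz, 64 ROPs
}

workloads = {
    "compute-heavy frame":  {"tflop": 0.15, "gpix": 0.6},   # hypothetical per-frame costs
    "fillrate-heavy frame": {"tflop": 0.08, "gpix": 2.0},
}

for wname, w in workloads.items():
    for cname, s in specs.items():
        fps = min(s["tflops"] / w["tflop"], s["gpix_s"] / w["gpix"])
        print(f"{wname}: {cname} ~{fps:.0f} fps")
# compute-heavy favours XSX (~81 vs ~69 fps), fillrate-heavy favours PS5 (~58 vs ~71 fps)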
 

assurdum

Banned
lol no, no man, no. Are you basically saying that IO purposely castrated their game on PS5? Hahaha. Listen to me: the game runs at a stable 60fps on PS5 precisely because they lowered the resolution, and not only that! It is evident that lowering only the resolution would not have made the performance acceptable (consider that the devs themselves regarded the few fps drops in the XSX version as acceptable, so they were open to compromises), which is why they were also forced to lower the quality of the shadows, an effect that impacts the frame rate a lot. So basically no, what you said is completely out of the question.
Let's make a hypothetical example: the PS5 is faster (higher clocked), which means that at a lower resolution it can get more fps benefit than the Series X, which is more powerful (better for higher res) but slower, and so gains less from shrinking the framebuffer than the PS5 does. You see how it's wrong to call the lower res an intentional castration? And don't read too much into the lower shadow resolution. Even the PS4 Pro can handle the same shadow setup as the PS5, which means there is something off about it.
 
I have defended Alex and DF more than any other Sony fan here, but he has repeatedly downplayed the SSD, implied Sony and Cerny were lying about hardware ray tracing, and has been called out by industry programmers. TBH, I take it back. I don't care if he has an agenda or not. It doesn't matter. He's made a fool out of himself time and time again.

So he's made a few mistakes here and there, people make mistakes. I still don't see what "downplaying the SSD" actually means without some context. WRT PS5 RT that sounds like it was a mistake or maybe a misunderstanding on his part. However it's often been speculated Sony's RT implementation is different than standard RDNA 2, that was actually one of the earliest rumors about the system. This can still be the case while still being hardware-accelerated.

I've seen many other trusted people during the course of next-gen speculation get things wrong multiple times, or say things in a way where it seemed they were lying through omission. Plenty of insiders made these kinds of mistakes, for example. A few of them as well as just other people with technical knowledge made a lot of mistakes and some also arguably misguiding not just WRT PS5 specs but also Series X and Series S specs, too.

But for some odd reason Alex seems to get a target on his back by people who have some hateboner with DF, I do find that kind of amusing and also kinda weird.

Based on what are you "sure"?

From what I've seen of their posts on other forums surrounding performance results.
You are able to understand what's missing from the full bandwidth picture, right? They practically imply the bandwidth is like having 336 + 560 GB/s, which can't be possible. Every single time the CPU uses 1 GB, it occupies half of the bandwidth of that RAM bank, excluding the GPU from it.

You're describing bus contention. PS5 has a similar issue, all hUMA designs struggle with bus contention since the bus arbitration makes it that only one processor component drives the main bus to memory at a time.

Series X's issue with this might be somewhat more exacerbated compared to PS5 but in typical usage the bandwidth accesses should never drag to an average equivalent to 448 GB/s; this kind of depends on how CPU-intensive a game is in terms of busywork outside of handling drawlists/drawcalls for the GPU.

Typically these CPUs shouldn't require more than 30 GB/s - 50 GB/s of bandwidth when in full usage. Even tacking on an extra 10 GB/s in Series X's case due to its RAM setup that's still an effective 500 GB/s - 520 GB/s for the GPU to play with, compared to 398 GB/s - 418 GB/s for PS5's GPU (factoring out what the CPU might be using).

...although this actually also doesn't account for audio or the SSD, both of which eat away at bandwidth. So it's somewhat lower effective bandwidth for both Series X and PS5's GPUs that they actually might typically get to use, but everything I'm describing also assumes non-GPU processes are occupying the bus for an entire second when in reality that is almost never true; they're constantly alternating access on the measure of frames or even more accurately, cycles.
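Rough arithmetic behind those ranges, using the figures above (a quick sketch, not measured data):

Code:
# Back-of-envelope effective GPU bandwidth, per the assumed 30-50 GB/s CPU cost
# and the assumed ~10 GB/s fast/slow-pool overhead on Series X mentioned above.
def gpu_leftover(total_bw, cpu_bw, overhead=0):
    return total_bw - cpu_bw - overhead

for cpu_bw in (30, 50):
    xsx = gpu_leftover(560, cpu_bw, overhead=10)   # Series X GPU-optimal pool
    ps5 = gpu_leftover(448, cpu_bw)                # PS5 unified pool
    print(f"CPU at {cpu_bw} GB/s -> XSX GPU ~{xsx} GB/s, PS5 GPU ~{ps5} GB/s")
# -> 520/418 and 500/398 GB/s, matching the ranges quoted above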

You're acting like PS5 doesn't have a single GPU advantage over XSX. While XSX may have higher compute throughput, the PS5's GPU has higher pixel and rasterization throughput than XSX's GPU. So technically the PS5 also has better potential for graphics than XSX. What this means is that different games will show each console GPU's strengths and weaknesses. Games that are rasterization- or pixel-fillrate-bound will run faster on PS5 (i.e. higher fps, or higher sustained resolution if dynamic res is used), and games that favor compute or texture throughput will pull ahead on XSX.

It really depends. Series X's GPU advantage isn't just theoretical compute, but in BVH intersections for RT as well as texture fillrate (texel fillrate). The latter is interesting because while texels can be used as pixels, they don't actually have to. They can be used as LUTs for certain effect data, like shadow maps and tessellation.

PS5 definitely has advantages in pixel fillrate, culling rate and triangle rasterization rate, but we can't forget that the GDDR6 memory is going to be a big factor here, because not all calculable data will be able to reside in the GPU caches. PS5 probably has an extra 512 MB of GDDR6 for game data compared to Series X (14 GB vs. 13.5 GB), but its RAM is 112 GB/s slower. Its SSD is 125% faster, but that's "only" a throughput of up to 22 GB/s, which is only a fraction of the 112 GB/s RAM delta.

Then we also need to take into account that several games so far in terms of load times (i.e getting data from storage into memory) have shown remarkably close load times between Sony and Microsoft's systems, where if PS5 is pulling ahead it's only by a couple of seconds at most and in some cases not even that. So essentially both systems are seemingly able to populate their free RAM in virtually identical amounts of time (though I guess if PS5's specific I/O features will flex it could be in asset streaming, the question would be how big of a flex that would be over Series X and I'm thinking it's probably not going to be that big of a delta between them on this factor either).

Basically, we can't say for certain that in EVERY case a game that favors this or that will run better on Series X or favors this other stuff will run better on PS5, because you can't simply look at GPU strengths and weaknesses in isolation to the rest of the system's design from top to bottom. And just like how some people have underestimated the impact of variable frequencies being a benefit to Sony's design, I think some have underestimated the impact of RAM bandwidth (even if that's a fast/slow RAM pool kind of thing) and particulars of Microsoft's design in terms of the memory and SSD I/O.
 

assurdum

Banned
So he's made a few mistakes here and there, people make mistakes. I still don't see what "downplaying the SSD" actually means without some context. WRT PS5 RT that sounds like it was a mistake or maybe a misunderstanding on his part. However it's often been speculated Sony's RT implementation is different than standard RDNA 2, that was actually one of the earliest rumors about the system. This can still be the case while still being hardware-accelerated.

I've seen many other trusted people during the course of next-gen speculation get things wrong multiple times, or say things in a way where it seemed they were lying through omission. Plenty of insiders made these kinds of mistakes, for example. A few of them as well as just other people with technical knowledge made a lot of mistakes and some also arguably misguiding not just WRT PS5 specs but also Series X and Series S specs, too.

But for some odd reason Alex seems to get a target on his back by people who have some hateboner with DF, I do find that kind of amusing and also kinda weird.



From what I've seen of their posts on other forums surrounding performance results.


You're describing bus contention. PS5 has a similar issue, all hUMA designs struggle with bus contention since the bus arbitration makes it that only one processor component drives the main bus to memory at a time.

Series X's issue with this might be somewhat more exacerbated compared to PS5 but in typical usage the bandwidth accesses should never drag to an average equivalent to 448 GB/s; this kind of depends on how CPU-intensive a game is in terms of busywork outside of handling drawlists/drawcalls for the GPU.

Typically these CPUs shouldn't require more than 30 GB/s - 50 GB/s of bandwidth when in full usage. Even tacking on an extra 10 GB/s in Series X's case due to its RAM setup that's still an effective 500 GB/s - 520 GB/s for the GPU to play with, compared to 398 GB/s - 418 GB/s for PS5's GPU (factoring out what the CPU might be using).

...although this actually also doesn't account for audio or the SSD, both of which eat away at bandwidth. So it's somewhat lower effective bandwidth for both Series X and PS5's GPUs that they actually might typically get to use, but everything I'm describing also assumes non-GPU processes are occupying the bus for an entire second when in reality that is almost never true; they're constantly alternating access on the measure of frames or even more accurately, cycles.



It really depends. Series X's GPU advantage isn't just theoretical compute, but in BVH intersections for RT as well as texture fillrate (texel fillrate). The latter is interesting because while texels can be used as pixels, they don't actually have to. They can be used as LUTs for certain effect data, like shadow maps and tessellation.

PS5 definitely has advantages in pixel fillrate, culling rate and triangle rasterization rate, but we can't forget that the GDDR6 memory is going to be a big factor here, because not all calculable data will be able to reside in the GPU caches. PS5 probably has an extra 512 MB of GDDR6 for game data compared to Series X (14 GB vs. 13.5 GB), but its RAM is 112 GB/s slower. Its SSD is 125% faster, but that's "only" a throughput of up to 22 GB/s, which is only a fraction of the 112 GB/s RAM delta.

Then we also need to take into account that several games so far in terms of load times (i.e getting data from storage into memory) have shown remarkably close load times between Sony and Microsoft's systems, where if PS5 is pulling ahead it's only by a couple of seconds at most and in some cases not even that. So essentially both systems are seemingly able to populate their free RAM in virtually identical amounts of time (though I guess if PS5's specific I/O features will flex it could be in asset streaming, the question would be how big of a flex that would be over Series X and I'm thinking it's probably not going to be that big of a delta between them on this factor either).

Basically, we can't say for certain that in EVERY case a game that favors this or that will run better on Series X or favors this other stuff will run better on PS5, because you can't simply look at GPU strengths and weaknesses in isolation to the rest of the system's design from top to bottom. And just like how some people have underestimated the impact of variable frequencies being a benefit to Sony's design, I think some have underestimated the impact of RAM bandwidth (even if that's a fast/slow RAM pool kind of thing) and particulars of Microsoft's design in terms of the memory and SSD I/O.
Isn't it just a matter of bus contention? And I'd like to know what sort of advantage the Series X has in its SSD I/O and memory setup, because I'm hearing about it for the first time from you, outside of the software optimization that MS sells as the second coming. MS wanted a virtually split setup just to emulate the PC environment, but in doing so the CPU and GPU need to respect the limits imposed by the virtual split as if it were physical. The PS5 doesn't have that sword hanging over its head, and that's far from a minor advantage for RAM/bandwidth usage. Some developers have already expressed perplexity about the MS choice. There isn't anything beneficial about splitting a unified architecture; MS tried to sell it as a genius move, but the Series X would have been a lot better without it, and we will see whether it turns out to be problematic in some games.
 

DJ12

Member
Let's go over this again, one more time for you. The transition from the XB1 to the XSX GDK is more complicated than PS4 to PS5 SDK. You are talking out your ass if you think otherwise.

GDK is MORE of a simple UPDATE than the PS5 environment is; in fact I would even call it just a rename, to be honest.

DirectX 12 Ultimate was already in the GDK and is well known to all PC devs anyway (it's also a basic update of DirectX 12, which has been out for years).

PS5 has GNMX, which is a high-level API much like DirectX, pretty simple to use and master but which also comes with a performance hit; unlike GDK, it also includes GNM, a low-level API specific to PS5 hardware.

So as you are clearly in the know, please indicate just why you think the jump from the old SDK to GDK is more complicated than from PS4 to PS5?

Is it because, since the Xbox 360 (which had a superb dev platform), MS have really dropped the ball with their dev environment and lag WAY, WAY, WAY behind Sony, whose dev kit was utter shite on PS3?

Sony's devkit is probably far better than GDK, but that's not because of any changes; it's because MS's changes have not caught up.

This of course doesn't change the fact that DirectX 12 (Ultimate, if you prefer) is still DirectX 12, and Visual Studio is still Visual Studio. In the wash, when all the clowns stop lapping up this tools nonsense, the only real changes from the old SDK to GDK will be there to aid cross-platform games, and will have very little, if anything, to do with something specific to Series X|S.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Yea cuz devs would purposely leave power on the table for the Ps5 version which they have a marketing deal with as opposed to going for parity lol
They did though, that is not debatable. When a game has a cap and the frame rate doesn't dip, internally the game is rendering at an unknown frame rate that meets or exceeds the cap.

Any PC gamer knows this, they can see the frame rate rendered (vs the monitor max). I can game on my 60Hz monitor, but see the game is rendering at 70, 90, 110fps. It can literally be anything depending on your CPU/GPU and settings. If I want to run Half-Life at 640x480 and render at 500fps I can, but the monitor will only show 60 fps.

So we know the PS5 is sitting at 60fps. We literally do not know how many additional frames are being thrown away. By setting the frame buffer to 4k, you can at least push the system to potentially drop frames. Only then do you know that you have reached a limit, like with the XSX.
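A toy loop showing why a cap hides headroom (made-up frame times, just for illustration):

Code:
# Illustration: a 60fps cap hides how much headroom the renderer actually has.
CAP_HZ = 60
frame_budget_ms = 1000 / CAP_HZ               # ~16.7 ms per displayed frame

for render_ms in (8.0, 12.0, 16.0, 20.0):     # hypothetical GPU frame times
    uncapped_fps = 1000 / render_ms
    displayed_fps = min(uncapped_fps, CAP_HZ)  # capped/v-synced output
    headroom = frame_budget_ms - render_ms
    print(f"render {render_ms:4.1f} ms -> could do {uncapped_fps:5.1f} fps, "
          f"shows {displayed_fps:4.1f} fps, headroom {headroom:+.1f} ms")
# Only the 20 ms case shows up as a "drop"; the others all just read as a flat 60.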
 

assurdum

Banned
They did though, that is not debatable. When a game has a cap and the frame rate doesn't dip, internally the game is rendering at an unknown frame rate that meets or exceeds the cap.

Any PC gamer knows this, they can see the frame rate rendered (vs the monitor max). I can game on my 60Hz monitor, but see the game is rendering at 70, 90, 110fps. It can literally be anything depending on your CPU/GPU and settings. If I want to run Half-Life at 640x480 and render at 500fps I can, but the monitor will only show 60 fps.

So we know the PS5 is sitting at 60fps. We literally do not know how many additional frames are being thrown away. By setting the frame buffer to 4k, you can at least push the system to potentially drop frames. Only then do you know that you have reached a limit, like with the XSX.
Speaking of which, IO gave the freedom to choose between uncapped FPS, or graphics quality, or both, in the menu settings of the previous Hitman. Why can we now only play at 1800p 60 FPS on PS5? Weird.
 

DJ12

Member
They did though, that is not debatable. When a game has a cap and the frame rate doesn't dip, internally the game is rendering at an unknown frame rate that meets or exceeds the cap.

Any PC gamer knows this, they can see the frame rate rendered (vs the monitor max). I can game on my 60Hz monitor, but see the game is rendering at 70, 90, 110fps. It can literally be anything depending on your CPU/GPU and settings. If I want to run Half-Life at 640x480 and render at 500fps I can, but the monitor will only show 60 fps.

So we know the PS5 is sitting at 60fps. We literally do not know how many additional frames are being thrown away. By setting the frame buffer to 4k, you can at least push the system to potentially drop frames. Only then do you know that you have reached a limit, like with the XSX.
It would be interesting if IO actually responded to the comments and offered a reason for their choice.

Is it because they could/can only use GNMX? Is it because Sony said it must not drop frames? Is it because PS5 cannot handle it?

A few months ago they said both versions were 4K, so what's changed? I think they owe at least some explanation. Hopefully when someone more objective analyses the game they will actually ask them, instead of accepting that Xbox finally gets the win its 12 TFs deserve.

I have no problem if that's the best they can do, but lower shadow quality, which goes against the grain of all the other head-to-heads where shadows have differed, is a big indicator that not all is as it seems here.

I am also very surprised no one else has tackled the game yet.

I am pleased it's happened though; nice to get some Xbox fans out of the woodwork. Variety is the spice of life. Anyone keeping score? 8-1 now, or 7-1?
 
Isn't it just a matter of bus contention? And I'd like to know what sort of advantage the Series X has in its SSD I/O and memory setup, because I'm hearing about it for the first time from you, outside of the software optimization that MS sells as the second coming. MS wanted a virtually split setup just to emulate the PC environment, but in doing so the CPU and GPU need to respect the limits imposed by the virtual split as if it were physical. The PS5 doesn't have that sword hanging over its head, and that's far from a minor advantage for RAM/bandwidth usage. Some developers have already expressed perplexity about the MS choice. There isn't anything beneficial about splitting a unified architecture; MS tried to sell it as a genius move, but the Series X would have been a lot better without it, and we will see whether it turns out to be problematic in some games.

If it's not just a matter of bus contention then clarify, because I've done enough research on this myself plus spoken with other people who are more knowledgeable on matters of the memory setup and specifics regarding GDK issues with Series X that basically solidify the idea that whatever issues with fast/slow bandwidth allocation is related to GDK elements that will probably be sorted out soon, if they haven't already.

The SSD I/O things I'm referring to are based on a mix of actual games we have that provide data, and R&D papers from MS namely things such as FlashMap. If you read through those papers you'll see a lot of ideas and tests done there which have most likely been leveraged by the Xbox development team for the new systems. I've always said that the I/O designs between MS and Sony were apples to oranges looking to achieve similar results through different means leveraging their own particular strengths and needs, and what we've been seeing from actual games so far seems to bear that out.

No, PC is a nUMA design, DDR and GDDR are physically separate memories in physically separate parts of the system where the data is passed through via PCIe link. On Series X the fast and "slow" pools of memory are essentially the same in terms of physical design and locality, the system just optimizes assignment of the wider pool to GPU and the narrower pool to CPU & audio. I mentioned actual CPU bandwidth requirements for a reason because even though Series X is allocated 336 GB/s, in effect it's only going to use around 50 GB/s at most and a lot of the time not even that much.

By comparison, Sony already specified that the Tempest Engine can consume up to 20 GB/s of memory bandwidth, but again, realistically that isn't going to be the number; it will be smaller unless a game is streaming audio from RAM to the Tempest caches the whole time, which I don't see why it would. This is also why I brought up the SSD I/O bandwidth; it has to pull the bandwidth from somewhere, and that "somewhere" will be the system main memory bandwidth of 448 GB/s. So if in some case a game is actually maximizing or near-maximizing the 22 GB/s bandwidth of PS5's SSD, then that's 22 GB/s less bandwidth for the GPU to work with.

Again, these are things that affect Series X too, but with different weights. We know its SSD's throughput is "only" 6 GB/s max, so if the GPU is accessing the 560 GB/s pool and the SSD needs to do its stuff (and I'm just speaking theoretical performance here, btw), it would be reduced to 554 GB/s. If the audio needed to do some stuff, let's just say it pulls 15 GB/s of bandwidth, that drops to 539 GB/s. And let's say the CPU needed to do stuff; that's 50 GB/s plus maybe 10 GB/s overhead for pool-switching, so now that's 479 GB/s for the GPU. But this is not a realistic number, because it would assume every processor component is doing maximum workload during a given second of game processing time, and that is generally not the case.

That same scenario on PS5 would see GPU effective bandwidth go from 448 GB/s to 428 GB/s if Tempest Engine is factored in, from 428 GB/s to 406 GB/s if the SSD decompression is factored in, and from 406 GB/s to 356 GB/s if the CPU is factored in but, once again, this is not a realistic scenario just for illustrative purposes.

One thing I will agree with you on is that, even if Series X's pool is not "split" the way people try convincing it is similar to PC or older consoles like PS3 etc., they would definitely be better off with 10x 2 GB modules for 20 GB of memory all at 560 GB/s instead of the fast/slow setup they've compromised with. What I'm trying to discuss here, though, is the idea that the setup they have chosen will lead to massive bandwidth deficits because if you have an understanding on how the system components actually work and how they will impact bus contention, you do some calculations and in reality it's not THAT much of a hit for Series X and it still retains a notable bandwidth advantage over PS5 regardless when it comes to the RAM. If it didn't, Microsoft would've simply eaten the costs and gone with 20 GB of RAM.
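Putting that illustrative scenario into numbers (same theoretical peaks and assumed per-client costs as above; a worst-case everyone-on-the-bus-at-once sketch, not measured data):

Code:
# Worst-case contention arithmetic from the scenario described above.
def effective_gpu_bw(total, clients):
    for name, cost in clients.items():
        total -= cost
        print(f"  after {name:<12} {total} GB/s left for the GPU")
    return total

print("Series X (560 GB/s GPU-optimal pool):")
effective_gpu_bw(560, {"SSD": 6, "audio": 15, "CPU": 50, "pool switch": 10})

print("PS5 (448 GB/s unified pool):")
effective_gpu_bw(448, {"Tempest": 20, "SSD decomp": 22, "CPU": 50})
# Series X ends at 479 GB/s, PS5 at 356 GB/s - the same endpoints as in the post.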
 

assurdum

Banned
If it's not just a matter of bus contention then clarify, because I've done enough research on this myself plus spoken with other people who are more knowledgeable on matters of the memory setup and specifics regarding GDK issues with Series X that basically solidify the idea that whatever issues with fast/slow bandwidth allocation is related to GDK elements that will probably be sorted out soon, if they haven't already.

The SSD I/O things I'm referring to are based on a mix of actual games we have that provide data, and R&D papers from MS namely things such as FlashMap. If you read through those papers you'll see a lot of ideas and tests done there which have most likely been leveraged by the Xbox development team for the new systems. I've always said that the I/O designs between MS and Sony were apples to oranges looking to achieve similar results through different means leveraging their own particular strengths and needs, and what we've been seeing from actual games so far seems to bear that out.

No, PC is a nUMA design, DDR and GDDR are physically separate memories in physically separate parts of the system where the data is passed through via PCIe link. On Series X the fast and "slow" pools of memory are essentially the same in terms of physical design and locality, the system just optimizes assignment of the wider pool to GPU and the narrower pool to CPU & audio. I mentioned actual CPU bandwidth requirements for a reason because even though Series X is allocated 336 GB/s, in effect it's only going to use around 50 GB/s at most and a lot of the time not even that much.

By comparison, Sony already specified that the Tempest Engine can consume up to 20 GB/s of memory bandwidth, but again, realistically that isn't going to be the number; it will be smaller unless a game is streaming audio from RAM to the Tempest caches the whole time, which I don't see why it would. This is also why I brought up the SSD I/O bandwidth; it has to pull the bandwidth from somewhere, and that "somewhere" will be the system main memory bandwidth of 448 GB/s. So if in some case a game is actually maximizing or near-maximizing the 22 GB/s bandwidth of PS5's SSD, then that's 22 GB/s less bandwidth for the GPU to work with.

Again, these are things that affect Series X too, but with different weights. We know its SSD's throughput is "only" 6 GB/s max, so if the GPU is accessing the 560 GB/s pool and the SSD needs to do its stuff (and I'm just speaking theoretical performance here, btw), it would be reduced to 554 GB/s. If the audio needed to do some stuff, let's just say it pulls 15 GB/s of bandwidth, that drops to 539 GB/s. And let's say the CPU needed to do stuff; that's 50 GB/s plus maybe 10 GB/s overhead for pool-switching, so now that's 479 GB/s for the GPU. But this is not a realistic number, because it would assume every processor component is doing maximum workload during a given second of game processing time, and that is generally not the case.

That same scenario on PS5 would see GPU effective bandwidth go from 448 GB/s to 428 GB/s if Tempest Engine is factored in, from 428 GB/s to 406 GB/s if the SSD decompression is factored in, and from 406 GB/s to 356 GB/s if the CPU is factored in but, once again, this is not a realistic scenario just for illustrative purposes.

One thing I will agree with you on is that, even if Series X's pool is not "split" the way people try convincing it is similar to PC or older consoles like PS3 etc., they would definitely be better off with 10x 2 GB modules for 20 GB of memory all at 560 GB/s instead of the fast/slow setup they've compromised with. What I'm trying to discuss here, though, is the idea that the setup they have chosen will lead to massive bandwidth deficits because if you have an understanding on how the system components actually work and how they will impact bus contention, you do some calculations and in reality it's not THAT much of a hit for Series X and it still retains a notable bandwidth advantage over PS5 regardless when it comes to the RAM. If it didn't, Microsoft would've simply eaten the costs and gone with 20 GB of RAM.
25% more bandwidth is not "notable" at all, and again you continue to sell software optimization as hardware customisation when there isn't anything on Series X that's particularly different from a normal PC for I/O, RAM or bandwidth. Furthermore, you completely ignore the fact that a Navi GPU works better with less bandwidth and a better cache system than with higher bandwidth and, surprise, PS5 has cache scrubbers, more or less the principle that inspired the Infinity Cache system adopted by AMD.
About the bandwidth question, it seems you continue to ignore what I'm talking about;
I don't know how to explain it to you better: using 560 GB/s as if it were 560+336 GB/s is like treating half a cake as if it were the whole. Please, we can stop here 🙏 thanks
 
25% more bandwidth is not "notable" at all, and again you continue to sell software optimization as hardware customisation when there isn't anything on Series X that's particularly different from a normal PC for I/O, RAM or bandwidth. Furthermore, you completely ignore the fact that a Navi GPU works better with less bandwidth and a better cache system than with higher bandwidth and, surprise, PS5 has cache scrubbers, which is more or less the principle that inspired the Infinity Cache system adopted by AMD.
About the bandwidth question, it seems you continue to ignore what I'm talking about, but I give up.
Can we stop here now?

25% more memory bandwidth is clearly notable; I don't understand how you can say that's not the case. I/O management is completely different on Series X compared to PC: on PC the GPU and CPU have their own dedicated memory pools, which are managed separately. On Series X, as is the case on PS5, there is only one memory controller in the APU connected to the GDDR6. The only difference is that on the Series X the memory is "virtually" separated into two pools. You lose the possibility to exchange/share data directly in memory, but honestly, that's not the big bad thing many people claim it is...

Cache scrubbers and Infinity Cache are completely different technologies and have nothing in common. Cache scrubbing is a way to win a little bandwidth through better management of cache memory; Infinity Cache is "simply" a big extra level of cache with high bandwidth and low latency, added to avoid needing too wide a bus to VRAM.

Honestly, I don't understand how you can say Navi works better with less bandwidth... Yes, you have Infinity Cache on the RX 6800, but if you increase the bandwidth, you will increase performance. And you don't have Infinity Cache on PS5 or XSX, so that's clearly wrong for these two Navi-based APUs.
 

assurdum

Banned
25% more memory bandwidth is clearly notable; I don't understand how you can say that's not the case. I/O management is completely different on Series X compared to PC: on PC the GPU and CPU have their own dedicated memory pools, which are managed separately. On Series X, as is the case on PS5, there is only one memory controller in the APU connected to the GDDR6. The only difference is that on the Series X the memory is "virtually" separated into two pools. You lose the possibility to exchange/share data directly in memory, but honestly, that's not the big bad thing many people claim it is...

Cache scrubbers and Infinity Cache are completely different technologies and have nothing in common. Cache scrubbing is a way to win a little bandwidth through better management of cache memory; Infinity Cache is "simply" a big extra level of cache with high bandwidth and low latency, added to avoid needing too wide a bus to VRAM.

Honestly, I don't understand how you can say Navi works better with less bandwidth... Yes, you have Infinity Cache on the RX 6800, but if you increase the bandwidth, you will increase performance. And you don't have Infinity Cache on PS5 or XSX, so that's clearly wrong for these two Navi-based APUs.
25% is not exactly notable, especially considering the number of CUs on Series X. And again, too many underestimate the consequences of the split configuration for bandwidth effectiveness. Not sure what is so special in the hardware I/O setup of Series X. Anyway, it's telling to read that Infinity Cache has nothing in common with the PS5 cache system when in the same breath you say it's just for a "small" bandwidth win, which is a similar purpose to Infinity Cache's.
But the most annoying thing is hearing that you already know the real benefit of the cache scrubbers.
Whatever is inside the PS5 gives a minimal benefit to performance, but whatever is in the Series X, which is mostly software management, gives a miraculous advantage and is brilliant engineering without any possible issue to discuss.
 
25% is not exactly notable, especially considering the number of CUs on Series X. And again, you underestimate the consequences of the split configuration for bandwidth effectiveness. What exactly is special in the hardware I/O of Series X? And I don't know who told you that on PS5 the cache scrubbers only free up a "small" amount of bandwidth and don't help VRAM at all. Where does such data come from? Let me guess, you're looking at past cache scrubbers, right?

It's not just the number of CUs you need to take into account; you also need to take the frequency, plus all the other units/blocks that access VRAM. What matters most is obviously the amount of raw data managed by the GPU, and clearly, 25% more bandwidth is NOTABLE.

Yes, yes, we all "underestimate" the consequences of the split memory. Split memory is not something new, you know...
The difference between PS5 and XSX is simply that access to the memory pool needs to be done with respect to some priorities. On PS5, the GPU and the CPU access the same 16GB "workspace", sharing the same total 448GB/s bus distributed across eight 2GB memory chips with a 56GB/s path each; every time the CPU accesses the memory, it uses part of that bandwidth (same for the audio chip, etc.).
On Series X, the GPU can see and access the complete 16GB, but "we" simply keep the GPU accessing the four 1GB memory chips, each with a 56GB/s path (exclusive to the GPU), plus half of each of the six 2GB memory chips, also connected with a 56GB/s path each, while the CPU sees/accesses only the second half of those 2GB chips. When the CPU accesses the memory, it has exactly the same impact as on PS5; the differences are that on XSX the CPU part will only have a max bandwidth of 336GB/s (that's clearly a lot for a CPU) and a fixed amount of RAM (it seems 2.5GB is used by the system and 3.5GB is shared with the audio chip, enough for pure gaming purposes), and finally that with separate pools you can't directly share in memory the data stored between the GPU and CPU. But I repeat, that's clearly not the big thing many people think it is; you have several ways to get around it quite efficiently, contrary to what is claimed, especially with an APU that uses a single memory controller. Some people clearly think that AMD and MS are completely dumb...
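If it helps, the per-chip arithmetic works out like this (a quick sketch taking the 56GB/s-per-chip figure from above):

Code:
# Per-chip bandwidth sketch (assumes 56 GB/s per GDDR6 chip, as described above).
CHIP_BW = 56  # GB/s

# PS5: 8 x 2GB chips, one unified 16GB pool
ps5_bw = 8 * CHIP_BW                       # 448 GB/s shared by everything

# Series X: 10 chips (4 x 1GB + 6 x 2GB) on a 320-bit bus
xsx_fast_bw = 10 * CHIP_BW                 # 560 GB/s - the "GPU optimal" 10GB spans all 10 chips
xsx_slow_bw = 6 * CHIP_BW                  # 336 GB/s - the other 6GB lives only on the 2GB chips

print(ps5_bw, xsx_fast_bw, xsx_slow_bw)    # 448 560 336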

You really think the cache scrubber used in the PS5 GPU works in a completely different way from what has long been done in CPU caches, and isn't just an evolved version adapted for GPU usage and streaming purposes?? Let me seriously doubt it. Yes, cache scrubbing in PS5 is a "new" thing at the GPU level, but it is obviously there to give a cache bandwidth improvement, and that is mainly linked to streaming: better cache management to more finely avoid data redundancy between RAM and cache, as well as more precise handling of obsolete data in the cache, etc.
 
25% is not exactly notable, especially considering the number of CUs on Series X. And again, too many underestimate the consequences of the split configuration for bandwidth effectiveness. Not sure what is so special in the hardware I/O setup of Series X. Anyway, it's telling to read that Infinity Cache has nothing in common with the PS5 cache system when in the same breath you say it's just for a "small" bandwidth win, which is a similar purpose to Infinity Cache's.
But the most annoying thing is hearing that you already know the real benefit of the cache scrubbers.
Whatever is inside the PS5 gives a minimal benefit to performance, but whatever is in the Series X, which is mostly software management, gives a miraculous advantage and is brilliant engineering without any possible issue to discuss.

WTF is with your edit and your last paragraph??? Where did we say that the PS5 is badly engineered??? I have always claimed that PS5 is more balanced hardware with advantages at some levels over XSX, and that XSX has some advantages over PS5. In this case, we are just saying that many overestimate the impact of the memory split in the Series X and, conversely, others are simply overestimating, for example, how much the cache scrubbing will improve the PS5 GPU. What I was also saying about the cache scrubbers and Infinity Cache is that they simply don't really have the same purpose. Infinity Cache is only there to compensate for the "small" memory bandwidth used with the RX 6800 GPU; cache scrubbing is mainly linked to streaming, with cache management improvements.
And NO, the Series X is not mostly software management. I don't understand if you are talking about the split memory usage, but that's completely false!
 
It would be interesting if IO actually responded to the comments and offered a reason for their choice.
That's the whole point of it, until either the game is patched or they provide a reason (I personally think neither choice needs "an explanation", because if the Series X were in the PS5's seat we would never hear the end of the frame rate dropping and how much smoother a locked 60fps is, from some of them).

If both machines perform as they do in games where the settings are better matched, then if the game were native 4K it should run a little better on PS5, but it's likely it would still drop frames, maybe in this case a bit more than the Series X... but we can only speculate.

I think VRR support could have something to do with it, even if only a few very recent TV models support the feature.
 

assurdum

Banned
WTF is with your edit and your last paragraph??? Where did we say that the PS5 is badly engineered??? I have always claimed that PS5 is more balanced hardware with advantages at some levels over XSX, and that XSX has some advantages over PS5. In this case, we are just saying that many overestimate the impact of the memory split in the Series X and, conversely, others are simply overestimating, for example, how much the cache scrubbing will improve the PS5 GPU. What I was also saying about the cache scrubbers and Infinity Cache is that they simply don't really have the same purpose. Infinity Cache is only there to compensate for the "small" memory bandwidth used with the RX 6800 GPU; cache scrubbing is mainly linked to streaming, with cache management improvements.
And NO, the Series X is not mostly software management. I don't understand if you are talking about the split memory usage, but that's completely false!
Can you specify to me what exactly is so particular about the Series X hardware in the circuit? You continue to say no, no, no, it's custom and specialized, and you don't say anything more. And again, where do you read that 25% more bandwidth is substantial? Yeah, if it were like for like the Series X would clearly have an advantage, but only with the same number of CUs and the same frequency as PS5; in this case it isn't exactly clear how much... And you are quite convinced the split RAM/bandwidth won't have negative consequences, but it has never happened before that hardware uses a unified architecture as if it were split, and there are various interesting articles where it's explained that the impact isn't exactly as minimal as you have said repeatedly, because it's not so clear how much it interferes, which you oversimplified with the MS panel. About the cache scrubbers, we will see if they do exactly what cache scrubbers did in the past. I have great doubts about it.
Anyway, I never said you claimed the PS5 is badly engineered, but listening to you, everything inside the PS5 gives only a couple of benefits, yet my gosh, 25% more bandwidth is notable and no way will the virtual split configuration impact it negatively because the contention is the same, which is not true at all.
Can I give an example? You said the CPU doesn't need 336 GB/s; OK, so why did MS claim it for the CPU if it's never intended to be used? You think the CPU will keep to just 48 GB/s when it needs more than 2 GB? I really doubt it.
 

DJ12

Member
That's the whole point of it, until either the game is patched or they provide a reason (I personally think neither choice needs "an explanation", because if the Series X were in the PS5's seat we would never hear the end of the frame rate dropping and how much smoother a locked 60fps is, from some of them).
DF would've mentioned it in their video and would already have asked the question if the shoe was on the other foot.
 

Dibils2k

Member
H2Hs were looking boring; this really spiced things up :messenger_grinning_squinting: This thread probably hasn't reached the heights of BF4 XOne vs PS4, but it's making a good fight of it.
 

MonarchJT

Banned
So he's made a few mistakes here and there, people make mistakes. I still don't see what "downplaying the SSD" actually means without some context. WRT PS5 RT that sounds like it was a mistake or maybe a misunderstanding on his part. However it's often been speculated Sony's RT implementation is different than standard RDNA 2, that was actually one of the earliest rumors about the system. This can still be the case while still being hardware-accelerated.

I've seen many other trusted people during the course of next-gen speculation get things wrong multiple times, or say things in a way where it seemed they were lying through omission. Plenty of insiders made these kinds of mistakes, for example. A few of them as well as just other people with technical knowledge made a lot of mistakes and some also arguably misguiding not just WRT PS5 specs but also Series X and Series S specs, too.

But for some odd reason Alex seems to get a target on his back by people who have some hateboner with DF, I do find that kind of amusing and also kinda weird.



From what I've seen of their posts on other forums surrounding performance results.


You're describing bus contention. PS5 has a similar issue, all hUMA designs struggle with bus contention since the bus arbitration makes it that only one processor component drives the main bus to memory at a time.

Series X's issue with this might be somewhat more exacerbated compared to PS5 but in typical usage the bandwidth accesses should never drag to an average equivalent to 448 GB/s; this kind of depends on how CPU-intensive a game is in terms of busywork outside of handling drawlists/drawcalls for the GPU.

Typically these CPUs shouldn't require more than 30 GB/s - 50 GB/s of bandwidth when in full usage. Even tacking on an extra 10 GB/s in Series X's case due to its RAM setup that's still an effective 500 GB/s - 520 GB/s for the GPU to play with, compared to 398 GB/s - 418 GB/s for PS5's GPU (factoring out what the CPU might be using).

...although this actually also doesn't account for audio or the SSD, both of which eat away at bandwidth. So it's somewhat lower effective bandwidth for both Series X and PS5's GPUs that they actually might typically get to use, but everything I'm describing also assumes non-GPU processes are occupying the bus for an entire second when in reality that is almost never true; they're constantly alternating access on the measure of frames or even more accurately, cycles.



It really depends. Series X's GPU advantage isn't just theoretical compute, but in BVH intersections for RT as well as texture fillrate (texel fillrate). The latter is interesting because while texels can be used as pixels, they don't actually have to. They can be used as LUTs for certain effect data, like shadow maps and tessellation.

PS5 definitely has advantages in pixel fillrate, culling rate and triangle rasterization rate, but we can't forget that the GDDR6 memory is going to be a big factor here, because not all calculable data will be able to reside in the GPU caches. PS5 probably has an extra 512 MB of GDDR6 for game data compared to Series X (14 GB vs. 13.5 GB), but its RAM is 112 GB/s slower. Its SSD is 125% faster, but that's "only" a throughput of up to 22 GB/s, which is only a fraction of the 112 GB/s RAM delta.

Then we also need to take into account that several games so far in terms of load times (i.e getting data from storage into memory) have shown remarkably close load times between Sony and Microsoft's systems, where if PS5 is pulling ahead it's only by a couple of seconds at most and in some cases not even that. So essentially both systems are seemingly able to populate their free RAM in virtually identical amounts of time (though I guess if PS5's specific I/O features will flex it could be in asset streaming, the question would be how big of a flex that would be over Series X and I'm thinking it's probably not going to be that big of a delta between them on this factor either).

Basically, we can't say for certain that in EVERY case a game that favors this or that will run better on Series X or favors this other stuff will run better on PS5, because you can't simply look at GPU strengths and weaknesses in isolation to the rest of the system's design from top to bottom. And just like how some people have underestimated the impact of variable frequencies being a benefit to Sony's design, I think some have underestimated the impact of RAM bandwidth (even if that's a fast/slow RAM pool kind of thing) and particulars of Microsoft's design in terms of the memory and SSD I/O.
Thanks for taking the floor and explaining really well how the BW works. It was a long post and at that moment I just didn't feel like it. Let me add that the XSX also has hardware tricks to significantly save bandwidth, especially Sampler Feedback Streaming and VRS Tier 2 (both practically still unused).
In particular, VRS Tier 2, used to date only in Hivebusters and Gears Tactics, gave performance improvements of up to 14%!!
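For a sense of scale, a quick sketch of what an "up to 14%" gain means in frame-time terms (simple arithmetic, treating the figure as an fps uplift; not measured data):

Code:
# What a 14% fps uplift buys in frame time (illustrative arithmetic only).
for base_fps in (30, 60):
    boosted = base_fps * 1.14
    saved_ms = 1000 / base_fps - 1000 / boosted
    print(f"{base_fps} fps -> {boosted:.1f} fps ({saved_ms:.2f} ms saved per frame)")
# -> 30 fps becomes ~34.2 fps, 60 fps becomes ~68.4 fps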

Here instead is an explanation of how SFS works:

 

MonarchJT

Banned
Can you specify to me what exactly is so particular about the Series X hardware in the circuit? You continue to say no, no, no, it's custom and specialized, and you don't say anything more. And again, where do you read that 25% more bandwidth is substantial? Yeah, if it were like for like the Series X would clearly have an advantage, but only with the same number of CUs and the same frequency as PS5; in this case it isn't exactly clear how much... And you are quite convinced the split RAM/bandwidth won't have negative consequences, but it has never happened before that hardware uses a unified architecture as if it were split, and there are various interesting articles where it's explained that the impact isn't exactly as minimal as you have said repeatedly, because it's not so clear how much it interferes, which you oversimplified with the MS panel. About the cache scrubbers, we will see if they do exactly what cache scrubbers did in the past. I have great doubts about it.
Anyway, I never said you claimed the PS5 is badly engineered, but listening to you, everything inside the PS5 gives only a couple of benefits, yet my gosh, 25% more bandwidth is notable and no way will the virtual split configuration impact it negatively because the contention is the same, which is not true at all.
Can I give an example? You said the CPU doesn't need 336 GB/s; OK, so why did MS claim it for the CPU if it's never intended to be used? You think the CPU will keep to just 48 GB/s when it needs more than 2 GB? I really doubt it.
When games fully utilize DX12U, it will be well over 25%.
 
Last edited:
Can you specify exactly what is particular about the Series X hardware in the circuit? You keep saying it's custom and specialized and then don't say anything more. And again, where do you read that 25% more bandwidth is substantial? Sure, if it were like for like, Series X would clearly have an advantage, but only with the same number of CUs and the same frequency as PS5; as it is, it isn't at all clear how much it matters... And you seem quite convinced that the split RAM/bandwidth won't have negative consequences, but no hardware has ever used a unified architecture as if it were split, and there are several interesting articles explaining that the impact isn't as minimal as you've repeatedly said, because it's not clear how much it interferes; you oversimplified it with the MS panel. As for cache scrubbers, we'll see whether they do exactly what cache scrubbers did in the past. I have great doubts about it.
Anyway, I never said you claimed PS5 is bad engineering, but for everything inside the PS5 you concede only a couple of benefits, while insisting that 25% more bandwidth is notable and that the split virtual configuration will never impact it negatively because contention is the same, which isn't true at all.
Can I give an example? You said the CPU doesn't need 336 GB/s, so why did MS claim it for the CPU if it was never intended to be used? Do you think the CPU will keep occupying just 48 GB/s when it needs more than 2 GB? I really doubt it.

I never said there is something particular and custom in Series X; from the beginning you have been twisting my words, as if I were saying the XSX is an engineering miracle and the PS5 is garbage. That's false. I'm just explaining that the split memory is an overestimated problem: the way the APU accesses the VRAM is done the same way as on PS5, with more constraints due to the separate pools, but sharing the same buses to each RAM die, etc. Many people seem to view the memory as something "global"... Also, I never said it has no impact for the XSX against the PS5, only that it is simply overestimated.
It has never happened that hardware used a unified architecture as if it were split? How do you think a CPU with a discrete GPU works? Part of the RAM is allocated to the GPU, another part to the CPU... So I don't understand your point.

I'm simply trying to explain why you are mixing up the data. We don't care about the number of CUs or the frequency; what we need to care about is the amount of raw data. The more bandwidth you have, the less the GPU will be constrained in moving that raw data, that's obvious. With more bandwidth, the XSX could push more data in a game engine that is limited mainly by memory bandwidth, so more fps than PS5, for example; that's simple. With a game engine constrained more by pixel fillrate, PS5 will push more fps than XSX, etc.
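Put differently, the same frame can hit a different wall on each machine. A toy model only, with invented workload numbers (the only real figures are the commonly quoted peak specs), just to show how the bottleneck can flip:

```python
# Toy model of the "it depends on the bottleneck" argument. The per-frame
# workload numbers are invented for illustration, not measured from any game.

def fps_estimate(bw_gbs, pixel_gps, frame_traffic_gb, frame_pixels_g):
    """Frame rate limited by whichever resource runs out first."""
    bw_time   = frame_traffic_gb / bw_gbs    # seconds spent moving data
    fill_time = frame_pixels_g / pixel_gps   # seconds spent filling pixels
    return 1.0 / max(bw_time, fill_time)

specs = {"XSX": (560, 116.8), "PS5": (448, 142.7)}   # (GB/s, GP/s) peaks

# Hypothetical bandwidth-heavy frame: lots of traffic, modest fill.
for name, (bw, fill) in specs.items():
    fps = fps_estimate(bw, fill, frame_traffic_gb=6.0, frame_pixels_g=1.2)
    print(f"{name}: {fps:.0f} fps (bandwidth-heavy frame)")

# Hypothetical fill-heavy frame: heavy overdraw, less traffic.
for name, (bw, fill) in specs.items():
    fps = fps_estimate(bw, fill, frame_traffic_gb=3.0, frame_pixels_g=2.0)
    print(f"{name}: {fps:.0f} fps (fill-heavy frame)")
```

In this sketch the first workload comes out ahead on XSX and the second on PS5, purely because of which limit is hit first.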
You seem to think that the cache scrubbers will compensate for the 25% bandwidth difference, and I can simply say no. Honestly, companies such as AMD and NVIDIA have been making GPUs for more than 20 years, with memory bandwidth constraints existing from the start; do you really think they wouldn't have tried to integrate or develop cache scrubbers in their GPUs if it could win back that much bandwidth? It's not as if this hasn't existed in CPUs for 20 years. And another point: just remember at which moment in Cerny's presentation he spoke about the cache scrubbers and what his exact words were; doesn't it sound very similar to what is done in CPUs, and isn't it mainly focused on the PS5's streaming part?

Finally, do you know that a CPU depends more on latency than on bandwidth? A CPU like Ryzen clearly does not use 336 GB/s of bandwidth... MS's claim is more about the 3.5 GB of memory allocated, not the bandwidth. And don't forget that this "slow" memory pool isn't used only by the CPU but also by other components.
Just for information: no, I'm not an expert on memory usage (the software side), but I have some knowledge and grounds to discuss hardware memory design, memory controller functionality for example. I'm not here to claim I know everything (that's clearly not the case), just sharing my personal experience.
 
Last edited:

assurdum

Banned
I never said there is something particular and custom in Series X; from the beginning you have been twisting my words, as if I were saying the XSX is an engineering miracle and the PS5 is garbage. That's false. I'm just explaining that the split memory is an overestimated problem: the way the APU accesses the VRAM is done the same way as on PS5, with more constraints due to the separate pools, but sharing the same buses to each RAM die, etc. Many people seem to view the memory as something "global"... Also, I never said it has no impact for the XSX against the PS5, only that it is simply overestimated.
It has never happened that hardware used a unified architecture as if it were split? How do you think a CPU with a discrete GPU works? Part of the RAM is allocated to the GPU, another part to the CPU... So I don't understand your point.

I'm simply trying to explain why you are mixing up the data. We don't care about the number of CUs or the frequency; what we need to care about is the amount of raw data. The more bandwidth you have, the less the GPU will be constrained in moving that raw data, that's obvious. With more bandwidth, the XSX could push more data in a game engine that is limited mainly by memory bandwidth, so more fps than PS5, for example; that's simple. With a game engine constrained more by pixel fillrate, PS5 will push more fps than XSX, etc.
You seem to think that the cache scrubbers will compensate for the 25% bandwidth difference, and I can simply say no. Honestly, companies such as AMD and NVIDIA have been making GPUs for more than 20 years, with memory bandwidth constraints existing from the start; do you really think they wouldn't have tried to integrate or develop cache scrubbers in their GPUs if it could win back that much bandwidth? It's not as if this hasn't existed in CPUs for 20 years. And another point: just remember at which moment in Cerny's presentation he spoke about the cache scrubbers and what his exact words were; doesn't it sound very similar to what is done in CPUs, and isn't it mainly focused on the PS5's streaming part?

Finally, do you know that a CPU depends more on latency than on bandwidth? A CPU like Ryzen clearly does not use 336 GB/s of bandwidth... MS's claim is more about the 3.5 GB of memory allocated, not the bandwidth. And don't forget that this "slow" memory pool isn't used only by the CPU but also by other components.
Just for information: no, I'm not an expert on memory usage (the software side), but I have some knowledge and grounds to discuss hardware memory design, memory controller functionality for example. I'm not here to claim I know everything (that's clearly not the case), just sharing my personal experience.
I give up. I haven't said anything false, so let's stop there, since you don't even fully know what you are talking about.
 
Last edited:

assurdum

Banned
I have only worked a few years on memory design and memory controllers, but yeah, "I don't even fully know what I am talking about".
And still you continue to maintain that the Series X configuration will not negatively affect performance and that the contention is the same as in a "normal" unified architecture.
 
Last edited:
And still you continue to maintain that the Series X configuration will not negatively affect performance

And you still continue twisting what I said.
=> "that's not the big bad things as many people are claiming..."
=> "Yes we need to underestimate the impact of splitted memory consequence"
=> "Also I never said that it has no impact for XsX against PS5, but that it is simply overestimated."

To sum up, I never said it has no impact, simply that the impact is overestimated. Thanks.
We can stop discussing here.
 

assurdum

Banned
And you still continue twisting what I said.
=> "that's not the big bad things as many people are claiming..."
=> "Yes we need to underestimate the impact of splitted memory consequence"
=> "Also I never said that it has no impact for XsX against PS5, but that it is simply overestimated."

To sum up, I never said it has no impact, simply that the impact is overestimated. Thanks.
We can stop discussing here.
No, I told you that we don't know how much impact such a configuration has, unlike you, and personally I think quite the opposite: the issue is underestimated. In what way exactly have I twisted your words? It seems more like the contrary to me.
 
Last edited:

sircaw

Banned
Thanks for taking the floor and explaining so well how the bandwidth works. It would have been a long post, and at that moment I just didn't feel like writing it. Let me add that the XSX also has hardware tricks to save significant bandwidth, especially Sampler Feedback Streaming and VRS Tier 2 (both practically unused so far).
In particular, VRS Tier 2, used in only two games to date (Hivebusters and Gears Tactics), gave performance improvements of up to 14%!

Here, instead, is an explanation of how SFS works:


Is this the pigeon guy? I can't keep up.
 

JackMcGunns

Member
I take it you sit with your nose to the screen.

fHm01UK.jpg


This is hysterical!

I can't believe we've come FULL CIRCLE. These charts were presented by Xbox fanboys back when PS4 had the resolution advantage.

This has to be the most ridiculous chart out there. Who sits more than 45ft from their gaming set? Oh oh, I know :lollipop_raising_hand::lollipop_raising_hand::lollipop_raising_hand: people trying to prove a ridiculous argument. I sit 6ft away so I guess I'm completely fine with my 85" screen.

Dates back to Xbox 360/PS3 days. You'll only notice the difference if you have an HDTV. Sucks to be you I guess?
 
Last edited:
Seems like the PS5 game is just the PS4 Pro version, again. Can’t wait to see more third party native PS5 games in action.

This is a straight up lie. Please stop spreading misinformation. Stop getting your takes from Joe Miller. No matter how many times he tweets differently, it doesn't change the FACT that this is a PS5 NATIVE game.

I won't argue against the idea that the shadow setting is bugged. That's possible, but this is NOT the Pro version running in BC.
 
25% more bandwidth is not "notable" at all, and again you continue to sell software optimization as hardware customization when there isn't anything in Series X particularly different from a normal PC for I/O, RAM, or bandwidth. Furthermore, you completely ignore the fact that a Navi GPU works better with less bandwidth and a better cache system than with higher bandwidth, and, surprise, PS5 has cache scrubbers, more or less the principle that inspired the Infinity Cache system adopted by AMD.
On the bandwidth question, it seems you continue to ignore what I'm talking about;
I don't know how to explain it to you any better: using 560 GB/s as if it were 560 + 336 GB/s is like splitting half a cake as if it were the whole. Please, we can stop here 🙏 thanks

No, I'd like to continue, because I think you're scared of your rhetoric getting blown open. Welp, my C4s are already placed and I'm about to detonate.

Reducing RAM bandwidth to a percentage is a ridiculous way to compare bandwidths, because you're doing it from a reductive POV. Operations like RT and higher-resolution asset feeding to the GPU rely heavily on bandwidth. The AMD cards are only competing with Nvidia's in terms of traditional rasterization performance, and it's arguable if the narrower GDDR6 bandwidth helps or hurts; you always have to keep in mind RDNA 2 cards have more RAM capacity than the Nvidia cards, in some cases almost 2x more memory capacity, and that's almost as important as the actual bandwidth.

You do realize that features of Series X's I/O, like DirectStorage, are not yet deployed widely on PC, right? DirectStorage is a restructuring of the filesystem approach as a whole; it's not just a means of getting more realized bandwidth out of NVMe SSDs. Other features tied to the I/O but not necessarily in it (like SFS) have no hardware equivalents on PC. So again, you're completely incorrect on that one as well. There's been no confirmation that the RDNA 2 GPUs have cache scrubbers; IC, as it's understood so far, is just a fat 128 MB L3$ embedded on the GPU, very similar in spirit to the eSRAM the XBO had, which was likewise aimed at framebuffers. IC is enough for 4x 4K framebuffers or 1x 8K framebuffer, and we don't exactly know what IC's bandwidth or latency is either.
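The 4K/8K claim at least checks out arithmetically, assuming plain 32-bit colour targets (real pipelines also carry depth and G-buffer data, so treat this as a sanity check, nothing more):

```python
# Quick check of the "4x 4K framebuffers or 1x 8K framebuffer" claim for a
# 128 MB cache, assuming 32-bit (4 bytes per pixel) colour render targets.

def framebuffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

fb_4k = framebuffer_mib(3840, 2160)     # ~31.6 MiB each
fb_8k = framebuffer_mib(7680, 4320)     # ~126.6 MiB

print(f"4x 4K targets: {4 * fb_4k:.1f} MiB")   # ~126.6 MiB, fits under 128 MiB
print(f"1x 8K target:  {fb_8k:.1f} MiB")       # ~126.6 MiB, fits under 128 MiB
```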

Seeing that, again, the AMD RDNA 2 equivalents can keep up with the new Nvidia cards in rasterization performance only, we can assume both the bandwidth and latency of IC are quite good. You don't actually need cache scrubbers for an L3$, but we can throw a bone and say the RDNA 2 GPUs might have them. The point is that all of this only benefits AMD in rasterized performance; in anything that can leverage RT or DLSS-style image upscaling, AMD loses out massively. That's because things like RT benefit from bandwidth, capacity AND the actual hardware acceleration in the GPU; IC only helps with one, maybe two of those, but you need all three.

As for what inspired IC, I'm sure Sony may've had some influence, but a fat L3$/L4$ is nothing new in system design at all. Again, MS did this with the 32 MB embedded eSRAM in XBO, and Intel also did this with embedded eSRAM (maybe eDRAM?) cache on some of their CPUs from the early 2010s. AMD just took an age-old idea and put their own spin on it; I can guarantee you they weren't simply looking at what Sony was doing, and you'd have to verify that the RDNA 2 PC cards have cache scrubbers before crediting this to Sony anyway.

You've been all over the place with what you think you're talking about, because you don't seem to understand how bus contention works, nor that you cannot aggregate access to the fast and slow pools into a single average, because that doesn't reflect how the two pools are used in practice. We should also assume that if effective bandwidth from mixed fast- and slow-pool access were going to drop to a level even near PS5's peak bandwidth, let alone below it, Microsoft would have taken the hit and gone with 20 GB. It's not like they're a team of gasping seals that just learned about electronics yesterday 🤷‍♂️
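For anyone tempted by the "split the cake" averaging anyway, here's why it misleads: the naive capacity-weighted average and a deliberately pessimistic contention model give very different answers, and the real arbitration behaviour sits somewhere in between and isn't public. Every workload number below is invented:

```python
# Why a simple capacity-weighted average of the two Series X pools can mislead.
# Real bus arbitration details aren't public; this is only a toy model.

FAST_GB, FAST_BW = 10, 560     # GB, GB/s ("GPU optimal" pool)
SLOW_GB, SLOW_BW = 6, 336      # GB, GB/s ("standard" pool)

# Naive "split the cake" average over capacity:
naive = (FAST_GB * FAST_BW + SLOW_GB * SLOW_BW) / (FAST_GB + SLOW_GB)
print(f"Naive capacity-weighted average: {naive:.0f} GB/s")   # 476 GB/s

# Pessimistic toy: the two pools share the same physical bus, so in this model
# any bus time spent servicing slow-pool (CPU/audio) traffic is lost to the GPU.
def gpu_effective_bw(slow_traffic_gbs):
    slow_bus_share = slow_traffic_gbs / SLOW_BW      # fraction of bus time used
    return FAST_BW * (1 - slow_bus_share)

for traffic in (20, 50, 100):    # hypothetical GB/s of slow-pool traffic
    print(f"Slow pool pulling {traffic} GB/s -> GPU sees ~{gpu_effective_bw(traffic):.0f} GB/s")
```

Neither number is what a real game sees, which is exactly the point: you can't collapse the two pools into one figure and argue from it.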

This is a straight up lie. Please stop spreading misinformation. Stop getting your takes from Joe Miller. No matter how many times he tweets differently, it doesn't change the FACT that this is a PS5 NATIVE game.

I won't argue against the idea that the shadow setting is bugged. That's possible, but this is NOT the Pro version running in BC.

The amount of willful ignorance and disinformation going on in this thread claiming Hitman 3 isn't a native port on PS5 is astounding.

I bet someone is probably calling me a Microsoft fanboy as we speak, even though I was just pointing out technical deficiencies in The Medium yesterday :LOL: .
 
Last edited:
Status
Not open for further replies.