
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

It's called a "boost" ( you noticing simbol ") but it should not be compared with PC because it works differently. You got that now?
yes, I know. its not a boost. its a chef cerny magical boost, therefore it defies logic.

Oh fuck not this bullshit again.
It was impossible to produce a system that would be constantly clocked at 2.0 GHz (within the given engineering constraints).
So it was impossible to produce a system that would run at a constant 2.0 GHz,
but it was very possible to create one that has a constant clock speed of 2.23 GHz,
and in the "worst case game" (per Cerny's sweet talk for fools, meaning a demanding CPU/GPU game) it would cut the clock "by a few percent", i.e. down to 2.2 GHz or 2.1 GHz.
Right?

you guys are going to feel real stupid when the machine is out.




edit:
I won't even bother to comment again on FAKE <<insider developer matt>>'s stupid comments, dear geordiemp.
We've been through that multiple times. The guy is a phony who is somehow presented as a "rare case developer that has access to both consoles" just to damage control the PS5.
Just recently he was saying that there is no Series S (even an acquaintance of a developer on Xbox would know that there was),
and he flip-flopped once again on that RE8 news, where from "no way it's not 4K" he went to "yes, you will see many non-native 4K games, but it's for your own good".
Sorry, but I won't waste any more time commenting on this sucker.
 
Last edited:

psorcerer

Banned
but it was very possible to create one that has a constant clockspeed at 2.23Ghz,

Yup, because power is variable, and when there are degenerate cases where too much power is used, the system will underclock itself (instead of spinning the fans like crazy, which the XBSX will do).

and at the "worst case game" -per cerny sweet talk for fools for a demanding cpu/gpu game- it would cut the clock "by a few percent" , aka cut to 2.2Ghz or 2.1Ghz
right?

He talks about degenerate cases, where a simple workload is running uncapped on the hardware.
You cannot eliminate that, and there were multiple cases of that last gen.
 
Last edited:
Yup, because power is variable, and when there are degenerate cases where too much power is used, the system will underclock itself (instead of spinning the fans like crazy, which the XBSX will do).
Well, now you are just blatantly trolling without even trying to be surface-level serious.

So let me answer you in a way you may understand:
the Xbox One X is whisper quiet, in direct contrast to that junk-quality PS4 Pro, which makes all this noise and shuts down or freezes.
Microsoft has already assured everyone that the Series X has the same sound levels as the One X.
A few days ago Spencer himself went public saying that he got the final retail Xbox, complete with final packaging and all; he set it up and it's just as quiet as the One X.

On the contrary, the PS5 having a much smaller chip but coming in the largest console casing ever is a very good reason for anyone to have doubts about its sound levels and behaviour, especially given Sony's track record.

Now stop trolling.
 
Last edited:

Allandor

Member
But I can't access both Series X pools at the same time. That's my point. The only chips with "free access" are the 4 GB of VRAM in 4 chips, 1 GB each.

On PS5 I can access CPU and GPU data at the same time, because each 2 GB chip has its own independent 32-bit access.
No, you can't.
If one component accesses the memory pool, the others must wait. So it makes no difference whether you access the 10 GB or the 6 GB pool, but from the 10 GB pool you can get your data faster.
You will always want to split your data across all chips of a pool (well, except on Xbox with its 2 pools), because then you can get your data faster. But yes, it would have been much better if you could access both pools simultaneously.
 
Last edited:

geordiemp

Member
Yes, I know. It's not a boost. It's a chef Cerny magical boost, therefore it defies logic.


So it was impossible to produce a system that would run at a constant 2.0 GHz,
but it was very possible to create one that has a constant clock speed of 2.23 GHz,
and in the "worst case game" (per Cerny's sweet talk for fools, meaning a demanding CPU/GPU game) it would cut the clock "by a few percent", i.e. down to 2.2 GHz or 2.1 GHz.
Right?

you guys are going to feel real stupid when the machine is out.




edit:
I won't even bother to comment again on FAKE <<insider developer matt>>'s stupid comments, dear geordiemp.
We've been through that multiple times. The guy is a phony who is somehow presented as a "rare case developer that has access to both consoles" just to damage control the PS5.
Just recently he was saying that there is no Series S (even an acquaintance of a developer on Xbox would know that there was),
and he flip-flopped once again on that RE8 news, where from "no way it's not 4K" he went to "yes, you will see many non-native 4K games, but it's for your own good".
Sorry, but I won't waste any more time commenting on this sucker.

Answer me this: if you bombarded the XSX with AVX 256 in a malicious loop, would it sit at 3.6 GHz?

No, I think you know that. Zen and Intel also downclock; Cerny said so for AVX 256, MS just has not talked about it YET.

I think you're getting excited over circumstances that do not exist; all CPUs do the same shit, or MS blocks such occurrences. I doubt the XSX has a CPU at 3.6 GHz which is magically better than Zen, PS5 and Intel for AVX 256 lol.

If you can't see the logic in the PS5 admitting 3 GHz is a problem for AVX, for Intel and Zen also, while MS stays quiet at 3.6 GHz, I don't know what to tell you.
 
Last edited:

Seph-

Member
Are we really still pushing the narrative that Sony simply botched their console? (Which anyone using common sense would know likely isn't the case.)
The console likely CAN run at max clocks on both chips, considering Cerny himself stated there's enough power and cooling to do so. It probably just wouldn't net as good performance as if devs took the time to optimize their game. The same can be said of the Series X. Anyone pretending that a GPU or CPU runs at max load 100% of the time is just being silly, especially since it can vary per frame. I doubt either console ever runs at the maximum, honestly.

Seems to me people just can't swallow that one company went for the normal console build while another went with something different. One has more compute, one computes faster. The end result will likely still be that 3rd parties have a slight resolution difference and 1st parties do their normal jobs. But either way, on topic: Hot Chips was neat, and I would recommend it to anyone who is actually interested in that type of thing, not just interested in using it for some silly console war.
 
Are we really still pushing the narrative that Sony simply botched their console? (Which anyone using common sense would know likely isn't the case.)
The console likely CAN run at max clocks on both chips, considering Cerny himself stated there's enough power and cooling to do so. It probably just wouldn't net as good performance as if devs took the time to optimize their game. The same can be said of the Series X. Anyone pretending that a GPU or CPU runs at max load 100% of the time is just being silly, especially since it can vary per frame. I doubt either console ever runs at the maximum, honestly.

Seems to me people just can't swallow that one company went for the normal console build while another went with something different. One has more compute, one computes faster. The end result will likely still be that 3rd parties have a slight resolution difference and 1st parties do their normal jobs. But either way, on topic: Hot Chips was neat, and I would recommend it to anyone who is actually interested in that type of thing, not just interested in using it for some silly console war.

One platform is more powerful at sustained performance and smaller, versus a weaker console with variable performance that is bigger. One is just better designed. That doesn't mean the weaker platform is botched.
 
Last edited:
Seems to me people just can't swallow that one company went for the normal console build while another went with something different. One has more compute, one computes faster. The end result will likely still be that 3rd parties have a slight resolution difference and 1st parties do their normal jobs. But either way, on topic: Hot Chips was neat, and I would recommend it to anyone who is actually interested in that type of thing, not just interested in using it for some silly console war.
It is more than obvious what is hard to swallow, and for whom exactly.
 
Last edited:

psorcerer

Banned
Well, now you are just blatantly trolling without even trying to be surface-level serious.

So let me answer you in a way you may understand:
the Xbox One X is whisper quiet, in direct contrast to that junk-quality PS4 Pro, which makes all this noise and shuts down or freezes.
Microsoft has already assured everyone that the Series X has the same sound levels as the One X.
A few days ago Spencer himself went public saying that he got the final retail Xbox, complete with final packaging and all; he set it up and it's just as quiet as the One X.

On the contrary, the PS5 having a much smaller chip but coming in the largest console casing ever is a very good reason for anyone to have doubts about its sound levels and behaviour, especially given Sony's track record.

Now stop trolling.

I'm not sure you understand: any constant-frequency hardware will spin its fans like crazy and overheat if presented with a high power load; it's physics.
If it doesn't, then the load was not high-power enough.
I.e. you are actually improving my argument right now.
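A rough back-of-the-envelope sketch of why capping power instead of frequency works out this way: dynamic switching power scales roughly as C·V²·f, so shaving a couple of percent off the clock (and the voltage that clock needs) buys a disproportionately large power saving. The clock and voltage numbers below are made-up illustrations, not published specs of either console.

```python
# Toy model: dynamic power ~ C * V^2 * f.
# The specific clocks/voltages are illustrative assumptions, not real specs.

def dynamic_power(freq_ghz: float, volts: float, capacitance: float = 1.0) -> float:
    """Relative dynamic switching power, P ~ C * V^2 * f."""
    return capacitance * volts ** 2 * freq_ghz

p_full = dynamic_power(2.23, 1.00)   # hypothetical peak clock and voltage
p_cut  = dynamic_power(2.18, 0.97)   # ~2% lower clock, ~3% lower voltage

print(f"clock reduced by {100 * (1 - 2.18 / 2.23):.1f}%")
print(f"power reduced by {100 * (1 - p_cut / p_full):.1f}%")
# ~2% off the clock (plus the voltage drop it allows) cuts power by ~8%
# in this toy model, which is why a "couple of percent" clock cut can
# absorb a worst-case power spike without ramping the fans.
```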
 

Seph-

Member
One platform is more powerful at sustained performance versus a weaker console with variable performance. One is just better designed. That doesn't mean the weaker platform is botched.
That reply is subjective and offers zero proof of one being designed better, as both clearly have different goals. One is more powerful in compute, one is faster in compute. This sustained vs variable debate is just silly.

It is more than obvious what is hard to swallow, and for whom exactly.
Are you trying to insinuate something here?
 
Last edited:

Zathalus

Member
I won't even bother to comment again on FAKE <<insider developer matt>>'s stupid comments, dear geordiemp.
We've been through that multiple times. The guy is a phony who is somehow presented as a "rare case developer that has access to both consoles" just to damage control the PS5.
Just recently he was saying that there is no Series S (even an acquaintance of a developer on Xbox would know that there was),
and he flip-flopped once again on that RE8 news, where from "no way it's not 4K" he went to "yes, you will see many non-native 4K games, but it's for your own good".
Sorry, but I won't waste any more time commenting on this sucker.

You keep accusing Matt of being fake to dismiss his claims about the variable clock and the PS5 being a 10 TFLOP console. What evidence do you have to back this up? I would love to see the original statements if you have them. Right off the bat, the statement that RE8 is 4K while many other games will be non-native 4K is not contradictory in and of itself.
 
You keep accusing Matt of being fake to dismiss his claims about the variable clock and the PS5 being a 10 TFLOP console. What evidence do you have to back this up? I would love to see the original statements if you have them. Right off the bat, the statement that RE8 is 4K while many other games will be non-native 4K is not contradictory in and of itself.
I've already posted screenshots of his statements in other threads on this subject. I won't bother to do that again.
 

GODbody

Member
I think it does.

R0tvsIR.jpg
This is pure FUD. All of the chips have the same speed of 56 GB/s. If you read from 10 chips at once you get 560 GB/s. If you read from 6 chips at once you get 336 GB/s. They make the false assumption that half the bus will just go unused, which is not going to be the case. There are 20 channels, meaning you are always going to be able to read/write from 20 different locations in memory at all times. While you're reading from the 6 chips, maybe you're also reading from the other 4, and vice versa. Since there are ten chips, guess what? It's always going to be 56 GB/s x 10 (560 GB/s) as long as the memory is managed well. The memory bus doesn't just stop running at 320 bits because it needs to read/write from the standard memory; it's able to utilize all of its bit rate every cycle.

PaintTinJr, I probably should have elaborated on this as well. Maybe this will give you a better picture of the memory config.
 
Last edited:
If you already have the screenshots then it certainly won't be much effort to post them again. Or just link the post in question.
JUST FOR YOU, I went to the trouble of finding and re-uploading his comment that "he reported that Series S was cancelled, because <<everyone else did also>>".

matt-clown3.jpg

For the rest, if you feel you still need more proof on the subject, just go read the Dusk Golem thread here on GAF.

Now, for those of you who still believe this stupid clown to be a next-gen developer, frankly I don't have any more words for you, as they would be 100% wasted.
 

Blight0r

Neo Member
Finally, some Xbox specs.

From a high level it seems Sony and MS took really different approaches in the design of their audio chips!

Now, I'm no hardware guru, so I would welcome corrections where appropriate.

Sony made a sort of general-purpose (I use the term loosely!) programmable SIMD execution unit in Tempest?

MS made several similar units dedicated to different things (MOVAD for Opus decode, Logan for XMA decode, CFPU2 as a programmable SIMD unit with 4 FP? engines)?

Both of them seem like a generational leap in available processing for audio, and I wonder how these implementations will be used!
 

Seph-

Member
JUST FOR YOU, I went to the trouble of finding and re-uploading his comment that "he reported that Series S was cancelled, because <<everyone else did also>>".

matt-clown3.jpg

For the rest, if you feel you still need more proof on the subject, just go read the Dusk Golem thread here on GAF.

Now, for those of you who still believe this stupid clown to be a next-gen developer, frankly I don't have any more words for you, as they would be 100% wasted.
You are aware that MULTIPLE people did report Lockhart being cancelled, right? I've followed what Matt has said, and honestly the way you're coming off just feels like you have a vendetta or something, even though he was basically one of the only people to actually say the Xbox was stronger before anything was revealed, and in the ~15% range. Both of those ended up being pretty exact. But either way, carry on; just letting you know that post really didn't prove your point.
 
Last edited:
You are aware that MULTIPLE people did report Lockhart being cancelled, right? I've followed what Matt has said, and honestly the way you're coming off just feels like you have a vendetta or something, even though he was basically one of the only people to actually say the Xbox was stronger before anything was revealed, and in the ~15% range. Both of those ended up being pretty exact. But either way, carry on; just letting you know that post really didn't prove your point.
EVERYBODY who saw the AMD leaks knew Xbox was going to be the stronger machine. Except fanboys, of course; they always have a problem with reality.

Now answer me this:
how in the hell is it possible to be developing on next-gen consoles, or even be close to that environment, and say that the Series S is cancelled?
If your logic fails you even on this, then I believe we haven't got much to say.
 
Last edited:

Seph-

Member
EVERYBODY who saw the AMD leaks knew. Except fanboys, of course; they always have a problem with reality.

Now answer me this:
how in the hell is it possible to be developing on next-gen consoles, or even be close to that environment, and say that the Series S is cancelled?
If your logic fails you even on this, then I believe we haven't got much to say.
Well, if you're to believe (and this is a popularly believed statement here) that dev kits for the Xbox arrived "much later" than the PS5's, then by that logic Lockhart would have merely been mentioned on paper without a dev kit in sight at that point, because its dev kits likely would have arrived at the same time as the Series X's, considering they use practically the same tech just at different clocks and CU counts. So by that assumption it would become something that wasn't communicated and could have been cancelled at any time. The AMD leaks also had some stuff accurate, sure, but they weren't the end-all be-all.
 
Last edited:
Well, if you're to believe (and this is a popularly believed statement here) that dev kits for the Xbox arrived "much later" than the PS5's, then by that logic Lockhart would have merely been mentioned on paper without a dev kit in sight at that point, because its dev kits likely would have arrived at the same time as the Series X's, considering they use practically the same tech just at different clocks and CU counts. The AMD leaks had some stuff accurate, sure, but they weren't the end-all be-all.
Dude, you are daydreaming here.
Xbox dev kits have had a mode for the S.
And at some point this was deemed by some developers to be "a little confusing", so MS sent S kits.

Now, I've had enough nonsense for the last couple of hours, so please excuse me for not continuing the nonsense.
Thank you.
 
Last edited:

Zathalus

Member
JUST FOR YOU, I went to the trouble of finding and re-uploading his comment that "he reported that Series S was cancelled, because <<everyone else did also>>".

For the rest, if you feel you still need more proof on the subject, just go read the Dusk Golem thread here on GAF.

Now, for those of you who still believe this stupid clown to be a next-gen developer, frankly I don't have any more words for you, as they would be 100% wasted.

He reported that Lockhart was cancelled, not that it never existed. If you bothered to do research, you would find that this was reported via multiple sources well over a year ago, as Microsoft had offered no Lockhart development kits to anyone. They offered no hardware and no information. There was even a vetted thread on this very forum about it:

https://www.neogaf.com/threads/deve...ack-next-gen-and-ps5-vs-xsx-dev-kits.1553103/

The first reference to it was in a profiling mode in the normal dev kits earlier this year:


No wonder a lot of industry insiders said it was cancelled a year ago: they had nothing from Microsoft, and then obviously the situation changed.

Dude, the man called the performance difference between Xbox Series X and PS5 exactly last year:


That was months before the PS5 Cerny presentation.
 

Seph-

Member
Dude, you are daydreaming here.
Xbox dev kits have had a mode for the S.
And at some point this was deemed by some developers to be "a little confusing", so MS sent S kits.

Now, I've had enough nonsense for the last couple of hours, so please excuse me for not continuing the nonsense.
Thank you.
Hey, whatever helps you sleep at night. I presented the argument. But I guess it just didn't fit your narrative. Have a good one.
 

Seph-

Member
He reported that Lockhart was cancelled, not that it never existed. If you bothered to do research, you would find that this was reported via multiple sources well over a year ago, as Microsoft had offered no Lockhart development kits to anyone. They offered no hardware and no information. There was even a vetted thread on this very forum about it:

https://www.neogaf.com/threads/deve...ack-next-gen-and-ps5-vs-xsx-dev-kits.1553103/

The first reference to it was in a profiling mode in the normal dev kits earlier this year:


No wonder a lot of industry insiders said it was cancelled a year ago: they had nothing from Microsoft, and then obviously the situation changed.

Dude, the man called the performance difference between Xbox Series X and PS5 exactly last year:


That was months before the PS5 Cerny presentation.
It's no use; I can see where he's going at this point. Solid research though, man. *Oops, meant to add this as an edit*
 
Last edited:

Zathalus

Member
It's no use; I can see where he's going at this point. Solid research though, man.
Thanks. Not only that, I dug up a post from last year where Matt corrected himself:


Considering how accurate Matt has been on numerous things, and the reports from multiple sources, it does appear that Microsoft cancelled Lockhart and then un-cancelled it.

It is a perfect explanation for the dev kits issue.
 
This is pure FUD. All of the chips have the same speed of 56 GB/s. If you read from 10 chips at once you get 560 GB/s. If you read from 6 chips at once you get 336 GB/s. They make the false assumption that half the bus will just go unused, which is not going to be the case. There are 20 channels, meaning you are always going to be able to read/write from 20 different locations in memory at all times. While you're reading from the 6 chips, maybe you're also reading from the other 4, and vice versa. Since there are ten chips, guess what? It's always going to be 56 GB/s x 10 (560 GB/s) as long as the memory is managed well. The memory bus doesn't just stop running at 320 bits because it needs to read/write from the standard memory; it's able to utilize all of its bit rate every cycle.

PaintTinJr, I probably should have elaborated on this as well. Maybe this will give you a better picture of the memory config.

Why do they have 2 channels per 1 GB GPU-dedicated VRAM chip? Remember, there are 4 chips of 1 GB dedicated to the GPU, and 6 chips of 2 GB in the CPU/GPU pool.

It sounds to me like one channel to read and the other to write to RAM, not to seek 2 different 32-bit addresses at the same time.
 

I'm starting to feel sorry for the time I've wasted on you.
That tweet you are posting above: even a stupid person can understand that it's not the first reference to the S mode, because it addresses exactly one of the things that some devs found "confusing" while developing for the S, i.e. how the Series S profile, which uses less memory, cannot capture the entirety of the larger amount of memory the Series X has.

Anyway, as I wrote above, I have had enough nonsense for today.

Considering how accurate Matt has been on numerous things, and the reports from multiple sources,
it does appear that Microsoft cancelled Lockhart and then un-cancelled it.
:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:

Hey, whatever helps you sleep at night. I presented the argument. But I guess it just didn't fit your narrative. Have a good one.

off to my ignore list you go
 
Last edited:

Seph-

Member
Thanks. Not only that, I dug up a post from last year where Matt corrected himself:


Considering how accurate Matt has been on numerous things, and the reports from multiple sources, it does appear that Microsoft cancelled Lockhart and then un-cancelled it.

It is a perfect explanation for the dev kits issue.
Some people will only accept what fits their narrative and reality. Unfortunately, in this case it's over a piece of plastic. Frankly, Matt and Heisenberg have been the most reliable sources we have gotten. Not too sure why people are angry that we have 2 solid and possibly great-performing consoles.

I'm starting to feel sorry for the time I've wasted on you.
That tweet you are posting above: even a stupid person can understand that it's not the first reference to the S mode, because it addresses exactly one of the things that some devs found "confusing", i.e. how the S profile, which uses less memory, cannot capture the entirety of the larger amount of memory the Series X has.

Anyway, as I wrote above, I have had enough nonsense for today.


:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:



off to my ignore list you go
Okie Dokie
 

Zathalus

Member
I'm starting to feel sorry for the time I've wasted on you.
That tweet you are posting above: even a stupid person can understand that it's not the first reference to the S mode, because it addresses exactly one of the things that some devs found "confusing", i.e. how the S profile, which uses less memory, cannot capture the entirety of the larger amount of memory the Series X has.

Anyway, as I wrote above, I have had enough nonsense for today.

The June 2020 Gamecore Development Kit update was the first reference to a performance profile for Lockhart. Even that vetted GAF post mentioned it as such:

We didn't get any type of Lockhart hardware until very recently. Before we actually had the hardware we were given a profile on Anaconda dev kits that would mimic what Lockhart would be.

This is almost a year after the claims that Lockhart was cancelled. Why do you think it took so long to get Lockhart profiles and dev kits? Could it be that Microsoft changed their plans?


Rumors of Lockhart, then nothing from Microsoft, leading to cancellation rumors, and then it popped up again only this very year. Nobody was developing for Lockhart back in mid-2019, because the dev kits didn't exist.

That vetted post also backs up Matt's statement that the PS5 and XSX are close to each other in power:

The difference will come down to effects over resolution for us. We have both dev kits pushing 4K/60 on Borderlands 3 and we have almost zero loading times on both kits. Looking at them side by side the image is very similar.

I see you are also ignoring the fact that Matt knew the PS5 TFLOPs months before it was announced.

The only reason you are trying so hard to discredit Matt is that you think a massive performance gap exists between the two consoles, when multiple sources have stated that there simply isn't one.
 

GODbody

Member
Why do they have 2 channels per 1 GB GPU-dedicated VRAM chip? Remember, there are 4 chips of 1 GB dedicated to the GPU, and 6 chips of 2 GB in the CPU/GPU pool.

It sounds to me like one channel to read and the other to write to RAM, not to seek 2 different 32-bit addresses at the same time.

The memory is unified. I think you are overcomplicating things. I've explained the configuration in my previous posts.

We do know the details of the arrangement. It was posted earlier in the thread.

h22uypU.jpg


It's unified memory with asymmetrical chips. There are 10 memory chips in total: four 1 GB chips and six 2 GB chips. Twenty 16-bit channels.

Data in the GPU-optimal space can span the first 1 GB of all the chips, giving a speed of 560 GB/s (560 ÷ 10 = 56 GB/s per chip).

Data in the standard memory space lives only in the 2 GB chips; it can only span the second GB of those six chips, giving a speed of 336 GB/s (336 ÷ 6 = 56 GB/s per chip).

uCQsgut.png


All of the chips have the same underlying GDDR6 speed (56 GB/s).

All channels are capable of reads and writes. Since there are 10 chips and 20 channels, each chip gets 2 channels. 20 x 16-bit channels = 320-bit bus. Even if data is being read from the standard memory, which occupies the second GB of the six 2 GB chips, data can still be read from the four 1 GB chips as well. That post from Lady Gaia assumes that for some reason the bus becomes a 192-bit bus when accessing the slower-performing memory, but that is simply not going to be the case, as the other four 1 GB chips are still connected. And it's GDDR6 memory, so two independent accesses can be in flight per chip (one per 16-bit channel): 14 Gbps per pin x 2 channels x 16 bits ÷ 8 = 56 GB/s per chip.
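Putting the arithmetic from the slides in one place, here is a quick illustrative calculation (nothing official, just the figures quoted above):

```python
# Back-of-the-envelope bandwidth math for the XSX memory layout described above:
# 10 GDDR6 chips on a 320-bit bus (20 x 16-bit channels), 14 Gbps per pin.

PIN_RATE_GBPS = 14        # per-pin data rate
PINS_PER_CHIP = 32        # two 16-bit channels per chip
CHIPS_TOTAL = 10          # four 1 GB chips + six 2 GB chips
CHIPS_STANDARD = 6        # only the 2 GB chips hold the "standard" 6 GB

per_chip_gbs = PIN_RATE_GBPS * PINS_PER_CHIP / 8        # 56 GB/s per chip
gpu_optimal_gbs = per_chip_gbs * CHIPS_TOTAL            # 560 GB/s, spans all 10 chips
standard_gbs = per_chip_gbs * CHIPS_STANDARD            # 336 GB/s, spans only the 6 bigger chips

print(per_chip_gbs, gpu_optimal_gbs, standard_gbs)      # 56.0 560.0 336.0
```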
 
Last edited:
The memory is unified. I think you are overcomplicating things. I've explained the configuration in my previous posts.



All channels are capable of reads and writes. Since there are 10 chips and 20 channels, each chip gets 2 channels. 20 x 16-bit channels = 320-bit bus. Even if data is being read from the standard memory, which occupies the second GB of the six 2 GB chips, data can still be read from the four 1 GB chips as well. That post from Lady Gaia assumes that for some reason the bus becomes a 192-bit bus when accessing the slower-performing memory, but that is simply not going to be the case, as the other four 1 GB chips are still connected. And it's GDDR6 memory, so two independent accesses can be in flight per chip (one per 16-bit channel): 14 Gbps per pin x 2 channels x 16 bits ÷ 8 = 56 GB/s per chip.

16 bits = 2^16 = 64K.

How can I address 1 GB from the CPU or GPU with 16 bits?
 

Zathalus

Member
I hereby DECLARE the un-cancellation of Scalebound.
Cancelled may be a bit strong a word, lol. But it certainly seems they paused it for unknown reasons. It explains why developers only got Lockhart profiles and dev kits this year. Companies change strategies and products all the time; it certainly wouldn't be strange.
 
I'm not sure if you're trolling or not, but that's 16 bits per clock per channel, 320 bits total per clock. So you can transfer that 1 GB in roughly 1/560th of a second if you're spanning the data across all 10 memory chips.

Forget 320 bits or 10 chips. Let's talk about only one 2 GB chip.

You have a 2 GB RAM chip... 2 channels for this chip with 16 bits each. OK, that's your point.

To address 1 GB for the CPU or 1 GB for the GPU in this chip, I need at least 30 bits: 2^30 = 1 GB.

I ask again: how can I address 1 GB with 16 bits?
 

psorcerer

Banned
Forget 320 bits or 10 chips. Let's talk about only one 2 GB chip.

You have a 2 GB RAM chip... 2 channels for this chip with 16 bits each. OK, that's your point.

To address 1 GB for the CPU or 1 GB for the GPU in this chip, I need at least 30 bits: 2^30 = 1 GB.

I ask again: how can I address 1 GB with 16 bits?

A 64-bit address bus and a 16-bit data bus.
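In other words, the 16 bits describe how wide each data channel is, not how many bits are available for addressing; addresses travel separately, and data moves in bursts. A simplified sketch of that distinction (burst length 16 is the GDDR6 norm; everything else here is deliberately simplified, not a real DRAM model):

```python
# Data-bus width vs. address space. The 16-bit figure is the width of one data
# channel; addressing is handled separately, so a 16-bit channel can still
# reach every location in a 2 GB chip. Simplified illustration only.

CHANNEL_WIDTH_BITS = 16              # data pins per GDDR6 channel
BURST_LENGTH = 16                    # beats per access (GDDR6 16n prefetch)
CHIP_CAPACITY_BYTES = 2 * 2**30      # one 2 GB chip
CHIP_BW_GBS = 56                     # both channels of the chip combined

bytes_per_burst = CHANNEL_WIDTH_BITS // 8 * BURST_LENGTH     # 32 bytes per access
burst_locations = CHIP_CAPACITY_BYTES // bytes_per_burst     # addressable bursts

print(f"{bytes_per_burst} bytes move per burst on a 16-bit channel")
print(f"{burst_locations:,} distinct burst-sized locations in a 2 GB chip")
print(f"~{CHIP_CAPACITY_BYTES / (CHIP_BW_GBS * 1e9) * 1e3:.0f} ms to stream the whole chip")
```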
 

GODbody

Member
Forget 320 bits or 10 chips. Let's talk about only one 2 GB chip.

You have a 2 GB RAM chip... 2 channels for this chip with 16 bits each. OK, that's your point.

To address 1 GB for the CPU or 1 GB for the GPU in this chip, I need at least 30 bits: 2^30 = 1 GB.

I ask again: how can I address 1 GB with 16 bits?

You wouldn't address 1 GB in one chip alone in a unified memory system; that's not how you maximize the bandwidth. If you placed the entire 1 GB in a single 1 GB chip, you would only achieve a bandwidth of 56 GB/s. They use a technique called interleaving to spread 1 GB of data across all 10 chips to achieve the maximum bandwidth. Thus you are able to place 1/10th of a GB in each of the 10 chips and pull from each chip at 56 GB/s, x 10, which is 560 GB/s. I've already gone over this in other posts. You can read those for more information.
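For anyone wondering what "interleaving" means concretely, here's a toy sketch of striping a linear address range across chips. The 256-byte stripe size is an assumption for illustration, not the actual XSX granularity.

```python
# Toy model of address interleaving: consecutive stripes of a buffer land on
# consecutive chips, so a large read keeps all ten chips busy at once instead
# of queueing on a single 56 GB/s chip. Stripe size is an assumed example.

STRIPE_BYTES = 256
NUM_CHIPS = 10

def chip_and_offset(address: int) -> tuple[int, int]:
    """Map a linear byte address to (chip index, offset within that chip)."""
    stripe, within = divmod(address, STRIPE_BYTES)
    chip = stripe % NUM_CHIPS
    offset = (stripe // NUM_CHIPS) * STRIPE_BYTES + within
    return chip, offset

# The first 40 stripes of a buffer rotate through all ten chips:
touched = [chip_and_offset(a)[0] for a in range(0, 40 * STRIPE_BYTES, STRIPE_BYTES)]
print(touched)   # 0..9 repeating: the load (and bandwidth) is spread across every chip
```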
 
Last edited:

jimbojim

Banned
You wouldn't address 1 GB in one chip alone in a unified memory system; that's not how you maximize the bandwidth. If you placed the entire 1 GB in a single 1 GB chip, you would only achieve a bandwidth of 56 GB/s. They use a technique called interleaving to spread 1 GB of data across all 10 chips to achieve the maximum bandwidth. Thus you are able to place 1/10th of a GB in each of the 10 chips and pull from each chip at 56 GB/s, x 10, which is 560 GB/s. I've already gone over this in other posts. You can read those for more information.

Looks like NXGamer also agrees with Lady Gaia's post:


6KZLlW5.jpg


What's your call on this, NXGamer?

Check post #966 and onwards.
 
Last edited:

GODbody

Member
Looks like NXGamer also agrees with Lady Gaia's post:


6KZLlW5.jpg


What's your call on this, NXGamer?
Oh damn. The legend himself. Maybe I'm wrong and he can correct me. But that post is from March 29th, so he might also have a different opinion on the matter. We shall see.


Edit: Adding my claim
This is pure FUD. All of the chips have the same speed of 56 GB/s. If you read from 10 chips at once you get 560 GB/s. If you read from 6 chips at once you get 336 GB/s. They make the false assumption that half the bus will just go unused, which is not going to be the case. There are 20 channels, meaning you are always going to be able to read/write from 20 different locations in memory at all times. While you're reading from the 6 chips, maybe you're also reading from the other 4, and vice versa. Since there are ten chips, guess what? It's always going to be 56 GB/s x 10 (560 GB/s) as long as the memory is managed well. The memory bus doesn't just stop running at 320 bits because it needs to read/write from the standard memory; it's able to utilize all of its bit rate every cycle.

PaintTinJr, I probably should have elaborated on this as well. Maybe this will give you a better picture of the memory config.
 
Last edited:
A good example of this is the Xbox Series X hardware. Microsoft has two separate pools of RAM, the same mistake that they made with the Xbox One. One pool of RAM has high bandwidth and the other pool has lower bandwidth. As a result, coding for the console is sometimes problematic, because the number of things we have to put in the faster pool of RAM is so large that it will be annoying again, and to add insult to injury, the 4K output needs even more bandwidth. So there will be some factors which bottleneck the XSX's GPU.

Ali Salehi, Crytek Rendering Engineer.

Ali Salehi Complete Interview
 

rnlval

Member
Can anyone explain in layman's terms why one would go with 52 CUs at a lower clock speed rather than 36 CUs at higher clocks? Die size, yields and cost are the first topics of discussion in the slides, signifying their importance. What are the tradeoffs in both cases?
When compared to the RX 5700 XT, the XSX's on-chip L0/L1 cache and instruction queue storage are higher due to the higher CU count, backed by a 5 MB L2 cache. The RX 5700 XT has a 4 MB L2 cache.
 

rnlval

Member
Answer me this: if you bombarded the XSX with AVX 256 in a malicious loop, would it sit at 3.6 GHz?

No, I think you know that. Zen and Intel also downclock; Cerny said so for AVX 256, MS just has not talked about it YET.

I think you're getting excited over circumstances that do not exist; all CPUs do the same shit, or MS blocks such occurrences. I doubt the XSX has a CPU at 3.6 GHz which is magically better than Zen, PS5 and Intel for AVX 256 lol.

If you can't see the logic in the PS5 admitting 3 GHz is a problem for AVX, for Intel and Zen also, while MS stays quiet at 3.6 GHz, I don't know what to tell you.
If the XSX CPU's base clock is 3.6 GHz, then it can sustain AVX2 at 3.6 GHz.
 

rnlval

Member
You wouldn't address 1 GB in one chip alone in a unified memory system; that's not how you maximize the bandwidth. If you placed the entire 1 GB in a single 1 GB chip, you would only achieve a bandwidth of 56 GB/s. They use a technique called interleaving to spread 1 GB of data across all 10 chips to achieve the maximum bandwidth. Thus you are able to place 1/10th of a GB in each of the 10 chips and pull from each chip at 56 GB/s, x 10, which is 560 GB/s. I've already gone over this in other posts. You can read those for more information.
The XSX has a 320-bit bus, hence a single 32-bit data type can't be striped across the 320-bit bus; you need ten 32-bit data payloads to fully populate the 320-bit bus.

I think it does.

R0tvsIR.jpg
One problem: GPUs don't operate on full 64-, 128-, 192- or 320-bit datatypes. The XSX has a 320-bit bus, hence a single 32-bit data type can't be striped across the 320-bit bus; you need ten 32-bit data payloads to fully populate the 320-bit bus.

When compared to the RX 5700 XT, the XSX GPU's on-chip L0/L1 cache and instruction queue storage are 25% higher, backed by a 5 MB L2 cache, while the RX 5700 XT has a 4 MB L2 cache.

For Gears 5, the XSX GPU is superior to the RX 5700 XT by about 25%.

Framebuffers are among the highest consumers of memory bandwidth while needing relatively little memory storage.
 
Last edited: