
Next-Gen PS5 & XSX |OT| Console tEch threaD


SonGoku

Member
You didn't use real-life average clock speed for Turings

RTX 2080 FE has 1897 Mhz average which is ~11.66 TFLOPS


RTX 2080 FE has 1897 Mhz average which is ~11.79 TFLOPS

RTX 2080 Ti FE has 1824 Mhz average which is ~15.88 TFLOPS
Interesting that NV cards perform better than advertised. How persistent are those boost clocks? Any ideas or analysis?
BTW, the RTX 2080 at 1897 MHz is 11.17 TF, which puts the PS5 closer to the 2080 than to the 2070S.
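For anyone sanity-checking these numbers, every FP32 figure in this exchange comes from the same formula: shader count x 2 ops per clock x clock speed. A minimal sketch in Python, using the publicly listed shader counts and the average clocks quoted above:

```python
# Quick sanity check: FP32 TFLOPS = shader ALUs x 2 ops/clock (FMA) x clock (MHz) / 1e6
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1_000_000

print(round(tflops(2944, 1897), 2))      # RTX 2080: 2944 CUDA cores @ 1897 MHz -> 11.17
print(round(tflops(4352, 1824), 2))      # RTX 2080 Ti: 4352 CUDA cores @ 1824 MHz -> 15.88
print(round(tflops(36 * 64, 2230), 2))   # PS5: 36 CUs x 64 ALUs @ 2230 MHz -> 10.28
```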
 

Stuart360

Member
Well, it seems Sony's PR is working. I was just watching a Twitch streamer I follow, an older PC gamer in his 50s, and he was talking about a video he watched on YouTube explaining how not only can the XSX not compete with the PS5, but PCs can't compete either, and that it will take years for PC gaming to catch up, if it does at all, as in his mind PC gaming may now die because of the PS5.
I'm not even joking, I couldn't believe what I was hearing. :messenger_tears_of_joy:
 

Nickolaidas

Member
Well, it seems Sony's PR is working. I was just watching a Twitch streamer I follow, an older PC gamer in his 50s, and he was talking about a video he watched on YouTube explaining how not only can the XSX not compete with the PS5, but PCs can't compete either, and that it will take years for PC gaming to catch up, if it does at all, as in his mind PC gaming may now die because of the PS5.
I'm not even joking, I couldn't believe what I was hearing. :messenger_tears_of_joy:

Fanboy idiocy is not Sony-exclusive. Or do you somehow fail to notice some tweets from Dealer and that Bluegroho moron?
 
Last edited:

rnlval

Member
Interesting that NV cards perform better than advertised. How persistent are those boost clocks? Any ideas or analysis?
BTW, the RTX 2080 at 1897 MHz is 11.17 TF, which puts the PS5 closer to the 2080 than to the 2070S.
1. RDNA 2 has not yet been shown to be on par with Turing IPC, hence PS5 estimates should scale from RDNA v1.

2. RTX cards' memory bandwidth is not shared.

3. PS5 has 10.28 TFLOPS, which is about 0.89 TFLOPS short of the RTX 2080's 11.17 TFLOPS.
 

rnlval

Member
Isn't BCPack already accounted for in the 4.8 GB/s and 6.0 GB/s figures?



XSX SSD IO has two decompression paths:
1. General data via Zlib at 4.8 GB/s
2. Textures via BCPack at "more than 6 GB/s"
 

SonGoku

Member
1. RDNA 2 has not yet been shown to be on par with Turing IPC, hence PS5 estimates should scale from RDNA v1.

2. RTX cards' memory bandwidth is not shared.

3. PS5 has 10.28 TFLOPS, which is about 0.89 TFLOPS short of the RTX 2080's 11.17 TFLOPS.
1. RDNA1 is already on par; PS5's custom RDNA2 should be even better in this regard, not worse
2. Valid concern, but we don't know the extent to which RDNA2 efficiencies and Sony's own customizations alleviate the issue
3. Still closer to the 2080 than to the 2070S
 

M1chl

Currently Gif and Meme Champion
Looks like someone got an early look... (timestamped)




I’m guessing that when they travelled to Washington for the Series X hands-on video, they also got a look at Lockhart. In other words, Lockhart video/reveal is on the way folks.

XSX was unveiled by DF and Austin Evans before the PS5 GDC conference. Not sure about the date. This video is probably just them editing another piece of video material which they already have.
 

SonGoku

Member
XSX SSD IO has two decompression paths:
1. General data via Zlib at 4.8 GB/s
2. Textures via BCPack at "more than 6 GB/s"
Nope.
It only has one decompression hardware block that handles both Zlib and BCPack; there are no alternative IO paths.
"The decompression hardware supports Zlib for general data and a new compression [system] called BCPack that is tailored to the GPU textures that typically comprise the vast majority of a game's package size."

By "second component" they mean in addition to the SSD (2.4 GB/s of guaranteed raw throughput).
The 6 GB/s is just a peak figure the decompression block can handle, just like the 22 GB/s on the PS5's.

That 4.8 GB/s figure already accounts for BCPack's higher compression (100%) for textures.
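To make the relationship between those numbers explicit: the sustained figures are just the raw SSD speed multiplied by whatever compression ratio the hardware block achieves on the data being read. A rough sketch of that arithmetic (the ratios are back-calculated from the officially quoted figures, not measured):

```python
# Effective IO throughput is just raw SSD speed multiplied by the compression
# ratio the decompression block achieves on the data actually being read.
def effective_throughput(raw_gb_s: float, compression_ratio: float) -> float:
    return raw_gb_s * compression_ratio

# XSX: 2.4 GB/s raw. The quoted 4.8 GB/s figure implies roughly 2:1 average
# compression across mixed Zlib/BCPack data; "more than 6 GB/s" implies a
# better-than-2.5:1 ratio on texture-heavy (BCPack) reads.
print(effective_throughput(2.4, 2.0))   # 4.8 GB/s
print(effective_throughput(2.4, 2.5))   # 6.0 GB/s

# PS5 for comparison: 5.5 GB/s raw; the 22 GB/s number is the peak the Kraken
# decompressor can emit (a best-case ~4:1 ratio), not a typical figure.
print(effective_throughput(5.5, 4.0))   # 22.0 GB/s
```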
 
Last edited:

Sinthor

Gold Member
I could've sworn, though, that when the devs got used to the PS3 machine at the end of the cycle, games like The Last of Us were killing the 360 in fidelity. Also, just like Blu-ray, I bet a fast SSD will be the standard for this generation of PCs and consoles and beyond. Everyone will adopt Sony's SSD implementation if it works as it is supposed to. Don't count out the pioneers of the same disc drive that's in every Xbox. Hell, from what I've read, MS had to scramble to get their SSD plan right due to possibly being surprised by Sony on that front. I bet they both had to adjust to each other once they caught wind: Sony with the TF change (thanks, AMD leaker) and MS with SSD speed changes.

They did indeed. The Mass Effect trilogy, when it was released, was noticeably superior to the 360 version as well. Still, if Sony hadn't had the user base it had developed with the PS2, the PS3 might have caused them some problems with the delay there was in getting devs able to use the box to its full potential. The Last of Us was a GREAT example of this though, as it was really, really good.

I actually believe that if Sony hadn't made the misstep on the price of the PS3, plus the complexity affecting developers, they may well have continued the absolute total dominance that started with the PS2. IMHO, the PS3 gave Microsoft a viable market to sell to.

With the PS4, Sony has extended its market lead. With the PS5, unless it's $600 again, I think the same will happen. Especially since it appears there will be little to no difference in the quality of the games AND Sony will have that almost instantaneous loading of games as a selling point as well. That translates to people as POWER (when they don't have to wait for something). Now of course, we'll see if this is actually the case once we start seeing playable game demos, etc. I'm just going off what developers are saying or rumored to be saying. I think it will be a very exciting generation for both companies, however. I think sales will come down more to the ecosystem and exclusives, not one specific tech spec or another. Can't wait to see what we'll be getting!
 
Tell me what a "non-compute component" would be.

Also, is the clock higher? Maybe, or maybe it's not being boosted. Or maybe it is, and the CPU is taking the hit. The unknowns around this variable clock setup to me seem like the most impactful (for good or bad) of the specs revealed.

I mean anything outside just raw number crunching/compute - non-compute. It was late and may have been slightly high. 😂

And this variable-clocks confusion has been clarified countless times. Again, given the specs, I think the Series X will have the edge in output resolution and frame rate, but the PS5 will have the edge in fidelity: higher-quality assets/detail.
 
Looks like someone got an early look... (timestamped)




I’m guessing that when they travelled to Washington for the Series X hands-on video, they also got a look at Lockhart. In other words, Lockhart video/reveal is on the way folks.


He was clearly speculating. He said, "The biggest question... if there is another Xbox coming out alongside Series X...". I doubt that, if DF had gotten a look, they would even reference it at all.
 
Last edited:
They did indeed. The Mass Effect trilogy, when it was released, was noticeably superior to the 360 version as well. Still, if Sony hadn't had the user base it had developed with the PS2, the PS3 might have caused them some problems with the delay there was in getting devs able to use the box to its full potential. The Last of Us was a GREAT example of this though, as it was really, really good.

I actually believe that if Sony hadn't made the misstep on the price of the PS3, plus the complexity affecting developers, they may well have continued the absolute total dominance that started with the PS2. IMHO, the PS3 gave Microsoft a viable market to sell to.

With the PS4, Sony has extended its market lead. With the PS5, unless it's $600 again, I think the same will happen. Especially since it appears there will be little to no difference in the quality of the games AND Sony will have that almost instantaneous loading of games as a selling point as well. That translates to people as POWER (when they don't have to wait for something). Now of course, we'll see if this is actually the case once we start seeing playable game demos, etc. I'm just going off what developers are saying or rumored to be saying. I think it will be a very exciting generation for both companies, however. I think sales will come down more to the ecosystem and exclusives, not one specific tech spec or another. Can't wait to see what we'll be getting!
I agree completely, and I'm looking forward to the games being shown. It was interesting that someone posted low-quality vs high-quality ray tracing, because I would totally be happy with low. I'm so ready for this next gen. A little frustrated that I'll have to get both systems, however.
 

FranXico

Member
I sure hope the PS5 allows you to back up/store PS5 games on an external HDD; this is very important for juggling games around that 800GB SSD.
I believe they mentioned PS4 games would be playable directly from an external HDD. I would also assume that it will be usable for PS5 game backups too.
 
If the PS3 won fidelity points due to the extra storage and streaming afforded by the Blu-ray/HDD (some SPU offloading as well for good measure), despite the delta in raw power in relation to the 360, well... I see where Cerny is going with all this.
 

ksdixon

Member
If you install a PS4 game onto your PS5's internal SSD, can it magically take advantage of the minuscule loading times, or will each game need a patch from the devs to make use of it?
 

CJY

Banned
If you install a PS4 game onto your PS5's internal SSD, can it magically take advantage of the minuscule loading times, or will each game need a patch from the devs to make use of it?
I believe if the data is traveling through the same bus and IO of the PS5, there would probably be some drastically faster loading in PS4 games. It might cause some problems and they might artificially limit the speed though.
 

PaintTinJr

Member
Both the Zen 2 CPU and RDNA GPU have large multi-MB caches to minimize context-switch overheads with the memory controllers.

XSX is already delivering RTX 2080 class results with the benchmark of a two-week raw Gears 5 port at PC Ultra settings.

The RX 5600's hit from its reduced 192-bit bus was relatively minor. If you scale from the Sapphire RX 5600 XT (7.9 TFLOPS average), PS5's 10.28 TFLOPS GPU still lands very close to the RTX 2070 Super and above the RX 5700 XT (9.66 TFLOPS average), i.e. at about the 130% level.

Scaling Techpowerup's results to XSX's GPU power lands between the RTX 2080 and RTX 2080 Super.
[Techpowerup relative performance chart, 3840x2160]
I know you are only parroting the same type of rubbish that DF promotes with misleading PC versus APU console extrapolations, but surely you can see that a game with gameplay logic designed around an old console (Xbox One), with a tiny amount of eDRAM and slow DDR3, isn't thrashing any memory bandwidth with contention on a PC, XSX, or One X, no matter the resolution or GPU eye-candy effects that don't alter gameplay, yes?
It is no test for the XSX and certainly provides little or no insight into how the XSX's asymmetric memory setup - with slower 192-bit CPU access and faster 320-bit GPU access (to just 10GB) - will hold up against the simpler PS5 memory setup - with regular 256-bit-wide access for both GPU and CPU, and still 80% of the faster XSX GPU access bandwidth, across a unified 16GB.

Let's take a worst-case scenario for getting data from a peripheral into the XSX GPU's 10GB compared to the PS5. The PS5 has access to all 16GB at the same 256-bit width, so the data is bubbled into the GDDR6 at the earliest convenience and can subsequently be accessed by the GPU at full bandwidth as necessary.

When the XSX bubbles the data into its GDDR6, the overall bandwidth drops to 336GB/s for the bubbles getting the data into the 6GB - taking 1.33x the time it took the PS5 - and then, for the internal data copy from the 6GB to the 10GB, the bus is (AFAIK) dropped down to 336GB/s again for bubbling a read back to the CPU cache, and again for writing the data back out to the 10GB, resulting in 2.66x more time at the 336GB/s bandwidth, before the next read by the GPU at 560GB/s takes 0.8x the time of the PS5 read.

Assuming the mixed bit alignment of XSX accesses doesn't add wastage in bandwidth padding, this crude comparison gives a setup cost for the first read by the GPU on PS5 of 1x write data + 1x read data = relative time of 2.

The XSX by comparison does 1.33 write data + 1.33 read data + 1.33 write data + finally 0.8 read data = relative time of 4.8, so normalising gives PS5 a time of 1 and XSX a time of 2.4 (a 140% advantage to the PS5).

It would therefore require both GPUs to read/write that data 7 more times between memory and GPU for the XSX to be level in bandwidth cost for that workload, and 8 times to gain a 20% advantage.

Seeing numbers like a 140% higher setup cost, against a 20% GPU gain per 10GB access for the XSX and a 25% loss for CPU access to the 6GB - and ignoring the statistically increased likelihood of bandwidth wasted on workload padding for 192-bit and 320-bit access, or the PS5's use of an IO complex with 6 priority levels of SSD data streaming - probably means that, without a less crude working scenario in which the non-unified data access of the XsX wins out, I'm going to struggle to believe the PS5 isn't going to have a big advantage in memory bandwidth with real workloads.
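Taking the post above purely on its own assumptions (448 GB/s as the PS5 baseline, 336 GB/s for anything touching the XSX's 6GB pool, 560 GB/s for GPU reads from the 10GB pool), the bookkeeping can be restated as a few lines of arithmetic. This is only a restatement of that model, which other posters dispute below:

```python
# Restating the crude worst-case bookkeeping above, using its own assumptions:
# every PS5 step runs at 448 GB/s (relative cost 1.0), the XSX's initial write
# plus the copy read/write through the 6GB pool run at 336 GB/s
# (cost 448/336 ~= 1.33 each), and the final GPU read from the 10GB pool runs
# at 560 GB/s (cost 448/560 = 0.8).
ps5_setup = 1.0 + 1.0                        # write + GPU read
xsx_setup = 1.33 + 1.33 + 1.33 + 0.8         # write + copy read + copy write + GPU read

print(round(ps5_setup, 2))                   # 2.0
print(round(xsx_setup, 2))                   # ~4.8
print(round(xsx_setup / ps5_setup, 2))       # ~2.4x, the claimed "140% advantage"

# After setup, each further read/write pair costs the PS5 2.0 and the XSX 1.6,
# so the XSX claws back 0.4 per pair; break-even takes about 7 extra pairs.
print(round((xsx_setup - ps5_setup) / (2.0 - 1.6)))   # 7
```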
 
I know you are only parroting the same type of rubbish that DF promotes with misleading PC versus APU console extrapolations, but surely you can see that a game with gameplay logic designed around an old console (Xbox One), with a tiny amount of eDRAM and slow DDR3, isn't thrashing any memory bandwidth with contention on a PC, XSX, or One X, no matter the resolution or GPU eye-candy effects that don't alter gameplay, yes?
It is no test for the XSX and certainly provides little or no insight into how the XSX's asymmetric memory setup - with slower 192-bit CPU access and faster 320-bit GPU access (to just 10GB) - will hold up against the simpler PS5 memory setup - with regular 256-bit-wide access for both GPU and CPU, and still 80% of the faster XSX GPU access bandwidth, across a unified 16GB.

Let's take a worst-case scenario for getting data from a peripheral into the XSX GPU's 10GB compared to the PS5. The PS5 has access to all 16GB at the same 256-bit width, so the data is bubbled into the GDDR6 at the earliest convenience and can subsequently be accessed by the GPU at full bandwidth as necessary.

When the XSX bubbles the data into its GDDR6, the overall bandwidth drops to 336GB/s for the bubbles getting the data into the 6GB - taking 1.33x the time it took the PS5 - and then, for the internal data copy from the 6GB to the 10GB, the bus is (AFAIK) dropped down to 336GB/s again for bubbling a read back to the CPU cache, and again for writing the data back out to the 10GB, resulting in 2.66x more time at the 336GB/s bandwidth, before the next read by the GPU at 560GB/s takes 0.8x the time of the PS5 read.

Assuming the mixed bit alignment of XSX accesses doesn't add wastage in bandwidth padding, this crude comparison gives a setup cost for the first read by the GPU on PS5 of 1x write data + 1x read data = relative time of 2.

The XSX by comparison does 1.33 write data + 1.33 read data + 1.33 write data + finally 0.8 read data = relative time of 4.8, so normalising gives PS5 a time of 1 and XSX a time of 2.4 (a 140% advantage to the PS5).

It would therefore require both GPUs to read/write that data 7 more times between memory and GPU for the XSX to be level in bandwidth cost for that workload, and 8 times to gain a 20% advantage.

Seeing numbers like a 140% higher setup cost, against a 20% GPU gain per 10GB access for the XSX and a 25% loss for CPU access to the 6GB - and ignoring the statistically increased likelihood of bandwidth wasted on workload padding for 192-bit and 320-bit access, or the PS5's use of an IO complex with 6 priority levels of SSD data streaming - probably means that, without a less crude working scenario in which the non-unified data access of the XsX wins out, I'm going to struggle to believe the PS5 isn't going to have a big advantage in memory bandwidth with real workloads.

Wow.

So MS had the option of a 256-bit bus with uniform bandwidth across the address range, but after extensive simulation and analysis of game workloads decided to spend millions on engineering a slower 320-bit solution (with AMD's assistance) that was also more expensive to produce.

" I going to struggle to believe the PS5 isn’t going to have a big advantage in memory bandwidth with real workloads,"

Yep, that extra wide, expensive bus was designed specifically to make the XSX slower than the cheaper alternative MS were presented with by default.

Unlike AMD memory access, your post is not coherent.
 

PaintTinJr

Member
Wow.

So MS had the option of a 256-bit bus with uniform bandwidth across the address range, but after extensive simulation and analysis of game workloads decided to spend millions on engineering a slower 320-bit solution (with AMD's assistance) that was also more expensive to produce.

" I going to struggle to believe the PS5 isn’t going to have a big advantage in memory bandwidth with real workloads,"

Yep, that extra wide, expensive bus was designed specifically to make the XSX slower than the cheaper alternative MS were presented with by default.

Unlike AMD memory access, your post is not coherent.
I thought the bus was a fudge. They wanted 16GB of GDDR6, they wanted the 12TF claim and the higher Zen 2 clocks claim, but couldn't engineer a pure HSA system that worked. Is that not the case?
 

BluRayHiDef

Banned
I know you are only parroting the same type of rubbish that DF promotes with misleading PC versus APU console extrapolations, but surely you can see that a game with gameplay logic designed around an old console (Xbox One), with a tiny amount of eDRAM and slow DDR3, isn't thrashing any memory bandwidth with contention on a PC, XSX, or One X, no matter the resolution or GPU eye-candy effects that don't alter gameplay, yes?
It is no test for the XSX and certainly provides little or no insight into how the XSX's asymmetric memory setup - with slower 192-bit CPU access and faster 320-bit GPU access (to just 10GB) - will hold up against the simpler PS5 memory setup - with regular 256-bit-wide access for both GPU and CPU, and still 80% of the faster XSX GPU access bandwidth, across a unified 16GB.

Let's take a worst-case scenario for getting data from a peripheral into the XSX GPU's 10GB compared to the PS5. The PS5 has access to all 16GB at the same 256-bit width, so the data is bubbled into the GDDR6 at the earliest convenience and can subsequently be accessed by the GPU at full bandwidth as necessary.

When the XSX bubbles the data into its GDDR6, the overall bandwidth drops to 336GB/s for the bubbles getting the data into the 6GB - taking 1.33x the time it took the PS5 - and then, for the internal data copy from the 6GB to the 10GB, the bus is (AFAIK) dropped down to 336GB/s again for bubbling a read back to the CPU cache, and again for writing the data back out to the 10GB, resulting in 2.66x more time at the 336GB/s bandwidth, before the next read by the GPU at 560GB/s takes 0.8x the time of the PS5 read.

Assuming the mixed bit alignment of XSX accesses doesn't add wastage in bandwidth padding, this crude comparison gives a setup cost for the first read by the GPU on PS5 of 1x write data + 1x read data = relative time of 2.

The XSX by comparison does 1.33 write data + 1.33 read data + 1.33 write data + finally 0.8 read data = relative time of 4.8, so normalising gives PS5 a time of 1 and XSX a time of 2.4 (a 140% advantage to the PS5).

It would therefore require both GPUs to read/write that data 7 more times between memory and GPU for the XSX to be level in bandwidth cost for that workload, and 8 times to gain a 20% advantage.

Seeing numbers like a 140% higher setup cost, against a 20% GPU gain per 10GB access for the XSX and a 25% loss for CPU access to the 6GB - and ignoring the statistically increased likelihood of bandwidth wasted on workload padding for 192-bit and 320-bit access, or the PS5's use of an IO complex with 6 priority levels of SSD data streaming - probably means that, without a less crude working scenario in which the non-unified data access of the XsX wins out, I'm going to struggle to believe the PS5 isn't going to have a big advantage in memory bandwidth with real workloads.

Why did Microsoft design the XSX with asymmetrical levels of speed in regard to accessing the memory pool?
 

rnlval

Member
1. RDNA1 is already on par; PS5's custom RDNA2 should be even better in this regard, not worse
2. Valid concern, but we don't know the extent to which RDNA2 efficiencies and Sony's own customizations alleviate the issue
3. Still closer to the 2080 than to the 2070S
1. It's not on par with Turing


[Techpowerup relative performance chart, 3840x2160]



RTX 2070 FE (a 36 CU equivalent) has a 1862 MHz average, thus ~8.58 TFLOPS average.

RX 5700 XT has a 1887 MHz average, thus ~9.66 TFLOPS average.

RDNA v1 is not on par with Turing!
 

mitchman

Gold Member
If you install a ps4 game onto your PS5 internal ssd, can it magically take advantage of the miniscule loading times, or will each game need a patch by devs to make use of it?
I assume that will depend on the game. Some games might break if they are not allowed to show something during loading, while others will be fine. I assume Sony will have a configuration to disable or slow down certain features for games that break.
 
Last edited:

SmokSmog

Member
Sony loves to be misled. Remember Cell power?
Speaking directly with developers working on next gen dev kits, they say that IBM pulled the wool over MS/Sony's eyes with their astronomical performance numbers and low costs. Basically, "You get what you pay for."


"...the real-world performance of the Xenon CPU is about twice that of the 733MHz processor in the first Xbox...floating point multiplies are apparently 1/3 as fast on Xenon as on a Pentium 4."

"The Cell processor doesnt get off the hook just because it only uses a single one of these horribly slow cores; the SPE array ends up being fairly useless in the majority of situations, making it little more than a waste of die space. "

"The most ironic bit of it all is that according to developers, if either manufacturer had decided to use an Athlon 64 or a Pentium D in their next-gen console, they would be significantly ahead of the competition in terms of CPU performance."

"Although both manufacturers royally screwed up their CPUs, all developers have agreed that they are quite pleased with the GPU power of the next-generation consoles."


At least we're getting state of the art graphics from ATI/Nvidia.
 

SonGoku

Member
1. It's not on par with Turing


[Techpowerup relative performance chart, 3840x2160]



RTX 2070 FE (a 36 CU equivalent) has a 1862 MHz average, thus ~8.58 TFLOPS average.

RX 5700 XT has a 1887 MHz average, thus ~9.66 TFLOPS average.

RDNA v1 is not on par with Turing!
These are performance comparisons at stock settings. I remember a German site made the comparison with downclocked cards, and perf per teraflop was about on par between RDNA and Turing.
The 5700 XT is also power starved and doesn't maintain the high clocks it reports.

edit:
Found it https://www.computerbase.de/2019-07.../4/#diagramm-performancerating-navi-vs-turing
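For what it's worth, that kind of comparison boils down to normalising a measured performance index by each card's sustained TFLOPS rather than its boxed spec. A minimal sketch with made-up index values (only the TFLOPS averages are the ones quoted earlier in the thread):

```python
# The kind of comparison being described: divide a measured performance index
# by each card's sustained (not boxed) TFLOPS. The index values below are
# placeholders purely for illustration; only the TFLOPS averages come from
# the figures quoted in this thread.
cards = {
    "RTX 2070 FE": (100.0, 8.58),   # (hypothetical perf index, avg TFLOPS)
    "RX 5700 XT":  (110.0, 9.66),
}

for name, (index, avg_tflops) in cards.items():
    print(f"{name}: {index / avg_tflops:.2f} index points per TFLOP")
```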
 
Last edited:
I thought the bus was a fudge. They wanted 16GB of GDDR6, they wanted the 12TF claim and the higher Zen 2 clocks claim, but couldn't engineer a pure HSA system that worked. Is that not the case?

No, it's an HSA system.

MS wanted more BW than a 256-bit bus could offer with 14 gbps memory, but didn't want to pay for 20 GB of ram. They also stated that signalling issues were a consideration.

Their memory configuration is every bit as carefully considered as Sony's variable frequency and constant power consumption.

Literally every single penny is costed into the design of these systems.

Why did Microsoft design the XSX with asymmetrical levels of speed in regard to accessing the memory pool?

Because a straightforward 256-bit bus didn't give them enough bandwidth to feed the XSX efficiently. That meant they had to go wider.

But cost and/or signalling issues didn't allow for the same size of memory chip on every 64-bit section of the memory bus.

And from profiling it's clear that some areas of memory are accessed vastly more times per frame than others. Like, many, many times more. So you focus your bandwidth where it's needed most.
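For reference, both bandwidth figures fall straight out of the chip layout Microsoft has described (ten 14 Gbps GDDR6 chips on a 320-bit bus: six 2GB and four 1GB). A quick sketch of the arithmetic:

```python
# Where the 560 GB/s and 336 GB/s figures come from, given the layout Microsoft
# has described: ten 14 Gbps GDDR6 chips on a 320-bit bus, six 2GB and four 1GB.
GBPS_PER_PIN = 14      # GDDR6 signalling rate
BITS_PER_CHIP = 32     # each chip sits on its own 32-bit channel

def bandwidth_gb_s(chips_interleaved: int) -> float:
    return chips_interleaved * BITS_PER_CHIP * GBPS_PER_PIN / 8   # bits -> bytes

# The first 1GB of every chip interleaves across all ten chips (the 10GB
# "GPU-optimal" pool); the second 1GB exists only on the six 2GB chips
# (the 6GB "standard" pool).
print(bandwidth_gb_s(10))   # 560.0 GB/s
print(bandwidth_gb_s(6))    # 336.0 GB/s

# PS5 for comparison: eight 14 Gbps chips on a 256-bit bus, uniform over 16GB.
print(bandwidth_gb_s(8))    # 448.0 GB/s
```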
 

PaintTinJr

Member
Why did Microsoft design the XSX with asymmetrical levels of speed in regard to accessing the memory pool?
The answer may seem obvious to most who would disagree with my thinking, but I suspect there may be more than one answer. From a data comms perspective, simple typically wins. From a graphics programming perspective, anything that is a 2^n measure is usually more useful. On those grounds they've messed up, IMHO. However, maybe their Lockhart SKU uses 192-bit unified, and 320-bit was just a turning-it-up-to-eleven spec sheet situation for the XSX, where everything had to appear more - like a win. Being completely honest about Xbox, they rely far more on third parties for the software, so maybe winning a spec sheet battle for marketing was all they needed. The PS5 looks like it is built optimally for developers in its simplicity, and Sony drives AAA gaming expectations with their studios, so they need the box to actually deliver on the specs in the hands of developers. Xbox hasn't really done that since they were awash with money in their first gen and up until halfway through their second gen.
 

rnlval

Member
These are performance comparisons at stock settings. I remember a German site made the comparison with downclocked cards, and perf per teraflop was about on par between RDNA and Turing.
The 5700 XT is also power starved and doesn't maintain the high clocks it reports.

edit:
Found it https://www.computerbase.de/2019-07.../4/#diagramm-performancerating-navi-vs-turing
Computerbase.de's performance rating used 1440p resolution while I used Techpowerup's 2160p (4K). LOL

1. PS5's 448 GB/s memory bandwidth is shared with the CPU.

2. RTX 2070's 448 GB/s memory bandwidth is dedicated.

3. Techpowerup has a 22-game sample while Computerbase.de has a 15-game sample.

4. I used the RX 5600 XT chart due to AMD and NVIDIA driver updates.

5. Computerbase.de's performance rating avoided Unreal Engine 4 based games.
 
Last edited:

BluRayHiDef

Banned
Because a straightforward 256-bit bus didn't give them enough bandwidth to feed the XSX efficiently. That meant they had to go wider.

But cost and/or signalling issues didn't allow for the same size of memory chip on every 64-bit section of the memory bus.

And from profiling it's clear that some areas of memory are accessed vastly more times per frame than others. Like, many, many times more. So you focus your bandwidth where it's needed most.

Thanks for your quick and informative answer.

Next question: Why can't I just stay home, play games, and have my financial needs magically taken care of rather than go to work five days per week? Why is life so unfair?
 

psorcerer

Banned
The answer may seem obvious to most who would disagree with my thinking, but I suspect there may be more than one answer. From a data comms perspective, simple typically wins. From a graphics programming perspective, anything that is a 2^n measure is usually more useful. On those grounds they've messed up, IMHO. However, maybe their Lockhart SKU uses 192-bit unified, and 320-bit was just a turning-it-up-to-eleven spec sheet situation for the XSX, where everything had to appear more - like a win. Being completely honest about Xbox, they rely far more on third parties for the software, so maybe winning a spec sheet battle for marketing was all they needed. The PS5 looks like it is built optimally for developers in its simplicity, and Sony drives AAA gaming expectations with their studios, so they need the box to actually deliver on the specs in the hands of developers. Xbox hasn't really done that since they were awash with money in their first gen and up until halfway through their second gen.

I think they were shooting for 20GB and fell short, because GDDR6 prices went up.
They are also probably using 4 SEs, which means they are ROP and L1 starved.
5 MCs means they get +20% L2, at least.
 

SonGoku

Member
Computerbase.de's performance rating used 1440p resolution while I used Techpowerup's 2160p (4K). LOL
The discrepancies can be explained by bottlenecks in the pipeline and the 5700 XT being power starved at higher clocks.
In terms of IPC, or perf per TFLOP, they are about on par, and I expect further improvements from Sony's and MS's custom RDNA2 designs.

I think MS/Sony carefully designed the consoles to minimize bottlenecks and made sure the APUs will be sufficiently fed.
 