
The PlayStation 5 GPU Will Be Supported By Better Hardware Solutions, In Depth Analysis Suggests

S0ULZB0URNE

Member
The thing is, though, even if the 10GB can be accessed @560GB/s and 224GB/s for the remaining 3.5GB, it will still perform better than Sony's 13-14GB @ 448GB/s.

The RAM is split, and it's the video RAM which requires and benefits from higher speeds.

The XSX's faster 10GB isn't better than the PS5's slower 16GB.

Neither will have all of it available, but the PS5 will have more available, which gives it the edge with its RAM configuration.
 

Vroadstar

Member
It's clear their system started out weaker than it is, but to mitigate that they had to do all of this weird shit with shifting frequencies, parsed resources and ridiculous clock speeds.

It doesn't make any rational sense to force developers into a moving-target environment on a fixed platform unless that decision was itself forced.

Sure buddy, and you know all this because you're an insider, right? You got access to PS5 hardware, correct? Imagine inventing all kinds of crazy shit to fit your narrative.
 
Sure buddy, and you know all this because you're an insider, right? You got access to PS5 hardware, correct? Imagine inventing all kinds of crazy shit to fit your narrative.
A lot of us have been PC builders and users for decades; we understand the ins and outs of this technology and the implications of a system being built the way it is. This is no different than a PC whose GPU and CPU have boost clocks, but the issue in a console is that it's a fixed platform with a single mode of operation for the software, so it doesn't make any real sense.

On a fixed platform, developing around a fixed configuration makes sense; developing around a shifting target doesn't. It's really as simple as that. If I had to take an educated guess: given the small 256-bit bus of their system, they couldn't push through the peak data of the "boost" clocks all at once, so the CPU and GPU have to fluctuate and offset or else they would bottleneck at the bus. It wasn't designed around the CPU operating at 3.5GHz, it wasn't designed around the GPU operating at 2.23GHz; it was designed with operational metrics lower than that, hence the bus size, and as a result of their need to overclock they hit a wall and had to work around it.

That seems fundamentally obvious.
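For what it's worth, the 448GB/s figure this argument leans on falls straight out of the standard bandwidth formula. A quick back-of-envelope sketch (assuming 14Gbps GDDR6 on both machines, which matches the announced numbers):

```python
# Peak GDDR6 bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
# Assumes 14 Gbps modules, consistent with both consoles' announced figures.
def bus_bandwidth(bus_width_bits, data_rate_gbps=14.0):
    return bus_width_bits * data_rate_gbps / 8  # GB/s

print(bus_bandwidth(256))  # PS5's 256-bit bus -> 448.0 GB/s
print(bus_bandwidth(320))  # XSX's 320-bit bus -> 560.0 GB/s
```

Whether 448GB/s is actually too little for the CPU and GPU at peak clocks simultaneously is the contested part; the formula only grounds the raw numbers.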
 

MHubert

Member
Which is why the PS5 is so interesting.
Mark Cerny is an actual genius, so I'm interested to see if he has come up with something no one else has thought of before.



I agree, the PS5 sounds like a true leap in console technology - exciting times ahead. Cerny certainly seems to know what he is doing.
 
Weren’t clock frequencies changed last minute this gen? I could be misremembering something.

I would think it’s much easier to tweak clock speeds than actually change any of the hardware.

There is a huge difference. MS officially did it 2 months before launch. Sony didn't just up the clocks the way MS did with the Xbox One back in 2013; Sony's whole design is based around a power management system that dynamically adjusts CPU and GPU clocks.
 
There is a huge difference. MS officially did it 2 months before launch. Sony didn't just up the clocks the way MS did with the Xbox One back in 2013; Sony's whole design is based around a power management system that dynamically adjusts CPU and GPU clocks.
Yeah, I'm calling total bullshit on that. The below seems exceptionally obvious: no one would design a fixed platform like this by choice, there's literally no advantage.

A lot of us have been PC builders and users for decades; we understand the ins and outs of this technology and the implications of a system being built the way it is. This is no different than a PC whose GPU and CPU have boost clocks, but the issue in a console is that it's a fixed platform with a single mode of operation for the software, so it doesn't make any real sense.

On a fixed platform, developing around a fixed configuration makes sense; developing around a shifting target doesn't. It's really as simple as that. If I had to take an educated guess: given the small 256-bit bus of their system, they couldn't push through the peak data of the "boost" clocks all at once, so the CPU and GPU have to fluctuate and offset or else they would bottleneck at the bus. It wasn't designed around the CPU operating at 3.5GHz, it wasn't designed around the GPU operating at 2.23GHz; it was designed with operational metrics lower than that, hence the bus size, and as a result of their need to overclock they hit a wall and had to work around it.

That seems fundamentally obvious.
 

Ascend

Member
Unless I'm misunderstanding what you wrote, you seem to imply that the OS will be able to use a slower pool of RAM while the GPU simultaneously has access to 10GB at max speed.
Leaving this here...






It will not have simultaneous access, but the first data that will be shifted to the slower RAM pool will be the OS. You can see it as the 2GB RAM chips having a high priority for the games' data and a low priority for the OS' data. The 2.5GB for the OS is already accounted for; there is no need to subtract an additional 2.5GB from the fast 10GB to arrive at 7.5GB of fast RAM.

And technically, even if doing this WAS right, it is highly likely that it would not be 7.5GB. Why? Because we don't know where the OS sits on the RAM chips, and I doubt MS will leave this unoptimized. They could, for example, put it on a single 2GB chip plus a 1GB chip. That leaves three 1GB modules and five 2GB modules free to operate freely, which makes it 8GB of fast RAM instead of 7.5GB. And that is exactly the same bandwidth the PS5 has, btw, since 8×56 = 448GB/s. And that is assuming the OS really uses 2.5GB; if it drops below 2GB, only one RAM chip is necessary, and suddenly you have 9GB available at max speed and a bandwidth of 504GB/s. Assuming, of course, that MS allows that.

And even then, it's not that simple. Instead, they could spread that 2.5GB across all six of the 2GB RAM chips. How much would that really reduce the bandwidth for games? That is around 417MB per RAM chip. If you assume the OS is constantly consuming its maximum share of bandwidth (which is highly unlikely), each RAM chip would lose 11.7GB/s of its total 56GB/s, for a total reduction of 70GB/s. That means you would end up with 490GB/s for the games in this case for the 10GB.

There are too many variables. But the bottom line is that there are multiple ways developers can still leverage the XSX to hold the advantage with its RAM.
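Ascend's two scenarios are easy to sanity-check in a few lines, under the usual assumption of ten 32-bit GDDR6 chips at 56GB/s each (six 2GB, four 1GB); the chip counts and the worst-case OS bandwidth draw are taken straight from the post above:

```python
PER_CHIP = 56.0   # GB/s per 32-bit GDDR6 chip (assumed)

# Scenario 1: OS parked on one 2GB chip plus one 1GB chip,
# leaving 8 chips entirely free for game data.
print(8 * PER_CHIP)                              # 448.0 GB/s

# Scenario 2: 2.5GB of OS data striped across the six 2GB chips;
# worst case, the OS eats its proportional share of each chip.
os_gb_per_chip = 2.5 / 6                         # ~0.417 GB per 2GB chip
os_draw = 6 * (os_gb_per_chip / 2.0) * PER_CHIP  # ~70 GB/s total
print(10 * PER_CHIP - os_draw)                   # ~490 GB/s left for the game
```

Both outputs match the post's 448GB/s and 490GB/s figures.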
 
Yeah, I'm calling total bullshit on that. The below seems exceptionally obvious: no one would design a fixed platform like this by choice, there's literally no advantage.

You can call it how you want it. Cheers!
After the XSX specs reveal, the entire PlayStation division decided to put something together in 24hrs. /s
Yet articles about Sony's cooling solution were published during summer last year. And patents have to be filed and examined, which can take months, years actually. This is not a "last minute" move like the one MS made with the Xbone 2 months before launch.
 
SenjutsuSage, in this thread or one of the XSX or PS5 threads... I think I corrected him then.

Yeah, you did; I was way off on that assumption. Since then, however, someone far smarter on this stuff than myself has explained to me in full, with a lot more detail than many of these articles have provided, just how it all works and why it won't have the limitations that would exist on PC. Rather than get into a long discussion: it's not at all how the article describes it.
 
You can call it how you want it. Cheers!
After the XSX specs reveal, the entire PlayStation division decided to put something together in 24hrs. /s
Yet articles about Sony's cooling solution were published during summer last year. And patents have to be filed and examined, which can take months, years actually. This is not a "last minute" move like the one MS made with the Xbone 2 months before launch.
I never said it was last minute; however, many system designs are locked in upwards of a year or more before production. The mainboard was no doubt constructed and fully mapped out, and Sony, after the fact, learned of Microsoft's system and had to make changes in the only way feasibly possible without pushing themselves into a 2021 release.

If the PS5 was essentially 'done' and word of Microsoft's specs came into the fray, how could they account for it and try to combat it without having to rework their entire system? Change the cooling setup, overclock everything; but they're still limited by their bus. This brings about the variable CPU and GPU clocks: the bus cannot handle both at peak frequency, so something has to give, something has to lower its frequency. A 256-bit bus can't handle that much data all at once.

No one would design a system this way optionally; they would do it out of necessity.
 
I never said it was last minute; however, many system designs are locked in upwards of a year or more before production. The mainboard was no doubt constructed and fully mapped out, and Sony, after the fact, learned of Microsoft's system and had to make changes in the only way feasibly possible without pushing themselves into a 2021 release.

If the PS5 was essentially 'done' and word of Microsoft's specs came into the fray, how could they account for it and try to combat it without having to rework their entire system? Change the cooling setup, overclock everything; but they're still limited by their bus. This brings about the variable CPU and GPU clocks: the bus cannot handle both at peak frequency, so something has to give, something has to lower its frequency. A 256-bit bus can't handle that much data all at once.

No one would design a system this way optionally; they would do it out of necessity.

You didn't say it, but you implied it. No one would design it? Oh, Cerny did. But of course, you know better from your own room. Spreading FUD. You did your job in System Wars, and you're still doing the same.

EDIT: Btw, the Switch has a variable clock solution too.

New Switch mod delivers real-time CPU, GPU and thermal monitoring - and the results are remarkable
 

Vroadstar

Member
A lot of us have been PC builders and users for decades,

Being a PC builder for decades doesn't make one a system architect; unless you actually work on the hardware, you are just spreading FUD. You don't know shit about why they built the PS5 the way they did, so just stop with your conjecture to fit your narrative.

Fact is, nobody here outside of devs who actually had access to hardware, or Cerny himself, knows why they designed the system like that. At the end of the day it's the output (the games) that matters, and hopefully it's not so skewed again this next gen.
 

FlyyGOD

Member
The PS5 hardware seems like Sony is trying to create something very innovative after the rather safe PS4. That safety was out of necessity, as they were on the verge of bankruptcy and had no room for error.

So with the PS5 it seems like they are stretching their wings again and trying to create a small revolution here. All Xbox fans can talk about is 'teraflops', which is a futile game and misses the wood for the trees.
You are clearly only seeing what you want to see. Microsoft's machine is just as innovative as Sony's machine. Nothing on the PC market is doing the tech that the Series X has in it right now. Microsoft also has an SSD that works in conjunction with the GPU, yet Sony fanboys conveniently act as if it doesn't. Why?
 
Being a PC builder for decades doesn't make one a system architect; unless you actually work on the hardware, you are just spreading FUD. You don't know shit about why they built the PS5 the way they did, so just stop with your conjecture to fit your narrative.

Fact is, nobody here outside of devs who actually had access to hardware, or Cerny himself, knows why they designed the system like that. At the end of the day it's the output (the games) that matters, and hopefully it's not so skewed again this next gen.
I'm pretty much done with you, because you're stuck in a mental rut and make for fruitless discussion. Running to appeals to authority is weak-minded; it intentionally avoids criticism or intelligent conversation about why things are the way they are.

I don't have to be a system architect to explicitly see how this scenario of design would come about.
 

thelastword

Banned
Oh man I was making a joke with that comment, wasn’t expecting someone here to have made that comment.
Ha, he made a small error in math, but I got where he was going....I corrected him nonetheless....

892GB/s of memory bandwidth. Holy shit. ;)
In which I replied....

thelastword said:
Where are you getting that from? 560GB/s and 336GB/s is the same RAM pool; 6GB of it is clocked lower... It does not exceed 560GB/s, and if the lower-speed memory is used it can get slightly lower than that.

https://www.neogaf.com/threads/digi...-demo-showcase.1531461/page-12#post-257360277
 

THE DUCK

voted poster of the decade by bots
It's funny how the article and all of the Sony bots are ignoring all of the custom architecture and features the Microsoft team has put in place while in the same breath praising features of the PS5 - how about an actual factual comparison next time?
 
It's funny how the article and all of the Sony bots are ignoring all of the custom architecture and features the Microsoft team has put in place while in the same breath praising features of the PS5 - how about an actual factual comparison next time?
The level of innovation is on equal footing; the difference in specification based upon those innovations is not.

There's a lot of bullshit reasoning floating around here.
 

Vroadstar

Member
I'm pretty much done with you, because you're stuck in a mental rut and make for fruitless discussion. Running to appeals to authority is weak-minded; it intentionally avoids criticism or intelligent conversation about why things are the way they are.

I don't have to be a system architect to explicitly see how this scenario of design would come about.

Sure buddy. Heck, you don't even know how it looks, and yet you are so "sure" of the supposed scenario of the hardware. Your highfalutin words don't dismiss the fact that you don't know shit and it's all conjecture and FUD to fit your narrative. And done.
 
Literally all of these people designing these systems are geniuses...


Precisely. I've been told that the main thing people keep getting wrong about the Series X memory setup is that just because you fill it up doesn't mean it's suddenly gone and can no longer be used again past the 10GB mark. Developers can easily fill up and be using more than 13GB of GDDR6, or the full 13.5GB, and still achieve the full 560GB/s even once the 10GB mark is passed.

The GDDR6 doesn't suddenly disappear. There is data there which can still be accessed and used for the game at a full 560GB/s. It's as simple as the developer picking which memory addresses to use across the 10 chips. Anything requiring slower performance will address the slower "standard" memory, do so very quickly, and then be right back to accessing a full 560GB/s as if it had never left. Translation: this isn't the problem for the Series X that some seem to believe it is.
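The "picking which memory addresses to use" idea can be modeled in a few lines. This is a toy sketch, not Microsoft's actual mapping: it just assumes the commonly reported layout where the 10GB "GPU optimal" region interleaves across all ten chips and the 6GB "standard" region only across the six 2GB chips:

```python
PER_CHIP = 56.0        # GB/s per chip (assumed)
FAST_REGION_GB = 10    # "GPU optimal" region, striped over all 10 chips

def peak_bandwidth_at(offset_gb):
    """Peak bandwidth for an access landing at a given offset (toy model)."""
    chips = 10 if offset_gb < FAST_REGION_GB else 6
    return chips * PER_CHIP

print(peak_bandwidth_at(4.0))    # inside the 10GB region -> 560.0 GB/s
print(peak_bandwidth_at(12.0))   # inside the 6GB region  -> 336.0 GB/s
```

In this model, which region an access hits is decided by address, not by how much total memory is currently in use, which is the point the post is making.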
 
Sure buddy. Heck, you don't even know how it looks, and yet you are so "sure" of the supposed scenario of the hardware. Your highfalutin words don't dismiss the fact that you don't know shit and it's all conjecture and FUD to fit your narrative. And done.
That's really neat. I can explain to you in detail why a system like this would come about, so here's a question for you: if not for the reason I specified, why else would someone design a system like this intentionally?

Bear in mind there is NO performance advantage to this design, none whatsoever. If both the GPU and CPU were locked at their peak clocks, the system would perform better, so why aren't they? Why do they underclock?

It's pretty obvious, guy. Secondary manipulation to push the system harder than it was originally built for.
 

Ascend

Member
That's really neat. I can explain to you in detail why a system like this would come about, so here's a question for you: if not for the reason I specified, why else would someone design a system like this intentionally?

Bear in mind there is NO performance advantage to this design, none whatsoever. If both the GPU and CPU were locked at their peak clocks, the system would perform better, so why aren't they? Why do they underclock?

It's pretty obvious, guy. Secondary manipulation to push the system harder than it was originally built for.
It might also be one of the reasons they didn't show the console yet. They might need to redesign the cooling system.
 
Being a PC builder for decades doesn't make one a system architect; unless you actually work on the hardware, you are just spreading FUD. You don't know shit about why they built the PS5 the way they did, so just stop with your conjecture to fit your narrative.

Fact is, nobody here outside of devs who actually had access to hardware, or Cerny himself, knows why they designed the system like that. At the end of the day it's the output (the games) that matters, and hopefully it's not so skewed again this next gen.

That's fair.

Honestly, I’m really excited to see the PS5. I bet Cerny has a very solid design and cooling solution. I dig reading about those details even if I don’t understand how it all works all the time.
 
You only ever become limited to 6GB (technically 3.5GB) when you're transferring data into RAM past the 10GB mark, but a game can still access and use data present across all 10 chips at the max 560GB/s even once it has effectively used more than 10GB. They are having their cake and eating it too, for the most part.

And a big thing everybody is overlooking is the streaming tech specifically designed for the Series X. What takes up most of a game's available VRAM? Textures. Thanks to Sampler Feedback Streaming, the Xbox will already be loading far less data from each texture into GDDR6 to begin with, leading to an effective increase in both physical RAM amount and I/O performance. Without such measures in place, the asymmetrical memory setup might be a problem. Whenever people consider this memory setup, they are, at their own peril, forgetting all about Sampler Feedback Streaming.


In addition to drawing more heavily upon storage to make up the shortfall, Microsoft began a process of optimising how memory is actually used, with some startling improvements.

"We observed that typically, only a small percentage of memory loaded by games was ever accessed," reveals Goossen. "This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes.

Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.

Anybody discussing the Xbox memory setup without acknowledging that this is the way games are actually going to use the available memory is missing a crucial part of the equation. Every article and video I've seen address this so far ignores Sampler Feedback Streaming. I do not believe it to be marketing; I believe it will genuinely improve the efficiency of the console's available RAM for games.

In other words, Xbox didn't just toss in an asymmetrical memory setup without a well-thought-out method of how best to make use of it. This will be nothing like any setup that has existed in the PC GPU space.
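Goossen's 2-3x claim is just the reciprocal of the accessed fraction. A minimal sketch of that arithmetic, using the one-half and one-third figures from the quote above:

```python
# If only a fraction of resident texture pages is ever touched, streaming
# just the touched pages multiplies effective memory and I/O by 1/fraction.
def effective_multiplier(accessed_fraction):
    return 1.0 / accessed_fraction

print(effective_multiplier(1/2))   # half the pages touched    -> 2.0x
print(effective_multiplier(1/3))   # a third of pages touched  -> 3.0x
```

How close real games get to those fractions is the open question; the multiplier is only as good as the measurement behind it.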
 

Kenpachii

Member
What did Microsoft themselves say about its RAM?

It might also be one of the reasons they didn't show the console yet. They might need to redesign the cooling system.

Got the same feeling. It's going to be hard to keep that GPU cooled, and yields are also going to be interesting at such clocks.
 

Entroyp

Member
Because it fits the narrative

Is the part about the GTX 660 Ti true?

Anandtech Review

While we don't know the XSX memory setup in detail, the article is based on information about a previous, similar implementation.

Again, I'm not saying the memory system is bad by any means, but it's an interesting topic for discussion and that's the only site providing some degree of evidence.
 

Kenpachii

Member
Is the part about the GTX 660 Ti true?

Anandtech Review

While we don't know the XSX memory setup in detail, the article is based on information about a previous, similar implementation.

Again, I'm not saying the memory system is bad by any means, but it's an interesting topic for discussion and that's the only site providing some degree of evidence.

Yes, this happened all the time. However much everybody hates it, at the end of the day it's better than the 1.5GB the card would otherwise have ended up with.

Anyway, this is what I get from the spec sheet on the Microsoft website.

Memory Bandwidth: 10GB @ 560GB/s, 6GB @ 336GB/s


Why is it not 16GB @ 336 and 10GB @ 560? That's what you would normally state if it was a shared memory pool.

What Microsoft makes it sound like here is that they have a split memory pool somehow.

I would like to see some more information from Microsoft on this matter, because if that memory is designed like the 660 Ti's, then yeah, that's not looking good.

If it's split, then honestly it will outperform Sony royally; no way in hell do they need more than 10GB of VRAM for the next generation.

From Eurogamer: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen.

Seems like it's a split pool based on this, which makes sense then. So yeah, no issue there at all.
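For anyone wondering where 560 and 336 come from in a single pool, the usual reading (an assumption, but consistent with the Eurogamer numbers) is ten 32-bit chips at 14Gbps, six of them 2GB and four of them 1GB:

```python
per_chip = 32 * 14 / 8       # 56.0 GB/s per 32-bit chip at 14 Gbps

# The first GB of every chip forms a 10GB stripe served by all ten chips.
print(10 * per_chip)         # 560.0 GB/s -> the "GPU optimal" 10GB

# The second GB exists only on the six 2GB chips -> a 6GB stripe.
print(6 * per_chip)          # 336.0 GB/s -> the "standard" 6GB
```

So it's one physical pool with two interleave widths, rather than two separate pools in the PC sense.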
 

Entroyp

Member
Yes, this happened all the time. However much everybody hates it, at the end of the day it's better than the 1.5GB the card would otherwise have ended up with.

Anyway, this is what I get from the spec sheet on the Microsoft website.

Memory Bandwidth: 10GB @ 560GB/s, 6GB @ 336GB/s

Why is it not 16GB @ 336 and 10GB @ 560? That's what you would normally state if it was a shared memory pool.

What Microsoft makes it sound like here is that they have a split memory pool somehow.

I would like to see some more information from Microsoft on this matter, because if that memory is designed like the 660 Ti's, then yeah, that's not looking good.

If it's split, then honestly it will outperform Sony royally; no way in hell do they need more than 10GB of VRAM for the next generation.

From Eurogamer: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs



Seems like it's a split pool based on this, which makes sense then. So yeah, no issue there at all.

I never said it was an issue. To me it sounds like a somewhat "split" memory pool that can be accessed at the same time without any performance penalties (unlike the Nvidia implementation). Hopefully we get more info on this soon, as it sounds really cool.
 

Deto

Banned
Some words by devs:



uhauhaahuha

Massive Edge Over PS5 Says Devs

Dealer - Gaming

Xbox centric YouTuber

"xbox centric youtuber" know more than Jason schreier...

and look who appeared:

[screenshot]



LOL


Interleaving doesn't appear to check out




[screenshot]




Interesting:

All "developers" speaking well of the Xbox SX interact with Xbox fanboys.

All developers speaking well of the PS5 interact with Jason Schreier.

:messenger_face_screaming::messenger_tears_of_joy:
 

ZywyPL

Banned
It's all just speculation, if not wishes... Other than the Geometry Engine, there's nothing extra inside the PS5 to make it more efficient. There's a lot of work done around the SSD and audio, sure, but for the CPU or GPU? Until Cerny/Sony confirm there's more custom work/secret sauce, there is none.
 
You are clearly only seeing what you want to see. Microsoft's machine is just as innovative as Sony's machine. Nothing on the PC market is doing the tech that the Series X has in it right now. Microsoft also has an SSD that works in conjunction with the GPU, yet Sony fanboys conveniently act as if it doesn't. Why?

It's genuinely not, if you look at the details. The Series X is a bit different too, with its massive GPU and tower form factor, but the PS5 is on another level with its SSD and Tempest Audio. We've never seen anything like it.

The PS5 SSD, as has been stated many times, will change how people think of a console.
 

Panajev2001a

GAF's Pleasant Genius
It's all just speculation, if not wishes... Other than the Geometry Engine, there's nothing extra inside the PS5 to make it more efficient. There's a lot of work done around the SSD and audio, sure, but for the CPU or GPU? Until Cerny/Sony confirm there's more custom work/secret sauce, there is none.

GPU cache scrubbers? Lower than 1/10th of a CPU core's impact for I/O (1/10th is the figure given for XSX)? The impact of the higher clock frequency on command buffers, ACEs (async compute workload management), the HW scheduler, etc.? I am quite interested in whether and how much the Geometry Engine differs from bog-standard Mesh Shaders.
For PS4 Pro compatibility they need (free) multi-resolution render target support, but I wonder if they improved it at all (made it finer-grained or not), whether basic VRS in RDNA2 is there and how different it is from the updated version MS co-developed with AMD, etc. I think there are quite a few things both need to talk about.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
Let's take a quick look at this deep analysis. Assuming Microsoft is using the slower RAM for system functionality - which is much more reasonable than the other way around, if the XSX is selective in its RAM usage for the system - the average RAM speed would be (10×560 + 3.5×336)/13.5 ≈ 502GB/s, so significantly faster than the PS5's. This is not to say the average is a particularly useful metric, but since the analyst in the OP is talking about mixed performance, let's do the numbers. Going by that (and also the baseless assumption that Microsoft would reserve a 2.5GB portion of the faster RAM for system functionality), we can derive all we need to know about this "in depth analysis".
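The weighted average is trivial to check:

```python
# Bandwidth-weighted average over the 13.5GB game allocation,
# assuming the OS sits entirely in the slow 6GB pool.
avg = (10 * 560 + 3.5 * 336) / 13.5
print(avg)   # ~501.9 GB/s, vs the PS5's flat 448 GB/s
```

As Yoshi notes, a weighted average is a crude metric, since real workloads aren't spread uniformly across the allocation.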
 

geordiemp

Member
There's absolutely nothing efficient being done with the PS5; it's all a bunch of nonsense. This is like someone arguing the 2080 Super is the more efficient design vs the 2080 Ti because the 2080 Super pushes clock speed. This higher-clock-speed-vs-lower-CU nonsense on a GPU needs to stop; the workloads on a GPU are highly parallel in nature. We aren't talking about CPU workloads that benefit from higher single-threaded performance (and even then, today most things have been made multi-threaded).

Glad you're so confident that the PS5 will be weak and not punch above its weight; I would not be so confident. My prediction: for games with last-gen assets the XSX will dominate; with larger games and assets > 10GB, both will be similarly memory-bandwidth bound and the difference will be negligible.

Let's take a quick look at this deep analysis. Assuming Microsoft is using the slower RAM for system functionality - which is much more reasonable than the other way around, if the XSX is selective in its RAM usage for the system - the average RAM speed would be (10×560 + 3.5×336)/13.5 ≈ 502GB/s, so significantly faster than the PS5's. This is not to say the average is a particularly useful metric, but since the analyst in the OP is talking about mixed performance, let's do the numbers. Going by that (and also the baseless assumption that Microsoft would reserve a 2.5GB portion of the faster RAM for system functionality), we can derive all we need to know about this "in depth analysis".

Your calcs are different from others'... and the poster below has serious expertise on this stuff.

[screenshot]
 
I love how audio capabilities and Solid State Drives are suddenly so important as of two weeks ago. :D
Microsoft has had Atmos for 2 1/2 years and no one says shit. It's all fake; people are fake.

What people should be excited about are things like Microsoft's HDR feature that automatically maps out any game and applies HDR at a system level. That's a dream feature that's incredibly useful, and yet people are virtually silent.

What have people been saying for the last 4 years? HDR is the biggest differentiator this generation, and now it can be on everything and no one says anything.

Fake-ass people everywhere.
 
I love how audio capabilities and Solid State Drives are suddenly so important as of two weeks ago. :D

Someone hasn't read the OP.

The general idea is that both are powerful in terms of compute (teraflops), but Sony, instead of pushing even higher up to 12 TFLOPS, deemed that an unnecessary use of their transistor budget and instead focused on the paradigm shift of solving the memory communication bottleneck with their game-changing SSD implementation, and on accelerators on their chip instead of CUs. Cerny/Sony are showing greater foresight, imo.

The Series X design is more standard: put loads of CUs in the GPU. The PS5 is more forward-looking, with fewer CUs but more accelerator cores and faster communication across components.
 

TBiddy

Member
I love how audio capabilities and Solid State Drives are suddenly so important as of two weeks ago. :D

The damage control is really getting out of hand. "Paradigm shift", "The PS5 is ACTUALLY FASTER" and what have you. I imagine it was about the same in 2013, with the roles reversed. I assume it's the bargaining phase they are going through.
 

bitbydeath

Member
It's all just speculation, if not wishes... Other than the Geometry Engine, there's nothing extra inside the PS5 to make it more efficient. There's a lot of work done around the SSD and audio, sure, but for the CPU or GPU? Until Cerny/Sony confirm there's more custom work/secret sauce, there is none.

That’s exactly what he did confirm. He even gave an example of how the custom hardware could be used to relieve an entire CPU core.
 