
Next-Gen PS5 & XSX |OT| Console tEch threaD


Garani

Member
You talking about Dark Souls 3? I just said I'd rather play it on PlayStation if it's twice the framerate; you even quoted me saying that 🙄
Oh, wait, shit, I misread your quote. NOOOOOOO, Riky Riky won the argument!

Gotta stuff myself with sweets in sorrow!
[Homer Simpson face GIF]
 
Let's compare other BC games, then, why not. For instance, Dark Souls 3:

- XSX: 900p locked 30fps
- PS5: 1080p locked 60fps

PS5 pushes 2.88x as many pixels per second as the XSX in this game. Do you think that tells us something meaningful about the hardware of these consoles?
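For the curious, the 2.88x figure is straightforward arithmetic: the resolution gap (1.44x) multiplies with the framerate gap (2x). A quick sketch, assuming 900p here means 1600×900:

```python
# Pixel throughput for Dark Souls 3 in BC mode, using the figures
# quoted above (900p assumed to be 1600x900).
xsx_pixels_per_sec = 1600 * 900 * 30    # 900p at a locked 30 fps
ps5_pixels_per_sec = 1920 * 1080 * 60   # 1080p at a locked 60 fps

ratio = ps5_pixels_per_sec / xsx_pixels_per_sec
print(f"PS5 pushes {ratio:.2f}x the pixels per second")  # -> 2.88x
```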

Definitely why I don't like using BC to gauge a system's power. Using Dark Souls 3 to argue that the XSX is vastly weaker than the PS5 is just ludicrous. I honestly hope I don't see anyone making those types of arguments here.

It's much better to compare actual next-gen versions than the BC ones. That makes it fair to both systems.


You made an excellent point.
 
Yes, I doubt the PS5 has any stacking/chiplet design yet; it's more likely chiplet-ready for a PS5 Pro. But stacking can pack too much heat, so I think they'll use a dual-sided motherboard like they do now and have two dies, one on each side, with two heatsinks instead of one this time.
Welcome back, Bo.

I would argue that even for a PS5 Pro, a chiplet or multi-die design would be a mistake. Inter-chiplet and inter-die latencies would be too prohibitive, and from a software perspective, trying to hide those latencies from the application, such that base PS5 software can run out of the box without patching, would very likely prove an impossible task.

Heck, even a multi-die/chiplet-based PS6 would pose a real challenge in ensuring the application can treat the GPU as a single device. They'd have to utilise split-frame or alternate-frame rendering regimes, which inherently lose efficiency, or explore more novel composite-frame rendering regimes for multi-GPU systems.
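For anyone who hasn't met those regimes, here is a toy sketch of the difference (hypothetical illustration only, not any real API): AFR hands whole frames to GPUs in turn, while SFR splits each frame spatially across them.

```python
# Toy illustration of alternate-frame vs split-frame rendering on two
# GPUs. render_region() is a hypothetical stand-in for real GPU work.
def render_region(gpu_id, frame, y0, y1):
    print(f"GPU{gpu_id}: frame {frame}, rows {y0}-{y1}")

def alternate_frame_rendering(num_frames, num_gpus=2, height=1080):
    # AFR: each GPU owns whole frames, round-robin. Any frame-to-frame
    # dependency (temporal effects, readbacks) stalls the pipeline.
    for frame in range(num_frames):
        render_region(frame % num_gpus, frame, 0, height)

def split_frame_rendering(num_frames, num_gpus=2, height=1080):
    # SFR: each frame is carved into horizontal bands, one per GPU.
    # Workloads rarely split evenly, so some GPU time goes to waste.
    band = height // num_gpus
    for frame in range(num_frames):
        for gpu in range(num_gpus):
            render_region(gpu, frame, gpu * band, (gpu + 1) * band)

alternate_frame_rendering(4)
split_frame_rendering(2)
```

Either way, work or time gets left on the table, which is the efficiency loss being described.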

Considering that AMD's current big Navi RDNA 2 card on PC is essentially two PS5 GPUs duct-taped together on a monolithic die (and that's on a 7nm process), there's no reason not to go monolithic again with a PS5 Pro (if they even do one).

An 80CU PS5 Pro on a 4 or 5nm process would be reasonably sized in terms of its die area and be able to comfortably run at higher than base PS5 clocks within a nice console TDP budget.
 
Whatever, dude. Like I said before, the parallel between hardcore Sony fans and 3dfx fans exists.

Dude, it's starting to feel like your primary contribution to this thread's discussion amounts to fanboy meta-commentary and accusations of other posters being defensive.

Just quit it. Please.

If you don't have anything constructive to contribute to the discussion here, why not just post elsewhere? There are many threads on many topics here on NeoGAF. If everything said here triggers you, you aren't forced to keep posting here.
 

ethomaz

Banned
Definitely why I don't like using BC to gauge a system's power. Using Dark Souls 3 to argue that the XSX is vastly weaker than the PS5 is just ludicrous. I honestly hope I don't see anyone making those types of arguments here.

It's much better to compare actual next-gen versions than the BC ones. That makes it fair to both systems.


You made an excellent point.
You are correct... BC really shouldn't be compared at all, due to the differences and limitations between implementations.

That said, what do you expect from Xbox fans? They don't have anything else to talk about or grab onto.

So let them have the superior old BC games.
 

SaucyJack

Member
Just a point, stacking =/= chiplets.

We can be 100% sure the PS5 includes no chip stacking or chiplet-based design. It would be unnecessary and antithetical to the performance of the PS5 CPU and GPU.

Chiplets benefit humongous GPUs: GPUs much bigger than the reticle limits of current semiconductor manufacturing processes allow, as well as large GPUs on ever more prohibitively expensive process nodes. In light of AMD's current focus on increasing GPU clocks, I think we can be confident chiplet-based GPUs are still a fair way off from becoming a viable path to increasing GPU performance.


Chip stacking is even further afield for ASICs. The cooling issue still hasn’t been solved.

It's been heavily rumoured that the next generation of AMD discrete GPUs (RDNA 3) will have some sort of MCM design, though this could be for only one of their cards. The same goes for Nvidia: it was leaked multiple times that Hopper, Nvidia's next-generation GPU architecture succeeding Ampere (the 30 series), will also feature some sort of MCM design.

Interestingly enough, the same sources who leaked that information are now saying that Nvidia has delayed Hopper and will focus on another architecture called "Lovelace", which will feature a monolithic design but will still have enormous performance gains; I think one of the leaked figures mentioned something like 60 TFLOPS. Maybe AMD will follow suit because they feel MCM is not ready yet. We do have patents from both Nvidia and AMD revolving around MCM design, so I could see it happening in the future. I could also see the PlayStation 6 using something similar, as Sony has MCM patents as well.
 
It's been heavily rumoured that the next generation of AMD discrete GPUs (RDNA 3) will have some sort of MCM design, though this could be for only one of their cards. The same goes for Nvidia: it was leaked multiple times that Hopper, Nvidia's next-generation GPU architecture succeeding Ampere (the 30 series), will also feature some sort of MCM design.

Interestingly enough, the same sources who leaked that information are now saying that Nvidia has delayed Hopper and will focus on another architecture called "Lovelace", which will feature a monolithic design but will still have enormous performance gains; I think one of the leaked figures mentioned something like 60 TFLOPS. Maybe AMD will follow suit because they feel MCM is not ready yet. We do have patents from both Nvidia and AMD revolving around MCM design, so I could see it happening in the future. I could also see the PlayStation 6 using something similar, as Sony has MCM patents as well.
The ever-reliable leaker kopite7kimi has let loose the info that Nvidia is running into issues with the MCM design and will most likely delay Hopper for now, replacing its release with the Ada architecture, named as a nod to the legendary computer scientist Ada Lovelace. It's said that the Ada architecture will replace Ampere, with the Hopper MCM design rumoured to be delayed for now, and the top GPU in the Ada lineup to be much faster than the 3090 and possibly cheaper (sorry, 3090 owners). MCM might sound easy to accomplish for CPUs, but it's a lot trickier to implement on GPUs because of their added complexity. We might not see Hopper until much later.
 
The ever-reliable leaker kopite7kimi has let loose the info that Nvidia is running into issues with the MCM design and will most likely delay Hopper for now, replacing its release with the Ada architecture, named as a nod to the legendary computer scientist Ada Lovelace. It's said that the Ada architecture will replace Ampere, with the Hopper MCM design rumoured to be delayed for now, and the top GPU in the Ada lineup to be much faster than the 3090 and possibly cheaper (sorry, 3090 owners). MCM might sound easy to accomplish for CPUs, but it's a lot trickier to implement on GPUs because of their added complexity. We might not see Hopper until much later.
This would be pretty insane if it's true.
 
This would be pretty insane if it's true.
The whispers from insider sources point to a 5nm architecture with as many as 17,500 CUDA cores at the absolute highest end: a 71% improvement before factoring in the additional 10-15% perf improvement granted by the 5nm process compared to Samsung's 8nm node, which is essentially a more evolved/mature version of their 10nm LPE process.
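As a sanity check on those leaks, FP32 throughput is just shader cores × 2 ops per clock (one fused multiply-add) × clock speed, so the rumoured core count and the ~60 TFLOPS figure mentioned earlier are at least self-consistent, if you assume a boost clock around 1.7 GHz (the clock is purely an assumption for illustration):

```python
# FP32 TFLOPS = cores * 2 ops/clock (fused multiply-add) * clock in GHz.
def fp32_tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000  # cores * 2 * GHz = GFLOPS; /1000 -> TFLOPS

# Rumoured top Ada config; the 1.7 GHz clock is an assumption, not a leak.
print(f"{fp32_tflops(17_500, 1.7):.1f} TFLOPS")    # ~59.5 TFLOPS

# For scale: RTX 3090, 10,496 cores at its 1.695 GHz boost clock.
print(f"{fp32_tflops(10_496, 1.695):.1f} TFLOPS")  # ~35.6 TFLOPS
```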
 

Lunatic_Gamer

Gold Member

Microsoft Is Trying To Ramp Up Xbox Series X/S Production, Phil Spencer Says

"It really is just down to physics and engineering. We're not holding them back," Spencer said. "We're building them as fast as we can. We have all the assembly lines going. I was on the phone last week with Lisa Su at AMD [asking], 'How do we get more? How do we get more?' So it's something that we're constantly working on, but it's not just us. I think gaming has really come into its own in 2020."
 

Bo_Hazem

Banned
Welcome back, Bo.

I would argue that even for a PS5 Pro, a chiplet or multi-die design would be a mistake. Inter-chiplet and inter-die latencies would be too prohibitive, and from a software perspective, trying to hide those latencies from the application, such that base PS5 software can run out of the box without patching, would very likely prove an impossible task.

Heck, even a multi-die/chiplet-based PS6 would pose a real challenge in ensuring the application can treat the GPU as a single device. They'd have to utilise split-frame or alternate-frame rendering regimes, which inherently lose efficiency, or explore more novel composite-frame rendering regimes for multi-GPU systems.

Considering that AMD's current big Navi RDNA 2 card on PC is essentially two PS5 GPUs duct-taped together on a monolithic die (and that's on a 7nm process), there's no reason not to go monolithic again with a PS5 Pro (if they even do one).

An 80CU PS5 Pro on a 4 or 5nm process would be reasonably sized in terms of its die area and be able to comfortably run at higher than base PS5 clocks within a nice console TDP budget.

You make a lot of sense there, and I know you know tech much better than I do. Thanks a lot for your comprehensive post! :messenger_ok:
 

Paulxo87

Member
It's just amazing, the degree to which Microsoft got outmaneuvered by Cerny and Sony.

You know, this entire time they wanted to keep up the facade that they were no longer in competition with Sony. If you can't beat 'em, change the rules, right? But it's just so obvious they take it to heart.

Just because the XSX is currently sold out does not mean shit. Sony clowned them with the PS5 in every single way. The PS5 is the more capable system. Period. The games already released/coming in the next few months look stellar.

Especially the controller. The fact that MS allowed themselves to get played like this is amazing. Astro Bot is a real next-gen experience. MS currently has nothing.
 
It's just amazing, the degree to which Microsoft got outmaneuvered by Cerny and Sony.

You know, this entire time they wanted to keep up the facade that they were no longer in competition with Sony. If you can't beat 'em, change the rules, right? But it's just so obvious they take it to heart.

Just because the XSX is currently sold out does not mean shit. Sony clowned them with the PS5 in every single way. The PS5 is the more capable system. Period. The games already released/coming in the next few months look stellar.

Especially the controller. The fact that MS allowed themselves to get played like this is amazing. Astro Bot is a real next-gen experience. MS currently has nothing.

MS was quite content just building a stronger Xbox One and attempting to keep more people in the same ecosystem.

Sony tried to make it clear early, with the whole "we believe in generations" statement, that this was going to be a different kind of console "war".

MS still has another piece of hardware they haven't revealed that was at one time on the drawing board (over a year ago), and while I never got 100% confirmation of what it was, I was led to believe it was a pure streaming-stick type of device.

The few things I saw of it were things like a Minecraft-only or Fortnite-only streaming stick, but I believe those died.
 
It's been heavily rumoured that the next generation of AMD discrete GPUs (RDNA 3) will have some sort of MCM design, though this could be for only one of their cards. The same goes for Nvidia: it was leaked multiple times that Hopper, Nvidia's next-generation GPU architecture succeeding Ampere (the 30 series), will also feature some sort of MCM design.

Interestingly enough, the same sources who leaked that information are now saying that Nvidia has delayed Hopper and will focus on another architecture called "Lovelace", which will feature a monolithic design but will still have enormous performance gains; I think one of the leaked figures mentioned something like 60 TFLOPS. Maybe AMD will follow suit because they feel MCM is not ready yet. We do have patents from both Nvidia and AMD revolving around MCM design, so I could see it happening in the future. I could also see the PlayStation 6 using something similar, as Sony has MCM patents as well.

It wouldn't surprise me to see a chiplet-based GPU on the desktop PC market in the next two years. The constraints on design in those segments are far more relaxed. Both AMD and Nvidia have had multi-die GPU cards on the market in the past, so shipping an MCM product with GPU chiplets, and perhaps even on-module HBM, wouldn't fare much differently. SFR and AFR were essentially staples of Crossfire and SLI (the multi-GPU technologies), so a loss in rendering efficiency isn't really a big deal in a market segment that epitomises brute-forcing rendering (while paying a pretty penny for it).

On consoles, however, the equation is very different. That said, I think the economics of engineering and fabricating console SoCs on ever more expensive process nodes will necessitate chiplet-based consoles sooner rather than later. E.g. even TSMC's current 5nm process may not work out for Sony's and MS's next consoles with a monolithic die design, in which case chiplets become the only viable option.

Personally, I think 2.5D chiplet-based GPU designs (i.e. MCM) are a stop-gap until the ASIC-stacking cooling issue is finally resolved. ASIC stacking will give us ridiculous generational leaps in performance. Far, far greater than we've ever seen before.

It's just amazing, the degree to which Microsoft got outmaneuvered by Cerny and Sony.

You know, this entire time they wanted to keep up the facade that they were no longer in competition with Sony. If you can't beat 'em, change the rules, right? But it's just so obvious they take it to heart.

Just because the XSX is currently sold out does not mean shit. Sony clowned them with the PS5 in every single way. The PS5 is the more capable system. Period. The games already released/coming in the next few months look stellar.

Especially the controller. The fact that MS allowed themselves to get played like this is amazing. Astro Bot is a real next-gen experience. MS currently has nothing.
Agreed.

I'm reminded of Spencer's now-immortal words, "we won't get caught out on price or performance". Considering the latest third-party game platform comparisons together with Sony's out-of-left-field PS5 DE, they got sideswiped on both... again.
 

PaintTinJr

Member
It's just amazing, the degree to which Microsoft got outmaneuvered by Cerny and Sony.

You know, this entire time they wanted to keep up the facade that they were no longer in competition with Sony. If you can't beat 'em, change the rules, right? But it's just so obvious they take it to heart.

Just because the XSX is currently sold out does not mean shit. Sony clowned them with the PS5 in every single way. The PS5 is the more capable system. Period. The games already released/coming in the next few months look stellar.

Especially the controller. The fact that MS allowed themselves to get played like this is amazing. Astro Bot is a real next-gen experience. MS currently has nothing.
The biggest issue with their strategy, IMHO, is that IIRC Phil made some claim about not losing on price or performance, only to lose on both counts, making the build-up and the loss so much worse. When people aggrieved at noises from their PS5s say Sony cheaped out on components - as though more than a small percentage of the first 3 million units are afflicted - it feels like the PlayStation was also supposed to have a higher BOM, while winning on the other criteria.

With the limited supply Xbox has provided of the XsX - compared to the PS5 - you have to wonder if the XsX BOM is massive, and whether supply was constrained at launch to limit the subsidy until costs come down after the RDNA2 GPU releases - offsetting losses with profits on the XsS.

With the hardware they released, it is pretty likely that they didn't see the PS5 design coming out as the more powerful, and they are now in the weird scenario of having launched with hardware plans A and B, only to find they need a plan C too, with no way to make that happen.

The extra CUs in the XsX keep getting mentioned as a potential leveller for high-quality ML upscaling, but I came across a link the other day about a UK (Bristol-based) hardware/software company, Graphcore, which has been creating 2nd-generation IPUs (intelligence processing units). From the video explaining how their new 1 PetaFLOP/s slim 1U AI blade (4x IPUs in one chassis) scales up, works, and compares to GPUs, it was interesting to see the overlap in design thinking between their new IPUs and the PS5 APU design principles (from the Road to PS5), where IO latency reduction is a major factor in the AI performance they are now offering.

It is also interesting that the monstrous total of on-core memory (per IPU) gives, with a quick calculation, per-core memory in the ~600KB ballpark, which feels reminiscent of the SPU local store solution that provided very low-latency access for an SPE.
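For reference, the back-of-the-envelope version of that calculation, using Graphcore's publicly quoted Mk2 (GC200) figures of roughly 900 MB of in-processor memory spread across 1,472 tiles (treat both numbers as approximate):

```python
# Per-tile memory on a Graphcore Mk2 IPU, from the publicly quoted
# figures: ~900 MB of in-processor SRAM across 1,472 cores ("tiles").
in_processor_memory_mb = 900
tiles = 1472

kb_per_tile = in_processor_memory_mb * 1024 / tiles
print(f"~{kb_per_tile:.0f} KB per tile")  # ~626 KB: the ~600KB ballpark

# For comparison, each Cell SPE's local store was a fixed 256 KB.
```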

The video is well worth a look IMO because, even if it doesn't directly impact the consoles - although I would argue it does imply the extra XsX CUs won't be ideal for real-time, per-frame AI upscaling, as some hoped - it offers some insight into the changes both Nvidia and AMD may have to make to their GPUs if they are at risk of losing massively on scaled performance, price, and ease of use for ML to a new IPU solution versus GPUs. They may be forced to fork their graphics technology from their AI technology to stay competitive in AI, which could impact the PC's position relative to consoles in the coming years.

 

Mr Moose

Member
The biggest issue with their strategy, IMHO, is that IIRC Phil made some claim about not losing on price or performance, only to lose on both counts, making the build-up and the loss so much worse. When people aggrieved at noises from their PS5s say Sony cheaped out on components - as though more than a small percentage of the first 3 million units are afflicted - it feels like the PlayStation was also supposed to have a higher BOM, while winning on the other criteria.

With the limited supply Xbox has provided of the XsX - compared to the PS5 - you have to wonder if the XsX BOM is massive, and whether supply was constrained at launch to limit the subsidy until costs come down after the RDNA2 GPU releases - offsetting losses with profits on the XsS.

With the hardware they released, it is pretty likely that they didn't see the PS5 design coming out as the more powerful, and they are now in the weird scenario of having launched with hardware plans A and B, only to find they need a plan C too, with no way to make that happen.

The extra CUs in the XsX keep getting mentioned as a potential leveller for high-quality ML upscaling, but I came across a link the other day about a UK (Bristol-based) hardware/software company, Graphcore, which has been creating 2nd-generation IPUs (intelligence processing units). From the video explaining how their new 1 PetaFLOP/s slim 1U AI blade (4x IPUs in one chassis) scales up, works, and compares to GPUs, it was interesting to see the overlap in design thinking between their new IPUs and the PS5 APU design principles (from the Road to PS5), where IO latency reduction is a major factor in the AI performance they are now offering.

It is also interesting that the monstrous total of on-core memory (per IPU) gives, with a quick calculation, per-core memory in the ~600KB ballpark, which feels reminiscent of the SPU local store solution that provided very low-latency access for an SPE.

The video is well worth a look IMO because, even if it doesn't directly impact the consoles - although I would argue it does imply the extra XsX CUs won't be ideal for real-time, per-frame AI upscaling, as some hoped - it offers some insight into the changes both Nvidia and AMD may have to make to their GPUs if they are at risk of losing massively on scaled performance, price, and ease of use for ML to a new IPU solution versus GPUs. They may be forced to fork their graphics technology from their AI technology to stay competitive in AI, which could impact the PC's position relative to consoles in the coming years.


 

SlimySnake

Flashless at the Golden Globes
It's just amazing, the degree to which Microsoft got outmaneuvered by Cerny and Sony.

You know, this entire time they wanted to keep up the facade that they were no longer in competition with Sony. If you can't beat 'em, change the rules, right? But it's just so obvious they take it to heart.

Just because the XSX is currently sold out does not mean shit. Sony clowned them with the PS5 in every single way. The PS5 is the more capable system. Period. The games already released/coming in the next few months look stellar.

Especially the controller. The fact that MS allowed themselves to get played like this is amazing. Astro Bot is a real next-gen experience. MS currently has nothing.
I think it's too early to say MS got played, but yes, it seems they did fuck up. After all that talk about being the most powerful console, they had to deliver, and they failed. It's broken promises at best and misleading marketing at worst. The trust has been broken, though looking at how the Xbox fans have reacted, it seems the fanbase won't care that they were lied to. They called the Series S a 1440p console, and yet Watch Dogs is running at 900p on it. Not a peep from Xbox fans. I saw MrFunSocks complain about Sony fans, and he's now leaving GAF because of us, but at least I criticize Sony when I see that they have lied to me. That's the difference between MS and Sony fans.

To me, the biggest fuck-up happened way before launch, when they started talking about how they will not have games for the next 2-3 years. Specs don't matter, games do. This is by far THE worst launch lineup ever, period. And there is literally no end in sight. They won't have any AAA games until 2022 or 2023. They better fucking make Halo next-gen and completely revamp the graphics, or they are going to get destroyed by the PS5 exclusives coming out next year. Horizon is cross-gen and it already looks a gen ahead of Halo. What's going to happen if God of War turns out to be next-gen only?

I remember people here and on Era downplaying the lack of new innovations in the controller and the UI. Anyone criticizing MS for them was labelled a Sony fanboy, but to me it was clear that this was a very lazy console. It was almost like they put the minimum amount of effort into launching it. Maybe the Series S took up the R&D resources for the controller and UI, or maybe Phil just didn't give a shit, but it's clear that they simply phoned it in. No games, same controller, same UI, amazing specs that don't offer amazing performance despite being more expensive... People call Cyberpunk a rushed game, but this is a rushed console. Now they are talking about releasing a controller with DualSense features and fixing the tools to offer better performance. Like, wtf.

I can understand being out-engineered, and tbh, what Cerny did with the I/O was going above and beyond; we can't expect everyone to come up with innovations like that in this industry. It happens once every few decades. What I don't care for is the dishonest way they went about highlighting their TFLOPS advantage, knowing that the extra TFLOPS were not translating into better performance.
 

SlimySnake

Flashless at the Golden Globes
Welcome back, Bo.

I would argue that even for a PS5 Pro, a chiplet or multi-die design would be a mistake. Inter-chiplet and inter-die latencies would be too prohibitive, and from a software perspective, trying to hide those latencies from the application, such that base PS5 software can run out of the box without patching, would very likely prove an impossible task.

Heck, even a multi-die/chiplet-based PS6 would pose a real challenge in ensuring the application can treat the GPU as a single device. They'd have to utilise split-frame or alternate-frame rendering regimes, which inherently lose efficiency, or explore more novel composite-frame rendering regimes for multi-GPU systems.

Considering that AMD's current big Navi RDNA 2 card on PC is essentially two PS5 GPUs duct-taped together on a monolithic die (and that's on a 7nm process), there's no reason not to go monolithic again with a PS5 Pro (if they even do one).

An 80CU PS5 Pro on a 4 or 5nm process would be reasonably sized in terms of its die area and be able to comfortably run at higher than base PS5 clocks within a nice console TDP budget.
Yeah, the 6800 XT already sounds like a great fit. I believe it is roughly 26 billion transistors, or a little over 2x the PS5 transistor count. They will need Infinity Cache in there, so they will have to take the entire 26-billion-transistor chip and add the CPU to it. They might settle for a 12-core/24-thread CPU to stay under 30 billion transistors, which would be pushing almost 3x the PS5 chip size.

I also suspect AMD will have their own dedicated RT and tensor cores by then, which would push the size closer to 35 billion. I just don't see Sony making a chip that big, even at 5nm, for a $500 mid-gen console. Especially when they would have to increase the bandwidth to at least 760 GB/s to take full advantage of the 2x increase in raw horsepower, or risk being bandwidth-limited like the Pro.

They will probably skip the dedicated RT and tensor cores to reduce costs, and that would be a shame imo. Looking at a last-gen game like Spider-Man struggling to run at 1440p 60fps with ray tracing, I am not too hopeful about next-gen games doing any kind of meaningful RT without RT cores, even on a 6800 XT.
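To put rough numbers on that budget, here is the arithmetic as a sketch. The Navi 21 count is AMD's published figure; the CPU-block number is a pure guess, since Sony has never published the PS5 SoC's transistor count:

```python
# Back-of-the-envelope transistor budget for the hypothetical PS5 Pro
# discussed above. Only the Navi 21 figure is official (AMD); the CPU
# block is a rough guess for a 12c/24t Zen-class cluster plus IO glue.
NAVI21_TRANSISTORS = 26.8e9   # 6800 XT-class die, Infinity Cache included
CPU_BLOCK_GUESS = 3.0e9       # assumption, not a published figure

total = NAVI21_TRANSISTORS + CPU_BLOCK_GUESS
print(f"~{total / 1e9:.1f}B transistors")  # ~29.8B: just under the 30B target
```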
 
I saw MrFunSocks complain about Sony fans, and he's now leaving GAF because of us, but at least I criticize Sony when I see that they have lied to me. That's the difference between MS and Sony fans.

Honestly, I think he did that to himself, especially after all the things he said. I remember he told me that Demon's Souls wasn't captured on a PS5, and the final product ended up being exactly like the footage we saw. Things like that.
 

Sinthor

Gold Member
No issues with rest mode here. Being used almost every day just now as well 👍
I've been using rest mode as well. I have an external SSD with my PS4 games on it. THAT was what caused a few crashes early on, but I haven't had a crash now in about a month. The system is pretty solid. I just use rest mode when I need to charge the controllers; otherwise I shut it down. It's not like it saves a ton of time anyway.
 

ethomaz

Banned
I've been using rest mode as well. I have an external SSD with my PS4 games on it. THAT was what caused a few crashes early on, but I haven't had a crash now in about a month. The system is pretty solid. I just use rest mode when I need to charge the controllers; otherwise I shut it down. It's not like it saves a ton of time anyway.
I had a strange bug where my console would turn itself on while in rest mode, but powering it off and on seems to have resolved it. Otherwise, no issues with the console. I haven't plugged in my SSD yet but am planning to do so soon.
 
Yeah, the 6800 XT already sounds like a great fit. I believe it is roughly 26 billion transistors, or a little over 2x the PS5 transistor count. They will need Infinity Cache in there, so they will have to take the entire 26-billion-transistor chip and add the CPU to it. They might settle for a 12-core/24-thread CPU to stay under 30 billion transistors, which would be pushing almost 3x the PS5 chip size.

I also suspect AMD will have their own dedicated RT and tensor cores by then, which would push the size closer to 35 billion. I just don't see Sony making a chip that big, even at 5nm, for a $500 mid-gen console. Especially when they would have to increase the bandwidth to at least 760 GB/s to take full advantage of the 2x increase in raw horsepower, or risk being bandwidth-limited like the Pro.

They will probably skip the dedicated RT and tensor cores to reduce costs, and that would be a shame imo. Looking at a last-gen game like Spider-Man struggling to run at 1440p 60fps with ray tracing, I am not too hopeful about next-gen games doing any kind of meaningful RT without RT cores, even on a 6800 XT.
RDNA2 cards and the consoles already have RT cores. They're built into the TMUs and are limited to accelerating ray/box and ray/triangle intersection tests, but they definitely qualify as fixed-function hardware acceleration for RT.
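To make "ray/box intersection test" concrete, here is a minimal software version of the classic slab test, the kind of primitive those units evaluate in hardware during BVH traversal (purely illustrative; the real units are fixed-function silicon, not code):

```python
# Minimal ray/AABB "slab" intersection test. Takes the precomputed
# reciprocal of the ray direction to avoid divisions in the loop.
def ray_hits_box(origin, inv_dir, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv_d, (hi - o) * inv_d
        # Keep the overlap of the per-axis slab intervals.
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far  # intervals overlap -> the ray hits the box

# Ray from the origin along +x, box straddling the x-axis: expect True.
# (Large reciprocals stand in for the axis-parallel y/z components.)
print(ray_hits_box((0, 0, 0), (1.0, 1e30, 1e30), (1, -1, -1), (3, 1, 1)))
```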
 