
Possible PlayStation 5 Pro multi-GPU technology outlined in newly released patent

Bartski

Gold Member
(...)

The patent kicks things off with an interesting background and summary section that talks about the benefits of using multiple GPUs and linking them together. The summary section explicitly mentions a light version of a console (presumably the base PS5) that could use a single SoC, and a high-end version (the PS5 Pro) that could use multiple SoCs.

Remember, the PlayStation 5 uses a single 7nm SoC from AMD, outfitted with a Navi GPU and a Zen 2 CPU.

The patent's main goal is to present the possibility of a multi-GPU console with both local and network access to the second GPU. So there are two main ways this could work: a physical console that contains two GPUs, whether that's two SoCs/APUs or one SoC and a second GPU; or using a GPU from a cloud server network.


The latter is how the PlayStation Now service is powered.

When it comes to physical dual-GPU setups, the patent recognizes there are significant hurdles to tackle, such as frame buffer management for rendering. The patent is extremely varied and detailed, and aims to cover all the bases for a solution to this problem.

So instead of outlining every single possible solution, we'll give you the gist (we'll also include a full copy of the summary at the bottom of the article for your perusal).

Some of the embodiments have the rendered video split up between the GPUs. One GPU renders one part, the other renders the other part, and the system uses multiplexing to combine the rendered images and output them to a screen.
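To make that concrete, here's a minimal toy sketch in Python (the thread-based "GPUs" and all names are our own illustration, not anything lifted from the patent): each worker renders half the scanlines, and the halves are multiplexed back into one frame.

    # Toy illustration only: each "GPU" (a thread here) renders half the
    # scanlines, then the halves are multiplexed back into one frame.
    from concurrent.futures import ThreadPoolExecutor

    WIDTH, HEIGHT = 16, 8  # tiny stand-in framebuffer

    def render_rows(gpu_id, row_start, row_end):
        # Stand-in for one GPU rendering its assigned scanlines.
        return [[gpu_id] * WIDTH for _ in range(row_start, row_end)]

    with ThreadPoolExecutor(max_workers=2) as pool:
        top = pool.submit(render_rows, 0, 0, HEIGHT // 2)          # GPU 0: top half
        bottom = pool.submit(render_rows, 1, HEIGHT // 2, HEIGHT)  # GPU 1: bottom half
        frame = top.result() + bottom.result()  # combine the two halves

    for row in frame:
        print("".join(str(px) for px in row))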

(...)

 
I'd love a multi-GPU console.

It would bring back SLI on PC as well, with a vengeance.

And I'd love to be able to just buy two PS5s/XSXs and link them to render at twice the resolution/framerate, kinda like what you could do with games like Forza.
 

CAB_Life

Member
I thought dual-GPU was generally a dead concept outside of serious PCMR epeen measurements? Like, most UE4 games didn’t/don’t take advantage of SLI or Crossfire at all.
 

JonnyMP3

Member
I was thinking to myself "Yes! Tech!" and then I read 'dual GPUs' and had major MrXMedia PTSD flashbacks...
 

CAB_Life

Member
Microstutter was the problem with SLI and Crossfire. The GPUs ended up having lag between them. So maybe, because the PS5 has such fast I/O, it's eliminated the stutter.

Wasn’t even microstutter as much as the SLI/X-fire support was just garbage, and so sporadically implemented by the devs, that the only way to take advantage of it was brute force through the control panel, and that had microstutter and other issues, yeah. Plus the performance gains weren’t even all that impressive, especially for CPU-bound games.

Mind you, this was when I built my last PC about 3-4 years ago (1080 Ti SLI). It was this specific frustration that led me back into console and mobile gaming tbh, as I just couldn’t justify the price of PC gaming for all its hassles.
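For anyone who never felt it first-hand, a toy simulation of alternate-frame rendering pacing shows the effect; every number here is invented, it just demonstrates how two GPUs with a fine average framerate can still deliver frames in uneven bunches:

    # Toy alternate-frame-rendering pacing simulation. Two GPUs take turns
    # producing frames; if their starts bunch up, frames arrive in pairs
    # even though the average fps looks fine. All numbers invented.
    render_time = 30.0  # ms each GPU needs per frame
    offset = 5.0        # ms between the two GPUs' starts (ideal would be 15.0)

    presents = []
    for frame in range(8):
        gpu = frame % 2                                  # GPUs alternate frames
        start = (frame // 2) * render_time + gpu * offset
        presents.append(start + render_time)             # when the frame is shown

    intervals = [b - a for a, b in zip(presents, presents[1:])]
    print("frame-to-frame intervals (ms):", intervals)
    # -> alternating 5 ms / 25 ms: ~67 fps on average, but visibly juddery

Keeping that offset at exactly half the render time is the frame-pacing problem the drivers kept fighting.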
 

JonnyMP3

Member
Wasn’t even microstutter as much as the SLI/X-fire support was just garbage, and so sporadically implemented by the devs, that the only way to take advantage of it was brute force through the control panel, and that had microstutter and other issues, yeah. Plus the performance gains weren’t even all that impressive, especially for CPU-bound games.

Mind you, this was when I built my last PC about 3-4 years ago (1080 Ti SLI). It was this specific frustration that led me back into console and mobile gaming tbh, as I just couldn’t justify the price of PC gaming for all its hassles.
Yeah, I remember reading it was up to developers to implement the dual setups in their games, which isn't even their job to do. Crappy drivers definitely didn't help, but that's always the case. Really, RAM specs just weren't high enough all round to make it work better. We're averaging 16 GB now; doing it with 4 to 8 GB was nonsense.
 

JonnyMP3

Member
OMG! This actually might be mental.
If, as I've discussed above, the PS5 with its incredible I/O throughput and SSD speed... the design limitations of PC might be eliminated.
Dual GPUs didn't work because they were latency-, architecture- and API-limited, so they were never in sync enough to properly leverage two GPUs at the same time, especially since it was on PCIe 3. It was pointless, as the pipeline was halved by low bandwidth. But if all those bottlenecks have been removed with the custom architecture... this could actually be a thing.
:messenger_astonished:
 

Freeman

Banned
Two PS5 GPUs taped together for the PS5 Pro sounds good enough. I wonder how that would translate to framerates.

If they could build the hardware in a way that makes it easy for devs to double the framerate of everything, it would be great. That way devs can focus on making the base game as gorgeous as possible at 30 or 60 fps, and people after higher fps can pay extra to get it.
 

DESTROYA

Member
No thanks, unless they fixed the shortcomings of SLI (stuttering and latency). I prefer a single, more powerful GPU to two duct-taped together.
 

supernova8

Banned
"Network" meaning you'd buy a sort of 'expansion pak' (N64 esque) that somehow connects to the base PS5? Could explain why we haven't seen the back of the system yet. Maybe there's some I/O that would give it away.

I can see it now. When the PS5 Pro comes out you either get a PS5 Pro as it is or you buy a "Pro Expansion Module" that comes with an enclosure to put your OG PS5 into. Like an external hard drive bay.

Would be pretty cool and possibly a big winner with PS5 owners since they wouldn't have to buy an entirely new console.

With all that said, I have absolutely no idea. Just riffin'
 
Ok, don't kill me guys... but I'm calling it now... if this comes true for the PS5 Pro, this will become a thing.



 

Kerotan

Member
Two PS5 GPUs taped together for the PS5 Pro sounds good enough. I wonder how that would translate to framerates.

If they could build the hardware in a way that makes it easy for devs to double the framerate of everything, it would be great. That way devs can focus on making the base game as gorgeous as possible at 30 or 60 fps, and people after higher fps can pay extra to get it.
That would be insane. Sign me up!
 

jaysius

Banned
I thought dual-GPU was generally a dead concept outside of serious PCMR epeen measurements? Like, most UE4 games didn’t/don’t take advantage of SLI or Crossfire at all.

Having dealt with Nvidia SLI, I can tell you it's something that should never have hit the mainstream market. It's a beta or even alpha product at best; you have to scramble for tweaks and bullshit to get games to take advantage of it, and even when they do you can have fucked-up graphical glitches.

A console with SLI is perfect: another reason for developers not to know how to optimize games for it until 2-3 years into the generation.

I think Cerny would have mentioned dual GPUs in his boring spiel.
 
PS Now related, probably.

Multi-GPU is a waste of silicon.

Please, read. Why don't people read? It's not smart to come in and reply to the headline without reading the content.

It's stated in the patent that this is meant to produce a more powerful version of the console. Also, the whole patent is actually about solving the multi-GPU rendering issues so that the hardware can be utilized effectively. I don't 100% understand the technical talk, but it reads like a rendering method utilizing multi-chip graphics processors.
 

A.Romero

Member
Don't see the point. Even on PC it's not a great idea.

Do we really want a fragmented userbase and an extension of the lifecycle of consoles? I think we are doing fine with things as they are right now.
 

JonnyMP3

Member
Everyone here is quick to point out that it doesn't work on PC so it won't work on console, even though a PC isn't an all-in-one custom APU and, last I checked, doesn't include custom memory controllers and coherency engines.
 

FranXico

Member
It's nonsense to expect this to come in the form of a product so soon. Like many other patents, this is a "just in case" filing; otherwise a lot of R&D would be seen as a waste of money.
 

M1chl

Currently Gif and Meme Champion
Everyone here is quick to point out that it doesn't work on PC so it won't work on console, even though a PC isn't an all-in-one custom APU and, last I checked, doesn't include custom memory controllers and coherency engines.
Oh yeah, maybe it doesn't work because both GPUs don't have the same rendering time; custom memory controllers or "coherency engines" do jack shit in this case. Besides, as you might have read in the article, the GPUs would be on separate dies.
 

JonnyMP3

Member
Oh yeah, maybe it doesn't work because both GPUs don't have the same rendering time; custom memory controllers or "coherency engines" do jack shit in this case. Besides, as you might have read in the article, the GPUs would be on separate dies.
I looked at the PDF. There are three ways to do it: all on one SoC, noted as Fig. 3; separate but pooled, as in Fig. 4; or an additional APU die, which is Fig. 5. And then the network server version. So yeah, I did read the article, all of it.
And the coherency engines KEEP EVERYTHING IN SYNC!
 

UnNamed

Banned
Please, read. Why don't people read? It's not smart to come in and reply to the headline without reading the content.

It's stated in the patent that this is meant to produce a more powerful version of the console.
And cloud computing. Read the patent. And my reply, again.
 
And cloud computing. Read the patent. And my reply, again.
Regardless of application, it's at both the hardware and software level.
And you dismissed local hardware in your reply even though the focus is mainly on the console, as per the introduction.

Also, any hardware innovation will be applied to the cloud anyway.

Also, if multi-GPU is a waste of silicon then it's not good for PS Now either. The whole patent was made to utilize silicon, not to waste it.
 

Rikkori

Member
You can see that for studios that care, such as Nixxes & Oxide, mGPU results are absolutely incredible! I think it's probably unlikely we'll see it on consoles, simply because I don't see the extra PCB complexity and extra die size working to the advantage of a console. Maybe if progress stagnates more and we focus much more on ray tracing, then it starts making more sense, especially coupled with MCM breakthroughs.

 

Bryank75

Banned
What if there's a reason beyond having a cheaper SKU for the two models of PS5...

What if you can put a digital and a disc version side-by-side and connect them like SLI... then the face-plates go on the outer sides so it looks like a single unit?

Totally off the wall, but just a thought... 20 TFLOPS, yo!
 
Microstutter was the problem with SLI and Crossfire. The GPUs ended up having lag between them. So maybe, because the PS5 has such fast I/O, it's eliminated the stutter.

I/O has nothing to do with microstutter on dual GPUs in an SLI/Crossfire config. The lag between the GPUs would be due to non-unified cache, low bandwidth in the fabric interconnect between the GPUs, and relatively high latency.

Now, modern-day fabric interconnects like Infinity Fabric (AMD, based on HyperTransport) and NVLink (Nvidia) offer bandwidth upwards of 1 TB/s depending on configuration. They're meant mainly for logic communications of processor elements on the same silicon/chip, however. SLI and its ilk utilized PCIe connects, which even today offer magnitudes lower bandwidth than NVLink or HyperTransport (and worse latency; latency was arguably the most important factor in why microstutter was a problem with SLI/CrossFire-like setups).

Another issue with the dual-GPU setups was that you only really got maybe 50% additional processing throughput despite having 2x the GPU power, due to overhead and limitations in the setup. A more ideal setup would see multiple GPUs on the same die/card and a method of managing them as if they were a single GPU, transparent to the developer. By that point, though, you don't really need to call it a "dual-GPU" or SLI type of setup; you're arguably getting into chiplet territory.

Which, honestly, chiplet-based multi-GPU APU design isn't something I see being out of the ordinary in the near future. The question is whether that could work for a console design. I mean, just look at the PS5's GPU, how fast that runs, and how much heat it's going to generate. And you want two of those things on the same APU? No deal. The same could be said for some hypothetical next-next-gen Xbox doing that type of setup, but I used the PS5's GPU due to how fast the clock is (and how that factors into power demand and heat generated at least as much as, if not more than, GPU size).
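To put rough numbers on the interconnect point (ballpark bandwidth figures only, not exact specs for any product), consider how long one raw 4K frame takes to move between GPUs:

    # Back-of-envelope: time to move one uncompressed 4K RGBA framebuffer
    # between GPUs over different links. Figures are rough ballpark only.
    frame_bytes = 3840 * 2160 * 4  # ~33 MB per frame

    links = {                      # usable bandwidth in GB/s, approximate
        "PCIe 3.0 x16": 16,
        "NVLink 2.0 (aggregate)": 300,
        "on-package fabric (hypothetical)": 1000,
    }

    for name, gb_per_s in links.items():
        ms = frame_bytes / (gb_per_s * 1e9) * 1e3
        print(f"{name}: {ms:.3f} ms/frame "
              f"({100 * ms / 16.7:.1f}% of a 16.7 ms, 60 fps budget)")

Even PCIe 3.0 x16 burns over a tenth of a 60 fps frame budget just shuttling one raw frame, before any synchronization overhead, which is where the latency pain piles on.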
 

Three

Member
They actually did something like this on the PS3. They had GT5 at 4K and 240 fps from four PS3s.
 

ZywyPL

Banned
Hard for me to get excited when they can't even properly utilize Pro's extra CUs without a patch, and that's a monolithic die design, so it doesn't get any easier than that... Show us PS5 BC in action first, then we can talk about the future.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
OMG! This actually might be mental.
If, as I've discussed above, the PS5 with its incredible I/O throughput and SSD speed... the design limitations of PC might be eliminated.
Dual GPUs didn't work because they were latency-, architecture- and API-limited, so they were never in sync enough to properly leverage two GPUs at the same time, especially since it was on PCIe 3. It was pointless, as the pipeline was halved by low bandwidth. But if all those bottlenecks have been removed with the custom architecture... this could actually be a thing.
:messenger_astonished:

You do know NVLink has a speed of like 50 GB/s.
PCIe 3's bandwidth was never the issue.


If they are planning a multi-GPU PS5 Pro, that's pretty hectic. Assuming it's literally double the PS5, that would be a hell of a machine, and legacy mode would be easy to program... just use one GPU.
I guess they really can patent this because it's specifically related to consoles, not just GPUs.


Let's see who releases multi-chip-module GPUs first.
[Image: "AMD Navi GPU Launching in 2018 Could Be MCM-Based"]




The power management part seems to spell out, in a very layman's way, what the PS5's power management may be doing:
Power Management

Power management techniques may be implemented to lower thermal loads by restricting power consumption. Recognizing that power consumption varies linearly with frequency and as the square of the voltage, a computer simulation program such as a video game may be programmed to be responsible for maintaining power consumption within predetermined thresholds by reducing frequency and/or voltage automatically as frequency/voltage/power thresholds are approached. To do this, registers from the hardware such as one or more GPUs may be read to determine current usage allocation, throttling certain effects such as particle effects if needed. The same principles can apply to mobile telephones as well. Throttling may be implemented by over clock techniques, and GPUs may be throttled independently of CPUs in the architecture. Resolution of video may be reduced to maintain simulation execution while staying within power consumption-related thresholds. Audio and/or visual warnings (such as activating an LED) may be presented as power consumption-related thresholds are approached.
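That "linearly with frequency and as the square of the voltage" relationship is easy to sandbox. A minimal sketch of such a throttle loop, with every constant invented for illustration:

    # Toy throttle loop using the relationship the patent describes:
    # power scales linearly with frequency and with the square of voltage,
    # i.e. P ~ k * f * V^2. Every constant below is invented.
    POWER_CAP = 180.0  # watts, arbitrary budget
    K = 7.5e-8         # proportionality constant, arbitrary

    def power(freq_hz, volts):
        return K * freq_hz * volts ** 2

    freq, volts = 2.23e9, 1.10  # illustrative starting clock and voltage
    while power(freq, volts) > POWER_CAP:
        freq *= 0.98                      # shave 2% off the clock...
        volts = max(0.90, volts * 0.995)  # ...and a sliver of voltage
        print(f"throttle -> {freq / 1e9:.2f} GHz @ {volts:.3f} V "
              f"= {power(freq, volts):.1f} W")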
 
I/O has nothing to do with microstutter on dual GPUs in an SLI/Crossfire config. The lag between the GPUs would be due to non-unified cache, low bandwidth in the fabric interconnect between the GPUs, and relatively high latency.

Now, modern-day fabric interconnects like Infinity Fabric (AMD, based on HyperTransport) and NVLink (Nvidia) offer bandwidth upwards of 1 TB/s depending on configuration. They're meant mainly for logic communications of processor elements on the same silicon/chip, however. SLI and its ilk utilized PCIe connects, which even today offer magnitudes lower bandwidth than NVLink or HyperTransport (and worse latency; latency was arguably the most important factor in why microstutter was a problem with SLI/CrossFire-like setups).

Another issue with the dual-GPU setups was that you only really got maybe 50% additional processing throughput despite having 2x the GPU power, due to overhead and limitations in the setup. A more ideal setup would see multiple GPUs on the same die/card and a method of managing them as if they were a single GPU, transparent to the developer. By that point, though, you don't really need to call it a "dual-GPU" or SLI type of setup; you're arguably getting into chiplet territory.

Which, honestly, chiplet-based multi-GPU APU design isn't something I see being out of the ordinary in the near future. The question is whether that could work for a console design. I mean, just look at the PS5's GPU, how fast that runs, and how much heat it's going to generate. And you want two of those things on the same APU? No deal. The same could be said for some hypothetical next-next-gen Xbox doing that type of setup, but I used the PS5's GPU due to how fast the clock is (and how that factors into power demand and heat generated at least as much as, if not more than, GPU size).
We used to have multi-chip GPUs like the GTX 295 and HD 7990. I think they were still connected by PCI Express and were not very good.

This Sony thing is suggesting a chiplet design, probably with Infinity Fabric, and I don't think it's meant for the current 7nm SoC but for a future 5nm SoC, probably with RDNA 3, which is more efficient and would likely produce less heat.

Also, consoles don't worry about heat so much as they are limited by silicon quality, because they want cheaper silicon, which requires better yield.
If they can manufacture smaller chips, that's a very good yield and very cheap. They can push that perfect cheap silicon to its best potential.

Just my thoughts and what I think is possible. None of you genius ERA and GAF engineers believed that a 5.5 GB/s SSD was possible on consoles. I will keep my fingers crossed, but if Sony made the patent then there is something. There is no smoke without fire.
 

JonnyMP3

Member
I/O has nothing to do with microstutter on dual GPUs in an SLI/Crossfire config. The lag between the GPUs would be due to non-unified cache, low bandwidth in the fabric interconnect between the GPUs, and relatively high latency.

Now, modern-day fabric interconnects like Infinity Fabric (AMD, based on HyperTransport) and NVLink (Nvidia) offer bandwidth upwards of 1 TB/s depending on configuration. They're meant mainly for logic communications of processor elements on the same silicon/chip, however. SLI and its ilk utilized PCIe connects, which even today offer magnitudes lower bandwidth than NVLink or HyperTransport (and worse latency; latency was arguably the most important factor in why microstutter was a problem with SLI/CrossFire-like setups).

Another issue with the dual-GPU setups was that you only really got maybe 50% additional processing throughput despite having 2x the GPU power, due to overhead and limitations in the setup. A more ideal setup would see multiple GPUs on the same die/card and a method of managing them as if they were a single GPU, transparent to the developer. By that point, though, you don't really need to call it a "dual-GPU" or SLI type of setup; you're arguably getting into chiplet territory.

Which, honestly, chiplet-based multi-GPU APU design isn't something I see being out of the ordinary in the near future. The question is whether that could work for a console design. I mean, just look at the PS5's GPU, how fast that runs, and how much heat it's going to generate. And you want two of those things on the same APU? No deal. The same could be said for some hypothetical next-next-gen Xbox doing that type of setup, but I used the PS5's GPU due to how fast the clock is (and how that factors into power demand and heat generated at least as much as, if not more than, GPU size).
This is why it's interesting tech with potential. And I meant the I/O between the GPUs, not the storage, which as you said didn't have enough bandwidth/cache, and as I also mentioned was using old PCIe. This is all speculation, but it's fun to imagine certain scenarios considering the PS5 is a redesign of hardware architecture. Would it mean a chiplet formation in the end? It's kind of what the Cell processor was, with a PPE controlling the SPEs.

The point is that Sony eliminated PC-based design bottlenecks, making SSDs 100x faster and not just 2x faster.
Would a custom-designed dual-GPU SoC actually work today, compared to a decade ago?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Just my thoughts and what I think is possible. None of you genius ERA and GAF engineers believed that a 5.5 GB/s SSD was possible on consoles. I will keep my fingers crossed, but if Sony made the patent then there is something. There is no smoke without fire.

Who didn't believe in a 5.5 GB/s SSD in a console?
PCIe 4 was already a standard by the time Sony announced its console; it wasn't out of the realm of possibility that they were fully utilizing the lanes with "console optimizations" to reach sustained high speeds.
PCIe 4 already has a theoretical speed of ~8 GB/s at x4.
And we already knew about PCIe 5, which has a speed of ~16 GB/s at x4.

Doubting that isn't very clever.
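The x4 math checks out, for what it's worth, using standard PCIe 4.0 lane figures:

    # Where the ~8 GB/s figure comes from: 16 GT/s per PCIe 4.0 lane,
    # 128b/130b line coding, four lanes.
    GT_PER_LANE = 16e9     # transfers per second, per lane
    ENCODING = 128 / 130   # usable fraction after line coding
    LANES = 4

    gb_per_s = GT_PER_LANE * ENCODING / 8 * LANES / 1e9
    print(f"PCIe 4.0 x{LANES}: {gb_per_s:.2f} GB/s theoretical")  # ~7.88 GB/s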
 
We used to have multi-chip GPUs like the GTX 295 and HD 7990. I think they were still connected by PCI Express and were not very good.

This Sony thing is suggesting a chiplet design, probably with Infinity Fabric, and I don't think it's meant for the current 7nm SoC but for a future 5nm SoC, probably with RDNA 3, which is more efficient and would likely produce less heat.

Also, consoles don't worry about heat so much as they are limited by silicon quality, because they want cheaper silicon, which requires better yield.
If they can manufacture smaller chips, that's a very good yield and very cheap. They can push that perfect cheap silicon to its best potential.

Just my thoughts and what I think is possible. None of you genius ERA and GAF engineers believed that a 5.5 GB/s SSD was possible on consoles. I will keep my fingers crossed, but if Sony made the patent then there is something. There is no smoke without fire.

Lol, I never said a 5.5 GB/s SSD was impossible xD. I do question the usefulness of the 22 GB/s maximum compressed data rate, though, in terms of what data really benefits from that level of lossy compression without particularly noticeable quality degradation.

It's an interesting patent, that's for sure. But companies like Sony and Microsoft have patents for many, many different things, and only a fraction of them become actual commercial products. I've seen others speculating this could be for cloud streaming, and that's a viable alternative worth considering, at any rate. It's still debatable whether a chiplet-based design can truly match the performance of a monolithic die on the same process node. So far the conclusions say "no", but the obvious advantages of a chiplet approach are scalability and modularity.

Those are big benefits, and with this type of design, if Sony were to roll with it in actual production, they could bring back the PlayStation Portable line rather easily by simply reducing the chiplet count. Of course, there are still other things worth resolving to make it a fully viable approach: making the mesh of GPU chiplets transparent to devs as a single GPU; working out how the framebuffer image would be built and sent out to the display device (which I know the patent touched on with an example); just how scalable they could make it, i.e. how many GPU chiplets a mesh-based APU design could really handle (probably useful for something like scaling down for a portable spin-off); whether the GPU chiplets could be different sizes or must all be of the same general type regarding CU counts, power, etc.; and how you connect them memory-wise: do they all share the same memory, or have their own dedicated chunks of off-chip memory? The latter is harder to manage, though it's technically feasible to establish some type of cache coherence between the memory pools and GPU chiplets with a cache-coherent interconnect like Infinity Fabric, which AMD already uses extensively; more GPU chiplets would probably call for reducing the per-chip interconnect bandwidth, though (there's a rough sketch of that trade-off at the end of this post).

I think the patent covers some of those questions but not all of them, because some probably just can't be tested at the current time. But I look forward to seeing further work on this from Sony and/or any other companies that are no doubt exploring very similar multi-GPU chiplet approaches with AMD or other technologies. If a commercial gaming product can come to market from it, all the better!
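Riffing on that last bandwidth point, here's the trade-off in rough numbers; the fabric budget is hypothetical:

    # Rough look at the trade-off above: with a fixed total interconnect
    # budget, a fully connected mesh of chiplets leaves less bandwidth per
    # link as the chiplet count grows. The budget figure is made up.
    FABRIC_BUDGET_GBS = 512  # total cross-chiplet bandwidth, hypothetical

    for chiplets in (2, 4, 8):
        links = chiplets * (chiplets - 1) // 2  # links in a full mesh
        print(f"{chiplets} chiplets -> {links} links, "
              f"{FABRIC_BUDGET_GBS / links:.0f} GB/s per link")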
 