
Possible PlayStation 5 Pro multi-GPU technology outlined in newly released patent

JonnyMP3

Member
You do know NVLink has a speed of like 50 GB/s.
PCIe 3's bandwidth was never the issue.


If they are planning a multi-GPU PS5 Pro, that's pretty hectic. Assuming it's literally double the PS5, that would be a hell of a machine, and legacy mode would be easy to program... just use one GPU.
I guess they really can patent this because it's specifically related to consoles, not just GPUs.


Let's see who releases multi-chip-module GPUs first.
[Image: AMD Navi GPU launching in 2018 could be MCM-based]




The power management part seems to spell out what the PS5's power management may be doing, in very layman's terms:
NVLink was introduced in 2014, so it came after SLI was first introduced. But the increase in bandwidth is definitely going to be helpful.
 

Zannegan

Member
Isn't the PS4 Pro's GPU essentially just a doubled PS4 chip? Not two GPUs stuck together but the same GPU design doubled in every way? I vaguely remember people using the term "butterfly" to describe it back in the day.
 

JonnyMP3

Member
Isn't the PS4 Pro's GPU essentially just a doubled PS4 chip? Not two GPUs stuck together but the same GPU design doubled in every way? I vaguely remember people using the term "butterfly" to describe it back in the day.
Yeah, basically they mirrored the GPU setup on the silicon, the lower half a reflection of the upper half.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
NVLink was introduced in 2014, so it came after SLI was first introduced. But the increase in bandwidth is definitely going to be helpful.

What increase in bandwidth?
Going to PCIe 4?
That doesn't matter if you are using another protocol to communicate between the two GPUs.
Which is why I brought up NVLink: the GPUs still communicate with the motherboard using PCIe 3, but the link between them is many times faster, so your talk of bandwidth is confusing... what bandwidth exactly are you talking about, and what does it have to do with PCIe?
 

Zannegan

Member
Yeah, basically they mirrored the GPU setup on the silicon, the lower half a reflection of the upper half.
Ah. Gotcha. So this could just be a new approach to the same problem. If so, comparing this patent to SLI'd GPUs and all the issues that setup is known for would be off the mark. Blame the reactions in here on the clickbaity story I guess.
 

JonnyMP3

Member
What increase in bandwidth?
Going to PCIe 4?
That doesn't matter if you are using another protocol to communicate between the two GPUs.
Which is why I brought up NVLink: the GPUs still communicate with the motherboard using PCIe 3, but the link between them is many times faster, so your talk of bandwidth is confusing... what bandwidth exactly are you talking about, and what does it have to do with PCIe?
Sorry, the interconnect cache bandwidth. As you said, having more cache could help. I think we're getting confused by my use of "I/O"; I just mean "data". And IIRC there were two separate problems with dual GPUs: the microstutter timing issues, and the second GPU being underutilised and wasted because there just wasn't enough data to render, so there was a lot of idle time, making the dual setup pointless.
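To put rough numbers on the microstutter point (an illustrative example with made-up timings, not measurements from any real setup): in alternate-frame rendering, two GPUs each finishing a frame every 33 ms should interleave to one frame every 16.5 ms, but if their outputs drift out of phase the frames arrive in uneven pairs:

$$
\text{GPU A: } t = 0,\ 33,\ 66\ \text{ms} \qquad \text{GPU B: } t = 4,\ 37,\ 70\ \text{ms}
$$

The intervals alternate between 4 ms and 29 ms, so a frame counter still averages $1000/16.5 \approx 60$ fps while the motion feels closer to $1000/29 \approx 34$ fps, because perceived smoothness is set by the long gaps.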
 
Lol, I never said a 5.5 GB/s SSD was impossible xD. I do question the validity of using the 22 GB/s maximum compressed data rate, though, in terms of what data really benefits from that level of lossy compression without particularly noticeable quality degradation.

It's an interesting patent, that's for sure. But companies like Sony and Microsoft hold patents for many, many different things, and only a fraction of them become actual commercial products. I've seen others speculating this could be for cloud streaming, and that's a viable alternative worth considering, at any rate. It's still debatable whether a chiplet-based design can truly match the performance of a monolithic die on the same process node; so far the conclusion seems to be "no", but the obvious advantages of a chiplet approach are scalability and modularity.

Those are big benefits, and with this type of design, if Sony were to roll with it in actual production, they could bring back the PlayStation Portable line rather easily by simply reducing the chiplet count. Of course, there are still other things worth resolving to make it a fully viable approach. Making the mesh of GPU chiplets transparent to devs as a single GPU. Working out how the framebuffer image would be built and sent out to the display device (which I know the patent touched on with an example). Just how scalable they could make it, i.e. how many GPU chiplets a mesh-based APU design could really handle (probably useful for something like scaling down for a portable spin-off design). Whether the GPU chiplets could be of different sizes, or must all be of the same general type regarding CU counts, power, etc. And how you connect them memory-wise: do they all share the same memory, or have their own dedicated chunks of off-chip memory? The latter is harder to manage, though it's technically feasible to establish some type of cache coherence between the memory pools and GPU chiplets with a cache-coherent interconnect like Infinity Fabric, which AMD already uses extensively; more GPU chiplets would probably call for reducing the per-chip Infinity Fabric bandwidth, though.
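As a toy illustration of the framebuffer question (my own sketch, not anything described in the patent): the naive split-frame approach gives each chiplet a horizontal band of the image, with one designated chiplet, or an I/O die, compositing the bands for scan-out.

```c
/* Illustrative sketch only, not the patent's scheme: split-frame rendering
 * across a mesh of GPU chiplets, each owning a horizontal band of rows. */
#include <stdio.h>

typedef struct { int y0, y1; } Band;   /* rows [y0, y1) owned by one chiplet */

/* Divide `height` rows as evenly as possible across `chiplets` GPUs. */
static Band band_for_chiplet(int chiplet, int chiplets, int height) {
    int base  = height / chiplets;
    int extra = height % chiplets;      /* first `extra` chiplets get +1 row */
    int y0 = chiplet * base + (chiplet < extra ? chiplet : extra);
    int y1 = y0 + base + (chiplet < extra ? 1 : 0);
    return (Band){ y0, y1 };
}

int main(void) {
    const int chiplets = 4, height = 2160;  /* assumed: 4-chiplet mesh at 4K */
    for (int c = 0; c < chiplets; ++c) {
        Band b = band_for_chiplet(c, chiplets, height);
        printf("chiplet %d renders rows %4d-%4d\n", c, b.y0, b.y1 - 1);
    }
    /* A real implementation would rebalance the bands every frame, since
     * geometry and shading cost is rarely uniform across the screen. */
    return 0;
}
```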

I think the patent covers some of those questions but not all of them, because some just can't probably be tested at current time. But I look forward to seeing further work from Sony on this and/or any other companies that are likely no doubt exploring very similar multi-GPU chiplet approaches with AMD or other technologies. If a commercial gaming product can come to market from it, that's all the better!
I agree with you that patents do not always translate to actual products. Actually, only a small percentage of them do.
Anyway, these chips are not made by Sony; they are a client and are just patenting a method for utilising them. Thus, I assume a product is there on AMD's timeline. Otherwise they wouldn't waste time working on an imaginary product made by an imaginary partner.
 

Rippa

Member
Everyone here is quick to point out that it doesn't work on PC so it won't work on console, even though a PC isn't an all-in-one custom APU and, last I checked, doesn't include custom memory controllers and coherency engines.

I was gonna come in here to post exactly this.

Imagine Sony offers up the PS5 Pro in the all-digital format. That would be some shit, wouldn't it?
 

Freeman

Banned
I wonder if there is a market for a very pricey, premium PS5 Pro early in the gen. Nvidia is not afraid of making $1000 GPUs, and most people don't hold it against them (if anything, people see it as an advantage that AMD can't compete in such an expensive segment).
 

yurinka

Member
I'd bet it's for PS Now. It would make sense to put multiple consoles in a server rack to save costs, but putting multiple GPUs in a console would be too expensive.
 

vdopey

Member
What if there is a reason beyond having a cheaper SKU for the two models of PS5...

What if you could put a digital and a disc version side by side and connect them like SLI... then the faceplates go on the outer sides so it looks like a single unit?

Totally off the wall, but just a thought... 20 TFLOPS, yo!

Honestly, I've been thinking there must be a reason why they haven't shown the back of the PS5 yet, and I believe it's this. I wondered before whether Sony are thinking about doing the Pro as a breakout box, or what if you could attach the PS5 to the digital version?

Personally I think this is the most logical approach; Sony were doing Thunderbolt PCIe GPUs with their laptops back in 2011: https://www.anandtech.com/show/4474/sony-updates-vaio-z-thinner-lighter-light-peak-and-external-gpu (it was called Light Peak back then)

Here is the thing most of you aren't taking into consideration: both DirectX 12 and Vulkan support multi-GPU rendering. It's not the old SLI mess of yore; it's literally the ability to offload GPU rendering between different GPUs. I don't know if the PS5 supports Vulkan, but Sony's PS5 APIs are very low level:

https://arstechnica.com/gadgets/201...compatibility-as-khronos-looks-to-the-future/

One feature in particular goes a long way toward filling a Vulkan gap relative to Microsoft's API: explicit multi-GPU support, which allows one program to spread its work across multiple GPUs. Unlike SLI and Crossfire of old, where the task of divvying up the rendering between GPUs was largely handled by the driver, this support gives control to the developer. With the addition, developers can create "device groups" that aggregate multiple physical GPUs into a single virtual device and choose how work is dispatched to the different physical GPUs. Resources from one physical GPU can be used by another GPU, different commands can be run on the different GPUs, and one GPU can show rendered images that were created by another GPU.
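For anyone curious what "device groups" look like in practice, here's a minimal Vulkan 1.1 sketch of creating one logical device that spans every GPU in a group (error handling omitted; whether Sony's own low-level API exposes anything comparable is pure assumption on my part):

```c
/* Minimal sketch: aggregate the physical GPUs of a Vulkan device group into
 * one logical device (Vulkan 1.1 core API, error handling omitted). */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_1 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance instance;
    vkCreateInstance(&ici, NULL, &instance);

    /* Ask the driver which physical GPUs can act together as one group. */
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, NULL);
    if (groupCount == 0) { printf("no device groups\n"); return 0; }
    if (groupCount > 8) groupCount = 8;
    VkPhysicalDeviceGroupProperties groups[8];
    for (uint32_t i = 0; i < groupCount; ++i) {
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        groups[i].pNext = NULL;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups);
    printf("group 0 has %u physical GPU(s)\n", groups[0].physicalDeviceCount);

    /* Chaining VkDeviceGroupDeviceCreateInfo makes the logical device span
     * all GPUs in the group -- the "single virtual device" from the quote. */
    VkDeviceGroupDeviceCreateInfo dg = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO,
        .physicalDeviceCount = groups[0].physicalDeviceCount,
        .pPhysicalDevices    = groups[0].physicalDevices };

    float prio = 1.0f;
    VkDeviceQueueCreateInfo q = { .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
                                  .queueFamilyIndex = 0,   /* assumes family 0 exists */
                                  .queueCount = 1, .pQueuePriorities = &prio };
    VkDeviceCreateInfo dci = { .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                               .pNext = &dg,
                               .queueCreateInfoCount = 1, .pQueueCreateInfos = &q };
    VkDevice device;
    vkCreateDevice(groups[0].physicalDevices[0], &dci, NULL, &device);
    /* Work is then steered per GPU with device masks (vkCmdSetDeviceMask),
     * and one GPU can present images rendered by another. */
    return 0;
}
```

The key point for the thread: the developer, not the driver, decides which GPU does what, which is exactly the kind of control a console API would want.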

With this approach, they might have a dock design where you buy an add-on which connects like a daughterboard and provides extra RAM and GPU to the base console, and that's all. So we all have our base PS5s; want a Pro and "8K" support? Get the add-on, no need to provide anything else. A PS5 Pro bundle could also be sold which contains both units together. It might be just a specialised USB-C/Thunderbolt dock design; it depends on whether they have enabled Thunderbolt support, but that would also add extra expense to the PS5 base console.
 
I'd bet it's for PS Now. It would make sense to put multiple consoles in a server rack to save costs, but putting multiple GPUs in a console would be too expensive.
This is something that has to do with the future of hardware in general, regardless of application. You console gamers cannot afford to buy expensive consoles. Sony's partner (likely AMD) needs to lower the cost of silicon by manufacturing smaller dies, which have higher yield and eventually a lower price. Those smaller chips can then be connected together to act as one larger but cheaper chip, the chiplet approach (cheaper than one large SoC). This may apply to a PS5 Pro, a PS6, cloud servers, PC GPUs, even Xbox. AMD and Nvidia both have multi-chip designs on their roadmaps. Here Sony is making their own implementation of how to render a single frame effectively by utilising that setup, or customising the chiplets.
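To see why the smaller dies end up cheaper per working chip, here's a rough, illustrative calculation using the classic Poisson yield model; the defect density and die sizes below are assumed numbers, not anything AMD or Sony has published. Yield is approximately $Y = e^{-DA}$ for defect density $D$ and die area $A$:

$$
Y_{\text{monolithic}} = e^{-0.1 \times 4.0} \approx 67\% \quad (\text{one } 400\,\text{mm}^2 \text{ die},\ D = 0.1\ \text{defects/cm}^2)
$$
$$
Y_{\text{chiplet}} = e^{-0.1 \times 1.0} \approx 90\% \quad (\text{each } 100\,\text{mm}^2 \text{ chiplet})
$$

Because defective chiplets are screened out before packaging, the wafer area consumed per working product falls from roughly $4.0/0.67 \approx 6.0\,\text{cm}^2$ to $4 \times 1.0/0.90 \approx 4.4\,\text{cm}^2$, about a quarter less silicon, traded against extra packaging and interconnect cost.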
 

yurinka

Member
This is something that has to do with the future of hardware in general, regardless of application. You console gamers cannot afford to buy expensive consoles.
We can afford it, but nobody is interested in them. 32X, Mega-CD and stuff like that were big failures. People just want to buy their console, switch it on and play games.

Even inside the nerdier PC market, almost nobody has two GPUs, because the performance improvement isn't worth the extra cost for most people. The same goes for the overpriced newest top GPU: the extra performance isn't worth the extra cost for most people who can afford them; people think the price difference is too high for what they offer. Most people get cards with a better balance between performance and price.
 
We can afford it, but nobody is interested in them. 32X, Mega-CD and stuff like that were big failures. People just want to buy their console, switch it on and play games.

Even inside the nerdier PC market, almost nobody has two GPUs, because the performance improvement isn't worth the extra cost for most people. The same goes for the overpriced newest top GPU: the extra performance isn't worth the extra cost for most people who can afford them; people think the price difference is too high for what they offer. Most people get cards with a better balance between performance and price.
I don't understand why you brought up this 32X/Mega-CD comparison. Also, just like with the PS4 Pro, you'd still get a console, switch it on and play games.

As for the nerdy PC gamers, not having SLI these days is mainly down to reduced support. All hardcore PC gamers were proud of their SLI and quad-SLI setups back in the day. All those nice rigs with four GTX 285s stacked on top of each other 😂
 

Andodalf

Banned
Why on earth would they go with a dual-GPU setup when they could just spend to get more GPU die space in the APU? It would be far more efficient.

Honestly, just going with a CPU and a huge discrete GPU would make more sense than dual GPUs.
 

JonnyMP3

Member
It's just a "thought experiment", as Mark Cerny says: a theory based on this hardware slide and this probable patent of how the PS5 is architected.
If people read the article, Sony have listed three different ways the patent idea might be used for a new PS5 update, probably a Pro, or for the PS Now servers.
[Image: hardware slide]
 

notseqi

Member
Having dealt with Nvidia SLI, I can tell you it's something that should never have hit the mainstream market. It's a beta or even alpha product at best; you have to scramble for tweaks and bullshit to get games to take advantage of it, and even when they do, you can have fucked-up graphical glitches.
When that stuff hit I thought 'dang, sad that I don't have the cash for it'.

Wonder how that came about. Must have been a marketing guy riffing in a meeting, 'what if people bought two cards instead of just one?'.
Now they charge for two cards, but only give you one 2080 Ti.
 
32X, Mega-CD and stuff like that were big failures. People just want to buy their console, switch it on and play games.

32X was a big failure; Mega-CD did very well as an add-on and sold more worldwide than the PC Engine equivalent's disc drive expansion. It was the best-selling console peripheral up until the Wii Fit and Kinect, in fact.

Mega-CD's perception problem comes from the American side pushing tons of FMV crap; the actual library of non-FMV games far outweighs the FMV stuff and is pretty well received.
 

Amiga

Member
Microstutter was the problem with SLI and Crossfire. The GPUs ended up having lag between them. So maybe, because the PS5 has such fast I/O, it's eliminated the stutter.
Do you have any idea what you just said?

If this is the case, AMD will use it on their next PC GPUs, and then PC SLI would be back big time.
 
I said that because I understood you to be talking about expanding console hardware with add-ons. Another example could be the Saturn's memory expansion.
Expanding a console is not a good idea. Here we are talking about future consoles, whether it is a PS5 Pro or a PS6.
 

JonnyMP3

Member
Do you have any idea what you just said?

If this is the case, AMD will use it on their next PC GPUs, and then PC SLI would be back big time.
I'm just theorising from the Sony patent in the article.
But also today, there's a post on AMD patents...


This is all just conjecture and speculation at the moment.
It's just looking at the patents and theorising about the potential engineering possibilities.
 