
The PS3's Cell made Sony's first party what they are today.

Pedro Motta

Gold Member
Sony's strength is still their art and graphics departments. Whether their code is more efficient or better written under the hood isn't really manifesting itself in groundbreaking mechanics in games.
What groundbreaking mechanics are you talking about, and who is making them?
 

Pedro Motta

Gold Member
Any, and no one is. Hence the statement. Same question to you?
Game-design-wise we can always point to the classic BOTW, where the approach to open-world design and freedom of choice is revered today. But that's an approach based on game design, not really tech-driven mechanics.

I'd argue that in most of the Naughty Dog games the animation blending brings a lot to the table gameplay-wise; it's unmatched. I've also never seen flying in an open-world game done as well as Horizon FW does it, and it's game-changing. I don't see anyone else trying to accomplish these "mechanics" in the same way, or even trying to get close.
 

cormack12

Gold Member
Game-design-wise we can always point to the classic BOTW, where the approach to open-world design and freedom of choice is revered today. But that's an approach based on game design, not really tech-driven mechanics.

I'd argue that in most of the Naughty Dog games the animation blending brings a lot to the table gameplay-wise; it's unmatched. I've also never seen flying in an open-world game done as well as Horizon FW does it, and it's game-changing. I don't see anyone else trying to accomplish these "mechanics" in the same way, or even trying to get close.
No, those games are running on x86.

Make a better argument that revolves around the cell that makes me change my mind.
 

Arioco

Member
Most of these points also apply to the Sega Saturn. Back then the narrative was "hard to code for = bad", but with the PS3 it's now "hard to code for = good".

It's not good and it's never been good. How could it be good when almost every multiplat game ran and looked better on the 360, an older and cheaper console, throughout the entire generation?

The PS3 as a whole was a mistake on so many levels: expensive and difficult to develop for. The Cell is probably not the only component to blame, but it played a huge part.

The fact that Cerny's approach to hardware design ever since he took over is the exact opposite of the PS3's says it all. Sony realised that something like the PS3 should not happen again.
 

MonarchJT

Banned
It's not good and it's never been good. How could it be good when almost every multiplat game ran and looked better on the 360, an older and cheaper console, throughout the entire generation?

The PS3 as a whole was a mistake on so many levels: expensive and difficult to develop for. The Cell is probably not the only component to blame, but it played a huge part.

The fact that Cerny's approach to hardware design ever since he took over is the exact opposite of the PS3's says it all. Sony realised that something like the PS3 should not happen again.
Absolutely this. If Sony could, they would delete and make every memory of the PS3 gen disappear.
 

MikeM

Member
Most of these points also apply to the Sega Saturn. Back then the narrative was "hard to code for = bad", but with the PS3 it's now "hard to code for = good".
Nah. The PS3 could have been so much more if the coding environment had been simple rather than complicated. Any dev time lost just making code work is taken from other aspects of game development.

As much as I enjoyed the PS3, it's not lost on me that it could have been even better. It's also the reason I got into the Xbox platform: games generally looked and played better there.
 

Amiga

Member
If Sony were Apple (or still Sony), they would have stuck to their guns: improve the architecture, API, libraries and middleware for future Cell chips, and combine them with a better integrated GPU instead of one rushed in at the last minute.

On the other hand, going full x86 and conforming to established 3rd-party development conventions is a big reason they won the PS4 generation. It was the best call within Sony's financial means. They had lost a big chunk of the PS2 market and mindshare, and with it the leverage to make 3rd parties conform to them.

Even though PS3 1st-party games were popular and celebrated, they had a tough time selling. The best they could manage was 2-4 million before the heavy discounts.

To me it felt more like they made great games despite the cell.
Peak PS3 graphics looked better than peak XB360 graphics. So the Cell capability was real.
 
I expect most of these studios to move to UE5. So this may have been true at one point, but I don't think it's sustainable.
Maybe we should wait for UE5 games to actually start shipping before crowning it as the only viable way to make games.

It's not like technical competence goes to waste just because you are working with Unreal.
 

lh032

I cry about Xbox and hate PlayStation.
The Cell processor is one of Sony's biggest mistakes.
Thanks to that stupid decision, PS3 BC is still not available.
 

assurdum

Member
The Cell processor is one of Sony's biggest mistakes.
Thanks to that stupid decision, PS3 BC is still not available.
Ironically, the main issue for the PC emulator is RSX GPU support, not the Cell CPU. Greedy Nvidia really fucked the potential of this console with that shitty, mediocre, overpriced GPU. I can only imagine what a beast the PS3 hardware could have been with a decent GPU combined with the Cell.
 

winjer

Member
Ironically, the main issue for the PC emulator is RSX GPU support, not the Cell CPU. Greedy Nvidia really fucked the potential of this console with that shitty, mediocre, overpriced GPU. I can only imagine what a beast the PS3 hardware could have been with a decent GPU combined with the Cell.

If Sony had been smart, they could have gone to ATI and gotten something like the Xenos GPU that equipped the X360.
It would probably have cost the same as the cut-down 7900GT from Nvidia, but it would have been more advanced in features and performed better.
 

ReBurn

Gold Member
Sony first party had direct access to the PS3 hardware engineering team, so of course they were better at developing PS3 games than anyone else. Just like they have direct access to the PS4 hardware engineering team today. This isn't a "conquering adversity" story, it's a "my parents were rich and sent me to private tutors" story.
 

MonarchJT

Banned
Sony seems to have some engines that are pretty good. I’m sure there are others out there.
The problem for Sony at this point starts to be parallel development. If you use proprietary engines and APIs developed to take advantage of the peculiarities of the console, you will spend more money and more time, and you need more people to port the game to PC. Many studios probably won't even want to attempt a PC port after developing the console version. Unifying the development process is fundamental to Sony's future, and I believe they will abandon their proprietary engines within the next gen. Remember that diversification (doubling) of the development process was one of the reasons there is no longer a Sony handheld, why Microsoft has practically integrated Windows as the console OS, why Nintendo was more than happy to produce games only for a hybrid console instead of separate home and handheld consoles, and the same reason Apple is putting the M1 everywhere. Since I foresee the release of Sony games on the Windows platform, a fundamental step for the future of the Japanese company, I don't think that squares with a will to keep proprietary engines and APIs alive. That's what the investments in Epic point to.
 

TGO

Hype Train conductor. Works harder than it steams.
I'm surprised most PS3 games aren't available for BC on PS4 & PS5.
Didn't most devs just dump everything on the PowerPC core and call it a day?
Hence the poor performance.
I mean, the list of games that actually use the SPEs is slim.
 

Clear

Gold Member
Most of these points also apply to the Sega Saturn. Back then the narrative was "hard to code for = bad", but with the PS3 it's now "hard to code for = good".

Just because two things are difficult, it doesn't mean they are difficult for the same reasons or in the same way.
The problem with the Saturn is that it was basically a really good 2D console design that was augmented with 3D functionality late in the design stage, the opposite of the PSX, which was a pure 3D design.

The problem with the Cell was that only the PPE core resembled a standard CPU design; the SPU cores were more like programmable DSPs: blindingly fast at chewing through calculations, but slow at handling logic and branching operations, and with extremely limited memory access.

The theoretical benefit was insanely high peak performance for its day, if you could get everything running at full occupancy; the reality, of course, was that more conventionally architected code ran comparatively slowly.

Kutaragi's thinking no doubt was that this problem would solve itself over time, the same way it had on PS2, where the vector units went from being completely ignored in early titles like Ridge Racer to being crucial to higher-performing late-gen titles.

These issues might have been surmountable had they not needed to split the memory pool to support the Nvidia GPU, because that introduced even greater complication due to extreme latency when addressing the "wrong" region. Ironically, this aspect was perhaps far more impactful on real-world performance than the challenge of utilizing the Cell itself.

When Cerny took leadership, his focus was on relieving these issues above all else. Hence the PS4's UMA and homogeneous CPU cores.
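To make the branching point concrete, here is a toy sketch in Python (purely illustrative; the function names and data are invented, and real SPU code would be C with SIMD intrinsics): the same clamp computed with per-element branches versus with branch-free select-style arithmetic, which is the shape of code the SPUs' SIMD pipelines favored.

```python
def clamp_branchy(values, lo, hi):
    # Straightforward per-element branching: the style that stalled
    # an SPU, which had essentially no branch prediction hardware.
    out = []
    for v in values:
        if v < lo:
            out.append(lo)
        elif v > hi:
            out.append(hi)
        else:
            out.append(v)
    return out

def clamp_branchless(values, lo, hi):
    # The SPU-friendly rewrite: the same result computed with
    # select-style arithmetic (min/max) and no data-dependent
    # branches. On a real SPU this maps to SIMD min/max ops over
    # 128-bit vectors held in the 256 KB local store.
    return [min(max(v, lo), hi) for v in values]
```

Both produce identical results; the difference is that the second form keeps the pipeline full regardless of the data, which is exactly the kind of rewrite conventional engine code needed before it ran well on the Cell.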
 

Romulus

Member
Peak PS3 graphics looked better than peak XB360 graphics. So the Cell capability was real.


Sony studios. Give any of those premier studios a 360 and an equivalently massive budget, and the results would be superior to the PS3 development hell. Peak PS3 visuals were mostly smoke and mirrors, or owed nothing to raw power; it was things like incredible animation. Even grandiose examples like God of War were designed around a fixed camera.
 
The Saturn also had parallel coding way before it was the norm; that's one of the reasons it was difficult to code for. You're turning negatives into positives in a similar environment, just because of the platform. Both machines had developers who overcame those challenges and made great software, but not because the Cell was unique.
I was all in on the Saturn. When a developer knew how to use it, it could outdo the PS1. Look at how good Sega Rally, VF2 and the revised Daytona were.
The most annoying thing from my point of view was the lack of transparency effects. Mesh smoke effects and shadows sucked lol.
 
This, although I must say that after whole generations (PS3/PS4 and now PS5) I get the feeling that the big Sony studios have ghettoized themselves into one genre.

I initially rejected this take, but it's hard to deny.

Hopefully their big multiplayer push provides some long-term variety.
 

Aion002

Member
Sony's mantra of making cinematic video games from the beginning of the PS3 era is why Sony is what it is today.
Nope.

People buying and praising games like Uncharted, Heavy Rain, TLoU and other cinematic games, while ignoring and mocking games like Siren, Wipeout, Demon's Souls, WKC, Soul Sacrifice, Freedom Wars, Gravity Rush and many other gems on Vita and PS3, made Sony prioritize western studios.

During the PS360 gen there was a strong sentiment among so-called "gamers" and "journalists" that most Japanese games sucked.





I still curse those imbeciles to this day.
 

Romulus

Member
I was all in on the Saturn. When a developer knew how to use it, it could outdo the PS1. Look at how good Sega Rally, VF2 and the revised Daytona were.
The most annoying thing from my point of view was the lack of transparency effects. Mesh smoke effects and shadows sucked lol.


The thing about using examples to prove hardware prowess is that we don't really know whether those PS1 examples are good exploitations of the hardware. There are lots of examples of PS1 games looking better than their Saturn versions, but do we always chalk it up to "lazy/inexperienced devs" only in those cases, and ignore that maybe the PS1 ports could have been better too?
 
As far as I understand, the Cell CPU provided some functionality that duplicated what GPUs provided at the time. Sony wanted to develop their own rasterizer, but that design failed, so they had to get a third-party GPU and selected Nvidia. I'm not sure what Sony would gain from improving on the Cell's frankenstein design.
Early on, EVERYTHING they said about the Cell was about multiple Cells working together like cells in a body. So all the speculation was that the PS3 would use multiple Cell chips with no dedicated GPU.

Then a rumor started going around that Sony was contracting Nvidia for something. I remember an interview with some Sony higher-up where someone asked if the PS3 would use an Nvidia GPU, and he laughed and said that was ridiculous, that Sony didn't need their help (wish I could find the interview).

From that point, all the speculation was that they had licensed some kind of rasterizer back-end from Nvidia… then it was unveiled, and lo and behold, it was a single Cell chip plus an Nvidia GPU, each with its own pool of memory.

Bottom line is that Sony (along with Toshiba and IBM) bet big and bet wrong. It turns out the kind of stuff the Cell was good at, GPUs are better at, and what matters most in a gaming CPU is its performance on branchy single-threaded code.
 

scydrex

Member
Wasn't the PS3 the worst console ever in terms of performance per cost? $900 to make at launch, while the Xbox 360 cost half as much at the time.

Yeah, because of the expensive Cell CPU, the Blu-ray player, WiFi, HDMI, the memory card reader, the PS1 and PS2 chips for BC, the HDD, and BT, from what I remember. The 360 was focused only on being a gaming console, not a media machine. It didn't have HDMI when it launched, or an expensive Blu-ray player, among other things.
 

MasterCornholio

Gold Member
The problem for Sony at this point starts to be parallel development. If you use proprietary engines and APIs developed to take advantage of the peculiarities of the console, you will spend more money and more time, and you need more people to port the game to PC. Many studios probably won't even want to attempt a PC port after developing the console version. Unifying the development process is fundamental to Sony's future, and I believe they will abandon their proprietary engines within the next gen. Remember that diversification (doubling) of the development process was one of the reasons there is no longer a Sony handheld, why Microsoft has practically integrated Windows as the console OS, why Nintendo was more than happy to produce games only for a hybrid console instead of separate home and handheld consoles, and the same reason Apple is putting the M1 everywhere. Since I foresee the release of Sony games on the Windows platform, a fundamental step for the future of the Japanese company, I don't think that squares with a will to keep proprietary engines and APIs alive. That's what the investments in Epic point to.

Sony doesn't need UE5 for that. All they need to do is tweak their engines so their games can be easily brought over to PC, something they are currently doing, BTW. Not to mention they bought studios to help out with porting, so there's that.

Anyways, don't worry, Sony will be fine.
 

PaintTinJr

Member
For anyone who doesn't understand why some of us are ultra critical of DF, the comments in this thread are the reason: asserting that the Cell BE was trash and a mistake for its time (despite it defining the modern heterogeneous computing that Intel is now joining with P and E cores), that the RSX was trash, and that the 360 was somehow "better in most games" because Richard's DF said so, none of which holds up to any fair scrutiny.

It is tiresome to correct widespread misinformation about such hardware and software over and over, especially when Elden Ring and all the FromSoft Souls games are made with PhyreEngine, a by-product of the PS3: an in-house Sony multiplatform engine rebuilt to complement the PS3 hardware. The main reason I gather we never got a Cell BE 2 is that IBM was largely abandoning POWER's push to compete with x64 in the desktop space, instead leaving it to be used under license by anyone in the enterprise space who had built their house on the architecture and could afford to keep going with it. Between that and the Nvidia GTX 280 being able to accelerate most of the edge-case SPU algorithms in CUDA faster and just as power-efficiently, the Cell had largely served its purpose of driving big changes in heterogeneous compute.
 

winjer

Member
For anyone who doesn't understand why some of us are ultra critical of DF, the comments in this thread are the reason: asserting that the Cell BE was trash and a mistake for its time (despite it defining the modern heterogeneous computing that Intel is now joining with P and E cores), that the RSX was trash, and that the 360 was somehow "better in most games" because Richard's DF said so, none of which holds up to any fair scrutiny.

It is tiresome to correct widespread misinformation about such hardware and software over and over, especially when Elden Ring and all the FromSoft Souls games are made with PhyreEngine, a by-product of the PS3: an in-house Sony multiplatform engine rebuilt to complement the PS3 hardware. The main reason I gather we never got a Cell BE 2 is that IBM was largely abandoning POWER's push to compete with x64 in the desktop space, instead leaving it to be used under license by anyone in the enterprise space who had built their house on the architecture and could afford to keep going with it. Between that and the Nvidia GTX 280 being able to accelerate most of the edge-case SPU algorithms in CUDA faster and just as power-efficiently, the Cell had largely served its purpose of driving big changes in heterogeneous compute.

The heterogeneous use of P-cores and E-cores is not comparable to the Cell CPU. They have very different goals and work in very different ways.
Those E-cores can do almost everything the P-cores can do, but slower and with lower power usage.
The SPE was intended as a big FPU. In a way, it's closer to what we understand as a compute unit of a GPU than to a CPU core.

IBM is still developing the POWER arch and releasing CPUs. For example, Power10 was introduced in 2021.
x86 has always been the default arch for PCs. Even when IBM invented the PC, it used x86.
But if Sony had wanted to keep using the PowerPC arch, they could have hired IBM, as they did for the PS3.

The Xenos was more advanced and more powerful than the RSX.
In fact, the RSX was another cause of the PS3's technical issues, because of its dedicated vertex and pixel shader units.
On a GPU with unified shaders, each unit can do either vertex or pixel calculations. This means it can adjust its load to the scene being rendered.
In a scene with more vertices to shade, it can dedicate more units to that task. And when a scene is heavier on pixel shaders, it can switch to that.
But with a fixed distribution of vertex and pixel shaders, in a scene with fewer vertices to shade some vertex shading units will go idle, and they are no help to the pixel shading units.
There is a reason all modern GPUs use a unified shader arch, doing all sorts of work at any given time. The RSX was a GPU of a dying breed; after 2007, neither AMD nor Nvidia released another arch with dedicated shader units.
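The idle-units point can be put in numbers with a back-of-the-envelope model. This Python sketch uses hypothetical unit counts and workload figures (they are not the real RSX or Xenos specs) purely to show why a fixed vertex/pixel split loses to a unified pool:

```python
import math

def fixed_split_passes(vertex_work, pixel_work, vertex_units, pixel_units):
    # Dedicated units (RSX-style): each pool only processes its own
    # work type, so the busier pool sets the total time while the
    # other pool finishes early and sits idle.
    return max(math.ceil(vertex_work / vertex_units),
               math.ceil(pixel_work / pixel_units))

def unified_passes(vertex_work, pixel_work, total_units):
    # Unified units (Xenos-style): any unit can take either work
    # type, so the combined workload spreads over the whole pool.
    return math.ceil((vertex_work + pixel_work) / total_units)
```

For a pixel-heavy scene of 8 vertex jobs and 120 pixel jobs, a fixed split of 8 vertex + 24 pixel units needs max(1, 5) = 5 passes, while 32 unified units need ceil(128/32) = 4: the eight vertex units spend most of the frame doing nothing in the fixed design, which is exactly the imbalance described above.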
 

PaintTinJr

Member
The heterogeneous use of P-cores and E-cores is not comparable to the Cell CPU. They have very different goals and work in very different ways.
Those E-cores can do almost everything the P-cores can do, but slower and with lower power usage.
The SPE was intended as a big FPU. In a way, it's closer to what we understand as a compute unit of a GPU than to a CPU core.

IBM is still developing the POWER arch and releasing CPUs. For example, Power10 was introduced in 2021.
x86 has always been the default arch for PCs. Even when IBM invented the PC, it used x86.
But if Sony had wanted to keep using the PowerPC arch, they could have hired IBM, as they did for the PS3.
I agree, not identical, but the heterogeneous angle based on power efficiency matches the goals stated in IBM's Cell BE introduction PDF from 2005. And let's not forget the SPUs were far more than an ASIC: they could run the full POWER instruction set, just slowly unless you fed them code that leaned towards the expected condition of an if statement. The absence of HT/SMT in E-cores used to accelerate stream processing is semantically the same thing, IMO.

Without the scale POWER had back then (i.e. used heavily by IBM itself, rather than IBM pushing Intel Xeon systems to its own customers), Sony couldn't afford the additional premium of going it alone. x86_64 was really the only option left on the table, IMHO.
 
LOL remember back in the day on GAF people were saying things like “Xbox 360: 720p, PS3: 1080p”. The hype was as exciting as the reality was disappointing.

It’s safe to say that had Sony known how Cell would pan out, PS3 would’ve been something completely different. Just imagine if you went back in time and told them:

- PS3 will launch at $600 and still be sold at a significant loss

- multiplatform games will mostly run worse than on Xbox

- it’s not good for graphics rendering, you’ll still need a dedicated GPU

- it’ll never be used for many of its intended use cases (cell phones, laptops, TVs, Blu-Ray players, etc)

- PS3 will be in 3rd place behind Nintendo and MS, largely due to its price + the difficulty developing for it, eventually pulling ahead late in the generation

- but hey, after several years we’ll get a few really awesome looking games once our devs figure out how to use Cell to its full potential…. Just in time to switch to our “budget gaming PC in a box” PS4 architecture!
 

winjer

Member
I agree, not identical, but the heterogeneous angle based on power efficiency matches the goals stated in IBM's Cell BE introduction PDF from 2005. And let's not forget the SPUs were far more than an ASIC: they could run the full POWER instruction set, just slowly unless you fed them code that leaned towards the expected condition of an if statement.

The goal of the Cell was never to be power efficient. It was to churn out a ton of FP instructions. Alder Lake and the Cell are nothing alike.
And the Cell was not some groundbreaking CPU arch that paved the way for the future. It was a dead end that no one has copied since, especially for console gaming.

Without the scale POWER had back then (i.e. used heavily by IBM itself, rather than IBM pushing Intel Xeon systems to its own customers), Sony couldn't afford the additional premium of going it alone. x86_64 was really the only option left on the table, IMHO.

All Sony had to do was buy CPUs from Intel or AMD. They could probably have gotten a couple of dual-core Athlon 64s for less than one Cell.
And they would have gotten an out-of-order CPU that was many times simpler to program for. It would be slower than the Cell, but then Sony could have gotten a more up-to-date and more powerful GPU to compensate.
 
I don't entirely agree with the OP but I think he's right to some extent.

I would mainly attribute the success of the PlayStation first parties to Sony's proficiency in studio management and allocation of resources, as well as having a clear vision of where their gaming division was headed from 2010 onwards.

Mark Cerny's design philosophy for the PS4/5 is also a huge factor in this. The jump in ease of game development from PS3 to PS4 was massive and may even have streamlined parts of development, because programmers now wasted little time getting the graphics engine up and running. I expect this has gotten even easier from PS4 to PS5, as DF and other developers have stated that the PS5 is ridiculously easy to develop on.

There will still be small issues, such as porting first-party titles to PC, especially when the games take advantage of the console's more custom features such as the unified memory pool and GPGPU compute (as was the case with porting GOW to PC). I expect Sony's acquisition of Nixxes to help mitigate these issues.
 
All Sony had to do was buy CPUs from Intel or AMD. They could probably have gotten a couple of dual-core Athlon 64s for less than one Cell.
And they would have gotten an out-of-order CPU that was many times simpler to program for. It would be slower than the Cell, but then Sony could have gotten a more up-to-date and more powerful GPU to compensate.
Yeah, IMO the best thing you can say about the Cell is that it was one of those ideas that seemed really promising, and the only way to learn it wouldn't pan out was to try it. So thanks to STI for going all in and exploring that dead end of computing. Thankfully they all survived and recovered.

Trying to spin it as something ahead of its time that laid the groundwork for modern computing just seems like revisionist history to me.
 

PaintTinJr

Member
The goal of the Cell was never to be power efficient. It was to churn out a ton of FP instructions. Alder Lake and the Cell are nothing alike.
And the Cell was not some groundbreaking CPU arch that paved the way for the future. It was a dead end that no one has copied since, especially for console gaming.



All Sony had to do was buy CPUs from Intel or AMD. They could probably have gotten a couple of dual-core Athlon 64s for less than one Cell.
And they would have gotten an out-of-order CPU that was many times simpler to program for. It would be slower than the Cell, but then Sony could have gotten a more up-to-date and more powerful GPU to compensate.
Go back and read IBM's documentation: it was very power efficient and was expected to be where ARM is today, in every device, which again is another reason why it wasn't continued. Scaling individual core clockspeeds in real time (as a controlled-failure feature for mission-critical computing; the space program?) was likewise measured around power efficiency, and the supercomputer that held the top spot for years using tens of thousands of Cell BEs was also the green-computing winner for years too.
 
The thing about using examples to prove hardware prowess is that we don't really know whether those PS1 examples are good exploitations of the hardware. There are lots of examples of PS1 games looking better than their Saturn versions, but do we always chalk it up to "lazy/inexperienced devs" only in those cases, and ignore that maybe the PS1 ports could have been better too?
Sure, but as you say, we will never know. We will also never know whether Naughty Dog could have done TLOU better on the 360, because it was never tried.
But we still compare PS3 exclusives with 360 ones.
It's all just a bit of fun and conjecture.
 

Neff

Member
Nah, it was just awkward and inappropriate. Not to mention it contributed to the premium manufacturing costs of the PS3, which almost bankrupted Sony.

That said, while multiplats tended to suffer on PS3 compared to the 360 for the first few years, there were some examples where the PS3 set the standard. FFXIII in particular was jaw-dropping on PS3.

Sony from 2009 onwards were at their best in a lot of ways, and that is what set the stage for the PS4.

Yep, I think at some point they lost sight of what made PlayStation so successful in the first place. For better or worse (mostly worse in Sony's case), the PS3 changed the gaming landscape more than the 360 did. There's a reason the PS3 was the last Kutaragi PlayStation. The PS4 was everything the PS3 wasn't and should have been: streamlined, profitable, and easy to develop for.
 

winjer

Member
Go back and read IBM's documentation: it was very power efficient and was expected to be where ARM is today, in every device, which again is another reason why it wasn't continued. Scaling individual core clockspeeds in real time (as a controlled-failure feature for mission-critical computing; the space program?) was likewise measured around power efficiency, and the supercomputer that held the top spot for years using tens of thousands of Cell BEs was also the green-computing winner for years too.

Regardless of whether it was power efficient or not, the goal of the Cell was not the same as Alder Lake's.
The Cell is in no way an ancestor of Alder Lake. It was just a complex and expensive FPU.
 

Arioco

Member
The heterogeneous use of P-cores and E-cores is not comparable to the Cell CPU. They have very different goals and work in very different ways.
Those E-cores can do almost everything the P-cores can do, but slower and with lower power usage.
The SPE was intended as a big FPU. In a way, it's closer to what we understand as a compute unit of a GPU than to a CPU core.

IBM is still developing the POWER arch and releasing CPUs. For example, Power10 was introduced in 2021.
x86 has always been the default arch for PCs. Even when IBM invented the PC, it used x86.
But if Sony had wanted to keep using the PowerPC arch, they could have hired IBM, as they did for the PS3.

The Xenos was more advanced and more powerful than the RSX.
In fact, the RSX was another cause of the PS3's technical issues, because of its dedicated vertex and pixel shader units.
On a GPU with unified shaders, each unit can do either vertex or pixel calculations. This means it can adjust its load to the scene being rendered.
In a scene with more vertices to shade, it can dedicate more units to that task. And when a scene is heavier on pixel shaders, it can switch to that.
But with a fixed distribution of vertex and pixel shaders, in a scene with fewer vertices to shade some vertex shading units will go idle, and they are no help to the pixel shading units.
There is a reason all modern GPUs use a unified shader arch, doing all sorts of work at any given time. The RSX was a GPU of a dying breed; after 2007, neither AMD nor Nvidia released another arch with dedicated shader units.


And that's the reason why the 360 could push many more triangles than the PS3. In some games developers had to cut back large crowds of enemies and the like when porting to PS3; Ninja Gaiden 2, for instance. When a game was originally designed around the 360's hardware, porting it directly to PS3 could tank performance, and the results were usually pretty bad. When a game was designed with PS3 as the lead platform, on the other hand, the 360 proved way more flexible and the results were much better.
 
LOL remember back in the day on GAF people were saying things like “Xbox 360: 720p, PS3: 1080p”. The hype was as exciting as the reality was disappointing.

It’s safe to say that had Sony known how Cell would pan out, PS3 would’ve been something completely different. Just imagine if you went back in time and told them:

- PS3 will launch at $600 and still be sold at a significant loss

- multiplatform games will mostly run worse than on Xbox

- it’s not good for graphics rendering, you’ll still need a dedicated GPU

- it’ll never be used for many of its intended use cases (cell phones, laptops, TVs, Blu-Ray players, etc)

- PS3 will be in 3rd place behind Nintendo and MS, largely due to its price + the difficulty developing for it, eventually pulling ahead late in the generation

- but hey, after several years we’ll get a few really awesome looking games once our devs figure out how to use Cell to its full potential…. Just in time to switch to our “budget gaming PC in a box” PS4 architecture!
Throw in:

- The money thrown at 1st party games to keep the PS3 alive meant that the PS3, as a whole, was a financial loss that ERASED the profits the PS2 had earned. And had the PS4 not succeeded, the PlayStation brand could have ended.

PS3 is like that classic example of a family amassing a fortune over two generations, only to lose it all because their third gen heir was an idiot. The fact that the 4th gen clawed their way back up doesn't undo the sins of the 3rd gen.
 

assurdum

Member
If Sony had been smart, they could have gone to ATI and gotten something like the Xenos GPU that equipped the X360.
It would probably have cost the same as the cut-down 7900GT from nVidia, but it would have been more advanced in features and performed better.
Don't know why Sony opted for nVidia. Rumours said an external GPU wasn't even expected in the PS3, just a derivative of the old PS2 GPU. It seems first parties stopped this craziness almost at the last minute.
 
Throw in:

- The money thrown at 1st party games to keep the PS3 alive meant that the PS3, as a whole, was a financial loss that ERASED the profits the PS2 had earned. And had the PS4 not succeeded, the PlayStation brand could have ended.

PS3 is like that classic example of a family amassing a fortune over two generations, only to lose it all because their third gen heir was an idiot. The fact that the 4th gen clawed their way back up doesn't undo the sins of the 3rd gen.
Sony wasn't that off the mark with the idea of a more expensive and more capable console, with tons of features the competitor didn't have (free online, Blu-ray, HDD, HDMI, Wi-Fi, rechargeable controller, motion controller, OtherOS, and the ability to natively run PS1 and PS2 games without emulation).

The problem was that their plans for a more expensive console didn't result in a clearly more powerful console and pretty much everything that could've gone wrong did:
- Terrible PR fumbles.
- It released one year later.
- The Cell was hard to develop for and didn't live up to the hype.
- The GPU was underpowered.
- The SixAxis sucked.
- The split RAM pool was a mistake.
- It was really expensive to manufacture.
- OtherOS meant people were buying PS3s to do things other than gaming; since the console was sold at a loss, this was the last thing Sony wanted.
- They had to cut features to lower the cost (resulting in key features like backwards compatibility being removed and them getting sued).
- The parent company was in terrible financial shape during all of this.
- Later on they got hacked and had those network outages that lasted weeks.
 
Last edited:

winjer

Member
Don't know why Sony opted for nVidia. Rumours said an external GPU wasn't even expected in the PS3, just a derivative of the old PS2 GPU. It seems first parties stopped this craziness almost at the last minute.

Maybe brand awareness.
But with Sony, like with MS, it seems the relation was not that good.
I remember hearing about MS having trouble with nVidia when they decided to shrink the process node for the original Xbox's GPU, something every console maker has done to produce a cheaper, slimmer version of their console.
MS expected to cut costs, since the GPU would be cheaper to make, but nVidia wanted to keep the same price.
I don't know what troubles Sony had with nVidia, but the RSX was the only time they partnered up.

I guess there is a big reason why nVidia only made one console for MS and only one for Sony, while AMD/ATI has already made several. Being a good partner in ventures like this is very important.
 
Last edited: