
Would the PS3 have worked out better if it launched in 2007?

Would it have been better or worse for Sony if they had launched the PS3 in 2007?

  • Yes, PS2 sales could hold them over, and developers would have liked the stronger PS3

    Votes: 10 12.2%
  • No, they couldn’t afford to let 360 be on the market an extra year

    Votes: 56 68.3%
  • No, they needed the PS3 out to win the Blu-ray vs HD DVD format war

    Votes: 36 43.9%

  • Total voters
    82
They dropped the dual Cell/Cell graphics only idea very early in the whiteboard phase, they had been developing an actual GPU with Toshiba but it wasn't up to snuff so RSX was dropped in.

The problem with Cell-only graphics would have mirrored the problems with Larrabee. Going with the many-CPU-core approach leaves you lacking many of the things that make GPUs fast for graphics, like Render Output Units and Texture Mapping Units, hardware dedicated to pushing out a graphics pipeline. Nearer to the end of Larrabee's life, as it was reworked into Xeon Phi, they were playing with adding things like ROPs back into it, but ultimately a GPU-ey CPU just wasn't as good at graphics as a GPU.

Could have done unified memory with a third party GPU, no reason preventing that, but the main thing that was lacking was planning out the GPU as early and making it as important a citizen as the CPU. In the end what we got was Cell making up for the RSX's shortcomings in the games that used it best, but it wasn't like it was a second equal GPU in its own right.

I think you've touched on a really important point. The RAM split was a consequence of using a rushed RSX part. Remember, the Rambus Flex I/O bandwidth was something like 76GB/sec and Cell contained a BEI block. Many configurations were possible given some creativity and time.
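For anyone curious where that ~76 GB/s figure comes from, here's a rough back-of-envelope sketch (hedged: the 7-outbound/5-inbound byte-lane split at 6.4 GT/s is the commonly published Cell BE FlexIO configuration; real, usable bandwidth is lower and depends on how the lanes are configured):

```cpp
// Rough derivation of the FlexIO aggregate bandwidth figure quoted above.
// Assumption: 7 outbound + 5 inbound byte-wide lanes at 6.4 GT/s (the commonly
// published Cell BE numbers); actual usable bandwidth is configuration-dependent.
constexpr double kGTPerSecPerLane = 6.4;                            // giga-transfers/s, 1 byte per transfer
constexpr double kOutboundGBps    = 7 * kGTPerSecPerLane;           // ~44.8 GB/s out of Cell
constexpr double kInboundGBps     = 5 * kGTPerSecPerLane;           // ~32.0 GB/s into Cell
constexpr double kAggregateGBps   = kOutboundGBps + kInboundGBps;   // ~76.8 GB/s total
```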

Given equivalent GPUs, Cell could have been used in some really interesting ways over Microsoft's design.

In terms of floating point, cell was better than any cpu for the whole generation on the PC side. Not that pc CPU’s were designed for graphics like cell of course, but hey.

Cell was kind of the last time an architecture focused on cpu over gpu, because afterwards gpu became more important in the PC space, or even with Xbox 360 before.

Right. That generation was the last before the move to large SoC designs. So the philosophical question when designing these things was how to allocate die space. With the CPU, the decision to concentrate on floating point and high-bandwidth communications was interesting and, in the context of its time, I'd say correct.
 

rnlval

Member
I doubt that they could have changed a lot. If they wanted, they could have launched the console with 512 MB RAM in 2006, since the development kits came with that amount. I really wish they had made it happen, because it was obvious there were big limitations with only 256 MB of RAM. In the early years about 110 MB of RAM was reserved for the OS. That was just fucking huge. Later, through optimization, they reduced this to 70 MB and finally 56 MB.

For the rest of the hardware, most things were finalized so they could not change much about it. If anything, Sony was going with a Toshiba GPU in early hardware development, but something happened and they had to switch late in development to Nvidia's GPU. Nvidia didn't have the time to develop something more specialized for consoles, so they went with an off-the-shelf GeForce 7800. I wonder, had Toshiba's GPU solution been finished as planned, how much more powerful it would have been than Nvidia's GPU.
NVIDIA's R&D resources were mostly allocated to GeForce 8 (G8x). G7X's SIMD design was dumped for G8X's "many scalars" MIMD design.
 

rnlval

Member
In terms of floating point, cell was better than any cpu for the whole generation on the PC side. Not that pc CPU’s were designed for graphics like cell of course, but hey.

Cell was kind of the last time an architecture focused on cpu over gpu, because afterwards gpu became more important in the PC space, or even with Xbox 360 before.
In terms of floating-point features, the PS3 SPU's FPU was designed for the weaker accuracy that graphics work tolerates, and should NOT be compared to the fully supported IEEE-754 of Intel SSE/SSE2.
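To make the accuracy difference concrete, here's a minimal sketch (hedged: the code itself only shows IEEE-754 subnormal behaviour on an SSE/SSE2-class CPU; the SPU behaviour described in the comments is the commonly documented single-precision limitation, not something this snippet can demonstrate directly):

```cpp
#include <cstdio>
#include <cfloat>

int main() {
    // A subnormal (denormal) single-precision value: representable on
    // IEEE-754-compliant FPUs such as Intel SSE/SSE2 with default settings.
    float tiny = FLT_MIN / 4.0f;
    std::printf("%g\n", tiny);   // prints a non-zero subnormal here

    // The SPU's single-precision unit flushes subnormals to zero, rounds only
    // toward zero, and has no NaN/Inf handling -- fine for graphics-style math,
    // but not interchangeable with fully IEEE-754-compliant SSE/SSE2 results.
    return 0;
}
```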




You can NOT do pointer-based data transfers between CELL's PPE and SPU. SPU is NOT a CPU since it can NOT do pointer-based data transfers like a normal multi-core CPU.

You can do pointer-based data transfers with Xbox 360's PPE and ATI Xenos GpGPU. "Fusion" HSA processing is advanced in Xbox 360 when compared to PS3.

You can do pointer-based data transfers with X86-64 and AMD GCN under AMD's "fusion" HSA. AMD GCN's floating-point has the full IEEE-754-2008 support (like Intel SSE) and it's superior to PS3 SPU's floating-point implementation. Proper fusion between CPU and GpGPU should have the same floating-point results.

The majority of floating-point processing on DX10 gaming PC is done on DX10 GpGPU.
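To illustrate the programming-model difference being argued here, a minimal sketch (hedged: spu_dma_get below is a hypothetical stand-in for the SPU's MFC DMA step, not a real SDK call; the real Cell SDK exposes MFC DMA intrinsics for this, but the point is the contrast between the models, not exact API names):

```cpp
#include <cstdint>
#include <cstring>

// --- Shared-address-space model (Xbox 360 PPE+Xenos style, HSA, x86-64) ---
// Producer and consumer can hand each other plain pointers into one address space.
float sum_shared(const float* data, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += data[i];   // dereference directly
    return s;
}

// --- SPU-style model (hedged sketch) ---
// An SPU computes only on its small local store; main-memory data must be staged
// in explicitly before use. spu_dma_get() is a hypothetical stand-in for that step.
static float local_store[4096];   // stand-in for a slice of SPU local store

void spu_dma_get(void* ls_dst, std::uintptr_t effective_addr, std::size_t bytes) {
    // Placeholder modelling "DMA from main memory into local store".
    std::memcpy(ls_dst, reinterpret_cast<const void*>(effective_addr), bytes);
}

float sum_spu_style(std::uintptr_t effective_addr, std::size_t n) {
    float s = 0.0f;
    for (std::size_t done = 0; done < n; done += 4096) {
        std::size_t chunk = (n - done < 4096) ? (n - done) : 4096;
        spu_dma_get(local_store, effective_addr + done * sizeof(float),
                    chunk * sizeof(float));              // explicit staging step
        for (std::size_t i = 0; i < chunk; ++i) s += local_store[i];
    }
    return s;
}
```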


------

AMD GCN's Xbox 360 origins. RDNA 2 evolved from GCN.

PC GCN 1.0 has two ACE units with a total of 16 queued contexts.

PC Hawaii/Tonga GCN 1.1 and PS4 have up to eight ACE units with a total of 64 queued contexts.

Game console GCNs have an extra Graphics CP (Command Processor).

ATI's fusion concept is the genesis for AMD's GCN fusion with CPU capability

CPU and GPU are pointer-copy capable, which is not possible for PS3's "teh CELL", i.e. the SPU is incapable of reading the CPU's pointers.

This is why Xbox 360's fusion model is the real CPU/GPU fusion when compared to "teh CELL".

IBM CELL's CPU and SPUs couldn't exchange pointers.

IBM CELL's SPUs couldn't exchange pointers with NVIDIA's RSX.


For NVIDIA CUDA GpGPU, refer to https://devblogs.nvidia.com/unified-memory-cuda-beginners/
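To go with the link above, a minimal CUDA C++ sketch of the pointer-sharing model it describes (hedged: bare-bones, no error checking, assumes a unified-memory-capable GPU; compile with nvcc):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;                       // GPU dereferences the same pointer
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));   // one allocation, one pointer, visible to CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;    // CPU writes through the pointer
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();                       // wait for the GPU before the CPU reads

    std::printf("data[0] = %f\n", data[0]);        // CPU reads the GPU's result, no explicit copy
    cudaFree(data);
    return 0;
}
```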
 

rnlval

Member
The Toshiba processor would have been quite insane when 100% maxed out... the problem being it would have been a HUGE mistake in terms of multiplatform support as even the first party team was bitching about how complicated it was. Sony's issue was buying into Nvidia's old crap when they could have had an ATI processor like Xenos, or perhaps negotiated better with Nvidia to get an 8xxx series chip. I'm guessing Nvidia wouldn't budge and it'd need to be 2007 to get those graphics options.

Even if they launched in 2006, they still could have gone with ATI and been far better off. EDIT: assuming Sony would have had the option to purchase a Xenos chip or something with unified architecture, and that it wasn't kept from them due to MS's partnership with ATI.

But yeah, the 512MB/256MB split would have been a huge jump (esp. since 512MB of XDR would mean a 256-bit bus), but better memory solutions than that could have been had. A single, fast pool would have been ideal.
ATI Xenos' implied, pointer-based data transfer method wouldn't work optimally with CELL's SPE (a DSP with a manual data transfer method).
 

rnlval

Member
They dropped the dual Cell/Cell graphics only idea very early in the whiteboard phase, they had been developing an actual GPU with Toshiba but it wasn't up to snuff so RSX was dropped in.

The problem with Cell-only graphics would have mirrored the problems with Larrabee. Going with the many-CPU-core approach leaves you lacking many of the things that make GPUs fast for graphics, like Render Output Units and Texture Mapping Units, hardware dedicated to pushing out a graphics pipeline. Nearer to the end of Larrabee's life, as it was reworked into Xeon Phi, they were playing with adding things like ROPs back into it, but ultimately a GPU-ey CPU just wasn't as good at graphics as a GPU.

Could have done unified memory with a third party GPU, no reason preventing that, but the main thing that was lacking was planning out the GPU as early and making it as important a citizen as the CPU. In the end what we got was Cell making up for the RSX's shortcomings in the games that used it best, but it wasn't like it was a second equal GPU in its own right.

As far as upclocking the RSX: the PS3 was already really pushing power, weight, and size for the era (though the PS5 is decently larger now, if lighter), and that probably would have also pushed down yields. Ultimately they just should have planned out the GPU as an equal citizen to the CPU; even an RSX with unified shaders would have gone a long way.

FYI, GCN is AMD's "Larrabee" done the right way, with X86-64 pointers, X86-64 64-byte cache lines, X86-64 memory page size support, and full support for IEEE-754-2008 (like Intel SSE/AVX).



Larrabee's very wide SIMD design still exists with Intel's AVX-512 and AMD's incoming Zen 4.
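As a small illustration of that lineage, here's a hedged sketch of the same very-wide-SIMD style using AVX-512 intrinsics (assumes an AVX-512F-capable CPU; compile with -mavx512f):

```cpp
#include <immintrin.h>

// Dot product using 512-bit registers and fused multiply-add: sixteen float
// lanes per instruction, the same "lots of lanes" idea Larrabee was built around.
float dot_avx512(const float* a, const float* b, int n) {
    __m512 acc = _mm512_setzero_ps();
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        acc = _mm512_fmadd_ps(va, vb, acc);   // acc += va * vb, 16 lanes at once
    }
    float lanes[16];
    _mm512_storeu_ps(lanes, acc);
    float sum = 0.0f;
    for (int k = 0; k < 16; ++k) sum += lanes[k];
    for (; i < n; ++i) sum += a[i] * b[i];    // scalar tail for leftover elements
    return sum;
}
```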
 
They dropped the dual Cell/Cell graphics only idea very early in the whiteboard phase, they had been developing an actual GPU with Toshiba but it wasn't up to snuff so RSX was dropped in.

The problem with Cell-only graphics would have mirrored the problems with Larrabee. Going with the many-CPU-core approach leaves you lacking many of the things that make GPUs fast for graphics, like Render Output Units and Texture Mapping Units, hardware dedicated to pushing out a graphics pipeline. Nearer to the end of Larrabee's life, as it was reworked into Xeon Phi, they were playing with adding things like ROPs back into it, but ultimately a GPU-ey CPU just wasn't as good at graphics as a GPU.

So they did have a dual Cell setup at least in the early planning stages? Interesting. There's another reason why I think they might've dropped the idea, as there were already two other prior devices on the market that tried a dual-processor setup in the 32X and Sega Saturn, both of which had lots of technical hurdles to jump through. The 32X is probably the more apt comparison there because it used one of its SH-2s for graphics rendering, which is basically what it sounds like Sony would have had the other Cell do in this early design.

Which, if so, gives good reason to suspect why they would have added things like ROPs and TMUs, if they wanted hardware in there to help with a traditional graphics pipeline. My thinking, though, is that they were more in the mindset of a non-traditional graphics rendering pipeline. Conceptually, if you think about it, they were likely thinking of a compute-driven graphics pipeline, basically what we are going to see the industry shift to over the next few years with the advent of mesh shaders.

I'm not saying Sony had an idea of using a 2nd Cell as a mesh shading unit for graphics rendering outside the scope of traditional graphics, but considering the actual Cell in final PS3 hardware did benefit graphics rendering to some extent (mainly for 1P games), it's possible they considered something outside of the traditional graphics pipeline which in concept Cell already had some adaptability for. The question is if it would've been effective enough especially without a traditional GPU, and maybe the concept of a graphics processor going with a compute-driven pipeline versus a fixed function pipeline, was too early to do for a mass market console with that type of design.

Could have done unified memory with a third party GPU, no reason preventing that, but the main thing that was lacking was planning out the GPU as early and making it as important a citizen as the CPU. In the end what we got was Cell making up for the RSX's shortcomings in the games that used it best, but it wasn't like it was a second equal GPU in its own right.

I'd say several parts of the design weren't well thought out (in terms of synergy) beyond just hastily putting in the Nvidia GPU. The EE for BC, for example; I don't think that served any purpose other than BC. Maybe they should've integrated EE logic into the Cell itself and built at least some parts of Cell's design with that at the heart, so they could've had EE compatibility in the Cell directly and saved on costs versus needing both a Cell and an EE chip.

After all, Sony knew they were going to put in hardware-based BC anyway, it became a standard for them by that time, and they knew they'd need EE logic in the PS3 design. Planning out the Cell architecture to incorporate that logic into its design from the get-go (and enabling only that logic in a BC Mode when appropriate PS2 and PS1 hardware required it) would've gone a long way IMO, that way you could still have EE logic for BC while also having it present for PS3 games to leverage as part of the larger Cell architecture.

As far as upclocking the RSX: the PS3 was already really pushing power, weight, and size for the era (though the PS5 is decently larger now, if lighter), and that probably would have also pushed down yields. Ultimately they just should have planned out the GPU as an equal citizen to the CPU; even an RSX with unified shaders would have gone a long way.

Yeah, one of the biggest design mistakes with PS3 was lack of unified shaders on the GPU. They really dropped the ball there as the industry was moving towards unified shaders at the time.
 
Microsoft purposely kept the HD DVD vs Blu-ray war going to keep Blu-ray from establishing itself, hoping streaming would prevent a new physical media format.

They were right. I know one person with a Blu-ray collection.
This doesn't make a lot of sense. Firstly, I don't think ANYONE was thinking of streaming at that point; if anything it would have been digital. Streaming and digital are not necessarily one and the same. Secondly, if that was the intent, Microsoft wouldn't have been the only company wanting to prevent a new physical media format from being established. Thirdly, if they really wanted to do this, why would they back HD-DVD, another (at the time) new physical media format? Would that not compete directly against their idea for a streaming future?

Additionally, streaming technology wasn't really up to snuff for films or television content in the mid-00s', and it wouldn't be until years after Microsoft more or less dropped HD-DVD as a backer before services like Netflix started to introduce film/television streaming to the mainstream. Lastly, the HD-DVD/Blu-Ray format war was for entertainment properties, extending to game consoles.

If there was any motivation for Microsoft to stunt Blu-ray for HD-DVD, it wasn't for streaming of entertainment content they didn't make anyway; it would've been for the purpose of limiting a runaway effect for the PS3 in the mass market from having the de facto new physical media format as default support in its then-next-gen console, similar to how DVD support was a massive factor in helping the PS2 lead the 6th-gen console market.

As you can tell, they didn't need to provide support for streaming, either directly or by proxy, to try doing that. They just needed to support Blu-ray's direct competitor, which they did. The idea of a tech company wanting to dissuade adoption of a new physical media format in order to push streaming doesn't even fit Microsoft's corporate mindset of the '00s (their pivot to services didn't happen until the early 2010s, by which point the Blu-ray/HD-DVD format war was already over); that's something that would seem more up Apple's alley.

After all, iTunes was the leading digital storefront of that time, at least for music, and I wouldn't be surprised if it was one of the first to start pushing digital releases for film and television content, too.
 
In terms of floating-point features, the PS3 SPU's FPU was designed for the weaker accuracy that graphics work tolerates, and should NOT be compared to the fully supported IEEE-754 of Intel SSE/SSE2.

Are you serious? It's a game console. It's designed for speed in game programming tasks, not absolute accuracy.

That said, unlike the EE, Cell's SPEs have a double precision unit that is fully IEEE854 compliant, you just get reduced speed.
 

rnlval

Member
Are you serious? It's a game console. It's designed for speed in game programming tasks, not absolute accuracy.

That said, unlike the EE, Cell's SPEs have a double precision unit that is fully IEEE854 compliant, you just get reduced speed.
READ the trigger post AGAIN

StateofMajora said:
In terms of floating point, cell was better than any cpu for the whole generation on the PC side. Not that pc CPU’s were designed for graphics like cell of course, but hey.

Cell was kind of the last time an architecture focused on cpu over gpu, because afterwards gpu became more important in the PC space, or even with Xbox 360 before.


------------------------------------






I expected more from "teh CELL" when Jaguar @ 1.6 GHz has superior IPC. AMD Jaguar has slightly inferior IPC when compared to Intel Core 2.
The comparison is between a 6-core Jaguar @ 1.6 GHz and 5 SPUs @ 3.2 GHz.

There are 7 Jaguar cores available to the game programmer on both Xbox One and PS4.

There are 6 SPUs + 1 PPE available to the game programmer on PS3.

FYI, PS3 CELL has about 25 GFLOPS FP64 which is inferior. IBM PowerXCell 8i is different from PS3's CELL.

PS3 CELL's 25 GFLOPS FP64 is useless for the game's graphics rendering. Hint: Most PC gaming GpGPUs are designed for large-scale FP32 render workloads.

CELL's GFLOPS marketing is BS. With raster game render, PS3's raw GFLOPS is about GTX 8800 level, but GTX 8800 + Core 2 Duo/Quad 3.2Ghz crushed it.
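For context on where the headline GFLOPS numbers come from, a hedged back-of-envelope sketch (assumes the usual peak-rate math of SIMD lanes x 2 FLOPs per fused multiply-add x clock; real, sustained throughput sits well below these figures):

```cpp
// Theoretical peak single-precision throughput in GFLOPS for SIMD units
// issuing one fused multiply-add (2 FLOPs per lane) every cycle.
double peak_gflops(int units, int simd_lanes, double clock_ghz) {
    const int flops_per_lane_per_cycle = 2;   // multiply + add from an FMA
    return units * simd_lanes * flops_per_lane_per_cycle * clock_ghz;
}

// How the marketing-style numbers are usually derived (illustrative only):
//   peak_gflops(6, 4, 3.2) ~= 153.6  // the 6 game-visible SPUs, 4-wide FP32
//   peak_gflops(8, 4, 3.2) ~= 204.8  // all 8 SPUs, the often-quoted Cell figure
// None of this says anything about sustained, real-world throughput.
```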

With PS5, AMD easily created CELL SPU-like DSP with AMD's own graphics core CU IP. There's a reason why Sony effectively fired IBM.

Again, comparing the floating-point specs: notice "proper handling of NaN/Inf/Zero and full de-normal support in hardware for SP and DP" on the AMD GCN side. PS3 CELL's SP and DP IEEE-754 handling is inferior to AMD GCN's IEEE-754-2008 support, which Intel SSE/AVX also provides.
 

So, we're comparing within generations, and it's the [34] of Xbox 360 versus the [105] of PlayStation 3, and you're not impressed with the performance of Cell? Again, come on. Be serious; having 3X the performance on the same lithography node is impressive.

I expected more from "teh CELL" when Jaguar @ 1.6 Ghz has superior IPC.

Cell wasn't designed for these workloads. It emphasizes power efficiency, favors computational throughput and bandwidth.

Also, can you stop using stupid phraseologies like "teh CELL". I'd appreciate it.

FYI, PS3 CELL has about 25 GFLOPS FP64 which is inferior. IBM PowerXCell 8i is different from PS3's CELL.

Obviously.

PS3 CELL's 25 GFLOPS FP64 is useless for the game's graphics rendering. Hint: Most PC gaming GpGPUs are designed for large-scale FP32 render workloads.

I'm the one making the argument that game workloads don't even require a full commitment to IEEE-754, which you can't seem to grasp. The Emotion Engine and Cell both survived extremely well! I was just noting that it's there for use and was actually performant for its time to boot.

CELL's GFLOPS marketing is BS. With raster game render, PS3's raw GFLOPS is about GTX 8800 level, but GTX 8800 + Core 2 Duo/Quad 3.2Ghz crushed it.

With PS5, AMD easily created CELL SPU-like DSP with AMD's own graphics core CU IP. There's a reason why Sony effectively fired IBM.

Ok, I have a feeling we're going nowhere. Also, the move away from Cell and STI was more complicated than just "firing IBM".

Have a good night.
 

rnlval

Member
1. So, we're comparing within generations, and it's the [34] of Xbox 360 versus the [105] of PlayStation 3, and you're not impressed with the performance of Cell? Again, come on. Be serious; having 3X the performance on the same lithography node is impressive.



2. Cell wasn't designed for these workloads. It emphasizes power efficiency, favors computational throughput and bandwidth.

Also, can you stop using stupid phraseologies like "teh CELL". I'd appreciate it.



3. Obviously.



4. I'm the one making the argument that game workloads don't even require a full commitment to IEEE-754, which you can't seem to grasp. The Emotion Engine and Cell both survived extremely well! I was just noting that it's there for use and was actually performant for its time to boot.



Ok, I have a feeling we're going nowhere. Also, the move away from Cell and STI was more complicated than just "firing IBM".

5. Have a good night.
1. That's bullsh*t.

https://www.gamasutra.com/view/feature/132297/processing_the_truth_an_interview_.php?print=1

But can Shippy's insight on both console's processors finally answer the age-old debate about which console is actually more powerful?

"I'm going to have to answer with an 'it depends,'" laughs Shippy, after a pause. "Again, they're completely different models. So in the PS3, you've got this Cell chip which has massive parallel processing power, the PowerPC core, multiple SPU cores… it's got a GPU that is, in the model here, processing more in the Cell chip and less in the GPU. So that's one processing paradigm -- a heterogeneous paradigm."

"With the Xbox 360, you've got more of a traditional multi-core system, and you've got three PowerPC cores, each of them having dual threads -- so you've got six threads running there, at least in the CPU. Six threads in Xbox 360, and eight or nine threads in the PS3 -- but then you've got to factor in the GPU," Shippy explains. "The GPU is highly sophisticated in the Xbox 360."

He concludes: "At the end of the day, when you put them all together, depending on the software, I think they're pretty equal, even though they're completely different processing models."


------------------

2. "Teh CELL" still has inferior real-world IPC.


3. IBM PowerXCell 8i wasn't mass-produced into mainstream markets and it was different from PS3 CELL. For in-order processing from IBM, PowerPC A2 replaced it.

4. You haven't grasped that PC DX10 GpGPUs (VLIW or MIMD) and Xbox 360's Xenos GpGPU (MIMD) can handle the CELL SPU's (SIMD) graphics-grade SP FP accuracy. LOL. Furthermore, the CELL SPU lacks pointer-based "HSA Fusion" like Xbox 360's PPE+Xenos has.
 

rnlval

Member
Don't forget this beauty of a claim that the PS3 was 2 TF.

[image: PS3 marketing slide with RSX bandwidth figures and the 2 TFLOPS claim]
GeForce 7 texture units have hardware FP texture filtering, and the rasterization hardware resolves FP geometry into an integer-based pixel grid. Hint: that's the extra fixed-function hardware that separates a GPU from a DSP.
 

iHaunter

Member
The PS3's main issue was cost and difficulty to develop for. It had nothing to do with the launch window in any capacity.
 