
Opinion / Trailer / Platform: 15 Best Looking GameCube Games That Were Ahead of Their Time

chaseroni

Member
Nov 27, 2018
267
301
315
Regarding the video itself, there were some really nice looking games, but to me the pinnacle was Metroid Prime/Echoes. I just wish I could play them with true dual analog on Dolphin in 4K somehow. I can't go back to that archaic and awkward control scheme.
You should check out PrimeHack; it lets you do exactly that, plus mouse-and-keyboard controls. It's great.
 
  • Like
Reactions: Mopey bloke

SSfox

Member
Jan 7, 2018
2,072
9,305
485
RER, RE0, RE4, Zelda WW

I was so fucking blown away when I saw RE Remake on a TV, but I was poor AF, with no way to buy the game, let alone the console. At that time there was also Tekken 4, which I was also so horny for lol. Hard times of youth, but at least now it makes me appreciate things more.
 

Romulus

Member
Mar 21, 2019
5,998
7,293
555
That much I am sure of, but the OG Xbox does not have that many CoD titles that also came to Wii. Only Call of Duty 3, I believe.

Interesting that you know of Deadlight :) Yes, it had to drop some things, especially in the realm of texture quality, but it did highlight that Doom 3-esque visuals were possible. The same tech was seen in Stolen.

So, a facsimile, really.

I have given plenty of reasons why those titles didn't appear on GameCube. If they had really wanted it, you could have gotten something that looked and felt reasonably like those games on the Cube. But it wouldn't have been a direct port.

Which would still be a passable port by any measure. If Riddick, which was actually in development for PS2, had been released on that platform, then no doubt it would have looked similar to Deadlight.

That is your conclusion, which I don't share. It may look different, indeed, but not unacceptably so, if you ask me. By that same notion, any CoD Wii port is unacceptable because it loses all shader effects and runs at half the framerate any CoD player is used to. Yet the ports have their fans, not least because of their controls.

Doom 3 was more than just lighting; id Software mentioned the other consoles didn't have the GPU feature set when it came to the shaders. Deadlight is basically nothing but the lighting, and it's a small vertical slice at that. The biggest problem (and I keep saying this) is total RAM. The PS2 was extremely anemic when it came to RAM for these games, and let me emphasize this again: the Xbox had to SQUEEZE the game into 64 MB and it STILL suffered in places. FYI, Doom 3 on an Xbox with 2x the RAM can do 720p native, so that tells you just how important RAM is for a game like this.
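To put rough numbers on why resolution eats RAM: even just the render targets scale linearly with pixel count. The buffer formats and counts below are assumptions for illustration (32-bit color, 32-bit depth-stencil, double buffering), not Doom 3's actual configuration.

```python
# Rough framebuffer memory cost. Formats are illustrative assumptions:
# 4 bytes/pixel color, 4 bytes/pixel depth-stencil, double buffered.
def framebuffer_bytes(width, height, color_bpp=4, depth_bpp=4, buffers=2):
    color = width * height * color_bpp * buffers
    depth = width * height * depth_bpp
    return color + depth

mb = 1024 * 1024
print(framebuffer_bytes(640, 480) / mb)    # 480p-class target
print(framebuffer_bytes(1280, 720) / mb)   # 720p triples the pixel count, so triples the cost
```

Under these assumptions, 720p costs three times the buffer memory of 480p before a single texture or model is loaded, which is why extra RAM is what unlocks native 720p.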

Deadlight looks a half-generation apart and struggles to sustain its framerate while almost nothing is happening onscreen. So at that point you're getting Doom 2.5-ish, for lack of a better description, and even then I'm guessing it would have suffered in performance.
Riddick being in development for PS2 tells me even more. They likely scrapped it because of framerate. The Xbox is far faster, more powerful, and has lots more RAM than the PS2, and it still needed a dynamic resolution for Riddick that drops to 480i. So seeing what their vision turned out to be, it makes sense. There's not a single game on either PS2 or GameCube with complete code that looks like Doom 3 or Riddick, or HL2 for that matter when it comes to physics. Demos can be found all day, but when it came to actually putting the game out there, they just couldn't do it.
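Dynamic resolution of the kind described for Riddick boils down to scaling the render target against a frame-time budget. A toy sketch of the idea; the budget, step size, and resolution bounds are invented for illustration, not taken from any real engine.

```python
# Toy dynamic-resolution controller: shrink the render height when a
# frame runs over budget, grow it back when there is headroom.
# All numeric values are illustrative assumptions.
def adjust_height(height, frame_ms, budget_ms=16.7,
                  min_h=360, max_h=480, step=24):
    if frame_ms > budget_ms:
        height -= step                    # over budget: render fewer lines
    elif frame_ms < budget_ms * 0.85:
        height += step                    # comfortable headroom: climb back
    return max(min_h, min(max_h, height))

h = 480
h = adjust_height(h, frame_ms=21.0)  # heavy frame: resolution drops
h = adjust_height(h, frame_ms=12.0)  # light frame: resolution recovers
```

The point is that the drop to 480i-like line counts happens automatically on heavy scenes, which is why it reads as "unheard of" trickery for the era.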

I don't see the CoD Wii comparison because it wasn't a "60fps shooter" at that time. But these PC/Xbox games were benchmark graphical games, and developers wanted to showcase their technology onscreen. I'm sure that if any of these games could have run with any measure of success, they would have done it, because a lot of money was to be made; instead they ended up on the Xbox's tiny install base. Basically, all we have is tech demos that don't show level streaming, and failed attempts.
 
Last edited:
Dec 12, 2018
1,224
1,361
420
It didn't come to Wii for probably the same reason as the GameCube. By that time more powerful systems were available, and there were almost no PC shooters on Wii unless they were very casual. And the very few I remember ended up worse than the Xbox versions. This one comes to mind, and it's really the only true PC shooter I can think of.
To be fair, Far Cry on Wii was a piss-poor cash grab. They did the same with the Brothers in Arms games, which ran worse than on PS2. It wasn't the hardware; it was lazy developers in this instance.
 
Last edited:

Shodan09

Member
Jan 24, 2018
978
2,023
440
I've recently spent a bit of time refurbishing and getting the correct cables etc for my retro consoles. While it's not the purpose of this thread specifically, I see the conversation has turned this direction.

I think you can make a convincing argument for the Xbox being the most powerful because its GPU supported features the others didn't. It's really open and shut. What has surprised me more than anything playing through some games again, though, is the gulf between Xbox/GameCube and PS2.

PS2 had some good-looking games art-style-wise, but everything seems flatter, blurrier, less detailed, and slightly more polygonal.
 
  • Like
Reactions: NeoIkaruGAF

angelgs90

Member
May 8, 2015
80
161
450
That's a very interesting topic. What are the best options to get the best image quality possible of these consoles nowadays?
 

Shodan09

Member
Jan 24, 2018
978
2,023
440
That's a very interesting topic. What are the best options to get the best image quality possible of these consoles nowadays?

I've tried a few different things for different machines so I can tell you for the consoles I've investigated.

For very retro consoles up to and including the N64, an OSSC is great for getting an HDMI image with good quality. There may well be cheaper options, but I wanted the ability to tinker with settings etc. If you can get an HDMI-modded N64, that would provide the best image quality, but the mod is hard to find and very expensive. A cheaper option is to put in an RGB mod with a deblur feature. That gives great image quality, although you might be shocked at how pixelated N64 games actually were!

Dreamcast supports 480p over VGA, so that's quite a simple one. I did find I had to run the VGA through the OSSC to eliminate tearing on my TV; mileage may vary for others.

For PS2 I was lucky enough to have component cables from my PS3, and they give a great, clear picture. The only issue with PS2 is that most games appear to have used their own window sizes with black borders, relying on the overscan of old TVs to hide this. So you'll find that on a modern TV you get bars around all four sides of the image, which isn't ideal. Again, if you don't have component input on your TV, an OSSC will do the job and convert component to HDMI.
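The bordered image on flat panels comes down to the overscan safe area: games rendered a slightly smaller window expecting CRTs to crop the edges, and a modern TV shows the whole signal. Rough arithmetic; the ~5% per-edge crop is a common rule of thumb, not a measured value for any particular game.

```python
# Approximate the game window if a title targets a CRT "safe area",
# assuming a ~5% overscan crop per edge (rule of thumb, not measured).
def visible_window(width, height, overscan=0.05):
    return (round(width * (1 - 2 * overscan)),
            round(height * (1 - 2 * overscan)))

w, h = visible_window(640, 480)
# A full 640x480 signal then carries only a ~576x432 game window,
# which is exactly the black border you see on a modern display.
```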

I've tried two solutions for GameCube. My friend has actual official component cables, which are like gold dust these days. They worked the best, hands down. But you can also get an HDMI converter cable - I tried one from Kaico which worked reasonably well. There was a slight shimmer to the image and it appeared to be slightly off-centre, but it wasn't too bad.

For original Xbox I tried the official HDAV pack which plugs into the AV port on the console and gives you your usual component output ports. This worked the best. I also tried a Kaico HDMI converter but again had issues with flickering/shimmering and the image being off-centre.

Once you've got your console outputting through component you can then hook it up to the OSSC to get an HDMI signal.

Again, there may well be a cheaper HDMI option than the OSSC, but I wanted the ability to tinker with settings and timings etc.

I tried all this on an LG OLED and everything looked fantastic. I can try to take some pictures of games running if it will be any help - but off screen photos never seem to look the same as in real life.

Edit: I should add for GameCube games, if you have a first model Wii with the GameCube controller ports you can get component cables for Wii and output the GameCube games in 480p as long as they are NTSC or Japan versions. This is potentially a cheaper option if you have a Wii - I didn't.
 
Last edited:
  • Like
Reactions: Kataploom

angelgs90

Member
May 8, 2015
80
161
450
Wow, thanks for elaborating! I personally own PS1, PS2 and GameCube at home right now, and I don't own the component cables, so for my case I think the best option will be the OSSC for getting an HDMI image.
 

Shodan09

Member
Jan 24, 2018
978
2,023
440
Wow, thanks for elaborating! I personally own PS1, PS2 and GameCube at home right now, and I don't own the component cables, so for my case I think the best option will be the OSSC for getting an HDMI image.
I've had no issues with it; it's worked really well. In terms of inputs, it accepts VGA, SCART, and component, so as long as your console outputs through one of those you should be fine.

Others may be able to elaborate more and suggest OSSC alternatives etc. But like I say, it's worked extremely well for me.
 

nkarafo

Member
Nov 30, 2012
16,042
7,317
1,070
Silent Hill also had self-shadows on snowflakes, which was a clever trick if you ask me.
This particular thing was quite mind-blowing to me. I even made a thread about this a long time ago.

I don't think I ever saw particles like this casting shadows in any other game or on any other console since.
 
  • Like
Reactions: Shodan09
Jan 29, 2019
5,706
6,147
495
Name one game on GameCube that was such a spectacle that turned heads like no game could compare on a PS2 or Xbox back then.
For Xbox there is no argument to be had, but the PS2 was old news by the time the Cube came out, and it showed.

Just put the Resident Evil PS2 port beside the GameCube version; the performance and memory limitations of the PS2 were obvious.

However, it also had much less shading capability than the Xbox, so it could never do a lot of the Xbox games (still, it did pretty well and got close enough).
 

Romulus

Member
Mar 21, 2019
5,998
7,293
555
To be fair, Far Cry on Wii was a piss poor cash grab. They did the same with the Brothers in Arms games, which ran worse than on PS2. It wasn't the hardware, it was lazy developers in this instance.

I mean the ps2 was severely underpowered by comparison to Xbox, and the Xbox version of Brothers in Arms didn't run that great. Only makes sense the ps2 would be even worse, and it was.
There are not many FPS games to compare between Wii and Xbox; COD3 and Far Cry are basically it. The Xbox version had better visuals and framerate in both. There are rail shooters, like HotD and Ghosts, that also ran better on Xbox. Splinter Cell DA strays even further from shooters but looked a lot better on Xbox than Wii. I feel like the Wii was probably underutilized in general, but so was the Xbox. Before developers could really push the OG Xbox, the 360 was raced to market to beat Sony to the punch.
 
Last edited:

lostinblue

Member
Dec 22, 2008
3,005
169
1,045
This would only work in a GameCube thread. They don't trade blows at all; the Xbox was simply more powerful all around. We had a vs thread you were banned from months ago where a multiplatform dev said the opposite of what you're claiming, saying any GameCube game would run better on Xbox regardless. This was an actual developer that worked on all three, not a forum guy with 8 different Nintendo avatars and usernames.
Polygon pushing was never proved, just exclusive Nintendo devs making claims that could never be validated. Was the GameCube powerful? Absolutely.
Xbox was more powerful on paper, but not always more powerful in real-world scenarios.

GameCube really had an advantage when it came to textured polygons and lights in a scene: it could texture more per pass, so polygon throughput dropped less from the theoretical maximum (which was lower than the Xbox's). This is a bit like N64 vs PSone: you won't find games with more polygons per scene than on the N64. But no one is saying the PSone was more powerful.

It was non-standard, and investing in knowledge for it was a dead end for the western industry, so some of the advantages were negated if you were a multiplatform developer. On another front, it was obvious Microsoft paid for enhanced ports that generation, or they would have gotten nothing but parity and a bit of extra image quality. Lots of games (even Japanese ones) even got extra content on the Xbox, as there was investment in developers coming to grips with it.

GC was no good with normal maps (they had to be done in software), and likewise if you threw in Xbox DOT3 bump maps you would have a massive performance penalty, as the CPU would have to get involved. So devs usually took them out and made it closer to the PS2 version (sometimes at 60 fps), or would have to convert everything to EMBM, which was more limited and really not the same thing.
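For reference, DOT3 bump mapping is just a per-texel dot product between a decoded tangent-space normal and the light direction, which is exactly the kind of per-pixel work that wants dedicated combiner/shader hardware rather than the CPU. A minimal sketch of the math, in plain Python and purely illustrative:

```python
import math

# DOT3: decode an 8-bit normal-map texel from [0,255] to [-1,1],
# renormalize, then take the clamped dot product with the light vector.
def dot3_intensity(texel_rgb, light_dir):
    n = [c / 127.5 - 1.0 for c in texel_rgb]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    n = [c / length for c in n]
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))

# A "flat" normal texel (128, 128, 255) lit head-on gives
# essentially full intensity; lit from behind it gives zero.
print(dot3_intensity((128, 128, 255), (0.0, 0.0, 1.0)))
print(dot3_intensity((128, 128, 255), (0.0, 0.0, -1.0)))
```

Doing this for every pixel on a general-purpose CPU is why ports without the GPU feature set either dropped the effect or swapped it for something cheaper like EMBM.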

At the end of the day, the GameCube was a cheaper design that didn't have as many modern features as devs wanted, but it was really efficient, and that efficiency made it punch above its weight quite a few times.

The Xbox couldn't have run Metroid Prime 1/2 at 60 fps; F-Zero GX would also be difficult at 60 fps; Resident Evil 4 could have had better textures but would probably have had cuts in polycount and perhaps lighting. And the GameCube couldn't have pulled off Halo 2 or Doom 3 at all without massively reworking the bump maps (it just wouldn't be the same game); note that Halo 2 pushes fewer polygons per second than Halo 1, so there was a penalty even on Xbox. It also couldn't do 720p, had less memory and storage (both on optical disc and no HDD), and its framebuffer was crap, so you often had to use dithering. And so on.

But it was still really powerful for what it was.


On the topic at hand, I find the video that started the thread lousy.
 
Last edited:
Dec 12, 2018
1,224
1,361
420
I mean the ps2 was severely underpowered by comparison to Xbox, and the Xbox version of Brothers in Arms didn't run that great. Only makes sense the ps2 would be even worse, and it was.
Oh, there is actually a port of the first two Brothers in Arms games on Wii too, but like Far Cry, it was a cash grab and technically pretty bad.
My point is that even the PS2 versions of games such as Call of Duty 3, Brothers in Arms, and Splinter Cell were better than their Wii counterparts, indicating that the Wii versions weren't well made at all; and we all know the Wii is more powerful than the PS2. Heck, COD3 didn't even have an online mode on the Wii.
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,926
3,632
695
Stuck in 1Q84.
Doom 3 was more than just lighting as ID software mentioned the other consoles didn't have GPU feature set when it came to the shaders. Deadlight is basically nothing but the lighting, and it's a small vertical slice at that. The biggest problem(and I keep saying this) is total RAM. PS2 was extremely anemic when it came to RAM for these games, and, let me emphasize this again. Xbox had to SQUEEZE the game into 64mb and its STILL suffered in places. FYI Doom 3 on Xbox with 2x RAM can do 720p native, so that tells you just how important RAM is for a game like this.
Okay. That's that then.
Deadlight looks a half-generation apart and is struggling to sustain the framerate and almost nothing is happening onscreen. So at that point, you're getting Doom 2.5-ish, for lack of better description and even then I'm guessing it would have suffered in performance.
It was a playable demo in development, not a finished release. There is no surefire way to determine how the end result would have performed. But it did show that something similar-looking was possible.

And it shouldn't come as a surprise, when Severance gave a peek at both Doom 3's visuals and HL2's interactions in 2001, a full three years before either of those titles was released, and did so without shaders.
Riddick being in development for PS2 sort of tells me even more. They likely scrapped it because of framerate.
Speculation. You don't know.

Look, I get it. You really want to tell me how the Xbox is better in most respects, and I have given plenty of leeway in agreeing on several aspects of that. So why keep banging the drum until you have the total point? That makes very, very little sense to me, and I find it the least interesting bit anyway.
I don't see the COD WIi comparison because it wasn't a "60fps shooter" at that time.
On a pedantic level it wasn't, indeed. But it was perceptually 60 fps on PS360. The Wii dropped half the framerate and most of the shader effects and replaced them with TEV equivalents, the best they could.

This particular thing was quite mind-blowing to me. I even made a thread about this a long time ago.

I don't think i ever saw particles like this casting shadows in any other game or console ever since.
Self-shadowed particles should be a thing that exists in more games, as it's definitely a thing in the demoscene.

We have GPU-driven particles these days, so your guess is as good as mine as to why adding shadows to that pipeline hasn't seen much usage in games.
I mean the ps2 was severely underpowered by comparison to Xbox, and the Xbox version of Brothers in Arms didn't run that great. Only makes sense the ps2 would be even worse, and it was.
The PlayStation 2 was its own animal. It wasn't severely underpowered; it was just different. It was great at blending thanks to that insanely fast texture fill rate. The Vector Units were a kind of hardware vertex shader before vertex shaders were a gaming thing. In the right hands it could output magic, as Shadow of the Colossus shows.

But like the Wii, its rendering paradigm was unusual, essentially being a 16-pipeline texture modifier aided by VU0 and VU1.
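The "hardware vertex shader before vertex shaders" point is essentially that VU0/VU1 ran programmable per-vertex math, and the canonical per-vertex job is a 4x4 matrix transform. A toy version of that workload (plain scalar Python standing in for VU microcode, which did this 4-wide in hardware):

```python
# The core per-vertex job a PS2 Vector Unit was fed: multiply each
# homogeneous vertex [x, y, z, w] by a 4x4 transform matrix.
def transform(matrix, vertex):
    return [sum(matrix[row][i] * vertex[i] for i in range(4))
            for row in range(4)]

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
translate_x = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

print(transform(translate_x, [1.0, 2.0, 3.0, 1.0]))  # x shifted by 2
```

A later "vertex shader" is exactly this kind of small program run per vertex, which is why the comparison holds up.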
There are not many FPS shooters to compare between Wii and Xbox, COD3 and Far Cry are basically it. The xbox version had better visuals and framerate in both. There are rail shooters, like HoTD and Ghosts that also ran better on Xbox. Splinter Cell DA strays even further away from shooters but looked a lot better on Xbox than Wii. I feel like the Wii was probably underutilized in general, but so was the Xbox. Before developers could really push the OG Xbox, the 360 was raced to market to beat Sony to the punch.
Splinter Cell DA and CT did get some special loving, indeed. But neither console was really put through its paces, though I feel the Xbox in general was pushed more than the Wii ever was.
 

Romulus

Member
Mar 21, 2019
5,998
7,293
555
Xbox was more powerful on paper, but not always more powerful in real-world scenarios.

GameCube really had an advantage when it came to textured polygons and lights in a scene: it could texture more per pass, so polygon throughput dropped less from the theoretical maximum. This is a bit like N64 vs PSone: you won't find games with more polygons per scene than on the N64. But no one is saying the PSone was more powerful.

It was non-standard, and investing in knowledge for it was a dead end for the western industry, so some of the advantages were negated if you were a multiplatform developer. On another front, it was obvious Microsoft paid for enhanced ports that generation, or they would have gotten nothing but parity. Lots of games (even Japanese ones) even got extra content on the Xbox, as there was investment in developers coming to grips with it.

GC was no good with normal maps (they had to be done in software), and likewise if you threw in Xbox DOT3 bump maps you would have a massive performance penalty, as the CPU would have to get involved. So devs usually took them out and made it closer to the PS2 version (sometimes at 60 fps), or would have to convert everything to EMBM, which was more limited and really not the same thing.

At the end of the day, the GameCube was a cheaper design that didn't have as many modern features as devs wanted, but it was really efficient, and that efficiency made it punch above its weight quite a few times.

The Xbox couldn't have run Metroid Prime 1/2 at 60 fps. And the GameCube couldn't have pulled off Halo 2 or Doom 3 at all without massively reworking the bump maps (it just wouldn't be the same game). It also couldn't do 720p, had less memory and storage (both on disc and no HDD), and its framebuffer was crap, so you often had to use dithering. And so on.

But it was really powerful for what it was.

Alt account. You're saying exactly the same things as the other poster from the 6th gen thread who was banned. It's a very specific line of thinking, grounded in zero evidence, with just slightly different verbiage. "More polygons!" The framing is always about the GC "punching above its weight", the same phrase you used several times, and the exact same "bottleneck" angle used for the Xbox, when actual developers say the opposite. Find developers that said the Xbox was bottlenecked compared to the GC. Same with the angle that Metroid Prime was only possible on GC, which there's no evidence for either; you used that same verbiage in the 6th gen thread, and I've never seen anyone else use it.
 

lostinblue

Member
Dec 22, 2008
3,005
169
1,045
Heck, COD3 didn't even have an online mode on the Wii.
I suspect that was because Nintendo had a really bad online plan, which wasn't even implemented when the Wii launched, only promised.

Splinter Cell DA strays even further away from shooters but looked a lot better on Xbox than Wii.
The Wii, GC and PS2 port was done by Ubisoft Shanghai.

Some, if not all, Splinter Cell PS2/GC ports were really good, and a good place to study platform differences (they had a really cool material shader on the PS2), but yeah, it's based on the PS2 version. Splinter Cell otherwise relied heavily on pixel-shader-compliant pipelines and DOT3 bump mapping.
I feel like the Wii was probably underutilized in general, but so was the Xbox. Before developers could really push the OG Xbox, the 360 was raced to market to beat Sony to the punch.
In a generation that had to take the PS2 as lead platform, both were half to a full generation ahead when it came to features but were being under-utilized.
Alt account. You're saying exactly the same things as the other poster and in the 6th gen thread that was banned. It's a very specific line of thinking that is grounded zero evidence just slightly different verbiage. "More polygons!" The framing is always about GC "punching above its weight" the same thing you used several times and the exact same "bottleneck" angle used for Xbox when actual developers say the opposite. Find developers saying that said the Xbox was bottlenecked compared to GC. The same angle with the Metroid Prime only being possible on GC, which there's no evidence for either, you used that same verbiage in the 6th gen thread and I've never seen anyone use that ever.
Well, it's obvious to me that some GameCube games actually have more polygonal detail than Xbox ones, and there is also plenty of evidence of the Xbox having bottlenecks with transparency and texturing (mostly in ports, which you'll throw out because they're ports). It's even evident in Star Fox Adventures vs Conker's Bad Fur Day on Xbox and its fur shader; look at the amount and size of the shells. 60 fps was also more usual on the GameCube than on the Xbox, in my experience.

Looking at the specs, you can also easily confirm the GameCube really does texture more per pass.


The difference in asset-development ethos between the exclusives is obvious; what they went for was very different (GameCube always went for high polygon counts, Xbox didn't), because it was different hardware with different strengths. Which is why I mention those differences. I obviously can't count polygons and neither can you, but it's no contest for most exclusives. As for what devs say, I find it useless to get into a quote war. I have no doubt they mean what they say, but I don't know what games they worked on, or why their opinion is relevant. It's obvious to me that the GameCube was treated as a PS2 pipeline by multiplatform developers, and I would have done the same with the resources available.

And I do think that, fewer peak polygons or not, Xbox image quality is superior to the GC and Wii, which in my book makes it easier to replay in 2021. I hate dithering.


I agree to disagree with your line of thought.
 
Last edited:

DaGwaphics

Member
Dec 29, 2019
3,676
4,995
505
Once I get over my initial shock that playing Nintendo is a "thing", there are some nice ones there.

I remember that RE Remake and RE 4 almost made me pick one of these up back in the day. Luckily I was able to resist. :messenger_tears_of_joy:
 

Romulus

Member
Mar 21, 2019
5,998
7,293
555
It was a playable demo in development, not a finished release. There is no surefire way to determine how the end result would have performed. But it did show that something similar-looking was possible.

And it shouldn't come as a surprise, when Severance gave a peek at both Doom 3's visuals and HL2's interactions in 2001, a full three years before either of those titles was released, and did so without shaders.

The problem with the Deadlight demo (or most demos) is that it shows a very short section of a game compared to what a full retail game has to accomplish. They don't show transitions and sprawling levels for a reason. When you're using taxing features like that lighting model while also loading levels and enemies, it's a different story when it comes to the RAM pool.
Of course I was speculating about Riddick, but it didn't come out on PS2 even when the PS2 had 6x more consoles in homes. And just looking at the PS2's library (or even demos), I don't see anything remotely similar to Riddick as a whole. The heavy action, normal maps everywhere, light-and-shadow gameplay, and massive levels just scream development hell. It needed a dynamic resolution even on the Xbox, which was unheard of.

Gonna ignore the thread and users here just because this convo could go on for weeks, and it's really not even the right thread.
 
Last edited:
  • LOL
Reactions: lostinblue

Romulus

Member
Mar 21, 2019
5,998
7,293
555
Oh, there is actually a port of the first two Brothers in Arms games on Wii too, but like Far Cry, it was a cash grab and technically pretty bad.
My point is that even the PS2 versions of games such as Call of Duty 3, Brothers in Arms, Splinter Cell were better than their Wii counterparts, thus indicating that the Wii versions weren't well made at all, and we all know that the Wii is more powerful than the PS2. Heck, COD3 didn't even have an online mode on the Wii.

The Wii has a massive rendering advantage in the games you mentioned: it's running at 480p vs 480i on PS2 in all of them. That's extremely significant. Not only that, the Wii has a better framerate while running at a higher resolution than the PS2 in COD3. And with Brothers in Arms, they actually made the textures better on Wii because of the extra memory, but it couldn't match the Xbox's shaders.
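The 480p-vs-480i gap can be made concrete: interlacing delivers alternating 240-line fields, so at the same refresh rate the progressive image pushes twice the pixels. Quick illustrative arithmetic:

```python
# Pixels delivered per second at 60 Hz: 480p sends full 480-line
# frames, 480i sends alternating 240-line fields.
def pixels_per_second(width, lines_per_refresh, hz=60):
    return width * lines_per_refresh * hz

p480 = pixels_per_second(640, 480)   # progressive: full frame every refresh
i480 = pixels_per_second(640, 240)   # interlaced: half the lines per field
print(p480 / i480)                   # progressive pushes 2x the pixels
```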

Gonna ignore the thread and users here just because this convo could go on for weeks, and it's really not even the right thread.
 
Last edited:

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,926
3,632
695
Stuck in 1Q84.
I feel like I'm reliving the sixth-generation console discussion in a time when all of these consoles are discontinued.

The problem with the deadlight demo(or most demos) shows a very short section of a game compared to what a full retail game has to accomplish. They don't show transitions and sprawling levels for a reason. When you're using taxing features like that lighting model and loading levels and enemies it's a different story when it pertains to the RAM pool.
I mean, what other kind of evidence do you want me to provide? I never said the PS2 could do Doom 3 straight up. I said it could pull off something similar, a facsimile, much in the way Quake II for the PS1 was a facsimile of the PC version.

I'm really unsure why it's so difficult for you to even accept the notion that a facsimile could work on Wii or PS2, on the prime notion that id and Valve said it was not possible. But what do you think they meant by that:
  • Not possible to do a direct port
  • Not possible to do something similar
Hell, part of the demoscene's existence is about using tricks. Just see this demo, released this month, showcasing bloom, cubemaps that appear like normal maps, and even depth of field on an Amiga AGA 060 machine (roughly a Pentium 1), in software:

Of course, I was speculating about Riddick, but it never came out on PS2 even though the PS2 had roughly six times as many consoles in homes. And just looking at the PS2's library (or even its demos), I don't see anything remotely similar to Riddick as a whole. The heavy action, normal maps everywhere, light-and-shadow gameplay and massive levels just scream development hell when you think about it. It even needed dynamic resolution on the Xbox, which was unheard of.
It had to be finished in time, so I reckon that's why the PS2 version was dropped. Sadly, no images exist, except for a written excerpt by a developer stating it looked eerily similar to the Xbox version. Take that as you will.
Gonna ignore the thread and users here just because this convo could go on for weeks, and it's really not even the right thread.
So what is the absolute conclusion you want to arrive at?
 
  • Like
Reactions: lostinblue
Dec 12, 2018
1,224
1,361
420
And with Brothers in Arms, they actually made the textures better on Wii because of the extra memory, but it couldn't match the Xbox's shaders.
You're right, the PS2 version is really obscure. Looks like they remade it when they ported it to the PS2.
I had in mind that the Wii versions were considered poor ports back then, though, so there is that.
 
Last edited:

StateofMajora

Member
Aug 7, 2020
958
1,293
360
It's a highlight game, yes. I just don't like it because it's not in first person. :p

The original model is 250k polygons. It is then reduced to a low-poly model (5k to 10k) and normal mapped so it looks like the high-poly model.
Okay, so I went through Silent Hill and I *think* the flashlight is a projected texture. I can kind of tell by the way it was rarely jittering on the environment when I was going super slow just looking for this stuff, or where the light didn't go through one of the cars' undersides despite there being visible surface underneath the car. So, the Half-Life 2 solution, most likely. Normally no one would notice this stuff, and in practice it works really well. Which kind of makes sense: it's a solid 60fps game, and RE4 (despite running on the lower-clocked Cube) is using per-vertex lighting at half the framerate, so per-pixel might have been too heavy for SH.
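For the curious, the projected-texture trick boils down to transforming a point into the flashlight's own clip space and using the result as a texture lookup; here's a minimal toy sketch of the math (hypothetical FOV and coordinates, not taken from Silent Hill's actual code):

```python
import math

def flashlight_uv(x, y, z, fov_deg=45.0, aspect=1.0):
    """Project a point given in the flashlight's own space (projector at
    the origin, looking down -Z) into [0, 1] texture coordinates.
    Returns None when the point falls outside the projected cone."""
    if z >= 0.0:
        return None                      # behind the projector
    t = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    # Perspective projection followed by the perspective divide.
    ndc_x = (t / aspect) * x / -z
    ndc_y = t * y / -z
    if abs(ndc_x) > 1.0 or abs(ndc_y) > 1.0:
        return None                      # outside the light frustum
    # Map normalized device coords [-1, 1] to texture coords [0, 1].
    return (ndc_x * 0.5 + 0.5, ndc_y * 0.5 + 0.5)
```

A point straight ahead lands at (0.5, 0.5), the centre of the light texture, while anything outside the frustum simply gets no lookup, which would be consistent with the light not reaching under the car when the frustum clips it.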

What was more interesting, and not expected, was how the dynamic shadows from the flashlight work. The best way to describe it would be that there are dozens of predetermined positions for them depending on your distance from them. So if you move, it actually takes the shadow some frames to catch up, and if you move the flashlight over an object really slowly all the way to the right, let's say, but then bring the light quickly back to its middle, the shadow will kind of "snap" back as if it's in a cartoon about to be caught, lol. But yeah, this is just me cyborg-eyeing it; it works really well just playing the game. And the shadows on the snow are particularly nice looking. It'd be really nice if I could get a developer to confirm these findings, though.

So on Doom, I checked this old thread on B3D and it lists the Doom guy's main character model at 4,444 polygons. I'm going to assume that is the upper echelon for the game, because again, even to my eyes, Willits' comments don't sit right with me. Normal maps can add detail on top of what modeling is there, but they're not going to make a low-poly head look smooth. Only now, when we can chuck 50,000 polys on a character, do normal maps finally fill in the gaps. Even in Gears of War it works better than in Doom, because they have 15,000 tris on Marcus Fenix or thereabouts. I would have to see Doom in wireframe to list all the different poly counts for the models, but for right now I can say they're not going to have many. Especially the demon I posted earlier.
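To make the normal-map point concrete: a baked map just stores a per-pixel surface direction in RGB, and the lighting then uses that instead of the low-poly mesh's interpolated normal. A toy sketch of the standard encoding and the diffuse term it feeds (illustrative only, not any engine's actual code):

```python
def encode_normal(nx, ny, nz):
    """Pack a unit normal into 8-bit RGB, the usual normal-map encoding:
    each component is remapped from [-1, 1] to [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    """Unpack 8-bit RGB back into an (approximate) unit normal."""
    return tuple(c / 255 * 2.0 - 1.0 for c in (r, g, b))

def lambert(normal, light_dir):
    """Per-pixel diffuse term: this is where a normal baked from the
    high-poly model adds detail the low-poly mesh doesn't have."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)
```

The flat "straight up" normal (0, 0, 1) encodes to (128, 128, 255), which is exactly the familiar blue tint of tangent-space normal maps.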

---

Now, after I play Silent Hill this week (because it is pretty good), I will compare Conduit 1 and 2.
 
Last edited:
  • Thoughtful
Reactions: Shodan09

Joelenheimer

Member
Jul 24, 2018
245
302
355
Seattle
Ahead of what time? Its own time, I guess? Whatever that means. There's not one game presented in that video that was ahead of anything that wasn't already done better on PS2/Xbox (emphasis on Xbox).
Just Xbox; the PS2 wasn't as powerful as the Gamecube. I remember comparing RE4 on both, and it looked and sounded better on GC. Not by leaps and bounds obviously, but it was noticeable. GC's disc space though, lol!
 

StateofMajora

Member
Aug 7, 2020
958
1,293
360
Just Xbox; the PS2 wasn't as powerful as the Gamecube. I remember comparing RE4 on both, and it looked and sounded better on GC. Not by leaps and bounds obviously, but it was noticeable. GC's disc space though, lol!
RE4 was a pretty huge difference, honestly. PS2 wasn't bad if that's all you had, but the differences in modeling and lighting are kind of enormous.
 

StateofMajora

Member
Aug 7, 2020
958
1,293
360
Which Gamecube game above could not have been done for Xbox?
I'm sure they all could have been done, but they wouldn't necessarily be 1:1 ports.

I think games with lots of texture layering like Star Fox, or games like Wave Race with its eDRAM reflection trick, would need to be cut back somewhat. On the flip side, the base texture resolution of many Cube games could be improved on Xbox just because of the additional RAM.

The game I'd really like to see on Xbox is RE4... and depending on the results of that port, it would greatly add to this debate. It is the best-looking realistically proportioned game of that era. Base texture resolution could be improved on Xbox, no doubt (it's one of the game's weaker points), but who knows how much the game is leaning on the embedded framebuffer. I know it uses it for reflections, heat-haze effects and the like.
 
Last edited:
  • Like
Reactions: lostinblue

tr1p1ex

Member
Feb 18, 2014
877
177
495
I think the graphics in today's Nintendo games look much flatter in comparison to GC games. I feel like GC games had more vibrant color. They popped more, were more crisp, and the motion was smoother as well. It's probably part the difference in graphics chips, part the HDTV-vs-CRT difference, and part nostalgia.
 
Last edited:

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,926
3,632
695
Stuck in 1Q84.
Okay, so I went through Silent Hill and I *think* the flashlight is a projected texture. I can kind of tell by the way it was rarely jittering on the environment when I was going super slow just looking for this stuff, or where the light didn't go through one of the cars' undersides despite there being visible surface underneath the car. So, the Half-Life 2 solution, most likely. Normally no one would notice this stuff, and in practice it works really well. Which kind of makes sense: it's a solid 60fps game, and RE4 (despite running on the lower-clocked Cube) is using per-vertex lighting at half the framerate, so per-pixel might have been too heavy for SH.
Different ambiance, too, despite both being horror titles.
What was more interesting, and not expected, was how the dynamic shadows from the flashlight work. The best way to describe it would be that there are dozens of predetermined positions for them depending on your distance from them. So if you move, it actually takes the shadow some frames to catch up, and if you move the flashlight over an object really slowly all the way to the right, let's say, but then bring the light quickly back to its middle, the shadow will kind of "snap" back as if it's in a cartoon about to be caught, lol. But yeah, this is just me cyborg-eyeing it; it works really well just playing the game. And the shadows on the snow are particularly nice looking. It'd be really nice if I could get a developer to confirm these findings, though.
Perhaps it's something like a shadow probe (totally made this one up, by the way), similar to how a light probe is placed in the world and changes the light properties of things passing by or of static nearby objects. Shadow probes/particles would work on a similar idea, but for the particles/2D sprites that make up the snow.

Render-to-texture should be thrown out of the window, though; that would be a massive performance hit.
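For what it's worth, the probe idea sketches out easily: bake values at fixed points offline, then blend by distance at runtime, which is far cheaper than render-to-texture. A toy inverse-distance blend (remember, the "shadow probe" term is made up above; the positions and values here are hypothetical too):

```python
def blend_probes(pos, probes):
    """Inverse-distance weighted blend of precomputed probe values.
    `probes` maps a world position (x, y, z) to a baked value, e.g. a
    light level or a hypothetical 'shadow amount' for falling snow."""
    weights = []
    for p_pos in probes:
        d = sum((a - b) ** 2 for a, b in zip(pos, p_pos)) ** 0.5
        if d < 1e-6:
            return probes[p_pos]        # standing on a probe: use it as-is
        weights.append((1.0 / d, probes[p_pos]))
    total = sum(w for w, _ in weights)
    return sum(w / total * v for w, v in weights)
```

Halfway between a fully lit probe and a fully shadowed one, this returns 0.5, i.e. a smooth transition with no per-frame rendering work at all.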
So on Doom, I checked this old thread on B3D and it lists the Doom guy's main character model at 4,444 polygons. I'm going to assume that is the upper echelon for the game, because again, even to my eyes, Willits' comments don't sit right with me. Normal maps can add detail on top of what modeling is there, but they're not going to make a low-poly head look smooth. Only now, when we can chuck 50,000 polys on a character, do normal maps finally fill in the gaps. Even in Gears of War it works better than in Doom, because they have 15,000 tris on Marcus Fenix or thereabouts. I would have to see Doom in wireframe to list all the different poly counts for the models, but for right now I can say they're not going to have many. Especially the demon I posted earlier.
The German article does have a wireframe model, and it does mention that normal maps are used to give the impression of a high-poly model.
 

PooBone

Member
Jan 31, 2009
4,540
449
1,105
Ahead of what time? Its own time, I guess? Whatever that means. There's not one game presented in that video that was ahead of anything that wasn't already done better on PS2/Xbox (emphasis on Xbox).

Pffft. Some of these games came out before the Xbox even existed. REmake came out four months after the Xbox and blew away anything on any system at the time.
 

PaintTinJr

Member
Jan 30, 2020
837
2,196
445
Oxfordshire, England
Carmack said Doom 3 would not run on Gamecube mostly due to RAM; the PS2 would have suffered the same fate, in addition to lacking shader tech on the GPU. There's a big difference between running a tech demo of Doom 3 or HL2 (which probably both PS2 and GC could do) and running actual full levels with NPCs, which is altogether a different beast. I do think the Xbox's built-in hard drive was another advantage for streaming that's not often discussed, and I remember a developer mentioning that as well.

I think this interview with VV sums it up nicely. Even the Xbox barely had enough RAM for Doom 3, and looking at the minimum requirements for HL2, it's not a stretch to say the same there either.
Correct me if I am wrong, but I thought the Xbox version had to drop the Carmack's reversal (Creative Labs stencil shadow patent) tech because it didn't have the depth buffer or polygon bandwidth for the technique, and the subsurface-scattered skin shaders were left out, too.

IIRC, reading the published emails between Carmack and his Nvidia contact, when he was prototyping the Carmack's reversal technique, I seem to recall the PS2 getting a mention as ideal hardware for that very technique. Or maybe it was one of the published papers on the technique, with actual coded demos showing it in action that I possibly downloaded from Nvidia, that referred to the PS2's geometry hardware (the Graphics Synthesizer?).

The PS2 was definitely a console out of step with PC hardware by the time it started to really wow with the final range of games using very advanced shading techniques beyond the OpenGL/DX established standards of the day (MGS3, GT4, SotC, to name just a few). But even with it being a stream-processor system, the memory size limitation, coupled with no HDD as standard and compounded by the difficulty of coordinating its processing units, meant that the overall finish of games that weren't built around the system (without high-quality custom rasterization/texturing libraries) looked rough around the edges (IMO) compared to their refined and overly vivid Cube counterpart versions, and distinctly lower in frontbuffer resolution than the Xbox versions.

I suspect Doom 3 could probably have worked on the PS2 and retained the Carmack's reversal shadowing technique (and maybe the advanced shader work seen in the brief outdoor section of the first level, and the subsurface scattering), but the work involved in mapping it to the various processors would have forced trade-offs in resolution, frame-rate, polygon counts and texture quality: probably 320x240@30fps with quarter-sized PC textures and half the polygon counts of the Xbox version, or 640x480 with a quarter of the polygon count.
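Those trade-offs are easy to put numbers on: shaded pixels per second scale linearly with resolution, framerate and overdraw, so 320x240@30 costs exactly a quarter of the fill of 640x480@30. A quick back-of-envelope helper (the overdraw factor is a rough guess at multi-pass stencil-shadow cost, not a measured figure):

```python
def fill_cost(width, height, fps, overdraw=1.0):
    """Pixels shaded per second; `overdraw` > 1 roughly models the extra
    passes that stencil-shadow rendering (Doom 3 style) piles on."""
    return width * height * fps * overdraw

full_res = fill_cost(640, 480, 30)   # hypothetical Xbox-like target
quarter  = fill_cost(320, 240, 30)   # the speculated PS2 fallback
```

So the speculated 320x240 mode buys back a flat 4x of fill budget before any texture or geometry cuts, which is why it keeps coming up as the plausible compromise.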

If we say that Timesplitters 2's opening level represents a quality multi-platform FPS effort, ported to the three consoles with comparable presentation using PC-centric graphics APIs, then Doom 3 on PS2 or Cube would have had to match that performance, resolution, geometry and texturing IMO, while still managing to add the shadowing and shader FX to be worthwhile as a port. The Cube's fixed-path rendering with tricks was nowhere close to an ATI Radeon 9700 for features, so it would have been an Xbox-original type of effort, and it would have needed an HDD to work IMO. The PS2 would also have needed an HDD to technically manage a proper Doom 3 port, but the compromises and work involved would in all likelihood have been so severe that it would have been futile, as the final experience would almost certainly have been visually mashed by low resolution, low-quality textures and low LoD, killing all the visual benefit of the idTech 4 features the PS2 hardware had successfully implemented.
 

Alan Wake

Member
Aug 28, 2020
398
266
260
I'm sure they all could have been done, but they wouldn't necessarily be 1:1 ports.

I think games with lots of texture layering like Star Fox, or games like Wave Race with its eDRAM reflection trick, would need to be cut back somewhat. On the flip side, the base texture resolution of many Cube games could be improved on Xbox just because of the additional RAM.

The game I'd really like to see on Xbox is RE4... and depending on the results of that port, it would greatly add to this debate. It is the best-looking realistically proportioned game of that era. Base texture resolution could be improved on Xbox, no doubt (it's one of the game's weaker points), but who knows how much the game is leaning on the embedded framebuffer. I know it uses it for reflections, heat-haze effects and the like.
Xbox was clearly the more powerful console, but Gamecube had some stunning-looking games for sure. Star Fox is one of them. Personally I bought a Gamecube solely to play RE4 and then realized it had much more to offer. I wouldn't say the games looked ahead of their time compared to what Xbox offered, though. Look at games like Ninja Gaiden, Riddick, Black and a few others. RE4 on Xbox would've been cool!
 

PaintTinJr

Member
Jan 30, 2020
837
2,196
445
Oxfordshire, England
Xbox was clearly the more powerful console, but Gamecube had some stunning-looking games for sure. Star Fox is one of them. Personally I bought a Gamecube solely to play RE4 and then realized it had much more to offer. I wouldn't say the games looked ahead of their time compared to what Xbox offered, though. Look at games like Ninja Gaiden, Riddick, Black and a few others. RE4 on Xbox would've been cool!
It really depends on what your yardstick for "more powerful console" was IMHO. They each could have made that claim depending on the criteria.

The hardware in the PS2 had amazing fill-rate, depth-testing bandwidth and z precision, and was more advanced/versatile, as shown by the fur-rendering technique used in SotC, which was a DX9c/OpenGL 2.1 geometry shading technique IIRC. But the learning curve for mastering the PS2's processors to build big games still looks impossible by today's high-level, abstracted game development; even the PS3 is more friendly IMO, going by the PS2/PS3 Linux documentation.

The Xbox was more powerful in the sense that it was an 800MHz x86 with a PC GPU, able to render a higher-resolution framebuffer with shaded polygons, and it had more memory with higher-quality texturing (typically, IIRC) and an HDD making memory easier to handle. But it had, poorly, chosen to use a w-buffer instead of a conventional z-buffer, which I presume was because of the headline game Halo (a w-buffer being better for lower-precision depth sorting in sparse open worlds, probably coupled with a CPU-based octree/BSP eating into the CPU performance). The w-buffer also implied low fill-rate, as z-buffering for hidden surface removal is (by design) highly redundant, high-performance processing, and the absence of depth-cueing fog effects in Xbox games probably went hand-in-hand with an oddball w-buffer.
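The w-buffer vs z-buffer point deserves a number or two: a z-buffer stores a value that is hyperbolic in view-space depth (most of its precision bunched up near the camera), while a w-buffer stores depth linearly, which suits sparse open worlds. A small sketch with an arbitrary 1-to-1000-unit range (illustrative formulas, not any console's exact depth math):

```python
def z_buffer_value(z, near, far):
    """Post-projection depth in [0, 1]: hyperbolic in view-space z, so
    precision piles up close to the camera."""
    return (far / (far - near)) * (1.0 - near / z)

def w_buffer_value(z, near, far):
    """View-space depth remapped linearly to [0, 1]: even precision."""
    return (z - near) / (far - near)
```

With near=1 and far=1000, the z-buffer already spends over half its numeric range by z=2, while the w-buffer spreads precision evenly over the whole distance, which is roughly the trade-off between fine depth sorting up close and consistent sorting at Halo-style draw distances.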

The Cube was the most conventional in terms of PC-style game development IMO, with its checklist of conventional GPU features: z-buffer, anti-aliasing, anisotropic filtering, and an early, halfway-house take on OpenGL GLSL shaders.
If the Cube had used the same HDD and memory amount as the very-late-gen Xbox, it would have been better all round than the Xbox, would probably have been the developers' choice, would have yielded superior results easily, and would have been considered the most powerful.
 

lostinblue

Member
Dec 22, 2008
3,005
169
1,045
If the Cube had used the same HDD and memory amount as the very-late-gen Xbox, it would have been better all round than the Xbox, would probably have been the developers' choice, would have yielded superior results easily, and would have been considered the most powerful.
This is unlikely for as long as the Xbox existed; at most, I feel it could have been massively popular with Japanese developers.

Multiplatform western developers were all in on standard features and on-the-fly prototyping/changing of code; Pixel Shader was huge for them, as was being able to render 720p (even if they didn't use it, they valued the feature set, as they understood future consoles would be able to do it). Being able to develop games primarily on PC was also something they wanted even back then (PCs are cheaper and more powerful than devkits; if the engine runs on PC you'll never develop primarily in a "console environment"), and it felt like the future to them. This generation was also the start of middleware, with Unreal Engine, Renderware and the like being available across all consoles. Unreal Engine had an exclusive Xbox port (Unreal Engine 2X) with Xbox/DirectX optimizations built in, no doubt because the Xbox was based on a PC with a PC GPU. And if you count PC game engines making the jump to the Xbox, the engine/middleware list is substantially bigger, with all the WRPGs and the like. This bridge was happening because it was doable without increasing costs massively.

Gamecube was friendly enough, but it was its own thing; dev tools didn't get in the way (compared to N64 and PS2), but they didn't make that bridge, and Nintendo kept a lot of specific tools for their own engine implementations (like the Maya plugin they used for Zelda/Mario games). It was easier to treat GC as a PS2 without bottlenecks, since you had to develop with the PS2 in mind anyway, rather than like an Xbox. One path was easy sailing, the other wasn't. And sure enough, if you took some GC exclusives, there also wasn't enough leeway to brute-force them onto Xbox without reworking them, but that is true even for PS2 games that relied on the vector units' fillrate: the MGS2 Xbox port was crap due to it; hell... the Zone of the Enders 2 ports on X360/PS3 years on were crap due to it (the PS3 version was revised though, probably through similar SPE use, as the fix didn't make it to X360).

I remember Shinji Mikami saying that on Resident Evil 4 they used the TEV pipeline extensively (TEV being the "semi-Pixel Shader" equivalent on Gamecube), but it was a chore, as they had to make a build to test every little change and would have loved to be able to make changes on-the-fly instead. This preview feature was eventually implemented in the Wii devkit, dubbed a TEV-pipeline emulator, but that was late in the generation (2009-2010 or so).

This was the Gamecube method:



The cartridge was where developers had to build and put their code in order to test it.

This, and the fact that studios didn't share a lot of knowledge between them (nor did Nintendo, as they were reluctant to give "pre-written shaders" to third parties), was one of the reasons the TEV pipeline was so heavily underutilized. The other part of the equation on the Wii, despite Nintendo providing more support documentation and help on that end, was that developers didn't see it as useful, because it wasn't the future; it was a dead end for them.

If the Wii had Pixel Shader compliant abilities (and a feature set more in line with Xbox), western developer acceptance and results would have been much, much better. (not to say good, but better)

This on-the-fly prototyping was already possible on Xbox, since it was Pixel Shader compliant, so the same as PC, as was the rest of its architecture for all intents and purposes. Devs also saw Pixel Shader as relevant experience going forward, and I have to say, it was: a decade later, Nintendo was hiring people with "shader experience" (sometimes under a moniker like "HD development experience") when they launched the Wii U.

Someone that worked on Xbox games of the era would have a better notion of conventional Shader inner workings than anyone that only worked on PS2, Gamecube and Wii.
 
Last edited:
  • Thoughtful
Reactions: PaintTinJr

PaintTinJr

Member
Jan 30, 2020
837
2,196
445
Oxfordshire, England
This is unlikely for as long as the Xbox existed; at most, I feel it could have been massively popular with Japanese developers.

Multiplatform western developers were all in on standard features and on-the-fly prototyping/changing of code; Pixel Shader was huge for them, as was being able to render 720p (even if they didn't use it, they valued the feature set, as they understood future consoles would be able to do it). Being able to develop games primarily on PC was also something they wanted even back then (PCs are cheaper and more powerful than devkits; if the engine runs on PC you'll never develop primarily in a "console environment"), and it felt like the future to them. This generation was also the start of middleware, with Unreal Engine, Renderware and the like being available across all consoles. Unreal Engine had an exclusive Xbox port (Unreal Engine 2X) with Xbox/DirectX optimizations built in, no doubt because the Xbox was based on a PC with a PC GPU. And if you count PC game engines making the jump to the Xbox, the engine/middleware list is substantially bigger, with all the WRPGs and the like. This bridge was happening because it was doable without increasing costs massively.

Gamecube was friendly enough, but it was its own thing; dev tools didn't get in the way (compared to N64 and PS2), but they didn't make that bridge, and Nintendo kept a lot of specific tools for their own engine implementations (like the Maya plugin they used for Zelda/Mario games). It was easier to treat GC as a PS2 without bottlenecks, since you had to develop with the PS2 in mind anyway, rather than like an Xbox. One path was easy sailing, the other wasn't. And sure enough, if you took some GC exclusives, there also wasn't enough leeway to brute-force them onto Xbox without reworking them, but that is true even for PS2 games that relied on the vector units' fillrate: the MGS2 Xbox port was crap due to it; hell... the Zone of the Enders 2 ports on X360/PS3 years on were crap due to it (the PS3 version was revised though, probably through similar SPE use, as the fix didn't make it to X360).

I remember Shinji Mikami saying that on Resident Evil 4 they used the TEV pipeline extensively (TEV being the "semi-Pixel Shader" equivalent on Gamecube), but it was a chore, as they had to make a build to test every little change and would have loved to be able to make changes on-the-fly instead. This preview feature was eventually implemented in the Wii devkit, dubbed a TEV-pipeline emulator, but that was late in the generation (2009-2010 or so).

This was the Gamecube method:



The cartridge was where developers had to build and put their code in order to test it.

This, and the fact that studios didn't share a lot of knowledge between them (nor did Nintendo, as they were reluctant to give "pre-written shaders" to third parties), was one of the reasons the TEV pipeline was so heavily underutilized. The other part of the equation on the Wii, despite Nintendo providing more support documentation and help on that end, was that developers didn't see it as useful, because it wasn't the future; it was a dead end for them.

If the Wii had Pixel Shader compliant abilities (and a feature set more in line with Xbox), western developer acceptance and results would have been much, much better. (not to say good, but better)

This on-the-fly prototyping was already possible on Xbox, since it was Pixel Shader compliant, so the same as PC, as was the rest of its architecture for all intents and purposes. Devs also saw Pixel Shader as relevant experience going forward, and I have to say, it was: a decade later, Nintendo was hiring people with "shader experience" (sometimes under a moniker like "HD development experience") when they launched the Wii U.

Someone that worked on Xbox games of the era would have a better notion of conventional Shader inner workings than anyone that only worked on PS2, Gamecube and Wii.
That's very interesting info, which I assume didn't apply to Japanese devs, because going by Super Monkey Ball (an Amusement Vision launch title), shaders were in full use by a third party on the Cube, especially in the final level of the single-player game.

In fact, comparing Super Monkey Ball 2's Target mode from the original game on Cube with the Traveller's Tales Deluxe ports of SMB 1+2 for PS2 and Xbox some years later, you get a real feel for what each console did well. Contrary to the idea that shaders would be used more on Xbox, the ocean-surface shader FX is completely missing on Xbox, and the depth-cueing fog isn't implemented at all, making the mode unplayable, like some weird floating-in-space puzzle from an old Ulysses 31 cartoon episode where the depths of things are a guess.

The Xbox's 800x600 res (IIRC), stable frame-rate and quality texturing looked great in the mode, but on balance it is the worst of all three versions, and that was from a quality developer that made great-looking Lego games on all platforms.
The PS2 version looks kind of washed out, and the texture filtering and geometry aliasing look rough on native hardware, although the massive frustum change from having a draw distance that far exceeds the Cube and Xbox versions (because of the increased depth precision) probably doesn't help. The depth-cueing fog is implemented more accurately than on the Cube, and the ocean shader FX is a match, or possibly more detailed, but with the DualShock dead zone being so large, the game seems to have been altered in a detrimental way to offset the looseness; so on balance, the Cube version looks and plays the best IMHO.
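On the dead-zone complaint: the usual implementation is a radial dead zone that ignores small deflections and rescales the remaining range so full tilt still reads as 1.0. A toy sketch (the 0.25 threshold is a hypothetical value; a threshold that large is exactly what makes fine control feel loose):

```python
import math

def apply_dead_zone(x, y, dead=0.25):
    """Radial dead zone: zero out small stick deflections, then rescale
    the remaining magnitude so full tilt still reaches 1.0."""
    mag = math.hypot(x, y)
    if mag <= dead:
        return (0.0, 0.0)
    scale = (mag - dead) / (1.0 - dead) / mag
    return (x * scale, y * scale)
```

With dead=0.25, the first quarter of the stick's throw does nothing at all, so a game tuned around it has to exaggerate the remaining response, which matches the "altered to offset the looseness" impression above.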

As for someone working on Xbox having a better understanding of conventional shaders than the other two, I'm not so sure about that. Anyone using shaders on the PS2 (or Cube) would have needed far more understanding than the confines of the vertex and fragment pipelines of GLSL, Microsoft's Shader Model or Nvidia Cg of the time.

The versatility of the PS2 hardware in particular allowed SotC to render fur in real-time, and that wasn't a capable technique on accelerated PC GPUs until OpenGL 2.1 and DirectX 9c or 10 released; yet obviously there was no PC gamer market with that hardware to sell a game to until some years later. So some shader techniques in PS2 games were cutting edge at the time, and those working on them were probably authorities on the subject, right up until the early 360 launch put everyone on the back foot as game development became largely expensive-middleware-based, overnight.
 

StateofMajora

Member
Aug 7, 2020
958
1,293
360
If the Cube had used the same HDD and memory amount as the very-late-gen Xbox, it would have been better all round than the Xbox, would probably have been the developers' choice, would have yielded superior results easily, and would have been considered the most powerful.
Cube had a couple of low-hanging-fruit improvements to be made, but it didn't really need an HDD. I mean, sure, it could help, but the mini discs were already faster than DVD.

Really, it needed more main memory to at least match the PS2's amount, and an extra megabyte of eDRAM for the framebuffer would have meant no dithering in certain games, and possibly MSAA or even supersampling.

48 MB of 1T-SRAM and 4 MB of eDRAM, along with clock-speed improvements, would have gone a long way.

But this works for Xbox as well; just give it 4 MB of eDRAM and it would be quite the monster!
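The eDRAM sizing argument is simple arithmetic: a GC-style 640x528 target at 24-bit colour plus 24-bit depth comes in just under 2 MB, leaving no headroom for deeper colour or multisampling at full resolution. A quick check (the figures below are illustrative, not official specs):

```python
MIB = 1024 * 1024

def framebuffer_bytes(width, height, color_bits, depth_bits, samples=1):
    """Bytes for a colour + depth render target; `samples` > 1 models
    the extra per-pixel storage MSAA would need."""
    return width * height * samples * (color_bits + depth_bits) // 8

base = framebuffer_bytes(640, 528, 24, 24)             # squeaks under 2 MB
msaa = framebuffer_bytes(640, 528, 24, 24, samples=3)  # nowhere close
```

So even the proposed 4 MB of eDRAM wouldn't cover naive 3x MSAA at full resolution, which is why supersampling or dithering-free output would still have taken some trickery.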
 
Last edited:
  • Thoughtful
Reactions: PaintTinJr

lostinblue

Member
Dec 22, 2008
3,005
169
1,045
That's very interesting info, which I assume didn't apply to Japanese devs, because going by Super Monkey Ball (an Amusement Vision launch title), shaders were in full use by a third party on the Cube, especially in the final level of the single-player game.
Documentation was available from day one, and any developer with resources could get good results out of it; it's just that you'd have to both go at it from scratch and make your code specific to the Gamecube.

Most developers only invested enough time to make it run/do whatever they were doing elsewhere, and no more than that.

Sega was an interesting developer on the Gamecube. I remember hearing they ported their entire Dreamcast development pipeline to it, development tools and development environment, so some games could have been developed for the Gamecube almost as if they were Dreamcast titles. Of the three, the Gamecube was also the console most similar to the Dreamcast.
In fact, comparing Super Monkey Ball 2's Target mode from the original game on Cube with the Traveller's Tales Deluxe ports of SMB 1+2 for PS2 and Xbox some years later, you get a real feel for what each console did well. Contrary to the idea that shaders would be used more on Xbox, the ocean-surface shader FX is completely missing on Xbox, and the depth-cueing fog isn't implemented at all, making the mode unplayable, like some weird floating-in-space puzzle from an old Ulysses 31 cartoon episode where the depths of things are a guess.
Water on the Gamecube was usually done via EMBM and the on-die 1 MB texture cache. Xbox wasn't as efficient when it came to multiple EMBM passes (on Xbox it was preferable to do one-pass DOT3) and didn't have eDRAM.

Gamecube implementations of water also relied massively on alpha, something that had a smaller performance hit on the Gamecube.
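Mechanically, EMBM is tiny: read a (du, dv) offset out of a bump texture and use it to nudge the environment-map lookup coordinates, which is what makes water shimmer. A one-function sketch (the strength constant is an arbitrary illustrative value):

```python
def embm_uv(u, v, du, dv, strength=0.05):
    """Environment-mapped bump mapping: perturb the reflection lookup
    coordinates by the bump map's (du, dv), clamped to the texture."""
    clamp = lambda c: min(1.0, max(0.0, c))
    return (clamp(u + du * strength), clamp(v + dv * strength))
```

Animate the bump texture over time and the reflection wobbles per pixel; the cost is just the extra texture pass per pixel, which is where the Cube's fast multi-pass TEV setup paid off.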
As for someone working on xbox having a better understanding of conventional shaders than the other two, I'm no so sure about that. Anyone using shaders on the PS2 (or cube) would have needed far more understanding than the confines of vertex and fragment pipelines of GLSL, Microsoft's Shadermodel or Nvidia Cg of the time.
That's a bit hard to grasp/extrapolate. I think you can say that in the same sense you can say a developer who had to code in assembly back in the day might be a better programmer for it, because he has to understand how something is done (and I do think that is true if we look at programmers with a demoscene background; on the Nintendo front, Shin'en and Factor 5 come to mind).

That said, PS2 and Gamecube, while versatile enough, were fixed-function, so while we could call what people were doing there shaders, they were either software running outside the GPU or not software at all, but instead a group of specific calls with finite results. Granted, examples of what Pixel Shader 1.x could do that the other 6th-gen consoles couldn't pull off in some shape or form were non-existent, but it's down to the hardware workflow: it's simply more in line with what we have now, and it really made it hard to port down without rewriting everything. It was a step in what is still deemed the right direction.

The vertex shader as well: there is nothing specific the vertex shader on the Xbox could do that GC or PS2 couldn't, other than offloading most of that process to the GPU. But again, that's more in line with what we have now.

Again, the worst problem here was really that there was no massive pool of software examples you could take what you wanted from, or at least access to learn how they did it. With unified shaders there are plenty of them available for anyone to reverse engineer.
The versatility of the PS2 hardware in particular allowed SotC to render fur in real time, a technique that wasn't feasible on accelerated PC GPUs until OpenGL 2.1 and DirectX 9.0c/10 released; yet obviously there was no PC gamer market with that hardware to sell a game to until some years later. So some shader techniques in PS2 games were cutting edge at the time, and those working on them were probably authorities on the subject - right up until the early 360 launch put everyone on the back foot, as game development became largely expensive-middleware-based overnight.
Both the GameCube and Xbox did the very same shell rendering technique you speak of. Tech papers on it precede that generation of consoles by a few years (1993, and perhaps earlier papers exist). It didn't require OpenGL 2.1 or DirectX 9/10, albeit, sure, those might have added some calls to accelerate it, or perhaps it's tied to pixel shader 3 (DX9) or pixel shader 4 (DX10).

The GameCube was the most suitable hardware of its generation to do it, as it had the edge in texture passes per clock needed for the technique, and the aforementioned eDRAM helped massively.
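Since shell rendering keeps coming up: the core idea is just drawing the mesh several times, each pass extruded a little further out along the vertex normals, with an alpha-tested fur texture so only the "strands" survive in each shell. Here's a tiny pure-Python sketch of the per-vertex math; everything in it is illustrative, not any console API.

```python
# Sketch of the shell-rendering extrusion step used for real-time fur.
# Each shell is the same mesh pushed further along the vertex normals,
# drawn with decreasing opacity so strands thin out toward the tips.

def shell_positions(vertex, normal, fur_length, num_shells):
    """Yield (position, opacity) for each shell pass of one vertex."""
    for i in range(num_shells):
        t = i / (num_shells - 1) if num_shells > 1 else 0.0
        pos = tuple(v + n * fur_length * t for v, n in zip(vertex, normal))
        opacity = 1.0 - t          # outer shells are more transparent
        yield pos, opacity
```

A real implementation then renders each shell with an alpha-tested fur density texture; the stack of cut-out layers is what reads as fur from a distance.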

Shadow of the Colossus did things more impressive than shell-rendered fur, if you ask me. There's a good article on it that I'm sure you know, but it never hurts to hyperlink.

I remember OpenGL being something the GameCube got right, as it used a mostly clean fork of it, complete with hardware optimizations. The PS2 didn't, and I don't know if the Xbox allowed it, since Microsoft wanted to enforce DirectX instead. OpenGL sure proved to be the future, not DirectX.
Really, it needed more main memory to at least match the PS2's amount, and one extra meg of eDRAM for the framebuffer would have meant no dithering in certain games, and possibly MSAA or even supersampling.

48 MB of 1T-SRAM and 4 MB of eDRAM, along with clock-speed improvements, would have gone a long way.

But this works for the Xbox as well: just give it 4 MB of eDRAM and it would have been quite the monster!
This. I was flabbergasted when they didn't bother to make those quality-of-life improvements for the Wii.

It made the hardware even longer in the tooth.
 