Ok listen closely.
First, let me say that as a fellow game/graphics industry veteran I respect your position on several points, VFXVeteran. I have even come across your name in my work a few times, so I can appreciate someone who is as passionate about graphics tech as I am. I obviously do not know you personally, so I would never say anything to personally attack you or your character.
With that said, many of your posts around this forum come across as highly defensive, sometimes combative, and often misconstruing the point being made. I think this is clearly one of those cases: you are missing the point and making the argument about something it is not.
So let's be clear: I don't think the argument is that graphics technology itself somehow changes or evolves within a console generation. To the point you keep making about Killzone and Horizon (which isn't the best example, and I'll explain why), I don't think anybody with any knowledge of the situation would argue that graphics techniques and algorithms are not reused throughout a generation (or even across generations). There is still tech we saw in the PS3 generation that was never replicated in the PS4 generation, for example (glass rendering in Resistance: Fall of Man and mud deformation in Motorstorm come to mind). Yes, ray tracing is a nearly 50-year-old technology and is not novel today just because we are starting to see it used to some degree in realtime games. Similarly, it's common knowledge and common sense that the actual hardware remains fixed throughout a console generation. So again, when we talk about the evolution of graphics within a generation, it is not due to magically changing hardware or the invention of fundamentally new graphics techniques that have not been used before on other platforms. There is no argument there, and if that is your primary point, then you can close this thread now.
What we are really talking about with the evolution of graphics within a console generation is a problem of humans and software. It is strictly a function of the human developers' ability to write efficient software that actually utilizes the hardware effectively, and THAT is something that evolves over time, particularly with consoles, which are typically more bespoke and unknown entities when they first launch. Hardware is just plastic, and a novel algorithm you may read about at SIGGRAPH is just words on paper until a human being can actually make sense of it and apply it to that hardware with quality software. One's ability to do that for a given piece of hardware will vary, and there are still other factors involved, such as the quality of the tools (for debugging and optimizing code) and the SDK/drivers that ultimately provide the instructions to the hardware. The point that folks are making when saying things like "wait until developers learn the hardware" is that at launch, the particular tools and technology they may have (i.e. their engine optimized for the previous generation) are not mature and will not take advantage of the true advances of the new hardware right away. Again, this isn't about them not being aware of state-of-the-art graphics techniques (which haven't changed) but purely about their ability to apply that tech on the new hardware in an optimal way. This point isn't up for debate or argument, as it is fact and common sense for anyone that has ever developed a game across multiple generations.
Now let me make a distinction here on why this point is more relevant to console gamers. With PC games, the fundamental principles and framework have not changed since the advent of 3D games roughly 30 years ago. Windows is still the OS, DirectX is still the graphics API, keyboard/mouse is still the primary input mechanism, discrete CPU/GPU is still the core architecture, etc. Thus, one can argue that PCs don't really have the concept of a "generation" in the same way consoles do, even though we see new hardware (mainly GPUs) with new features introduced throughout. However, since the PC is segmented and not standardized like a console platform, many of those technologies go underutilized and their impact is limited (think GameWorks, PhysX, and the myriad of other PC technologies that have seen minimal penetration).
With a console you traditionally have bespoke hardware that is novel and divergent from what you would see on a PC. What this means is that the simple act of getting a game to build and run on the new platform can be a painful process taking many months (before even trying to leverage the newer features and tech). Remember, Mark Cerny called this "time to triangle". Especially on the Nintendo, Sega, and Sony platforms of the past, a Western developer might take their previous game and engine, port it to the new platform, and just run into a ton of errors where the game doesn't run. Then there are language barriers to overcome where much of the documentation is not understandable, SDKs and drivers that are extremely buggy and unoptimized, time zone differences for getting support, etc. I point this out to say that there are typically a ton of barriers that limit the developer's ability to utilize the new hardware at launch. Yes, the hardware is there with its "theoretical" power, and the software techniques are known, but they cannot be effectively applied at launch. So we get the "launch games" that often look rough with poor performance and are not indicative of the capabilities of the system.
Furthermore, the bespoke hardware in consoles often adopted fundamentally different architectures than PCs in order to run a game effectively. As you have pointed out, games are developed on PCs before being ported over to the platform of choice. But that doesn't mean the code being developed is written to run best on a PC. In fact, coding for a PS2 back in the day often required writing code that would never run on a standard PC, only on a PS2 dev kit or virtual machine. The PS2 used a CPU-based rendering solution with two co-processors that needed to be used in harmony in order to extract even remotely respectable results. It had a non-traditional "GPU" and a ton of memory bandwidth that far exceeded even PCs of the time. On this architecture, taking a game designed for the traditional DX pipeline, with a CPU, dedicated GPU, and RAM as found on PCs of the day, would produce horrible results. To get quality results, you HAD to learn that bespoke hardware and write low-level assembly code to program the co-processors to do tasks that might have been done on a discrete GPU on a PC, for example. Similarly, the PS3 was even worse: most of the launch games just attempted to port over their PS2 or PC engines with, again, horrible results. The Cell was a beast with tons of processing power, but if you tried to use it like a PC CPU, you would be leaving the majority of its performance on the table. Plus, the RSX in isolation was not a powerful GPU even by the standards of the day. We saw the horrible performance at launch in games like Genji, COD3, and Madden 07 (which only ran at 30fps even though the X360 version ran at 60fps). Same hardware, same developer, and same known techniques. But guess what, those techniques didn't work the same on that platform. It took EA two more iterations of Madden to eventually get it to 60fps on PS3... once they learned how to use that hardware to achieve the same results. Yeah, you have to actually create multiple jobs to distribute across the SPUs. You have to use the Cell in "unconventional" ways (relative to PC development) to supplement the RSX and boost perf. Once you do that, you start to see huge boosts in the performance you're able to get out of the system and can even go beyond other platforms. But that learning curve is what took time, and that learning curve is why there was such a gulf between launch games and end-of-gen games 7 years later.
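To make that last point a bit more concrete, here is a rough sketch of what "creating jobs for the SPUs" looks like as a pattern. To be clear, this is not real PS3 SDK code (a genuine SPU job manager involves DMA into each SPU's local store, and those APIs are under NDA anyway); it's a generic C++ illustration where ordinary worker threads stand in for SPUs, and the names (SkinJob, runSkinJob) are made up for the example. The point is the shape of the code: instead of one big serial loop, you chop work like skinning or culling into small, self-contained jobs that idle cores can pull from a shared list while the GPU does its thing.

```cpp
// Generic job-distribution sketch (NOT PS3 SDK code): chop a big workload
// into small jobs and let worker cores drain a shared job list.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct SkinJob {
    const float* inPositions;   // source vertex positions (x,y,z interleaved)
    float*       outPositions;  // skinned result for this slice
    size_t       count;         // number of vertices in this job's slice
};

// Per-job kernel. On a real Cell title this body would run on an SPU against
// data DMA'd into its 256 KB local store; here it's plain C++ on a thread.
static void runSkinJob(const SkinJob& job) {
    for (size_t i = 0; i < job.count * 3; ++i)
        job.outPositions[i] = job.inPositions[i] * 0.5f;  // stand-in for real skinning math
}

int main() {
    const size_t vertexCount = 100000;
    const size_t jobSize     = 4096;  // small slices keep every core busy
    std::vector<float> in(vertexCount * 3, 1.0f), out(vertexCount * 3, 0.0f);

    // Build the job list: one job per slice of the vertex buffer.
    std::vector<SkinJob> jobs;
    for (size_t v = 0; v < vertexCount; v += jobSize) {
        const size_t n = std::min(jobSize, vertexCount - v);
        jobs.push_back({in.data() + v * 3, out.data() + v * 3, n});
    }

    // Workers (pretend these are SPUs) pull jobs off a shared counter until drained.
    std::atomic<size_t> next{0};
    auto worker = [&] {
        for (size_t j = next.fetch_add(1); j < jobs.size(); j = next.fetch_add(1))
            runSkinJob(jobs[j]);
    };

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(worker);
    for (auto& t : workers) t.join();

    std::printf("skinned %zu vertices across %zu jobs\n", vertexCount, jobs.size());
    return 0;
}
```

The payoff is exactly what I described above: once the engine is structured around small jobs instead of one big loop on the main core, the extra cores stop sitting idle and start absorbing work that a straight PC port would have left serialized.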
If you follow what I'm saying so far, then it may be clear why I said that Killzone and Horizon were not the best examples. The learning curve is a direct function of how different the hardware is from the conventional platform (i.e. the PC) and how easy it is for developers to get up to speed on the new hardware. Starting with the PS4, Sony made huge strides in making that initial learning curve much shorter and more manageable. Combine that with the fact that the PS4 was the first PlayStation console to adopt "off the shelf" PC components, and devs were able to port over their PC SKUs relatively easily. This meant that those early launch games were able to utilize much more of the hardware right away, and many of the PS4 launch games still hold up to this day. Games like Killzone Shadow Fall and Infamous Second Son looked amazing at launch, and there really wasn't much more room for the developers to expand on a purely technical level. What we saw more of this gen was the talent of the humans behind the games being exercised in terms of their level, art, and gameplay design. To your point, that is the difference we really see from Killzone Shadow Fall to Horizon, or from Infamous to Ghost of Tsushima. Like you said, same technical features for the most part, but applied in different ways. This was intentional on Sony's part to remove the system as a barrier and allow developers to spend more time being creative. It worked!
Now, the reason why you hear more PlayStation fans make this claim about games looking so much better by the end of the generation is because, while this principle does apply to Xbox and Nintendo platforms as well, it is MOST noticeable on Sony platforms BY DESIGN. Particularly when Ken Kutaragi was lead system architect for PS1-PS3, he deliberately designed systems that would a) be unlike anything else on the market that plays games and b) have a steep learning curve for devs to harness their power. This gives the impression that the system is evolving over time (difficult to learn, impossible to master). Mark Cerny (being a developer himself and hearing the negative feedback from devs on PS3) worked to undo that model and adopt more of an "easy to learn, difficult to master" approach. It's in PlayStation's DNA to have some features and do some things that are unlike any other system, and (as Cerny has said many times) they want developers to be rewarded for digging deeper into the system over time (Cerny described this as balancing "Evolution vs Revolution"). In contrast, Microsoft literally created the Xbox to be a "DirectX machine for the living room", so anyone working on PC would be right at home on Xbox. There really wasn't much of a learning curve and not much to unlock over time, which is why OG Xbox games looked great pretty much out of the box. That is NOT to say that you don't see some evolution on Xbox as well, just not as stark as on Sony platforms.
Ok, with the detailed explanation out of the way, it must be stated that there are countless examples where you can compare an early launch game on a console to an end-of-life game and see a stark difference. In some cases, I would argue that it almost looks like two different platforms. Again, this is solely because developers' understanding of the hardware and the associated tools matured to a point that allowed them to achieve more performance in the same envelope.
PS1: Ridge Racer (1995) vs Ridge Racer Type 4 (1999)
PS2: Grand Theft Auto III (2001) vs Grand Theft Auto: San Andreas (2004)
PS3: Resistance: Fall of Man (2006) vs Resistance 3 (2011)
Xbox 360: Gears 1 (2006) vs Gears 3 (2011)
PS3: Uncharted (2007) vs Uncharted 3 (2011)
So there you have it. If you follow what I'm saying, look at these examples I've provided (which are just a small sample of this principle in action), and still don't believe that there is true evolution in the quality of game visuals through a console generation, then I don't think you ever will, no matter what anybody says here. Just look at the comparisons above and you can see how the later entries look almost a generation ahead. Everything is dramatically improved: textures, geometry, shaders, color, visual effects, etc. As some have said, this isn't really an argument at all, it's fact.
Again, I think the misconception you have is that the evolution is in the hardware or the techniques themselves, and it is not. The evolution is in the software that developers build on that hardware. It has historically been more pronounced with bespoke, difficult-to-learn hardware (mostly coming from Sony) and is thus actually becoming less noticeable today with these more PC-like consoles. PS4 is the outlier in that the difference is not nearly as pronounced as it was in the past: PS4 was almost entirely evolution, while PS2 and PS3 were almost entirely revolution in their approaches. However, PS5 strikes a great balance in the "Evolution vs Revolution" sense, where most devs can get their PC or PS4 engine up and running in a month or so, but the DualSense and the SSD+I/O block contain truly revolutionary features that will be difficult to replicate on a PC or Xbox. I expect we will see some true innovation in game design due to the capabilities of the hardware in the next few years.
Yes, I'm sorry for writing an essay and going IN on this (it's what I do), but I really wanted to put this argument to bed, because frankly it isn't an argument. The fact that games historically evolve throughout a generation is indisputable, and anyone that has developed games (particularly for consoles) would know this and understand why with little difficulty.