
Graphics Tech | OT

VFXVeteran

Banned
Welcome to the Graphics Tech thread!

Let's begin by expressing what this thread is NOT:

1) A screenshot war thread. If you want to post screenshots of a game to show off some graphical prowess, it must be accompanied by factual data and information about the screenshot. Don't just make a blanket statement about how awesome a game looks and then post up a screenshot as some sort of proof.

2) Subjective comments about a game. If your views or opinions are subjective, then either say so and move on, or continue the artistic comparison in some other thread.

3) Console/PC war thread. While it is OK to demonstrate a company implementing an incredible tech feature, this thread is not meant to showcase how superior one platform's hardware is. Talk about the game itself and (for PC owners) whatever options are enabled to make said game shine.

4) Gameplay comparison thread. This thread only discusses the graphics aspects of a game. Animation is welcome, FX are welcome, etc., but not arguments about which game is better overall. Compare games in their appropriate threads.


What this thread can do for you:

1) Learn about all the ways developers use graphical techniques and compare those techniques with other games.

2) Become familiar with graphical terms and learn their meaning.

3) Share knowledge on different algorithms used throughout the industry (games, movies, or anything else).

4) Discuss comparison videos such as Digital Foundry's graphics analysis.

5) Discuss hardware across all platforms (without console warring).

6) Share the love of seeing your favorite game's graphics shine when compared to other games.


Here are some recent vids that have been floating around the net (11/30/2019):

Nvidia boosts RTX Quake

Halo Reach Enhancements on PC


Common Graphics Terms:

Anisotropic Filtering

Anisotropic_filtering_en.png


Anti-Aliasing

6a0120a85dcdae970b015437fee512970c-800wi.jpg


PBR - Physically Based Rendering/Shading

reflcompare11.jpg


RTX - Ray Tracing

nvidia-rtx-ray-traced-shadows.png

nvidia-rtx-ray-traced-reflections.png

nvidia-rtx-ray-traced-ambient-occlusion.png


SSS - Subsurface Scattering
4958d2e2745a2c0133aed351bbd80fa07a3a6047.jpg


AO - Ambient Occlusion

ambocc_example_town.jpg



WELCOME!
 

stranno

Member
I'm a big fan of the rain drops on screen effect in racing games.

Some examples along the history:

Rad Rally (Sega AM2, 1991). Probably the earliest example of this effect being properly done.

bGRhIcg.gif


Speedboat Attack (Criterion Games, 1997). Yep, that Criterion Games. The drops are always the same animation layer, but it still looks cool, and it only appears in the spray trail of the rival boats.

JyDZ7Jj.gif


Wave Race: Blue Storm (NST, 2001). Subtle yet cool effect.

5IAU9cN.gif


Need for Speed II: Special Edition (EA Canada, 1997). I'd say the first hardware-accelerated, randomized drop effect ever. The drops are only displayed in the Special Edition, and only when using the Glide (3dfx) API. It also features snow pellets on screen.

Z4Cot9U.gif


Colin McRae Rally 3 (Codemasters, 2002). The first game in the series to feature such an effect, and it was really cool.

LHphH5T.gif


Mobil 1 Rally Championship (Magnetic Fields, 2000). Another old game with a good-looking effect, with drops varying in direction/speed based on the car's handling.

7u7kw0p.gif


The Touryst (Shin'en, 2019). Another great effect, hard to tell in this GIF. It would be 10x better with voxelized drops tho.

198ljpF.gif
 

Shifty

Member
Excellent thread idea OP, we need more technical discussions around these parts :messenger_sun:

I'll contribute something old-school. I've been writing a Quake .map file importer for the Godot engine this weekend, and had to reverse-engineer the texture UV format due to missing documentation.

It turns out that Quake 1 (and Quake 2, Quake 3, Hexen 2, Daikatana, and certain Valve games) use a form of 'triplanar' texturing to align surface graphics relative to the world instead of relative to the polygon itself:


quake-texturing.png


The planes along each side represent three 'world-space' sets of texture coordinates that repeat infinitely in each axis.
They aren't textures in and of themselves, just coordinates that tell the renderer which sub-region of the texture to draw for a given face, but I've represented them with textured geometry here for illustrative purposes.

When an object like the polyhedron in the center needs to be textured, it checks each face's rotation to see which of the three axes it's closest to, then uses that axis' texture coordinates as its point of reference.

If you look at the top, front and right faces of the polyhedron, you'll see they match up exactly with the numbered textures on the floor, back wall and left wall respectively. Neat, eh? The same goes for the diagonals, though the engine has to arbitrarily pick between two axes there because 45 degree diagonals lie directly between two world axes.
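In code, the axis-picking step boils down to something like this - a rough, self-contained C++ sketch with my own naming, not Quake's actual source:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Pick the world axis the face normal is closest to, then project the vertex
// onto the plane perpendicular to that axis to get world-space UVs.
Vec2 worldSpaceUV(const Vec3& vertex, const Vec3& normal, float texelsPerUnit)
{
    float ax = std::fabs(normal.x);
    float ay = std::fabs(normal.y);
    float az = std::fabs(normal.z);

    Vec2 uv;
    if (ax >= ay && ax >= az)        // mostly X-facing wall: project onto YZ
        uv = { vertex.y, vertex.z };
    else if (ay >= ax && ay >= az)   // mostly Y-facing wall: project onto XZ
        uv = { vertex.x, vertex.z };
    else                             // floor/ceiling: project onto XY
        uv = { vertex.x, vertex.y };

    // World units -> texture space; the mapper's offset/rotation/scale
    // adjustments would be applied on top of this.
    uv.u *= texelsPerUnit;
    uv.v *= texelsPerUnit;
    return uv;
}
```

Note the `>=` comparisons - they're what makes the engine "arbitrarily pick" one of the two candidate axes on a perfect 45-degree diagonal.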

Naturally this isn't the full extent of texturing in the Quake format - the mapper can apply X/Y offsets, scaling and rotation to the textures to shift them around and make sure everything lines up and looks nice, in case a brush doesn't align exactly with the world texture.

This technique is done in modern engines too, but often as a quick-fix override that costs extra GPU time in order to work around poorly-mapped geometry. It's a shame really- modern engines tend to assume you'll have an expert artist working with you, so they include very little in terms of geometry and texturing tools.

Hence why I built my importer - better to bootstrap old tools that work well than it is to sink untold hours into learning Blender or Maya :messenger_sunglasses:
 

VFXVeteran

Banned
Excellent thread idea OP, we need more technical discussions around these parts :messenger_sun:

I'll contribute something old-school. I've been writing a Quake .map file importer for the Godot engine this weekend [...] (Shifty's full post, quoted above)

I love triplanar, man! I wrote a shader just like that for the Arnold path-tracer. Mine has a few more bells and whistles that you might like. :) If you want me to share the code, just PM me. Our artists wanted a lot more functionality: switching how the texture planes are projected (world, object, or camera space), individual transform controls for each plane (translate, rotate and scale), and blending the planes together across the seams - something like the sketch below.
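The seam blending is the fun part. Stripped of all the Arnold plumbing, the idea is just to sample all three projections and weight them by the normal - a toy, self-contained sketch, not our production code:

```cpp
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Stand-in texture: a simple procedural checker so the sketch runs on its own.
Color sampleTexture(float u, float v)
{
    bool on = (static_cast<int>(std::floor(u)) + static_cast<int>(std::floor(v))) & 1;
    return on ? Color{1.0f, 1.0f, 1.0f} : Color{0.2f, 0.2f, 0.2f};
}

// Sample the X/Y/Z projections and blend by how much the surface normal
// faces each axis. Higher sharpness -> narrower blend region at the seams.
Color triplanarBlend(const Vec3& p, const Vec3& n, float sharpness)
{
    float wx = std::pow(std::fabs(n.x), sharpness);
    float wy = std::pow(std::fabs(n.y), sharpness);
    float wz = std::pow(std::fabs(n.z), sharpness);
    float sum = wx + wy + wz;                 // normalize so the weights sum to 1
    wx /= sum; wy /= sum; wz /= sum;

    Color cx = sampleTexture(p.y, p.z);       // X-axis projection
    Color cy = sampleTexture(p.x, p.z);       // Y-axis projection
    Color cz = sampleTexture(p.x, p.y);       // Z-axis projection

    return { cx.r * wx + cy.r * wy + cz.r * wz,
             cx.g * wx + cy.g * wy + cz.g * wz,
             cx.b * wx + cy.b * wy + cz.b * wz };
}
```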

Good work!
 

VFXVeteran

Banned
Update: 12/6/2019 - The Resolution Myth

I see a lot of comments from gamers that suggest they don't understand how important a game's resolution is. A lot of this talk shows up during speculation about the next-gen consoles and how they will receive more than just a resolution bump. I want to address the myth that going from 1080p to 4K is as simple as flipping a switch, with little to no repercussions and only a moderate cost in GPU time.

Before I can talk about that, we must dig down into how these graphics chips render pixels to the screen. We'll find out that it's a very slick workflow, but one with all kinds of limitations. Resolution is the grand-daddy of them all when talking about getting a really good image on the screen. This can be said for both games and film. It has the most impact on the entire graphics pipeline - even more so on graphics cards whose workflows, before ray-tracing, were completely dependent on a 2D grid of pixels to compute almost everything.

Imagine you have a director who tells you, "I want a very clear and sharp triangle shaded with an emerald green gradient!!" I want NO exceptions, he says. You go to your desk and draw out a picture of what the director wants and what your hardware is capable of. You come up with this picture:
rasterization-triangle1.png


After a few minutes, you walk over to your boss and show him the picture with the black lines removed. He stares at the picture intently and then looks at you and laughs. "Nice try son, now get back to your desk and give me a REAL looking green triangle!!!"

Let's look at what we are trying to do here:
raytracing-raster.png


The problem is, you can't create an infinite number of boxes to fill that triangle so that it matches the triangle behind the grid of pixels, but this is EXACTLY what he wants. He might, however, be happy with something that's CLOSE.

In today's modern GPUs, this form of filling a triangle is called rasterization. Don't worry too much about the details of how it's done, but understand that all the pretty lighting, textures, fog effects, characters, trees, sky, water, everything is limited to how many of those pixels you have to use to represent what the director wants. The graphics pipeline uses several of these grids of pixels to come up with the fancy effects you see on the screen. It's not just one of these grids that needs to be done.
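If you're curious what "filling the triangle with boxes" looks like in code, here's a toy half-space rasterizer - my own naming, no shading at all, and nothing like the parallel traversal a real GPU does:

```cpp
#include <cstdint>
#include <vector>

struct Pt { float x, y; };

// Signed area test: which side of edge (a -> b) the point p lies on.
static float edge(const Pt& a, const Pt& b, const Pt& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Mark every pixel whose center falls inside the triangle v0-v1-v2
// (counter-clockwise winding assumed).
void rasterize(const Pt& v0, const Pt& v1, const Pt& v2,
               int width, int height, std::vector<uint8_t>& coverage)
{
    coverage.assign(width * height, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Pt p { x + 0.5f, y + 0.5f };   // sample at the pixel center
            if (edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0 &&
                edge(v0, v1, p) >= 0)
                coverage[y * width + x] = 1;   // this box is "in"
        }
    }
}
```

The jagged edges you get from that coverage grid are exactly what the director is complaining about.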

When a pixel color is calculated, it undergoes a series of steps that use functions to approximate the director's vision. These functions are called "shaders", and they are the bread and butter of mathematically approximating the real world. The majority of these shaders' time is spent calculating a pixel's color on one or more of these grids. That is one of the main limitations behind some of the FX you see in games today, like "screen-space" ambient occlusion. It's called "screen-space" because it's trying to calculate a contact shadow on a 2D grid instead of in a 3D world. Nice trick, but not very accurate. If you increase the number of pixels that make up your grid (i.e. your rendering resolution), suddenly the contact shadows don't look so bad. Look at these two screenshots of COD: Modern Warfare:
kk0RAJJ.png


1080p version

fIfjHNS.jpg


4k version

These two images had every single graphics option set to MAX settings on an RTX 2080 Ti. Notice the following things:

a) The reflections on the ground are nowhere near as correct as in the 4K image. This is that "screen-space" shading I was speaking about.
b) The bump detail on his gloves is blurred compared to the 4K image. NOTE: the texture size in the 1080p image is the exact SAME as in the 4K image, and yet it loses detail.
c) On the far left there is a reflective pool of water that's almost completely missing in the 1080p image.
d) The ground bump detail is a lot blurrier in the 1080p image.

How about Control with full Ray-tracing on? How would resolution affect an image taken from that game?

RxPQmyD.png


1080p version -- notice the reflective lighting on the floor.

s7eq7Jr.jpg


4k version.

Again, notice a LOT of detail lost, even with ray-tracing, which actually shades triangles in 3D world space like the real world. If the next-gen consoles had the power to render ALL the ray-tracing features that came with Control, but had to sacrifice resolution to get it running, you'd be looking at the 1080p rendered image as a good approximation of how it would look. In motion it would seem unbearable, as the noise would be very distracting.

Let's look at one more example - photogrammetry. This is very popular at DICE for their Frostbite engine. It's the de facto standard for getting really impressive textures, built by capturing real-world surfaces with photography and reconstructing them as texture data:

cRBs7MN.png


1080p -- notice the smoothed-out detail on the tree and the foliage next to it. Since the leaves use a lower texture resolution, the image reads as lower res than it actually is.

G9qPXCJ.jpg


4k image holds up much better. You can see the beautiful detail in the bumped texture on the tree and the foliage looks pretty good and not distracting.

So, how much does going from 1080p to 4K cost? Well, it's 4x the number of pixels on that grid we mentioned. Imagine that not only do you have to render the final image for your boss, but all of your functions, your bad-ass shaders, have to be called 4x as many times as well - and suddenly your render budget is in trouble.
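The arithmetic behind that "4x" fits in a toy program (obviously real render budgets are far messier than this):

```cpp
#include <cstdio>

int main()
{
    const long long p1080 = 1920LL * 1080;  // 2,073,600 pixels
    const long long p4k   = 3840LL * 2160;  // 8,294,400 pixels
    std::printf("4K / 1080p pixel ratio: %.1fx\n",
                static_cast<double>(p4k) / static_cast<double>(p1080));  // 4.0x

    // If a frame runs, say, 5 full-screen shading passes, every one of them
    // scales with pixel count too:
    std::printf("invocations/frame at 1080p: %lld\n", 5 * p1080);  // 10,368,000
    std::printf("invocations/frame at 4K:    %lld\n", 5 * p4k);    // 41,472,000
    return 0;
}
```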

Moral of the story - resolution is KEY to making good-looking games. It not only affects the overall image but also all the computations the shaders use to do their work and give you that beautiful detail, those reflections, those nice contact shadows, very nice fog, etc., etc.

Increasing resolution to 4K makes a game switch from CPU-bound to GPU-bound 99% of the time. If the GPU running at 4K falls out of budget, cuts will have to happen. But that inevitably takes away from the beautiful vision that the director wants.
 

psorcerer

Banned
The problem is, you can't create an infinite number of boxes to fill that triangle so that it matches the triangle behind the grid of pixels, but this is EXACTLY what he wants. He might, however, be happy with something that's CLOSE.

I'm not sure why you're mixing aliasing and resolution. Yes, the simplest and most naive way of combating aliasing is to increase resolution. But it's not the only way.
And surely there is no direct relationship between the two.
You're right that aliasing is The Problem in modern RT 3D graphics, and essentially all the tech is in fact a bunch of special-case solutions for various aliasing problems.
But using 4K instead of 1080p is the most brute-force way of solving it.
 

VFXVeteran

Banned
I'm not sure why you're mixing aliasing and resolution. Yes, the simplest and most naive way of combating aliasing is to increase resolution. But it's not the only way.
And surely there is no direct relationship between the two.
You're right that aliasing is The Problem in modern RT 3D graphics, and essentially all the tech is in fact a bunch of special-case solutions for various aliasing problems.
But using 4K instead of 1080p is the most brute-force way of solving it.

They aren't being mixed. Aliasing is the technical term for having too few samples to represent an analytical image. Increasing resolution (and therefore the number of samples) is the best way of getting rid of aliasing.

And as for your comment that "it's not the only way" - the other ways involve mucking around with the original sampled image. I'm not trying to get into the various ways of anti-aliasing a graphics image. We are talking about the very first capture - which all the various implementations need as an input. Starting off with a very high resolution capture of an analytical image will always be the best starting point - hence why I talked about resolution.
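For reference, the brute-force "more samples" route is plain supersampling: render at 2x the resolution in each axis, then box-filter down. A minimal single-channel sketch, assuming the high-res buffer has already been rendered:

```cpp
#include <cstdint>
#include <vector>

// Average each 2x2 block of the high-res image into one output pixel.
// 'hi' must be (2*loWidth) x (2*loHeight) in size.
std::vector<uint8_t> downsample2x2(const std::vector<uint8_t>& hi,
                                   int loWidth, int loHeight)
{
    std::vector<uint8_t> lo(loWidth * loHeight);
    const int hiWidth = loWidth * 2;
    for (int y = 0; y < loHeight; ++y) {
        for (int x = 0; x < loWidth; ++x) {
            int sum = hi[(2 * y)     * hiWidth + 2 * x]
                    + hi[(2 * y)     * hiWidth + 2 * x + 1]
                    + hi[(2 * y + 1) * hiWidth + 2 * x]
                    + hi[(2 * y + 1) * hiWidth + 2 * x + 1];
            lo[y * loWidth + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return lo;
}
```

Four shaded samples per displayed pixel - which is exactly why it's the most expensive option on the menu.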
 

Shifty

Member
Time for another old-school contribution, this time focused on overdraw.

Below is a GIF of The Slipgate Complete (E1M1 from Quake) being built from a .map file in real-time, visualized as overdraw via the Godot engine:

E1-M1-paralellized.gif


Overdraw can essentially be thought of as giving the GPU unnecessary work to do - it represents surfaces that are drawn on top of each other with respect to the camera's viewpoint: brighter areas of the image mean more overdraw.
For opaque geometry you only need to draw the frontmost surfaces the camera can see; the ones fully obscured by them are not visible, so drawing them is a waste of processing time.

In some cases, such as transparent objects, overdraw is inevitable. Such elements need to be drawn on top of one another by definition, making them a non-negotiable cost that must be invoked sparingly for performance reasons.
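If you want to generate a visualization like the GIF above yourself, the counting side is trivial - a sketch, assuming a rasterizer that can hand you each covered pixel:

```cpp
#include <cstdint>
#include <vector>

// Instead of writing color, every rasterized fragment just bumps a counter.
// Brighter pixels = more surfaces drawn on top of each other.
struct OverdrawBuffer {
    int width, height;
    std::vector<uint16_t> hits;

    OverdrawBuffer(int w, int h) : width(w), height(h), hits(w * h, 0) {}

    // Call once per fragment, for every surface, with depth testing disabled.
    void countFragment(int x, int y) { ++hits[y * width + x]; }

    // Map the counter to a grayscale intensity; 8+ overlaps saturate to white.
    uint8_t intensity(int x, int y) const {
        unsigned v = hits[y * width + x] * 32u;
        return static_cast<uint8_t>(v > 255u ? 255u : v);
    }
};
```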

The E1M1 example above is pretty bad when visualized sans optimization, since you can still see all of the rooms' internal geometry and surfaces from the outside, but it can still be brute-forced on a sufficiently modern GPU.
If things get really out of hand, however, you end up with something like this:

ELJGB1gWwAAmgZ1


The Forgotten Sepulcher from Arcane Dimensions, a.k.a. 'The Ultimate Quake Map'. (Which took 10 minutes and 9GB of RAM to build with the current version of my tool :messenger_sad_relieved:)

It's unbelievably dense, clocking in at ~10000 entities and ~50000 brushes, and leans heavily on the QBSP map compiler's ability to merge geometry and strip out unneeded faces.
Visualizing it as unoptimized geometry results in the dense patches of white you see above, which are an enormous strain on the GPU. It runs at about 6 FPS on a GTX 1080.

Thus, it's very important to have a solution to manage overdraw. In addition to the mesh merging done by the compiler, Quake's renderer operates on a Binary Space Partition (or 'BSP') system that determines which sections of a map are visible from one another, allowing it to selectively omit geometry that won't be visible in the final rendered image. Such optimizations are referred to as 'culling' techniques.
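The runtime half of that BSP/PVS system is tiny, by the way - all the hard work happens offline in the map compiler. A sketch of the lookup with my own naming (the real Quake data is additionally run-length compressed):

```cpp
#include <cstdint>
#include <vector>

// Potentially Visible Set: for every BSP leaf, one bit per other leaf saying
// whether it can possibly be seen from there.
struct PVS {
    int leafCount = 0;
    std::vector<uint8_t> bits;   // leafCount rows of rowBytes() bytes each

    int rowBytes() const { return (leafCount + 7) / 8; }

    bool visible(int fromLeaf, int toLeaf) const {
        const uint8_t* row = &bits[fromLeaf * rowBytes()];
        return (row[toLeaf >> 3] >> (toLeaf & 7)) & 1;
    }
};

// Per frame: find the camera's leaf, then draw only the leaves it can see.
// void drawVisible(const PVS& pvs, int cameraLeaf) {
//     for (int leaf = 0; leaf < pvs.leafCount; ++leaf)
//         if (pvs.visible(cameraLeaf, leaf))
//             drawLeaf(leaf);   // hypothetical draw call
// }
```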

Unfortunately, Godot has no such obscurance-based culling technique built in, so I'm going to have to do some work if I want crazy maps like The Forgotten Sepulcher to operate playably.
I might cover some culling techniques in a future post depending on how I get on, as there are quite a few available that work in a variety of different ways.
 

VFXVeteran

Banned
Shifty - can't see your last image.

Can you talk about how overdraw affects today's FPS games? Are they using the same techniques in corridors or a totally different technique altogether? Is it a pre-pass during scene load or something done continuously as the character traverses the scene?

Thanks
 

VFXVeteran

Banned
At last, we know the official names of both Microsoft's and Sony's next-generation consoles. Sony will stick with PlayStation 5 as expected, while Microsoft decided to go with Xbox Series X, which is reminiscent of the powerful Xbox One X mid-generation refresh.

More importantly, though, we can finally discuss the first distinguishing features of each console after so many similarities (AMD's Zen2 7nm CPU architecture and Navi RDNA GPU architecture, hardware support for raytracing, NVMe SSD, etc.).
Enhanced Haptics on PS5
Sony is boasting that its PS5 controller's enhanced haptics will make for a significantly improved feeling during gameplay. SIE CEO Jim Ryan was recently quoted on the topic:


Indeed, Wired's Peter Rubin, so far the only journalist to have tried and reported on this new feature, also had a positive experience with it.


Even the developers at Counterplay Games, who just unveiled the PlayStation 5 and PC game Godfall at TGA 2019, praised the enhanced haptics of the PS5 controller.


Meanwhile, Microsoft's Head of Gaming Phil Spencer only discussed minor changes to the Xbox Series X controller such as a hybrid d-pad and a new share button, while apparently the haptics will remain the same as on the Xbox One.


It seems unlikely Microsoft would be able to address this shortcoming before launch, which means it may well be one area where Sony has the advantage in the upcoming generation. Then again, if the enhanced haptics are only featured on PS5, we may only truly see Sony's first-party developers harnessing it properly as third-party studios often don't have the incentive to support system exclusive features in their multiplatform games.
Patented VRS on Xbox Series X
There is, however, also an area where Microsoft might have the lead with the Xbox Series X. Deep within the blog post that went up right after the console's announcement at TGA 2019, Phil Spencer dropped this tidbit.


Sure enough, after some digging, we found several patents submitted by Microsoft. The last one, titled 'Variable Rate Deferred Passes in Graphics Rendering', lists Ivan Nevraev, a Senior Development Engineer at Xbox Advanced Technology Group, as the inventor alongside Martin J. I. Fuller.

As you can easily imagine, the paper gets very technical. However, in the following excerpt, the inventors described other performance saving solutions such as checkerboard rendering (often used in PlayStation 4 Pro games) or dynamic resolution as naive in comparison.


We've often covered Variable Rate Shading technology on Wccftech, but let's do a quick recap. This technology aims to tackle the issue of rendering the enormous number of pixels (over 8 million) required for UltraHD displays without reducing image quality in the most critical areas of the picture. Given that the Xbox Series X and PS5 consoles have both promised support for 8K resolution (over 33 million pixels) displays, VRS is poised to become even more important in the years ahead.
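To make the idea concrete, the per-tile decision amounts to something like the following - a conceptual sketch with invented thresholds, not the actual Direct3D 12 API, where shading rates are supplied per screen tile:

```cpp
// Conceptual VRS tile logic: flat, low-contrast regions (sky, smooth walls)
// get one shader invocation per 2x2 or 4x4 pixel block, while detailed
// regions keep full per-pixel shading. Thresholds are made up for illustration.
enum class ShadingRate { Rate1x1, Rate2x2, Rate4x4 };

ShadingRate pickRate(float tileContrast)   // e.g. luminance variance in the tile
{
    if (tileContrast < 0.02f) return ShadingRate::Rate4x4;  // ~1/16 the shading cost
    if (tileContrast < 0.10f) return ShadingRate::Rate2x2;  // ~1/4 the shading cost
    return ShadingRate::Rate1x1;                            // full per-pixel cost
}
```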

Earlier this year, Microsoft launched the VRS API on PC. While there are only a couple of games (the last two installments in the Wolfenstein franchise, to be precise) currently supporting VRS, several developers have already pledged their support, including first-party Xbox studios like Playground Games, Turn 10 and 343 Industries.

Additionally, the benchmark tools released recently for the 3DMark suite showcase great promise with up to 50-60% performance gains with variable rate shading enabled. So far, on PC only NVIDIA's Turing graphics cards support both Tiers of VRS as they are defined by Microsoft's API, while the Intel Ice Lake CPUs support Tier 1 alone.

Current AMD graphics cards do not support variable rate shading at all, though this is expected to change next year. AMD did file a patent of its own, after all, though it does seem to be limited to Tier 1 VRS features.

Sony has not officially said anything yet about VRS being supported on the upcoming PS5 console. On a hardware level, baseline support is likely to be there given that PS5 is believed to be based on the same AMD Navi/RDNA GPU architecture as the Xbox Series X. That said, even though both Microsoft and Sony are using AMD's latest CPU and GPU architectures, we know they will also make some customizations of their own that won't be found on the other console as they've done in the past.

Microsoft could have worked alongside AMD to further optimize VRS for the Xbox Series X on both hardware and software levels according to their patent and API design, with the latter already publicly available to game developers for almost a year now. Hence Microsoft boasting the patented VRS technology as a key point for the performance.

There is a silver lining here for PC gamers as well. With the Xbox Series X and possibly PS5 supporting this feature, we should see most game developers actually using VRS and thus providing a performance boost, something that's always welcome for any PC gamer.

Posted by sonomamashine
 

VFXVeteran

Banned
Some people on these forums are guessing at how scenes and graphics tech get made at gaming studios. While film/game targets different audiences, they have a LOT of similarities. I'd like to dig a little deeper into the content creation process and get some people to think about some of the challenges of making the best iteration of what the director wants from either games or film.

I opened up some of the "Extras" section in the PC version of Detroit: Become Human, because the game is so spectacularly beautiful and I wanted to see what people think about how some of this art could transition to a closed box with very limited resources to work with. It gives a good idea of how difficult it is to develop on a hardware platform where the goal is to closely resemble what the original artist envisions. It also places an emphasis on the need for more bandwidth rather than special sauce.

First off, the director and artists rule all. Period. They make their vision and it's up to you to match it.

Let's look at some of these beautiful renditions:

Detroit%20%20Become%20Human%20Screenshot%202019.12.17%20-%2020.34.16.98.png


Anyone see anything in this image that would be a problem for ANY hardware (PC included)? I see a lot. I want to know what you guys spot.

How about this one?

Detroit%20%20Become%20Human%20Screenshot%202019.12.17%20-%2020.34.39.84.png


Pay attention to the slide on the far right. Director says, "I want that hair. Period." How about the rain drops? Or the blur in the background with the rain?

How about this complex scene? How could you render that entire shot in realtime? Notice the highly detailed carpet. What about the sheets and how they scatter light?

Detroit%20%20Become%20Human%20Screenshot%202019.12.17%20-%2020.34.03.98.png
 

VFXVeteran

Banned
PC architecture is really bad for games, it can compete with a console only by brute-forcing things and developers not doing their job in optimizing the game.

My question though is: how much optimizing can you get away with on a console that uses the same (CPU) architecture as a general PC? And I think graphics techniques are pretty fine right now. There is still a lot to iterate on (hair is still a huge problem), and you hit the nail right on the head that bandwidth-oriented games will be the delta change for the future. This is one of the main reasons I've invested in Nvidia. They see that vision and have talked to many in the film industry about their need for advanced hardware techniques like RT. The gaming community isn't going to get anywhere near film quality if the pipeline is still small. Film uses huge assets and large datasets. We couldn't even fit ONE asset into a graphics card's memory. The graphics subsystem requirements will choke out any optimizations in the rasterizing pipeline, so it's a moot point trying to throw money there.

Games development needs to steer the optimizing techniques to gameplay aspects.
 

-kb-

Member
My question though is: how much optimizing can you get away with on a console that uses the same (CPU) architecture as a general PC?

Plenty - there are architectural and SoC-based optimisations you can do to increase performance by targeting a specific microarchitecture for both the GPU and CPU. Additionally, the latency of sharing information between the CPU and GPU should be reduced, as you don't have to traverse the PCIe bus.
 

VFXVeteran

Banned
Plenty - there are architectural and SoC-based optimisations you can do to increase performance by targeting a specific microarchitecture for both the GPU and CPU. Additionally, the latency of sharing information between the CPU and GPU should be reduced, as you don't have to traverse the PCIe bus.

Plenty doesn't tell me much. How are you going to get around the bandwidth problem?
 

-kb-

Member
Plenty doesn't tell me much. How are you going to get around the bandwidth problem?

There's no getting around the bandwidth problem. But if you think about it logically, there are things that should be doable on consoles that are straight up not doable on PCs due to the different architecture (mainly the shared memory and coherence you can achieve).
 

Lort

Banned
We are about to get a whole lot of game renders and a whole lot of debate about them, so let’s focus it on one thread so other threads have some room to talk about other issues.

Let's keep the pixel peeping and art-vs-tech debate here...

Let the games begin!
 

Lort

Banned
Are you sure this is the ray traced one? I don't see cars reflecting each other so I think it's the 8K 120FPS one but AFAIK it doesn't have any new graphical features. The ray tracing one was in more of a demo environment than ingame:
gran-turismo-ray-tracing-demo-mclaren-p1-860x482.jpg

gran-turismo-ray-tracing-jaguar-etype.jpg
You're right, it's from the 8K one... what are those images you just posted from?
 

Lort

Banned
So videogames will finally look better than real life, about time.
It sounds like you're burning the people in the photo due to their attractiveness rating lol

but yes they should def add the colored lighting in the real room to match!
 