
Is this the end of CGI characters in gaming?

ResurrectedContrarian

Suffers with mild autism
It's not gonna replace programs like Maya, Nuke or Houdini. They will integrate some AI tools that will make tedious tasks a lot faster, but it's not going to create a fully functional asset from a simple prompt, which is what the OP seems to believe.
In time, it absolutely will.

I am indeed making the bold claim that sampling from a model that has learned visual representations on the level of Sora (and beyond, given the rate of improvement) will make the old pipeline of manually building models, textures, etc. completely obsolete for most use cases, and that the entire toolchain and the jobs built on today's CGI tech are on their way out.

Sampling from controlled generative models (e.g. see ControlNet in the case of Stable Diffusion, and extrapolate to video) will produce greater quality for a tiny fraction of the human hours. Today's work of manually setting up models, renderers, shaders, textures, etc. will no longer be viable or able to compete. Pixar-like films will be made by controlling and sampling from fine-tuned generative models; that will be their toolchain within a decade, with almost zero of the current rendering process/tech left standing.
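To make "controlled sampling" concrete, here's a rough sketch of what an edge-conditioned pass looks like today with the diffusers library and a Canny ControlNet; the model names and file paths are illustrative, not a recommendation:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Edge-conditioned ControlNet: the edge map pins down composition/geometry,
# while the text prompt drives materials, lighting, and style.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edges = load_image("shot_layout_edges.png")  # hypothetical pre-computed Canny edge map
frame = pipe(
    "weathered brass lamp on a wooden desk, cinematic lighting",
    image=edges,
    num_inference_steps=30,
).images[0]
frame.save("lamp_concept.png")
```

The composition stays pinned to the edge map across re-runs, so you can iterate on style without losing the layout. Extrapolate that kind of conditioning to video and that's the control I mean.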
 

Stooky

Member
Yes, but look at how new this tech is; the improvement in such a short time is nothing but incredible. In a couple of years it's going to be so photorealistic that humans can't tell the difference.
You have to be able to direct the art all the way down to the smallest details if you want to use it in production. One example is how AI handles cloth and hair movement; it all looks like rubber. As of now it's great for concepting broad ideas, but you can't use it in production because the results vary too much. In motion it looks like everything is moving in slow motion underwater. It doesn't hold up. It will get better, but it's going to take some time.
 

Stooky

Member
In time, it absolutely will.

I am indeed making the bold claim that sampling from a model that has learned visual representations on the level of Sora (and beyond, given the rate of improvement) will make the old pipeline of manually building models, textures, etc. completely obsolete for most use cases, and that the entire toolchain and the jobs built on today's CGI tech are on their way out.

Sampling from controlled generative models (e.g. see ControlNet in the case of Stable Diffusion, and extrapolate to video) will produce greater quality for a tiny fraction of the human hours. Today's work of manually setting up models, renderers, shaders, textures, etc. will no longer be viable or able to compete. Pixar-like films will be made by controlling and sampling from fine-tuned generative models; that will be their toolchain within a decade, with almost zero of the current rendering process/tech left standing.
It lacks accuracy. That's the problem with all the AI imagery: accurate folds in cloth, accurate movement, accurate lighting, etc. All of it looks like a dream.
 

Romulus

Member
Cooooome ooooon man, that T-Rex scene where it moseys on out of that fence perimeter. That had to blow you away? That is, of course, assuming you were alive and well and at an age where you knew what was going on.
Two years before that we had T2….
Both these movies' use of CGI blew my mind in the theater.
What came after Jurassic Park where CGI was heavily used?

Exactly. Those were amazing at the time, but I honestly don't see a massive difference compared to today, especially considering how long CGI has been around. Granted, it's noticeably better, but it should be much further along.
 

Stooky

Member
Yes, but look at how new this tech is; the improvement in such a short time is nothing but incredible. In a couple of years it's going to be so photorealistic that humans can't tell the difference.
I think it's incredible, but until I see a directed AI human acting believably... emoting... with no six fingers, three arms, feet sliding on the floor, or moving like it's underwater in slow motion, it's really cool but unusable in production.
 
I mean, it's not even out yet, and most people, if they were being honest, would think this is real. I think it handles scenes with heavy complexity well.



Wow. At first glance yeah that is good.

But right now what I wanna know is if it can fix Clu from Tron Legacy?
 

Romulus

Member
I think it's incredible, but until I see a directed AI human acting believably... emoting... with no six fingers, three arms, feet sliding on the floor, or moving like it's underwater in slow motion, it's really cool but unusable in production.


Well, a lot of AI videos don't have the extra digits. That's something AI still images struggled with five months ago, and video has already improved over that.

I think the big takeaway is how far it's come in a very, very short time. It's made incredible strides, and while some of your concerns come off as nitpicky considering how new this is, they're valid. The human eye can pick up on inconsistencies pretty well.

But imo some different aspects have already been conquered. We're at the refining point.

There will come a time very soon, likely so soon that bumping this thread won't be a big deal, when it will conquer those concerns.

Just an example 9 months ago lol

 

Holammer

Member
赫本16_9.mp4
Yeah... I am not clicking that
And yeah... it's Chinese, so it's fake.
Dare you click a tweet?
Second tweet shows Joaquin Phoenix's Joker delivering the 'wanna know how I got these scars' monologue using the tech.

 

Romulus

Member
Dare you click a tweet?



Second tweet shows Joaquin Phoenix's Joker delivering the 'wanna know how I got these scars' monologue using the tech.


The Joker vid looks like a totally different person from Joaquin when his mouth moves, but it's strangely believable when I set aside my biases about how I know the actor talks and acts. It's predicting a possibility based on that one picture, which is unique and interesting.
 

Stooky

Member
Well, a lot of AI videos don't have the extra digits. That's something AI still images struggled with five months ago, and video has already improved over that.

I think the big takeaway is how far it's come in a very, very short time. It's made incredible strides, and while some of your concerns come off as nitpicky considering how new this is, they're valid. The human eye can pick up on inconsistencies pretty well.

But imo some different aspects have already been conquered. We're at the refining point.

There will come a time very soon, likely so soon that bumping this thread won't be a big deal, when it will conquer those concerns.

Just an example 9 months ago lol


I agree. I review this stuff every day to see how we can use it in production. I've seen it used to edit and upres textures, do concept art, recolor items, etc., but as a replacement for CG it's a long way off. If an artist cannot get consistent, detailed results, it can't be used. CGI is used because you can tweak it down to minuscule details; you can art direct it and get consistent results. AI imagery at this point is kind of a black box. In art production that's no good.
 

Romulus

Member
I agree. I review this stuff every day to see how we can use it in production. I've seen it used to edit and upres textures, do concept art, recolor items, etc., but as a replacement for CG it's a long way off. If an artist cannot get consistent, detailed results, it can't be used. CGI is used because you can tweak it down to minuscule details; you can art direct it and get consistent results. AI imagery at this point is kind of a black box. In art production that's no good.

Oh okay. That's interesting. What are you reviewing? Because this version isn't in anyone's hands yet.
 

Stooky

Member
Oh okay. That's interesting. What are you reviewing? Because this version isn't in anyone's hands yet.
In production we review anything that can help us do our jobs faster. Using AI is great for the start of look development: you get a lot of iterative variations faster. I've seen it used for uprezing low-resolution images for texture maps and for creating tiled textures. You can't use it for voice audio unless you want SAG strikes and lawsuits, and that could extend to body performances. As of now, video AI lacks the detail and editing capabilities needed for production; the very nature of how the AI imagery is created kind of goes against that. There is no prompt I can type that will have it create Thanos without an artist creating that imagery first. That's the big problem with it.
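For the texture-upres case specifically, here's a rough sketch of what one of those tools looks like under the hood, using the publicly available diffusers 4x upscaler; the model name and file names are illustrative, not our actual pipeline:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Diffusion-based 4x upscale of a low-res texture map; the prompt nudges the
# hallucinated detail toward the material you actually want.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("brick_albedo_256.png").convert("RGB")  # hypothetical source texture
high_res = pipe(
    prompt="seamless brick wall albedo texture, photoscanned surface detail",
    image=low_res,
).images[0]
high_res.save("brick_albedo_1024.png")
```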
 

TheGrat1

Member
None of this looks as good as Thanos or recent CG in films. In that video I saw a monkey with three arms. I can't match a camera or lighting to it for a live-action plate in an FX shot. It's not realtime, so it's no good for games. It's too random to art direct in detail. Everything looks like it's moving underwater. It's a pretty good motion clip-art generator. It's cool for a 10-second clip. It's not there yet.
It's a spider monkey.
 

Stooky

Member
Disney is in the same boat. CGI in a lot of current blockbuster movies is WAY worse than it was 10 years ago. Cheap, rushed-out-the-door crap, with all the budget going to a few famous faces.
The CGI in the first Pirates of the Caribbean movies looks better than what we have in 95% of current productions.
You only notice it because it looks bad. CG is used in almost every film production, and it's so good you would never notice it was CG.
 

StreetsofBeige

Gold Member
So this group can now generate a talking or singing person with realistic lip and facial movements from a single still picture.


Here is Audrey Hepburn singing, generated from a single photo.
https://humanaigc.github.io/emote-portrait-alive/content/video/赫本16_9.mp4

Here is a singing video created from an AI-generated photo.

https://humanaigc.github.io/emote-portrait-alive/content/video/16比9视频结果/talk_yomir.mp4 This one is from an anime picture, using audio from Nier Automata.

Pretty soon any indie developer will be able to make high-production cinematic cutscenes in a game.
Looks pretty good. Not perfect, as you can see the mouth moves a little weirdly, in a gamey kind of way. But with how fast AI is improving, just imagine how good it'll be in a year.
 
Exactly. Those were amazing at the time, but I honestly don't see a massive difference compared to today, especially considering how long CGI has been around. Granted, it's noticeably better, but it should be much further along.
I hear you. Watching some of the heavy CGI use in movies over the past decade, it seems like it's on a downward spiral, even though there's more going on and a whole lot more of it being used. I just think it looks worse, and I can't quite put my finger on why.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4


Most of these pass the eye test better than 80 years of cumulative Hollywood effects/CGI. Hollywood couldn't render a CGI monkey 10 years ago without it looking fake half the time, with millions in budget.

A year or two from now things are going to be absolutely bonkers.

It's just a matter of time before they can feed in an old black-and-white movie and it will reconstruct it in stunning high resolution and in VR.
 
A year or two from now things are going to be absolutely bonkers.

It's just a matter of time before they can feed in an old black-and-white movie and it will reconstruct it in stunning high resolution and in VR.

Combined with this, things are going to get even more nuts:



The Gene Roddenberry Archive stuff for Star Trek is at the 52:46 mark.
 
AI is gonna be the “end” of a lot of things. I mean, we’ll always have the old ways of making art, but anything commercialized usually goes for the most cost-saving solution. It’s not gonna be long before this is the norm for a LOT of stuff.
The end times are upon us. AI-generated art and media is dystopian as fuck.
 

StreetsofBeige

Gold Member
AI is gonna be the “end” of a lot of things. I mean, we’ll always have the old ways of making art, but anything commercialized usually goes for the most cost-saving solution. It’s not gonna be long before this is the norm for a LOT of stuff.
I see it as no different than automated POs, MS Office, or email. If technology makes things better and easier, it's good. I don't think anyone wants to sit at a desk doing pen-and-paper notes, spreadsheets on grid paper, or handwriting a purchase order on a clipboard form. Endless millions of paper pushers lost their jobs to computers and to younger people who knew how to use them.

I get what people are thinking. It sounds cold and boring to have a PC churn out creative art; a human sitting there thinking about it with a paintbrush or Photoshop, taking all week, should do it. But then, for anyone not into AI art, Photoshop does the same kind of thing, automating stuff instead of an artist hand-drawing it. Nobody seems to care.

If AI art and audio suck, the general public will hate it and any product using it will see sales sink. But I don't think people will really care. As long as it's good enough at a good price, that's what most people care about.

If people cared that much about handmade art, food, or clothes, they wouldn't be buying mass-produced stuff for cheap. They'd buy pricier stuff made by an artisan. But most people aren't highbrow enough to care.

Think of it like income tax season. You can pay a tax guy a couple hundred bucks to process all your stuff. Or you can do it for free (or buy one of those $20 programs) and upload it all yourself. That tax guy might be better, but is it really worth the time and cost? For a lot of people, no.
 

Dacvak

No one shall be brought before our LORD David Bowie without the true and secret knowledge of the Photoshop. For in that time, so shall He appear.
I see it as no different than automated POs, MS Office, or email. If technology makes things better and easier, it's good. I don't think anyone wants to sit at a desk doing pen-and-paper notes, spreadsheets on grid paper, or handwriting a purchase order on a clipboard form. Endless millions of paper pushers lost their jobs to computers and to younger people who knew how to use them.

I get what people are thinking. It sounds cold and boring to have a PC churn out creative art; a human sitting there thinking about it with a paintbrush or Photoshop, taking all week, should do it. But then, for anyone not into AI art, Photoshop does the same kind of thing, automating stuff instead of an artist hand-drawing it. Nobody seems to care.

If AI art and audio suck, the general public will hate it and any product using it will see sales sink. But I don't think people will really care. As long as it's good enough at a good price, that's what most people care about.

If people cared that much about handmade art, food, or clothes, they wouldn't be buying mass-produced stuff for cheap. They'd buy pricier stuff made by an artisan. But most people aren't highbrow enough to care.

Think of it like income tax season. You can pay a tax guy a couple hundred bucks to process all your stuff. Or you can do it for free (or buy one of those $20 programs) and upload it all yourself. That tax guy might be better, but is it really worth the time and cost? For a lot of people, no.
That’s fair, but I think AI-gen is in a different league. We’ll get to the point fairly soon where it’s not so much a tool, as it is a fully automated way to create any kind of art, whether that’s music, movies, books, etc. And it won’t be long before that art is automatically created and tailored specifically for you, based off of your algorithmic preferences.

It’s the sheer scope that separates AI-gen from any breakthrough tools and advancements we’ve made in the past. We will eventually get to a point where human intervention is no longer required for creative production whatsoever. (I hope we’re at least 20 years away from that point, but who knows.) I don’t think we’ll fully understand how that affects society until it happens.

It’s gonna get weird. I expect by this time next year I’ll be receiving all of my daily news via AI, written and tailored to my exact interests and biases. Maybe even in rudimentary video form. We’ve got no way of closing this can of worms, and I think the craziest AI utilizations haven’t even been dreamt up yet.
 

StreetsofBeige

Gold Member
That’s fair, but I think AI-gen is in a different league. We’ll get to the point fairly soon where it’s not so much a tool, as it is a fully automated way to create any kind of art, whether that’s music, movies, books, etc. And it won’t be long before that art is automatically created and tailored specifically for you, based off of your algorithmic preferences.

It’s the sheer scope that separates AI-gen from any breakthrough tools and advancements we’ve made in the past. We will eventually get to a point where human intervention is no longer required for creative production whatsoever. (I hope we’re at least 20 years away from that point, but who knows.) I don’t think we’ll fully understand how that affects society until it happens.

It’s gonna get weird. I expect by this time next year I’ll be receiving all of my daily news via AI, written and tailored to my exact interests and biases. Maybe even in rudimentary video form. We’ve got no way of closing this can of worms, and I think the craziest AI utilizations haven’t even been dreamt up yet.
For me, I'm all for AI shit if it improves things.

The drawback is that if the AI is sourcing from false data so it gives the wrong answer, or doing the weird stuff we saw with AI images, like humans having six fingers and no thumb, then that kind of AI generation is garbage. Even though I said products just need to be good enough, you've got to have some minimum standard of quality. If an AI gives people three eyes or six fingers, and a company just wants to push that through because it cost them five cents to make with the push of a button, that's not acceptable.
 
All of the new diffusion-based generative tech is coming for gaming, in time.

At some point, not far in the future, most of the current stack of technical expertise for 3D gaming (manual 3D modeling, animation, rigging, textures, lighting, etc.) will be obsolete, and many careers will be upended by this sunsetting of the old expertise.
I think these advancements in technology are here to help and make the process easier and faster, not to make the talent obsolete. I don't think you'd get anywhere near the same quality either, right?
 

midnightAI

Member
In production we review anything that can help us do our jobs faster. Using AI is great for the start of look development: you get a lot of iterative variations faster. I've seen it used for uprezing low-resolution images for texture maps and for creating tiled textures. You can't use it for voice audio unless you want SAG strikes and lawsuits, and that could extend to body performances. As of now, video AI lacks the detail and editing capabilities needed for production; the very nature of how the AI imagery is created kind of goes against that. There is no prompt I can type that will have it create Thanos without an artist creating that imagery first. That's the big problem with it.
And with CG you can make small tweaks, render the output, and know exactly what you are going to get. If you create a video with AI, you can't modify one small detail and recreate that exact same video with only that change; instead you'll get a different (albeit similar) video.

(As it works currently)
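A rough sketch of that reproducibility point with an image model (diffusers; model name illustrative): a fixed seed lets you re-run the exact same generation, but the moment you tweak the prompt, the whole denoising trajectory shifts, so the "small change" ripples through the entire image.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def render(prompt: str, seed: int = 1234):
    # Same seed + same prompt generally reproduces the same image on the same
    # hardware/software stack; any change to the prompt perturbs everything.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, generator=generator, num_inference_steps=30).images[0]

a = render("a knight in silver armor with a red cape")
b = render("a knight in silver armor with a blue cape")  # globally different image, not just the cape
a.save("knight_red.png")
b.save("knight_blue.png")
```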
 

Audiophile

Member
I think game dev will likely use highly selective and more specific filters for assets eventually.

If you can model one lamp, then plug in all your concept art plus a bunch of random pictures of lamps, weight the style towards the former, and say "make me a bunch of lamps on the fly with these seeds." Rinse and repeat for many assets and you can have completely varied worlds.

Or, when it comes to full generation of the entire image, they could train a model on a bunch of ultra-high-quality ground-truth versions rendered on extremely high-end rigs in a very specific art style, and just keep going until the error margin becomes very low. Then layer that onto a rudimentary render of the 3D world with only coarse simulation (roughly the idea sketched at the end of this post). Of course, the barrier to entry for this tech will be ultra-efficient algorithms and extremely powerful local inference capability in the hardware (perhaps by the time the PS7 comes out, the tensor units and ML hardware could be analogue matrix compute blocks consuming just a few percent of the power of their digital counterparts).

It'd be less of a general model but their own bespoke fork for a given game with a model trained specifically for a very narrow vision and set of parameters.

That'd probably be a ways away, but very targeted use for specific elements would be a reasonable expectation going into the next decade.

Even stuff like the content on dynamic displays, billboards, TVs, etc. in games could be AI-generated. Still have artist-made stuff as a foundation and to hook it all together, but then, to avoid seeing things on repeat, run AI generation on the fly with specific parameters. For example, if you walk past a shop in an open world with a bunch of TVs in the window, they could regularly have unique content playing. And it's not just a case of different content; something like that adds dynamics and life to a world in a way that may not otherwise be possible due to storage/bandwidth. But if you can run generation on the fly, you can go all out and have every surface in a game come alive. Imagine walking along Times Square in GTA VII with every billboard playing something unique in very high quality; then you turn off to your left and see a store window full of TVs running, and you proceed inside only to see TVs everywhere, of all shapes and sizes, running a variety of new content.
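To make the "train on ground-truth renders, then layer it onto a coarse frame" idea from a couple of paragraphs up concrete, here's a very rough sketch using an off-the-shelf image-to-image pass (diffusers). In practice a studio would fine-tune a model on its own high-end renders; every name and path here is illustrative.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Stand-in for a game-specific fine-tune trained on high-quality ground-truth renders.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

coarse = Image.open("coarse_render_frame_0412.png").convert("RGB")  # hypothetical cheap raster pass
refined = pipe(
    prompt="the game's target art style, detailed materials, soft global illumination",
    image=coarse,
    strength=0.35,  # low strength: keep the geometry/layout, add surface detail
).images[0]
refined.save("refined_frame_0412.png")
```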
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Call me an idiot, but doesn't this still count as images generated by a computer?
Yes, it is, but I guess the main talking point is the level of human involvement: a team of animators vs. a single-line prompt into an AI engine.
 

ResurrectedContrarian

Suffers with mild autism
And with CG you can make small tweaks, render the output, and know exactly what you are going to get. If you create a video with AI, you can't modify one small detail and recreate that exact same video with only that change; instead you'll get a different (albeit similar) video.

(As it works currently)
There are already many tools for diffusion-based models (image, video, etc.) that allow you to control small changes or to retain certain aspects (e.g. movement, shape, local structures you select) while modifying others. Or you can provide a reference (another image, a texture, a 3D object, motion from another video) and constrain your output to use it. That kind of control is already extremely well developed for image generation and rapidly improving for video generation.
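For the image case, the simplest example of "retain some aspects while modifying others" is masked inpainting: everything outside the mask stays pixel-identical and only the masked region is resampled. A minimal sketch with diffusers (model name and file paths are illustrative):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("shot_042.png").convert("RGB")               # hypothetical still frame
mask = Image.open("shot_042_jacket_mask.png").convert("RGB")    # white = region to change

edited = pipe(
    prompt="the same character, now wearing a red leather jacket",
    image=frame,
    mask_image=mask,
).images[0]
edited.save("shot_042_red_jacket.png")
```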

I think these advancements in technology are here to help and make the process easier and faster, not to make the talent obsolete. I don't think you'd get anywhere near the same quality either, right?
Some of the skills will carry over, but I mean that the current stack of tech and expertise that runs gaming / traditional CGI (modeling, textures, renderers, shaders, etc. in their current form) is going to be replaced by a completely different kind of tech and programming (sampling from probabilistic models and controlling/integrating them), and that will surely disrupt career trajectories that rely on the current skills. I'd say there is a danger in specializing in the current stack, which could be swiftly on its way out within a decade.
 