Are our jobs endangered by the progress of AI?!

Hello everyone,
This is something I asked during the round table at the last GDC (2018):
Will we reach a point where AI is doing most of the work for us, or will it just fully take over?

For those who don’t know much about neural networks and AI, here are a few examples:

We'll soon be able to use them to easily modify and enhance pictures (there are more examples out there if you want to check them out, and feel free to share them in this post):
Using AI to easily modify pictures


The link below is a video of research Disney used to greatly speed up their cloud rendering simulations:
Disney’s AI Learns To Render Clouds

Another video shows that neural networks can learn to simulate fluids and smoke; once trained, the simulations run much faster and are easily mistaken for real footage.
Neural Network Learns The Physics of Fluids and Smoke
We'll be able to use these to make faster simulations, and at some point I think this will become usable in real time, in game.
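To make the "learning to simulate" idea a bit more concrete, here's a toy sketch of my own (not what the paper in the video actually does; the real research uses large convolutional networks on 3D grids): generate data from a known 1D diffusion step, fit a tiny Keras network to it, then roll the learned model forward the way a hand-written solver would.

```python
# Toy illustration only: learn one step of 1D diffusion from generated data.
import numpy as np
import tensorflow as tf

def diffuse(u, alpha=0.1):
    # One explicit step of 1D diffusion with periodic boundaries.
    return u + alpha * (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u)

rng = np.random.default_rng(0)
states = rng.random((2048, 64)).astype("float32")
targets = np.stack([diffuse(s) for s in states]).astype("float32")

# A small dense network that maps one state to the next.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64),
])
model.compile(optimizer="adam", loss="mse")
model.fit(states, targets, epochs=10, batch_size=64, verbose=0)

# Step the learned "simulator" forward from a new initial state.
u = rng.random((1, 64)).astype("float32")
for _ in range(5):
    u = model.predict(u, verbose=0)
```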


Here's another video of amazing results from NVIDIA, where they use a still picture to transfer its style onto a video:
Artistic style transfer for videos
This could easily be applied to simulations/sprite sheets to get a different style from a well-made piece of concept art.


Can you imagine the possibilities? "Hand drawn" sprite sheets generated in no time. Simulations that take a few seconds to render. A few seconds to switch from one style to another. The possibilities are endless!

In my opinion, at first we'll use neural networks and AI to accelerate our process of creating effects, and at some point they'll definitely be able to create the effects in game once we give them the desired direction.

I'm looking forward to hearing your thoughts on this and on what we can accomplish with it :slight_smile:
I'm pretty sure we'll always need VFX artists; they'll either be less technical and very artsy, or extremely technical and a bit artsy.

Other cool videos to watch:
Bubble Collision Simulations in Milliseconds
Stunning Video Game Graphics With Voxel Cone Tracing
AI Creates Facial Animation From Audio

3 Likes

No. Our tools will get better, so our jobs will change, but we won't be replaced.

2 Likes

I think my theory on this sits somewhere between the two extremes. Automation typically removes the brute-force work, which we usually cover by throwing more bodies at a problem. In that sense, yes, our tools will get much better, and that means it'll take fewer people to produce the quantity of work we're currently producing.

This could result in two things: an increase in quality, as those people now have more time to iterate and make better art, and a reduction in staffing. For the people who lose their jobs, it'll feel like they were replaced by AI/automation.

This equation has been playing out since the dawn of capitalism, and in our current society it seems to affect the people who are unwilling to adapt, whether that's learning new skills, moving locations, or applying their skills in new ways.

Personally, I can't wait for the day I can stay focused on why I'm creating art and let the tools worry about how. Think of how liberating the job of a concept artist is: for the most part, they stay focused on the output itself and on whether that output meets the needs and intentions of the experience. I feel like my job is 10% focused on that, 25% on making art, and 65% on how the @#$% I'm gonna do it.

4 Likes

I think it will help increase the speed and quality of producing realistic effects simulations and renders, but I don't think it can ever replace artistic creativity or creative problem solving. I don't think it will get to the point where it comes up with a unique solution to a design problem, or combines and distorts noise textures to make something that looks cool on an abstractly shaped mesh.

1 Like

I agree with @Keith. I think what we're going to start seeing, once this is fully implemented in production, is indie-sized teams pushing out games with AAA polish. It will be interesting to see how it affects the industry.

Think small indie teams making games like Uncharted.

EDIT: With much faster production times too.

You just reminded me of an article I've been meaning to read that has a really nice way of phrasing this: Triple I Development (I for Indie).

3 Likes

I was just thinking about this the other night while coding. I was wondering if a (sentient) AI could build the effect I was working on. I imagined it would:

  • Create a new language and compiler toolchain first
  • That would enable it to create better software. It would write a new game engine
  • Before creating a procedural system on top of it to generate an infinite number of effects

And it could probably do it all before I finish typing this sentence.

The Singularity is near. :slight_smile:

But you don't need sentience. Neural nets today are creating incredible things. You can argue that people will still be needed as knob twiddlers to drive the artistic vision, but I don't see anything preventing a neural net (using other neural nets) from driving the entire pipeline.

If you're interested, check out TensorFlow.
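If you want the absolute minimum to get your feet wet, something like this is enough to see the whole define/train/predict loop. The y = 2x - 1 toy problem is just my own example, nothing VFX-specific:

```python
# Minimal TensorFlow/Keras example: fit a single dense unit to y = 2x - 1.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype="float32").reshape(-1, 1)
ys = 2.0 * xs - 1.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),  # one unit is enough to learn a line
])
model.compile(optimizer="sgd", loss="mse")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]], dtype="float32"), verbose=0))  # close to 19
```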

2 Likes

Somewhere in the distant future, I suspect we'll just sell software that creates whatever type of game the user wants.

“Make me a space Indiana Jones game”

2 Likes

Now that’s where the money will be. Except I can’t code or contribute to that in any way, so I guess I won’t make the big bucks haha.

I'm really hoping for a Sword Art Online (minus the real death) or Ready Player One type of immersive game in the far future.

3 Likes

The day VR/AR hardware becomes as convenient as a cell phone. Also, for movement to feel right, we'd need to wire that thing directly into the brain, Matrix style. I'm not sure motion controllers would cut it.

3 Likes

Oh man, I'm glad there are others hyped about this. I was starting to feel crazy talking to colleagues about how robots and AI are going to make game dev way better.

I wonder what I should be looking into on the side for the future, or whether it'll be something we can just pick up through our jobs. I guess if game dev becomes possible with a small team, I wouldn't even have to worry about having a job on a larger team. What are your thoughts on this? Will there be mass job loss, or just a redirection?

Would love to be able to live on a hill out in the middle of nowhere and work on games…

1 Like

You and me both! I think AI can definitely help with world building and things like pathing, plus the renders mentioned above. I'm really looking forward to that aspect. No more sitting there tweaking settings and simulating over and over; just feed it a few videos or images of what you want your sim to look like and boom, everything is set up for you.

It’s not “if”, it’s “when?”
Right now it’s just tools to make content different. But at what point can you really say that making content different is different from making content?

At some point in the next ten years there will be some periodic content that is almost purely synthetic. People will probably think, "omg, that's so goofy and weird, gosh AI is so dumb and strange." But the inescapable point will be that it held our attention and people didn't directly make it. As the tools to steer those engines get more refined, so will the strength of their hold on our attention.

Everyone likes to think that their job is unique, different, or rare. But the truth is that once quantities become astronomically large, rarity can only be applied to contexts of inconsequential scale.

Our jobs will change, our context is replaceable. The real question is whether our context is viable when our environment is not.

1 Like

"Create a new language and compiler toolchain first

Before creating a procedural system on top of it to generate an infinite number of effects"

Why bother with all that when you can jump straight to simple input and visual output?
I would think the questions revolve primarily around what data set you use to train it and the context of the output.
Neural style programs rely primarily on data sets and training. So while what we feed in is a single image and the output is a style transfer, the brain behind it was trained on massive data sets.

Dude, I've already used DreamScope and Prisma to make hand-drawn sprite sheets. The flickering between frames could be ameliorated further, but higher-framerate input would have helped with that as well.

Guys, DO IT! It’s a simple experiment.
Take one of your sprite sheets.
Upload it to DreamScope or Prisma.
Get it back, and run it through Slate to smooth the motion.
You are gonna flip.

I’ve done about twenty tests so far. I think two of them were really gripping. I’d share them, but I used proprietary work to do it.

Setting up your own neural style transfer is somewhat difficult. I had a Unix guru help me out.
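If anyone wants to try the local route anyway, here's a rough sketch using the pre-trained arbitrary-image-stylization model on TensorFlow Hub rather than training your own. The file names are placeholders and the preprocessing follows the public TF Hub example as I remember it, so treat it as a starting point, not gospel:

```python
# Rough DIY alternative to DreamScope/Prisma: stylize one sprite frame
# with a pre-trained model from TensorFlow Hub.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    # Decode, convert to float32 in [0, 1], and add a batch dimension.
    img = tf.image.decode_png(tf.io.read_file(path), channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

content = load_image("sprite_frame.png")            # one frame of your sheet (placeholder name)
style = load_image("concept_art.png", max_dim=256)  # the look you want to borrow (placeholder name)

stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = stylize(content, style)[0]

# Clip to [0, 1] and write the result back out as a PNG.
out = tf.image.convert_image_dtype(tf.clip_by_value(stylized[0], 0.0, 1.0), tf.uint8)
tf.io.write_file("sprite_frame_stylized.png", tf.io.encode_png(out))
```

Loop that over every frame and reassemble the sheet; you'll still hit the flicker problem mentioned above, since each frame is stylized independently.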

1 Like