Poll: Node-based VFX compute shaders?

  • Currently using a node-based system to create compute shader VFX.
  • Just use code to create compute shader VFX.
  • What’s a compute shader?
  • Nodes for particles? Are you crazy?

0 voters

1 Like

So it looks like we’re not the only ones using node-based compute shaders. I hinted that I had heard talk of Bungie doing this, and clearly Niagara is headed in this direction.

A lot of the challenges we face with this are identical to material/shader management. Everyone gets to build their own thing. There are a million ways to do the simplest things. Some people don’t comment their graphs. Some people pile their nodes in one place; others spread out like cities in a desert. Shader proliferation is never-ending. Some people hardcode their constants; others expose too many.

And now that we have a way to actually program in these nodes, it looks like we need to look to our programmer friends for tips on how to manage a code base.

So please, share a story of victory in managing the weeds of a node graph system.

Not sure if it’s a victory, but massive OCD FTW.

1 Like

Right! So interview questions would be things like showing people a node graph and asking, “What’s wrong with this graph?”

I’ve seen some successful uber shaders for particles. In some ways I think we need to build one of those for our basic fx.

One of our biggest problems right now is managing particle life. It’s nearly impossible to get an emitter to do “particles per second” or “particles over distance traveled.”
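For context, what I mean by those is basically the accumulator bookkeeping sketched below (HLSL-style, every name invented for illustration, not any engine’s actual API). Carrying a fractional remainder between frames is exactly the kind of per-frame state most emitter UIs make awkward:

```hlsl
// Hypothetical emitter-update helper (all names invented): accumulate
// fractional spawn counts so "particles per second" and "particles per
// meter traveled" stay stable when frame time or emitter speed varies.
// The two remainders are assumed to persist on the emitter between frames.
uint SpawnCountThisFrame(float ratePerSecond, float deltaTime,
                         float ratePerMeter, float distanceMoved,
                         inout float timeRemainder,
                         inout float distanceRemainder)
{
    // Time-based: rate * dt is rarely a whole number, so carry the
    // fraction forward instead of dropping it every frame.
    timeRemainder += ratePerSecond * deltaTime;
    uint fromTime = (uint)timeRemainder;
    timeRemainder -= (float)fromTime;

    // Distance-based: same trick, driven by how far the emitter moved
    // this frame rather than how long the frame took.
    distanceRemainder += ratePerMeter * distanceMoved;
    uint fromDistance = (uint)distanceRemainder;
    distanceRemainder -= (float)fromDistance;

    return fromTime + fromDistance;
}
```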

Can you elaborate on this a little bit? I only have a minimal understanding of what a compute shader is.
What engine has a node-based compute shader editor?

God, I hope this new switch actually makes things better and doesn’t just make all these engines have a billion useless and ugly points flying around.

1 Like

We’ll be using it for our bread-and-butter fx, so those systems will have everything that normal systems have. You can tell from Bungie’s talk that they can do basic stuff just as easily as complex stuff.

My suspicion is that most of the work of making it look less like a sparkapocalypse lies in the hands of the artists. A significant tool for dealing with that will lie in the fragment shader.

If you think about it, there’s literally no difference between the systems; there’s only a difference in implementation priorities. One reason the sparkapocalypse has been thrust upon us so hard is that we’re using it to do what it does best, and we’re still bound by fill rate.

This is why we are taking a hybrid approach. It doesn’t make sense to make people reinvent the wheel every time they want to make a simple behavior. It also doesn’t make good production sense to have to parse a spaghetti mess every time someone wants to change something, share behaviors, or debug/optimize someone else’s effect. Finally, projects need to be able to choose whether they lock down all features and expose subsets of behavior to artists for total control/efficiency, or whether the needs of the project are such that every behavior is custom and nothing is shared.

We are looking at tiers:

  1. Instanceable, reusable, inheritance-driven emitters whose parameters bubble up to production artists, and which can be combined into systems that share data and communicate with each other, either through system-level scripts or arbitrary event payloads.

  2. Common “module style” behaviors which all accept common inputs, outputs, and particle attributes and read from/write to the same data. Taking this approach means common behaviors are built once, reused forever, and can be optimized project-wide. Additional behaviors can be added without adding more particle memory/complexity than necessary, and they “stack” with each other (put down two modules that deal with forces and they will correctly sum and affect velocity, for example; see the sketch after this list).

  3. Fully programmable distributions and data interfaces which talk to common particle parameters. This removes the need for hardcoded distributions (uniform range, etc.) and for module granularity (initial color vs. color over life go away in favor of “set color” or whatever). It also removes the need to make custom modules for much of the complex behavior that would normally require building a new graph; instead you just “talk” to the data you want to modify without changing the underlying module behavior (multiplying velocity by delta time and adding to position, for example).

  4. For the power user, full access to the entire set of functions to “roll your own” behavior in a graph and have it stack nicely with the existing behaviors, hold onto a particle payload forever or use transient data for just that frame, whatever.

  5. Abstraction between simulation and rendering should enable new, better features. There’s no reason you can’t simulate and then use that data in, say, volume rendering, physics, gameplay forces, reading and writing volume data, whatever. We also fail if data interfaces aren’t completely arbitrary: sourcing from external data like CSV, or in-engine data like static or skeletal meshes, gameplay events, and so on. Arbitrary struct in, arbitrary behavior out, full data sharing between all systems in between.
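To make tiers 2 and 3 a little more concrete, here’s a rough compute shader sketch. This is illustration only: the struct layout, buffer names, and module functions are all invented, not any engine’s actual API. The point is the pattern: force modules each add into a shared accumulator so they stack, attribute modules write common particle data directly, and integration is just a generic “talk to the data” step.

```hlsl
// Illustration only -- all names are invented. Modules read/write shared
// per-particle attributes; force modules sum into one accumulator so any
// number of them compose; integration is velocity * dt added to position.

struct Particle
{
    float3 Position;
    float3 Velocity;
    float4 Color;
    float  Age;
    float  Lifetime;
};

RWStructuredBuffer<Particle> Particles : register(u0);

cbuffer SimParams : register(b0)
{
    float  DeltaTime;
    uint   ParticleCount;
    float3 WindForce;
};

// Two "force" modules: each one only adds into the accumulator,
// so stacking them sums correctly before velocity is touched.
void Module_Gravity(inout float3 force) { force += float3(0, 0, -980.0); } // cm/s^2 assumed
void Module_Wind(inout float3 force)    { force += WindForce; }

// A "set color" style module: writes a common attribute directly, so there is
// no separate initial-color vs. color-over-life module.
void Module_SetColor(inout Particle p)
{
    float t = saturate(p.Age / max(p.Lifetime, 1e-4));
    p.Color = lerp(float4(1, 1, 1, 1), float4(1, 0.3, 0.1, 0), t);
}

[numthreads(64, 1, 1)]
void SimulateCS(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= ParticleCount)
        return;

    Particle p = Particles[id.x];

    // Stack the force modules.
    float3 force = float3(0, 0, 0);
    Module_Gravity(force);
    Module_Wind(force);

    // Generic integration: v += f * dt; x += v * dt.
    p.Velocity += force * DeltaTime;
    p.Position += p.Velocity * DeltaTime;

    Module_SetColor(p);
    p.Age += DeltaTime;

    Particles[id.x] = p;
}
```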

Lastly, if it’s not faster to make all this stuff than it used to be, more efficient to simulate, and more fully featured to render, then we screwed up.

7 Likes

That was a fully featured response! Bravo!

I can’t wait to try out all the new stuff, Wyeth!
I hope we don’t have to wait too long :grin:

How are you handling the creation of static meshes for fx?

Hopefully we can get somewhere close to Thinking Particles someday. Really love the layout of that system. Very flexible.

I’m no expert on all this cutting-edge wizardry or what kind of roles you guys are in, but in film graphics there is a strong distinction between compositing and simulation. One thing I really enjoy about games is that these two departments are one thing. Nothing nauseates me more than “simulation artists.” Whatever neat shit you guys are making, just keep it simple for an artist to draw a quad with a dirt-explosion flipbook, or some static images of rocks/textures spinning and flipping. Even if you can simulate 10,000 rocks, mixing media is what will be the most aesthetically pleasing. Spark-a-ganza is not good.

Keep me posted!! Super excited!

2 Likes

Definitely check out the Bungie slides!
While they did hit spark saturation, they became quite aware of the overkill and toned it down.
Their system still does all the normal stuff, and you can even see it in the examples.

1 Like

Yeah, I gotta check the Advances in Real-Time Rendering page again. They must be up!

1 Like

I notice that some of the poll responses were “What’s a compute shader?”
Basically, it’s a shader that executes on your GPU.

Recently, more particle code has been moved onto the GPU, so there’s an overall move toward accelerating all of our VFX performance by making graphics cards do the heavy lifting. As more coders become experienced with writing for the GPU, more arbitrary code gets executed directly on the card.

** cough **

It’s GPU code that can be used for more general-purpose programming rather than being limited to outputting vertex positions or pixel colors.
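If a concrete example helps, here’s roughly the smallest one you can write (HLSL, purely for illustration): it isn’t tied to vertices or pixels at all, it just runs a function over a buffer of arbitrary data.

```hlsl
// Minimal compute shader sketch (illustration only): no geometry, no render
// target -- each thread just reads and writes one element of a GPU buffer.
RWStructuredBuffer<float> Values : register(u0);

[numthreads(64, 1, 1)]
void MainCS(uint3 id : SV_DispatchThreadID)
{
    // One dispatch launches thousands of these threads in parallel.
    Values[id.x] = Values[id.x] * 2.0f;
}
```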

2 Likes

Good point. I was kinda distracted.
As opposed to a shader that executes on your CPU. :stuck_out_tongue:
If I had read it after I typed it, I might have changed it.