Ghost smoke WIP

https://youtu.be/pZGREYrWXTo

Been working on this for a few days and it's finally starting to look like what I want. The inspiration was one of the effects in Control, the new Remedy game: there's this really interesting sort of fluid / smoky distortion coming off some of the enemies (you can see an example around the 3:35 mark in this video) that I wanted to try to recreate. I'm not doing an actual fluid sim, just a curl-noise offset applied each frame, but I think it works surprisingly well.
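
For anyone curious, the core of the trick boils down to something like the sketch below. This is just a rough C++ restatement of the idea, not the actual material network; the noise function, frequencies, and strength are stand-ins.

```cpp
// Minimal sketch of the "curl-noise offset" idea (not the actual material code).
// A divergence-free 2D offset field comes from rotating the gradient of a scalar
// noise field: v = (d(psi)/dy, -d(psi)/dx). Each frame the smoke texture is sampled
// a small step along that field, which swirls it around without a real fluid sim.
#include <cmath>
#include <cstdio>

// Placeholder scalar noise; a real version would use Perlin/simplex noise.
static float ScalarNoise(float X, float Y)
{
    return std::sin(X * 3.1f + std::sin(Y * 2.3f)) * std::cos(Y * 1.7f - std::sin(X * 4.1f));
}

// Divergence-free velocity from the rotated gradient of the noise (finite differences).
static void CurlNoise(float X, float Y, float& OutVX, float& OutVY)
{
    const float Eps = 0.001f;
    const float DPsiDX = (ScalarNoise(X + Eps, Y) - ScalarNoise(X - Eps, Y)) / (2.0f * Eps);
    const float DPsiDY = (ScalarNoise(X, Y + Eps) - ScalarNoise(X, Y - Eps)) / (2.0f * Eps);
    OutVX =  DPsiDY;   // rotate the gradient 90 degrees -> divergence-free field
    OutVY = -DPsiDX;
}

int main()
{
    // Per frame, a pixel's sample position is nudged a small step along the curl field.
    float U = 0.5f, V = 0.5f;
    const float Strength = 0.002f; // how far the smoke drifts per frame (placeholder)
    for (int Frame = 0; Frame < 5; ++Frame)
    {
        float VX, VY;
        CurlNoise(U * 10.0f, V * 10.0f, VX, VY);
        U += VX * Strength;
        V += VY * Strength;
        std::printf("frame %d: sample at (%.4f, %.4f)\n", Frame, U, V);
    }
    return 0;
}
```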

Next steps from here: making the “emitter” size scale with distance (right now it’s a constant size in texture space, which is pretty obvious when you move closer and farther away), then adding actual distortion and probably some color.

Thoughts?


Newest version here for the thread preview:

21 Likes

This is soo cool man…

That’s very interesting

so cool:grin::grin::grin:

Dude. That is so awesome.

I had assumed this was some kind of post process effect. Are you doing it with particles? I would love to figure this out, as well!

Progress! I got the sizing sorted out, started on a color map, and added another octave to the noise. It seems to fade out a little more than I’d like it to; probably need to mess with the final blending.

6 Likes

Yup, it’s all post-processing. There’s a Blueprint script that projects a world-space position to camera space and feeds that location into a material that gets drawn to a render target, which then gets distorted in another pass using some curl noise. The final result gets blended in with a post-process material and then reused for the next frame.
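
In rough C++ terms, the projection-and-draw part looks something like this. The real thing is a Blueprint graph, so the function and parameter names here (like "Position") are just placeholders:

```cpp
// Rough UE C++ equivalent of the Blueprint step described above (names are placeholders).
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Engine/TextureRenderTarget2D.h"
#include "GameFramework/PlayerController.h"

void DrawSmokeSource(UObject* WorldContext, APlayerController* PC, const FVector& SourcePos,
                     UMaterialInstanceDynamic* GlowMID, UTextureRenderTarget2D* RenderTarget)
{
    // Project the world-space position into screen space...
    FVector2D ScreenPos;
    if (!UGameplayStatics::ProjectWorldToScreen(PC, SourcePos, ScreenPos))
    {
        return; // behind the camera this frame
    }
    int32 ViewX = 0, ViewY = 0;
    PC->GetViewportSize(ViewX, ViewY);
    if (ViewX <= 0 || ViewY <= 0)
    {
        return;
    }

    // ...normalize it to 0..1 and hand it to the material ("Position" is a made-up name)...
    GlowMID->SetVectorParameterValue(TEXT("Position"),
        FLinearColor(float(ScreenPos.X / ViewX), float(ScreenPos.Y / ViewY), 0.f, 0.f));

    // ...and draw that material into the render target. The curl-noise distortion and the
    // post-process blend happen in separate passes after this.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RenderTarget, GlowMID);
}
```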

I want to learn this :heart_eyes:

1 Like

This is fantastic, a breakdown would be amazing.

2 Likes

I think I have a plan for how to get it to handle object occlusion; I'm currently only using the red and green channels, so if I switch to a floating-point render target and use the blue channel to write how far away the source was each frame, I should be able to compare that stored depth with the pixel depth in the post-process material and mask the smoke out. Here's a screenshot testing that with a fixed depth.
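
The comparison itself would be something simple like this (a sketch of the plan, not finished code; the soft fade range is made up so the cutoff isn't a hard edge):

```cpp
// Sketch of the planned occlusion mask. The blue channel stores how far away the smoke
// source was when it was stamped; in the post-process pass that stored depth is compared
// against the depth at each pixel, and the smoke is hidden where the scene is closer.
#include <algorithm>

// Returns a 0..1 multiplier for the smoke at one pixel.
//   StoredDepth : depth written into the blue channel when the glow was drawn
//   SceneDepth  : depth of whatever is actually visible at this pixel
//   FadeRange   : soft transition width (placeholder value, tune to taste)
float OcclusionMask(float StoredDepth, float SceneDepth, float FadeRange)
{
    // Scene closer than the source -> negative delta -> mask goes to 0 (smoke hidden).
    const float Delta = SceneDepth - StoredDepth;
    return std::clamp(Delta / FadeRange, 0.0f, 1.0f);
}
```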

As to doing a breakdown, I'd be happy to share it; the technique isn't too complicated :slight_smile:

10 Likes

:heart_eyes::heart_eyes:I like it

Did you ever do a breakdown? Cool effect! Cheers

1 Like

would also love a breakdown!

1 Like

Okay, after getting a few requests for a breakdown of this, I finally got it organized enough to (hopefully) explain. Here's what's up.

The base idea: given some world position you want as the "source" of the smoke, every frame you project that position into screen coordinates and draw a glow there into a render target. Then you use a post-process material to composite that target into your camera view. There are a few more wrinkles to address, including some I never got around to working out:

  1. distorting the smoke so it looks like it’s being blown around
  2. fading it out over time so it doesn’t just fill up your screen
  3. color variation
  4. (unaddressed) accounting for camera movement / rotation to make the screen-space-i-ness of it less obvious
  5. (unaddressed) hiding it behind objects

Here’s the entirety of the Blueprint part of this. I’ll explain the details below.

In the upper left, you can see the logic for transforming the position to screen space, as well as a sketchy way to figure out a screen-space size (the distance between the screen-space projections of the main point and a nearby point one world-space radius away from it).
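
That size trick, written out as a UE C++ sketch (in this sketch the offset is along the camera's right vector, but any direction that keeps the second point at roughly the same depth would do):

```cpp
// Sketch of the screen-space size trick: project the source point and a second point one
// world-space radius away, then use the distance between the two projections as the glow size.
#include "Kismet/GameplayStatics.h"
#include "GameFramework/PlayerController.h"
#include "Camera/PlayerCameraManager.h"

float ComputeScreenSpaceRadius(APlayerController* PC, const FVector& Center, float WorldRadius)
{
    // Offset sideways relative to the camera so the second point stays at a similar depth.
    const FVector Right = PC->PlayerCameraManager->GetActorRightVector();
    const FVector EdgePoint = Center + Right * WorldRadius;

    FVector2D CenterScreen, EdgeScreen;
    if (UGameplayStatics::ProjectWorldToScreen(PC, Center, CenterScreen) &&
        UGameplayStatics::ProjectWorldToScreen(PC, EdgePoint, EdgeScreen))
    {
        return FVector2D::Distance(CenterScreen, EdgeScreen); // glow radius in pixels
    }
    return 0.0f;
}
```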

Note that there are two separate render targets: a “scratch” target that most things are drawn into, created dynamically elsewhere in this blueprint, and then a final one that’s a named asset (WarpRenderTarget). I think the only reason for this is that I couldn’t figure out how to access the post-process material from a blueprint to give it the scratch target. The render targets are both RG16F—the red channel is the “density”, where the smoke is, and the green channel is a color mix value. The actual color is applied at the end by the post-process material.
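
For reference, creating the scratch target in that format from code is just a one-liner (the Blueprint node does the same thing; the resolution here is arbitrary):

```cpp
// Rough equivalent of the dynamic render-target creation. RTF_RG16f gives the two float
// channels used here: R = density, G = color mix.
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

UTextureRenderTarget2D* CreateScratchTarget(UObject* WorldContext)
{
    return UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, 512, 512, RTF_RG16f);
}
```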

Here's what one of the render targets looks like after a while; the yellow area is where there's some red (density) present, while the green is the color-mix value that's been swirled around over time.

render target example

In sequence, here are the materials in use.

Copy / fade (blend mode Normal)
Takes the previous frame contents (from the scratch render target) and applies a fade over time (the upper section) and at the screen edges (the lower section). Note that the fade is only applied to the red channel (density) so that it doesn’t mess with the green one (color mix).
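
Per pixel, it works out to roughly this (plain C++ stand-in for the material nodes; the fade rate and edge falloff values are made up):

```cpp
// Rough restatement of the copy/fade material per pixel.
#include <algorithm>
#include <cmath>

struct FPixelRG { float R; float G; }; // R = density, G = color mix

FPixelRG CopyFade(FPixelRG Prev, float U, float V, float DeltaSeconds)
{
    // Fade over time: only the density decays; the color-mix value is left alone
    // so the swirled colors don't wash out.
    const float FadePerSecond = 0.5f; // placeholder rate
    float R = Prev.R * std::exp(-FadePerSecond * DeltaSeconds);

    // Fade near the screen edges so smoke never piles up against the border.
    const float EdgeWidth = 0.05f; // placeholder width in UV space
    const float EdgeU = std::clamp(std::min(U, 1.0f - U) / EdgeWidth, 0.0f, 1.0f);
    const float EdgeV = std::clamp(std::min(V, 1.0f - V) / EdgeWidth, 0.0f, 1.0f);
    R *= EdgeU * EdgeV;

    return { R, Prev.G };
}
```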

Glow (blend mode Additive)
Takes the parameters set by the blueprint for position / aspect / radius and draws a circular glow to both the red and green channels, added to what’s already there because of the additive blend mode.
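
Roughly, per pixel (again a C++ stand-in; the falloff curve is made up):

```cpp
// Rough restatement of the additive glow material per pixel.
#include <algorithm>
#include <cmath>

struct FPixelRG { float R; float G; }; // R = density, G = color mix

FPixelRG AddGlow(FPixelRG Existing, float U, float V,
                 float CenterU, float CenterV, float Radius, float Aspect)
{
    // Aspect-corrected distance from the glow center in UV space.
    const float DX = (U - CenterU) * Aspect;
    const float DY = (V - CenterV);
    const float Dist = std::sqrt(DX * DX + DY * DY);

    // Soft circular falloff: 1 at the center, 0 at the radius (placeholder curve).
    const float Glow = std::pow(std::clamp(1.0f - Dist / Radius, 0.0f, 1.0f), 2.0f);

    // Additive blend: stack onto whatever density / color-mix is already there.
    return { Existing.R + Glow, Existing.G + Glow };
}
```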

Color (blend mode Modulate)
Using the same logic as the glow material for the mask, multiplies the green channel (the color mix value) by some noise, leaving the red channel untouched.
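
Because of the modulate blend mode, the material's output gets multiplied into what's already in the target, so "leave a channel alone" just means outputting 1 for it. Roughly:

```cpp
// Rough restatement of the modulate-blended color material; the noise and mask shape
// are stand-ins from the other sketches.
struct FPixelRG { float R; float G; }; // this output is multiplied into the target

FPixelRG ColorModulate(float GlowMask /*same circular mask as the glow*/, float Noise /*0..1*/)
{
    // Red (density) is multiplied by 1.0, i.e. left alone.
    // Green (color mix) gets pulled toward the noise value inside the glow region.
    const float GreenFactor = 1.0f - GlowMask * (1.0f - Noise);
    return { 1.0f, GreenFactor };
}
```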

Warp (blend mode Normal)
Samples the supplied texture (the scratch render target) with a slight noise offset, effectively warping it a little bit along that noise every frame.
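
Per pixel it's just a tiny UV nudge; repeating it every frame is what accumulates into the big swirly distortion (NoiseX / NoiseY stand in for the curl-noise lookup, and the scale here is made up):

```cpp
// Rough restatement of the warp pass: each output pixel samples the scratch target at a
// slightly offset UV, so the whole buffer drifts along the noise a little every frame.
#include <utility>

std::pair<float, float> WarpSampleUV(float U, float V, float NoiseX, float NoiseY)
{
    const float WarpAmount = 0.002f; // a couple of texels per frame (placeholder)
    return { U + NoiseX * WarpAmount, V + NoiseY * WarpAmount };
}
```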

Post-process
Turns the red/green mask texture into the final colors that'll be blended into the camera view. The green channel is used to blend between the actual smoke colors, and the red channel is used to mask the result to where the smoke is, as well as to add a white "core" to the smoke.
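
In rough C++ terms (the colors, the core threshold, and the blend curve here are all made up for illustration):

```cpp
// Rough restatement of the post-process composite per pixel.
#include <algorithm>

struct FColor3 { float R, G, B; };

static FColor3 Lerp(FColor3 A, FColor3 B, float T)
{
    return { A.R + (B.R - A.R) * T, A.G + (B.G - A.G) * T, A.B + (B.B - A.B) * T };
}

FColor3 CompositeSmoke(FColor3 SceneColor, float Density /*red*/, float ColorMix /*green*/)
{
    const FColor3 SmokeColorA = { 0.05f, 0.05f, 0.08f }; // placeholder dark smoke color
    const FColor3 SmokeColorB = { 0.35f, 0.10f, 0.45f }; // placeholder tinted smoke color
    const FColor3 White       = { 1.0f,  1.0f,  1.0f  };

    // Green channel picks between the two smoke colors.
    FColor3 Smoke = Lerp(SmokeColorA, SmokeColorB, std::clamp(ColorMix, 0.0f, 1.0f));

    // A white "core" where the density is strongest.
    const float Core = std::clamp((Density - 0.8f) * 5.0f, 0.0f, 1.0f);
    Smoke = Lerp(Smoke, White, Core);

    // Red channel masks the smoke into the camera view.
    const float Mask = std::clamp(Density, 0.0f, 1.0f);
    return Lerp(SceneColor, Smoke, Mask);
}
```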

That’s pretty much it! I hope this explains things well enough; if you’ve got any more questions about it, please let me know. :slight_smile:

4 Likes