UE4 cheap fire


I am trying to create fire similar to the fire in “Technical Artist Bootcamp: The VFX of Diablo” presented by Julian Love from Blizzard.

I have the textures multiplied against each other, but they all play at the same speed with the same information, even after using a Particle Random node. Is there something I am missing to have them all play individually?

I am also wondering whether the best way is to use ribbons. If so, how would you approach that, and if not, would rectangles be the next option?

Sorry, UE4 is unfamiliar territory for me.


What you could do is use a Dynamic Parameter inside your particle material, which e.g. adds its value to your panner's speed. Then inside your particle emitter, add a Dynamic Parameter module and pick the parameter you created in the material. Set its distribution to 'float uniform' between two values, e.g. [-1, 1], and tick 'Spawn Time Only'. Each particle should then get a random speed added to its panner.

Why would you want to use ribbons? If you're going for an abstract fire you could, but I'd go for flipbooks, alpha cutout panning, or shader magic like this.

If my explanation was too cryptic, I'll try to make some screenshots later on when I'm home.

BTW: the Particle Random node only works with GPU particles.


Here's my basic setup. Ignore the top 3 texture samples; they are not used in this fire material, though they are the setup for the smoke from the Blizzard VFX video you are talking about.

I apologize for the messy material. I was just playing around one night at 3 in the morning.


No time to comment on the whole setup, but that DepthFade shouldn't be multiplied; put that previous Multiply into the DepthFade's Opacity input instead.

/me runs off to catch bus.

You can get color from the BlackBody node; that way you can use depth fade to simulate losing temperature against the ground. It also saves 3 channels, since you only need 1. (Tip: the BlackBody node expects a temperature in Kelvin, so multiply its input by about 3000.)


This is a bit of a sidetrack, but I've always wondered about the DepthFade. I see some people putting it in as a multiply and others plugging it directly into Opacity. I've tried both ways before and they seemed to give the same result. Is there a reason why it's one or the other?

I prefer multiplying after; you might want to subtract the depth fade output instead, so the Opacity input is optional.
(Mayyybe it'd be doing an extra multiply by 1 inside the DepthFade, but that "should" get stripped out on compile.)

Yeah, it shouldn't matter… I'm gonna bump a dev buddy to see what they think about it.
I wanna know what the difference would/could be.

Ha Luos_83!
I bet that's a habit from Unreal 3. Remember when it was just a depth node?


The code that the DepthFade outputs is:

Opacity * Saturate( (SceneDepth - PixelDepth) / max( FadeDistance, 0.0001) );

So that means it is mathematically the same as multiplying it by an opacity after. The compiler should be smart enough to realize that 1 * X = X and compile away the default Opacity (which is 1).

Sooooo that means that it should be identical. I do not know if certain compilers would not correctly remove the needless multiply, but I know most would.


Link to the talk mentioned in the first post, for anyone interested.




That link isn't working anymore; here's a different URL: GDC 2013: Julian Love - "Technical Artist Bootcamp: The VFX of Diablo" : Free Download, Borrow, and Streaming : Internet Archive
