Motion Vectors (for animated textures): how do you generate them?

One thing people sometimes forget is that you can hand-animate morphing using Photoshop’s Liquify. Author your distortions on an image and save the Liquify mesh; then load your UV coordinate texture and apply that mesh distortion to it. It’s an instant ghetto-morph.
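For those more comfortable in code than in Photoshop, the trick boils down to using a warped UV texture as a lookup table. Here is a rough CPU-side sketch in Python/NumPy; the function name and the nearest-neighbour sampling are mine, purely illustrative:

```python
import numpy as np

def warp_with_uv_map(image, uv_map):
    """Warp `image` through a UV lookup texture.

    uv_map[y, x] = (u, v) in [0, 1] names the source location to
    sample for each output pixel, which is exactly what you get after
    running a UV coordinate texture through a saved distortion.
    Nearest-neighbour sampling keeps the sketch short.
    """
    h, w = image.shape[:2]
    xs = np.clip((uv_map[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip((uv_map[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return image[ys, xs]
```

An undistorted (identity) UV map returns the image unchanged; any Liquify-style distortion baked into the map warps it accordingly.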

Instead of starting a new topic about the same thing I’ll lump it in here. I’ve created a Houdini asset that will generate motion vectors that don’t require much in the way of tweaking in the game engine. What I need now is to stress test it. If anyone is currently playing with motion vectors and uses Houdini hit me up and I’ll send you a link to the asset and a video.



Ooh, yes please. I’ve been interested in generating motion vectors in Houdini for a while. I’ve tested it in the past but haven’t had any luck producing anything that looks promising.


That would be cool. I’m actually looking into motion vectors right now, so it would be great if you could share the link and the video with me. Cheers.


There are two major solutions I’m aware of for generating motion vectors from video. One is built into the Slate particle sheet editor. The other is an After Effects plugin called Twixtor.

We’ve released a motion vector ROP for Houdini. You can find more details here:



It’s been a while since I started this thread. Thanks to you all, I’ve discovered things I didn’t know about before. In the end, I settled on generating the motion vector maps via ALU instead of baking them into textures, thanks to the mighty power of Gather(). I’m so glad I came across Gather(); it’s mind-blowing!

Gather() returns a float4 containing, in xyzw, the 4 texels that would be used for (non-point) bilinear filtering, and you can also specify offsets. So GatherRed() returns those 4 texels’ red channel, GatherGreen() the green channel, and so on. In my case I specify the offsets explicitly.

Basically this:

    float gradientX = (SampleLevelZero(normTex, cFrameUV + float2(offset.x, 0)).x * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, cFrameUV - float2(offset.x, 0)).x * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, pFrameUV + float2(offset.x, 0)).x * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, pFrameUV - float2(offset.x, 0)).x * 2.0 - 1.0);

    float gradientY = (SampleLevelZero(normTex, cFrameUV + float2(0, offset.y)).y * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, cFrameUV - float2(0, offset.y)).y * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, pFrameUV + float2(0, offset.y)).y * 2.0 - 1.0) +
                      (SampleLevelZero(normTex, pFrameUV - float2(0, offset.y)).y * 2.0 - 1.0);

This can be done in 2 samples instead:

    float4 xSamples = GatherRed(normTex, cFrameUV,
        int2(g_sampleDistance, 0), int2(-g_sampleDistance, 0),
        int2(offsetX + g_sampleDistance, 0), int2(offsetX - g_sampleDistance, 0));
    float4 ySamples = GatherGreen(normTex, cFrameUV,
        int2(0, g_sampleDistance), int2(0, -g_sampleDistance),
        int2(0, offsetY + g_sampleDistance), int2(0, offsetY - g_sampleDistance));
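For anyone without a shader toolchain handy, here is a CPU-side Python/NumPy sketch of what a GatherRed() call (without extra offsets) hands back: the red channel of the 2x2 texel quad that bilinear filtering would blend at the given UV. The function name is mine, and the component ordering of the real intrinsic is fixed by the graphics API, so check your target’s documentation:

```python
import numpy as np

def gather_red(tex, u, v):
    """Return the red channel of the 2x2 texel quad that bilinear
    filtering would blend at (u, v), similar to HLSL's GatherRed().
    The real intrinsic returns the quad in a fixed API-defined order;
    this sketch just returns the quad flattened row by row.
    """
    h, w = tex.shape[:2]
    # Top-left texel of the 2x2 bilinear footprint (texel centers
    # sit at (i + 0.5) / w, so shift by half a texel before flooring).
    x0 = int(np.clip(np.floor(u * w - 0.5), 0, w - 2))
    y0 = int(np.clip(np.floor(v * h - 0.5), 0, h - 2))
    return tex[y0:y0 + 2, x0:x0 + 2, 0].ravel()
```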

And on top of that, it’s texture-size agnostic, so you don’t need to adjust a scalar value for the motion strength. All you need is a bit of prep work.

The whole motion vector pass is now as follows. Basically, mStrength switches the process on; otherwise it’s just cross-blending.

    float4 finalUV = float4(0.0, 0.0, 0.0, 0.0);
    float2 pFrameUV_flow = float2(0.0, 0.0);
    float2 cFrameUV_flow = float2(0.0, 0.0);
    if (mStrength > 0.0)
    {
        // Set up UVs and blend.
        float2 pFrameUV = UVtemp1;      // UV of frame 1 (previous)
        float2 cFrameUV = UVtemp2;      // UV of frame 2 (current)
        float frameDiff = uvAnimBlend;  // UV blend timer

        // This is made for RGB-packed flipbooks.
        float2 texSize = float2(0.0, 0.0);
        uint sizeX = 0;
        uint sizeY = 0;
        normTex.tex.GetDimensions(sizeX, sizeY); // Number of pixels in X and Y.
        texSize.x = (float)sizeX;
        texSize.y = (float)sizeY;

        // Texel offsets of the previous frame, needed for the gathers below.
        int offsetX = int(pFrameUV.x * texSize.x);
        int offsetY = int(pFrameUV.y * texSize.y);

        // Scale for varying texture sizes.
        float2 texScale = float2(numberOfXframes / texSize.x, numberOfYframes / texSize.y);
        float g_sampleDistance = 2; // Width of the sample.

        float4 xSamples = GatherRed(normTex, cFrameUV,
            int2(g_sampleDistance, 0), int2(-g_sampleDistance, 0),
            int2(offsetX + g_sampleDistance, 0), int2(offsetX - g_sampleDistance, 0));
        float4 ySamples = GatherGreen(normTex, cFrameUV,
            int2(0, g_sampleDistance), int2(0, -g_sampleDistance),
            int2(0, offsetY + g_sampleDistance), int2(0, offsetY - g_sampleDistance));

        // Unpack from [0,1] to [-1,1]; the red channel is stored inverted.
        xSamples = (1.0 - xSamples) * 2.0 - 1.0;
        ySamples = ySamples * 2.0 - 1.0;

        // Create the gradient of motion.
        float gradientX = 0.5 * (xSamples.x + xSamples.y + xSamples.z + xSamples.w);
        float gradientY = 0.5 * (ySamples.x + ySamples.y + ySamples.z + ySamples.w);

        // Magnitude, accounting for texture size.
        float gradientMag = length(float2(gradientX * texScale.x, gradientY * texScale.y));

        // Convert the gradient into motion vectors for each frame.
        float2 velocity_p;
        float2 velocity_c;
        velocity_p.x = frameDiff * (gradientX * gradientMag);
        velocity_p.y = frameDiff * (gradientY * gradientMag);
        velocity_c.x = (1.0 - frameDiff) * (gradientX * gradientMag);
        velocity_c.y = (1.0 - frameDiff) * (gradientY * gradientMag);
        pFrameUV_flow = velocity_p;
        cFrameUV_flow = velocity_c;
    }
    float4 tex1 = Sample(normTex, UVtemp1 + pFrameUV_flow);
    float4 tex2 = Sample(normTex, UVtemp2 - cFrameUV_flow);
    finalTexture = lerp(tex1, tex2, uvAnimBlend);

You may have to invert the X and Y channels in the xSamples and ySamples lines, depending on your engine’s conventions.
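To make the tail end of the shader concrete (advect each frame’s UVs along the motion vector, then lerp), here is a CPU-side Python/NumPy sketch of the same math. Function and parameter names are mine, and nearest-neighbour sampling stands in for the texture filtering:

```python
import numpy as np

def motion_vector_blend(frame_a, frame_b, velocity, blend):
    """Cross-blend two flipbook frames while advecting each along a
    per-pixel velocity field (in UV units), mirroring the shader:
    frame A is pushed forward by `blend`, frame B pulled back by
    (1 - blend), and the two results are lerped by `blend`.
    """
    h, w = frame_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def sample(img, du, dv):
        sx = np.clip((xs + du * w).round().astype(int), 0, w - 1)
        sy = np.clip((ys + dv * h).round().astype(int), 0, h - 1)
        return img[sy, sx]

    a = sample(frame_a, velocity[..., 0] * blend, velocity[..., 1] * blend)
    b = sample(frame_b, -velocity[..., 0] * (1 - blend), -velocity[..., 1] * (1 - blend))
    return (1 - blend) * a + blend * b
```

With zero velocity this reduces to a plain cross-blend; with blend at 0 or 1 it returns the corresponding frame exactly.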

Result (crappy texture, but hopefully you get the idea):


Thanks for posting about Gather(); it’s definitely a missing piece of the puzzle. I love that you can specify individual gather sample offsets, although looking at the disassembly in RenderDoc, it looks like you pay a similar cost to offsetting manually (in your example, the two Gather() calls compile to 8 instructions).

9: gather4(2,0,0)(texture2d)(float,float,float,float) r2.x, v2.zwzz, texture0.xyzw, sampler0.x
10: gather4(-2,0,0)(texture2d)(float,float,float,float) r2.y, v2.zwzz, texture0.xyzw, sampler0.x
13: gather4_po(texture2d)(float,float,float,float) r2.z, v2.zwzz, r3.xyxx, texture0.xyzw, sampler0.x
14: gather4_po(texture2d)(float,float,float,float) r2.w, v2.zwzz, r3.zwzz, texture0.xyzw, sampler0.x
15: gather4(0,2,0)(texture2d)(float,float,float,float) r3.x, v2.zwzz, texture0.xyzw, sampler0.y
16: gather4(0,-2,0)(texture2d)(float,float,float,float) r3.y, v2.zwzz, texture0.xyzw, sampler0.y
18: gather4_po(texture2d)(float,float,float,float) r3.z, v2.zwzz, r1.xyxx, texture0.xyzw, sampler0.y
19: gather4_po(texture2d)(float,float,float,float) r3.w, v2.zwzz, r1.zwzz, texture0.xyzw, sampler0.y

However, if you don’t specify manual offsets, Gather() is only a single instruction.

1: gather4(texture2d)(float,float,float,float), r0.xyxx, texture0.xyzw, sampler0.x

The next step would be to look into a rework of your method that can take advantage of that fact. Definitely something to ponder. On some textures it also looks like you can get away with a single 4-tap sample, which is interesting.
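One way to read “a single 4-tap sample”: estimate both gradients from the one 2x2 quad a lone Gather() returns, differencing the quad’s columns for X and its rows for Y. A speculative CPU-side Python sketch of that idea (not the poster’s method, names are mine):

```python
import numpy as np

def gradient_from_quad(tex, u, v):
    """Estimate d/dx and d/dy of a channel from the single 2x2 texel
    quad that one Gather() call would return, instead of four offset
    gathers: average the column difference for X, the row difference
    for Y.
    """
    h, w = tex.shape[:2]
    x0 = int(np.clip(np.floor(u * w - 0.5), 0, w - 2))
    y0 = int(np.clip(np.floor(v * h - 0.5), 0, h - 2))
    quad = tex[y0:y0 + 2, x0:x0 + 2]
    gx = 0.5 * ((quad[0, 1] + quad[1, 1]) - (quad[0, 0] + quad[1, 0]))
    gy = 0.5 * ((quad[1, 0] + quad[1, 1]) - (quad[0, 0] + quad[0, 1]))
    return gx, gy
```

On a linear ramp this recovers the exact per-texel slope, which is the best case; real flipbook data is noisier, hence the wider taps in the original.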

I have been working on an alternate method which is of more limited use, as it makes some assumptions about the velocity vectors and magnitude. It’s mostly meant for simple explosions, since it is better at silhouette preservation. It’s cheap, though: currently about 15 or 16 instructions. Here’s a WIP.


Hi Wyeth! I’d be very interested in your alternate method, as one of the problems I’m facing is silhouette preservation between frames on explosions.

Absolutely, as soon as the kinks are worked out I’ll share it. I’m almost done. Right now it requires a good amount of magic-number fiddling which I’d like to remove, although in the interest of keeping it as cheap as possible I might just leave it; the thinking is that you spend 15 minutes dialing in your numbers and then, at least for that flipbook, you’re done. That seems a reasonable trade for ease of use. On most projects I’ve worked on, you don’t have 50 different flipbooks to dial in; you use the same 5-10 over and over.


I’ve been attempting this and can’t get it right. Any chance I could see your node setup?

Hi Junco, here’s a basic example you can start with.

Of course, you can change input UVs and other things as you go.


How’s it coming along :smiley:

Thank you, the visual helps tremendously. :slight_smile:

Hi Mederic, have you seen this?


I’m overwhelmed, but desperate to incorporate this into our mobile (Unity) pipeline.
It looks like the easiest (artist-friendly), most accessible method is:

  • make the explosion in Blender, Max, PopcornFX, or Maya (many simple tutorials online we can learn from)
  • output it through FacedownFX (the Slate editor)
  • beg a shader engineer to incorporate it into the toolset

My mission was to grab an explosion, and I can take two avenues:

  • sample the whole sequence
  • take 4 images, try to loop it (2x2)

I went with a simple 2x2 to see if I could figure it out.

There must be some way to append a 5th image to be used as the final flow, but I don’t know what to do next… actually, I feel like I haven’t learned anything.

Are there any more detailed, accessible methods out there? Klemen Lozar’s is still too involved for me to pick up.
I tried googling “Optical Flow Gnomon” or “Lynda” and didn’t get anything particularly useful.

What would be the current next steps in terms of research?

Obviously, the ideal answer I want: PopcornFX or Blender would have an “optical flow” checkbox:

  • I make the explosion
  • hit the checkbox
  • render two sprite sheets
  • plop them into the engine shader

Is this a pipe dream?
Is Houdini my only single-package solution at the moment?
Is it even a single-package solution?


Right now, Houdini is the only single-package solution.

The other solutions are Slate, Twixtor, and potentially NukeX.

Klemen’s writeup is the only non-Houdini one I’ve seen.


Thanks, man. I’m trying Fraps with Unity -> VLC -> image sequence + Slate.

I mean…it’s a start.


  • the beginning frames are fast; there isn’t enough visual parity between them to get flow
  • the smoke is so slow it’s using too much texture space; it could be rendered on 2s

I can get visual parity with playback_speed, but Slate would need a “remap” curve to process the sequence, blending X:1, and the curve would also process smoke frames at higher ratios to throw out redundant visual data. I imagine sequences need to be authored with normalized pixel movement rather than real time: some kind of optimized distortion timing, analogous to a model’s UVs having unified pixel density. Am I making myself clear?
But all that doesn’t exist, so what can we do?
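That “remap curve” doesn’t exist in Slate as far as I know, but the resampling it describes, keeping frames so each step covers roughly equal pixel motion instead of equal clock time, is easy to prototype. A hypothetical Python sketch (the per-frame motion magnitudes would have to come from an optical-flow pass):

```python
import numpy as np

def retime_by_motion(motion_per_frame, n_out):
    """Pick source-frame indices so each kept frame covers roughly
    equal accumulated pixel motion, instead of equal wall-clock time.
    motion_per_frame[i] = mean flow magnitude between frames i and
    i + 1 (any units). Returns n_out indices into the sequence.
    """
    cum = np.concatenate([[0.0], np.cumsum(motion_per_frame)])
    targets = np.linspace(0.0, cum[-1], n_out)
    # For each target amount of motion, find the nearest source frame.
    return np.searchsorted(cum, targets, side="left")
```

A fast start followed by slow smoke then keeps the early frames densely and skips most of the tail.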

Could I just break it up into 2 sprites and 2 emitters? Would that work?

  • one for the explosion, using additive blending
  • one for the smoke, using alpha blending

Hmm, not a bad idea? This might allow me to optimize the smoke into a 256 with a 3x3 layout (9 images)
and use a 512 with a 4x4 layout for the explosion.
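Back-of-the-envelope numbers for that split (a hypothetical helper, just arithmetic):

```python
def flipbook_stats(sheet_px, grid):
    """Per-frame resolution and total texel count for a square
    flipbook sheet of sheet_px x sheet_px pixels holding a
    grid x grid layout of frames."""
    return sheet_px // grid, sheet_px * sheet_px

# Splitting into a 256 sheet (3x3 smoke) plus a 512 sheet (4x4 explosion).
smoke_frame_px, smoke_texels = flipbook_stats(256, 3)
boom_frame_px, boom_texels = flipbook_stats(512, 4)
combined = smoke_texels + boom_texels
```

The two sheets together cost 327,680 texels, under a third of a single 1024 sheet’s 1,048,576, for comparison.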

Hi, I’m curious whether anyone else is running into blockiness when the flipbook is played back more quickly? Has anyone found a solution for that?

Hey everyone, at Tuatara we’re super happy to announce that we’ve released a one-click motion vector baking tool: TFlow - Motion Vector Generator