Motion Vectors (for animated textures): how do you generate them?

Just want to add that Steven Burrichter gave us an example of remapping the velocity inside a pyro sim with a Python script.
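
Roughly along those lines, here is a minimal sketch of the remapping idea (my own stand-in, not the actual script): clamp the raw signed velocities to a chosen max speed and pack them into the 0-1 range so they survive being baked to a texture. The function name and arguments are hypothetical.

```python
import numpy as np

def remap_velocity(vel, max_speed=None):
    """Pack signed velocity components into 0-1 for baking to a texture.

    vel: (..., 3) array of sim-space velocities.
    max_speed: optional clamp; if None, the largest absolute component is used.
    Returns the packed array plus the scale needed to recover the original
    values later (original = (packed * 2 - 1) * max_speed).
    """
    vel = np.asarray(vel, dtype=np.float32)
    if max_speed is None:
        max_speed = float(np.abs(vel).max())
    # Clamp, then shift/scale from [-max_speed, max_speed] into [0, 1].
    packed = np.clip(vel, -max_speed, max_speed) / (2.0 * max_speed) + 0.5
    return packed, max_speed
```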


This should help with getting the proper velocities. I had attempted to do this on my already rendered “vel” fields by clamping them in post, but that clearly didn't work, so I will need to try this out. @Gaxx, thanks for sharing those results! Looks great.

-A


I have been using Twixtor for our vectors here at Foundry. Sometimes we layer them with the FumeFX velocity maps, but other times we just use the raw Twixtor vectors. As mentioned by Gaxx, you need to divide the Twixtor strength value by the resolution of one of your frames. Unfortunately the vectors you get out of FumeFX don't work the same way.

The best way I have found to find the strength for the FumeFX vectors is to import the raw ones into your engine and adjust the shader strength until the motion is smooth. If the strength is too low you will get the frame-blending effect; if it's too high you will see each frame flowing past the silhouette of the next frame. It helps to make the sprite live for a long time (60-100 seconds) so it's easy to see any errors. Once you have your value, you can multiply it by your original frame resolution to recover the Twixtor strength required to match your FumeFX map.
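
For context, the strength value you're dialing in ends up driving a blend roughly like the one below. This is a generic sketch of motion-vector flipbook blending written out in Python for readability, not our actual shader; the exact signs and conventions vary between implementations.

```python
def flow_blend_uvs(uv, mv_curr, mv_next, strength, frame_frac):
    """Distort the current and next sub-frame UVs by their motion vectors.

    uv:         sample position within the sub-frame (0-1).
    mv_curr:    motion vector from the current frame's flow map, decoded
                from 0..1 into -1..1.
    mv_next:    motion vector from the next frame's flow map, same decode.
    strength:   the shader strength value discussed above.
    frame_frac: 0..1 progress between the two frames.
    The shader samples the current frame at uv_a, the next frame at uv_b,
    then lerps between them by frame_frac.
    """
    uv_a = (uv[0] - mv_curr[0] * strength * frame_frac,
            uv[1] - mv_curr[1] * strength * frame_frac)
    uv_b = (uv[0] + mv_next[0] * strength * (1.0 - frame_frac),
            uv[1] + mv_next[1] * strength * (1.0 - frame_frac))
    return uv_a, uv_b
```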

If everything works out you can extend the lifetime of your flipbooks over 100x while still having smooth motion. This technique works best with longer sequences (the longer the sequence, the smoother the transition between frames) or a reduced frame count; there's not much point using it on sprites that only live 1-2 seconds.

My latest flipbook has 72 frames, reduced from about 200 frames, and I'm able to have the sprite live for 300+ seconds with no problems.

P.S. Orthographic renders can help.


Oh man motion vectors. I explored some of it with Houdini a while back. Been meaning to revisit it.

So, Houdini gives you the ability to get the velocity voxel grid and render it. I think I left a shader implementation in the Game Dev shelf on SideFX's GitHub before I left (instead of some crazy Python). However, keep in mind that's exact data in 3D space. Optical flow is a 2D approximation derived from images, as opposed to actual simulated data. Because it's an approximation, that's how we get that nice little overflow of movement.

You could try blending the velocity data between frames and creating a flipbook out of it. I've been able to get some results with some compositing work. However, keep in mind that because we're looking at a 3D velocity representation, you might get some artifacts and weirdness.

Now, if we could actually get a proper optical flow COP node, then the above process would be (mostly) moot. I’ve been exploring HDK recently, so maybe this would make a fun little side project.
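
In the meantime, one way to prototype 2D optical flow outside of Houdini is OpenCV's Farneback implementation. A rough sketch, assuming two consecutive grayscale flipbook frames on disk (file names are placeholders):

```python
import cv2
import numpy as np

prev = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
nxt = cv2.imread("frame_002.png", cv2.IMREAD_GRAYSCALE)

# Dense 2D optical flow in pixels of displacement per frame.
# Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Pack into a 0-1 flow map: 0.5 = no motion; a displacement of half the
# frame maps to the ends of the range.
h, w = prev.shape
packed = np.zeros((h, w, 3), dtype=np.float32)
packed[..., 0] = np.clip(flow[..., 0] / w + 0.5, 0.0, 1.0)  # red   = x
packed[..., 1] = np.clip(flow[..., 1] / h + 0.5, 0.0, 1.0)  # green = y
cv2.imwrite("flow_001.png", (packed[..., ::-1] * 255).astype(np.uint8))
```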


This is somewhat of a brain dump, a couple of notes from my RnD into optical flow. Probably has some duplicate info, but hopefully some useful info as well. :slight_smile:

Despite looking and working very similarly, there is actually an important difference between raw velocity data and a true optical flow map. The velocity data coming out of your 3D package has arbitrary values assigned to it. The engine doesn't know if your max velocity is 10 cm or 100 m; it only knows the velocities in relation to each other. It also doesn't know the difference from one frame to the next.

A proper optical flow map's strength values are directly tied to its frame resolution. By using the frame resolution as a sort of measuring stick, it can figure out the exact amount of displacement required to achieve a perfect blend. This is why I prefer to use a program like Twixtor to generate my optical flow maps. Using a little bit of math we can figure out the exact value required for our shader: the formula is your displacement strength (assuming you're using Twixtor or a similar program) divided by your frame resolution. (Not the total resolution of your flipbook, only one frame. Resizing the final flipbook afterwards does not change the required values.)

ie: a strength of 128 / a 1024 px frame = 0.125 in the shader.

Since the velocity info coming from your 3D package isn't in relation to anything other than itself, there is no easy formula for calculating the required strength for the shader. The best way I've found is just to import your velocity data and manually adjust the shader until the motion is smooth. Setting the lifetime of your sprite to something crazy high (100-200 seconds) helps you narrow down the required value. Once you have your value, you can multiply it by the frame resolution to get the corresponding value for Twixtor (in case you wanted to combine the two).
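
Both directions of that conversion as a tiny helper; the function names are mine, this is just the formula above restated:

```python
def twixtor_to_shader_strength(twixtor_strength, frame_res):
    """Twixtor displacement strength -> shader strength.

    frame_res is the resolution of a single frame, not the whole flipbook,
    e.g. 128 / 1024 = 0.125.
    """
    return twixtor_strength / float(frame_res)


def shader_to_twixtor_strength(shader_strength, frame_res):
    """Inverse: take the value you dialed in by eye for raw sim vectors and
    recover the matching Twixtor strength, e.g. 0.125 * 1024 = 128."""
    return shader_strength * float(frame_res)
```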

Since the strength values are directly tied to the frame resolution, you can't scale your vectors in post. You need to do all scaling before your vectors are created, either with a keyframed camera in your scene or by using Twixtor to regenerate the vectors after your scaling operations.

Velocity info from your package doesn't have the edge padding required to deal with the silhouette of your effect. When an optical flow map is calculated, each frame is compared to the next in the sequence to find out exactly how the pixels have moved. A sort of edge padding is then added so the silhouette can be properly distorted. Without this you will see the same stepping artifacts on the silhouettes as you do with traditional frame blending.

I'm not sure if this is the case for all iterations of the shader, but the way our shader works, the flow map is always one frame ahead. So when it's on frame 1 of the effect, the flow map it's using is actually the one from frame 2. Because of this we have to do a shuffle with all our flow maps before compiling the flipbook. I just delete the first frame and put an empty flow map on the last frame (127,127,0). This way the first frame of my flow-map flipbook is actually frame 2 of my sequence. Usually your effect has almost totally faded out by your last frame, so missing the last vector doesn't matter too much, unless you're looping your sequence.
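
That shuffle is easy to script. A minimal sketch, assuming numbered flow-map files on disk (the naming pattern and neutral-map file are hypothetical):

```python
import shutil
from pathlib import Path

def shift_flowmaps(flow_dir, pattern="flow.{:04d}.png", frame_count=72,
                   neutral_map="neutral_127_127_0.png"):
    """Shift a flow-map sequence one frame earlier for a shader that reads
    the flow map one frame ahead.

    Frame N's slot receives frame N+1's flow map; the last slot gets a
    neutral map (RGB 127,127,0 = no displacement). Run this on a copy of
    the sequence, since it overwrites files in place.
    """
    flow_dir = Path(flow_dir)
    for n in range(1, frame_count):
        src = flow_dir / pattern.format(n + 1)
        dst = flow_dir / pattern.format(n)
        shutil.copyfile(src, dst)
    shutil.copyfile(flow_dir / neutral_map, flow_dir / pattern.format(frame_count))
```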

You can use optical flow maps to reduce the number of frames in your flipbook while still keeping smooth motion, but it does have limitations. The main thing to keep an eye on is how much the effect changes from one frame to the next. Ideally you don't want the effect to move more than 5-10% of the width of the frame. Also keep in mind that although the frame is being distorted towards the position of the next one, there is still a frame blend that happens as the frames switch. If the frames are too different from one to the next, you will still get blending artifacts. (ie: imagine an A trying to change into a B.) Think of the motion vectors as a direct linear translation of the pixels from one frame to the next.
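
One rough way to sanity-check that 5-10% guideline is to decode a generated flow map and measure its largest displacement. A sketch under my own decoding assumptions (8-bit map, 127 roughly = neutral, result expressed as a fraction of the frame):

```python
import numpy as np
from PIL import Image

def max_flow_fraction(flowmap_path, shader_strength):
    """Largest per-pixel displacement between two frames, as a fraction of
    the frame size. Values much above ~0.05-0.10 usually mean the frames
    are too far apart to blend cleanly."""
    img = np.asarray(Image.open(flowmap_path).convert("RGB"), dtype=np.float32)
    # Decode R/G from 0..255 into -1..1, then apply the shader strength so
    # the result is in UV units (fractions of the frame).
    flow = (img[..., :2] / 255.0 - 0.5) * 2.0 * shader_strength
    return float(np.sqrt((flow ** 2).sum(axis=-1)).max())
```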

Extreme camera movements between frames can produce bad results from Twixtor (raw velocity maps are much more accurate in this situation), for example keyframing the camera so the effect fills as much of the frame as possible regardless of how much it moves. You have to keep in mind the differences between frames, as well as the pixel positions between frames. Things such as explosions that expand a lot at the start can sometimes cause weird motions in your vector info if you keyframe to compensate for the rapid expansion (vectors moving in directions opposite to what you would expect). I've found it helps in those situations if you leave a little bit of room for the effect to expand in each frame (ie: the effect is slightly bigger in frame 2 than frame 1, etc.). This will help the vectors flow outward as the effect expands, instead of flowing inward due to the camera zooming in. You can still keyframe the camera, as long as the effect keeps expanding in your frames. The first couple of frames are always the trickiest.

You can still have frames that are not a power of 2, as long as the final texture is a power of 2 (ie: a 2048x2048 flipbook, 9 frames wide and 12 frames tall). In this situation the same equation is used to calculate the shader strength; the only thing to keep in mind is to use the larger dimension of your frame (ie: for a 1024x768 frame, use 1024 in the equation).
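
The same rule as a quick helper, with the frame size derived from the atlas layout (the function name is mine):

```python
def shader_strength_for_atlas(twixtor_strength, atlas_w, atlas_h, cols, rows):
    """Shader strength when the individual frames are not a power of two.

    Divides by the larger dimension of a single frame, per the rule above
    (e.g. for a 1024x768 frame, divide by 1024).
    """
    frame_w = atlas_w / cols
    frame_h = atlas_h / rows
    return twixtor_strength / max(frame_w, frame_h)
```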

Now go forth and…


Speaking of Twixtor, it would be hugely appreciated if someone could write up a really quick guide on how to get it to generate and export optical flow maps suitable for use in a particle shader! A while back I downloaded the version for After Effects but I couldn’t for the life of me get it to work, and after googling I could only find resources on how to use it to slow down filmed footage, which wasn’t quite what I was after.


Great info! I've definitely been finding you can't stretch things too far with these either: too much distance between frames doesn't seem like it can be covered, and the effect generally falls apart. Smooth motion seems to be the sweet spot for these.

I've also been toying with using MotionVector.z as motion magnitude, just length(xy) but baked into the texture, since it's a free channel and these samples are already being combined.
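
If it helps, that bake step is tiny. A sketch of the idea, assuming an (H, W, 3) float flow map with x/y encoded 0-1 in R/G and 0.5 meaning no motion (my assumption about the encoding):

```python
import numpy as np

def pack_magnitude_in_blue(flow_rgb):
    """Bake length(xy) of the motion vector into the otherwise unused blue
    channel, so the shader gets motion magnitude for free."""
    xy = (flow_rgb[..., :2] - 0.5) * 2.0           # decode to -1..1
    mag = np.sqrt((xy ** 2).sum(axis=-1))          # length(xy)
    out = flow_rgb.copy()
    out[..., 2] = np.clip(mag, 0.0, 1.0)           # clamp: length can exceed 1
    return out
```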

I did show you… in person. o.O Were you ignoring me all this time?! :smiley:

Screw it. I’ll google it for you.
http://help.revisionfx.com/task/21/#/tutorial-86


This is exactly what I was going to post but was having trouble remembering.
Thanks!

Is it possible to make a looping animation of smoke from chimneys using only one frame for the diffuse and for the motion vectors?

Or at least with as few frames as possible for the smoke and for the motion vectors. Would that make smoke cheap enough, performance-wise, for mobile games?

Is it possible to make a looping animation of smoke from chimneys using only one frame for the diffuse and for the motion vectors?

Definitely. This example uses a 128x128, single-frame smoke alpha, with a Photoshop Render > Clouds result in its red and green channels to act as smoky normals, and flat particle color for the albedo. It's displaced around by a small scrolling noise texture mixed into the normal+alpha texture's UVs. Don't mind the non-looping .gif.
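
Roughly what that setup is doing per pixel, as I understand it; a generic sketch of the scrolling-noise UV distortion, not the actual material (the names and values are made up):

```python
def distorted_uv(uv, time, noise_sample, distort_strength=0.03,
                 scroll_speed=(0.05, 0.02)):
    """Single-frame smoke trick: push the smoke texture's UVs around with a
    small scrolling noise texture.

    uv:           base particle UV (0-1).
    noise_sample: function (u, v) -> (r, g), the scrolling noise texture.
    Returns the UV to sample the normal+alpha smoke texture with.
    """
    # Scroll the noise lookup over time (wrap to keep it tiling).
    nu = (uv[0] + scroll_speed[0] * time) % 1.0
    nv = (uv[1] + scroll_speed[1] * time) % 1.0
    r, g = noise_sample(nu, nv)
    # Recentre the noise around zero and offset the smoke UVs by it.
    du = (r - 0.5) * 2.0 * distort_strength
    dv = (g - 0.5) * 2.0 * distort_strength
    return (uv[0] + du, uv[1] + dv)
```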


Another way you can loop it with Twixtor is to loop the frames of your diffuse and then clone the first couple of frames and rename them so they sit at the end (see the sketch after this post).

ie: a 32-frame sequence becomes 34 frames; frame 33 is a clone of frame 1, frame 34 of frame 2, etc.

This way Twixtor will calculate the vectors across the loop transition. After that, just delete the extra frames and compile your vectors.

P.S. It helps if you have a couple frames of padding. Twixtor examines multiple frames at a time when calculating vectors.
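
A minimal sketch of that frame-cloning step, assuming numbered frames on disk (the naming pattern is hypothetical):

```python
import shutil
from pathlib import Path

def add_loop_padding(frame_dir, pattern="smoke.{:04d}.png",
                     frame_count=32, pad=2):
    """Clone the first few frames onto the end of a sequence so Twixtor can
    compute vectors across the loop point.

    e.g. a 32-frame sequence gains frames 33 and 34, which are copies of
    frames 1 and 2. After generating the vectors, delete the extras.
    """
    frame_dir = Path(frame_dir)
    for i in range(pad):
        src = frame_dir / pattern.format(i + 1)
        dst = frame_dir / pattern.format(frame_count + i + 1)
        shutil.copyfile(src, dst)
```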


I've been finding that just using Twixtor doesn't yield great results. Everything I've been testing still has a bit of judder. None of my tests are as buttery smooth as the examples in Klemen Lozar's writeup.

edit: I need to try cEssex's math trick of strength / resolution to see if that fixes my blend value. :stuck_out_tongue:

Yeah… If you have the wrong value, it will stutter. There are also some things to consider while generating the content to use with motionvectors. You’ll figure them out once you nail the scale.
Edit:

Things to consider would be the shapes of the sim. If you have a thin smoke strand and a blob of smoke juts out from it at one point in the sequence, that will most likely stutter a bit while the uniform-looking parts remain smooth. Uniform, expansion-type motion works best, and looping content without very erratic behaviour works well too.

This is absolutely true; the less noise/movements you have frame-to-frame, the better.

I have tried and failed a ton of times recently trying to get convincing motion vectors for some large-scale, billowing flames - and ultimately there was just too much noise/motion frame-to-frame. The lesson I learned is not to try and force shiny-new-tech in for the hell of it - there’s a time and a place to use it :slight_smile:

Caleb made a great sandstorm effect in our recent CitizenCon demo here. We will hopefully be releasing a breakdown video of this later in the week (I will leave it for Caleb to post that here).

It’s a perfect example where motion vectors CAN work brilliantly - very subtle changes frame-by-frame, and stretched out over a huge lifetime :desert:


Here is the most recent ATV episode. In it I briefly explain the sandstorm a bit better. This is a decent example of the benefit of motion vectors. The dirt/smoke textures in there had roughly 72 frames but have a lifetime of 200 seconds. They were generated from a FumeFX sim that had about 250 frames, then reduced down to the final count without losing any fluidity in the animation. The vectors were generated using Twixtor.


The amount of progress in this game is insane! :cold_sweat:

Thanks so much for sharing that @cEssex! I wouldn't have figured it would be that straightforward a setup on the VFX side. Super cool! Were the vectors generated entirely with Twixtor, or did you need to export anything special out of FumeFX besides just the RGB sim itself?


These ones are 100% Twixtor. Recently I have been trying to avoid layering the FumeFX vectors on top; it's much less hassle. As long as your render doesn't change too much between frames, Twixtor does a pretty good job. I find it can really help if you have your effect expand slightly in each frame, so the effect is slightly smaller in frame 1 than in frame 2, etc. This way, when you generate the vectors they tend to always flow outwards, towards the edge of the frame. This will reduce some of the odd, inverted motions you can get if you're doing any sort of scaling to keep the effect as big in the frame as possible. The resolution savings you get by reducing the total frame count make up for the slightly less efficient use of the space within each frame.


Did you end up having your texture looping, or do you just stretch it out from frame 0 to the end with the vectors?

No looping, just really long lifetimes. New particles continuously fade in as the old ones die.

I doubt anyone (aside from vfx artists :stuck_out_tongue: ) will stand there for 200 seconds (almost 3.5 mins) to watch it anyhow haha.
