PixelPerfect: Sketch #25 Root

https://giphy.com/gifs/SAH8JzNKf08leQYOHv/html5

This is an interesting challenge.
My goal is to make a simulation of growing tentacles that grow and try to catch whatever is near. The important part is that I don't want to fake it: the tentacles should behave in a physically correct manner.

I'm doing the simulation using Vellum in Houdini.

The first iteration:


Something is already growing and moving randomly using simplex noise.

Update 1:

https://giphy.com/gifs/Z9QlAK15Trwm6Mqopx/html5

Made the tentacles weave around their target.
I'm applying a curly force to the tip of each tentacle and to areas that are close enough to the target (the places where the curly force is applied are marked in red); the white part moves randomly as before.
Also made the tentacle size random.

Update 2:

https://giphy.com/gifs/XGUy7np0dDkVLxqkwO/html5

Encoded the animation into a texture using the Houdini GameDev toolset.
Playing it back in Unity using a slightly modified SideFX shader.
It was not very obvious how to configure it all for Unity, but it works in the end.

Final Result:

https://giphy.com/gifs/SAH8JzNKf08leQYOHv/html5

Polished small details:
added cracks on the ground and some dust.


P.S.
I'm open to any technical questions.

48 Likes


Here is what's inside it:
each tentacle consists of 26 spheres connected to each other with fixed-length constraints, and each sphere has a random force that slowly changes over time.

7 Likes

This is amazing, it looks so organic and real! :grinning:

1 Like

Sometimes the tentacles nip each other and then try to free themselves; it looks funny, I have to capture that on video.

Also, Twitter suggested feeding them Japanese girls…

4 Likes

Hahahaha.
Twitter plz.:joy:

2 Likes

It's very interesting. How did you begin working on it?
Can you share the .hip?

1 Like

This is really neat! Are you planning a realtime implementation? Houdini Engine can't export simulations that are dynamic, only baked ones, right?

1 Like

The physics are too heavy for a realtime engine, so I'm going to play back a pre-baked animation. I'm not sure about the exact route yet: maybe unity.formats.alembic, or baking into a vertex animation texture and playing it back with an ECS-based GPU animation library.

1 Like

I'll share a lot more details a bit later.

After the end of the challenge, please do a tutorial on how to do this, or kindly share a link to whichever webinar or masterclass helped you achieve this effect. It will be very helpful to other Houdini-using RTVFX artists.

Thank you very much! :grinning::pray:

2 Likes

The trickiest part of all this was making the tentacles grow… Normally in Houdini you never change the constraints or geometry attributes of dynamic objects during the simulation: you provide the solver with initial values and it does the magic automatically. In this case, though, I had to change geometry attributes and constraint lengths over time, during the simulation…

I started with randomly positioned short lines.

image

The lines are short, but each has 26 points,
so 9 lines × 26 points per line = 234 points.
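
For illustration, here is a minimal VEX sketch of how such starting geometry could be built in a detail wrangle. The counts match the description above, but everything else (random placement, starting line length) is an assumption; the original scene may just as well use ordinary SOP nodes instead.

    // Detail wrangle -- a sketch only, not the original setup.
    int   nlines   = 9;     // number of tentacles
    int   npts     = 26;    // points per line
    float line_len = 0.2;   // the lines start out short

    for (int l = 0; l < nlines; l++)
    {
        // random position on the ground for the base of this line (assumed)
        vector base = set(fit01(rand(l), -1, 1), 0, fit01(rand(l + 131), -1, 1));

        int prim = addprim(0, "polyline");
        for (int i = 0; i < npts; i++)
        {
            vector pos = base + set(0, line_len * i / float(npts - 1), 0);
            int pt = addpoint(0, pos);
            addvertex(0, prim, pt);
        }
    }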

For each point on each line, I set a few attributes for later use (a sketch of this setup follows the image below):

  • id of the line (1-9)
  • index of the point in the line (1-26)
  • U value of the point (0-1, the normalized position of the point along the line)
  • the bottom point of each line is added to a “pin” group

image
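
A minimal VEX sketch of that attribute setup, as a point wrangle. It assumes each line is a single polyline primitive with sequentially numbered points; the values here are 0-based, while the list above counts them as 1-9 and 1-26.

    // Point wrangle -- a sketch of the attribute setup described above.
    int pts_per_line = 26;

    // id of the line this point belongs to
    int prims[] = pointprims(0, @ptnum);
    i@id = prims[0];

    // index of the point within its line and its normalized U value
    i@index = @ptnum % pts_per_line;
    f@u     = i@index / float(pts_per_line - 1);

    // the bottom point of each line goes into the "pin" group
    if (i@index == 0)
        setpointgroup(0, "pin", @ptnum, 1);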

Next, I'm setting up constraints for Vellum.

Now these lines are hair, with very high stretch stiffness and pretty high bend stiffness;
the bottom points are pinned and can't move.

The Vellum Constraints node generates additional geometry for the constraints: 450 primitives,
50 for each line: 25 for bend and 25 for stretch.

image

Next, I'm passing the data generated by Vellum Constraints to a DOP network (the dynamic simulation).

image

This is a very basic simulation setup,
with a Vellum solver and a static solver.

image

Also (on the left side of the graph) I'm passing in two static rigid bodies, a tube and a ground plane, for the tentacles to collide with.

image

All the magic happens in a SOP Solver, where I'm changing the constraints and forces.

image

5 Likes

The first DOP Import node is configured to fetch the ConstraintGeometry (450 primitives);
the second gets the usual Geometry (234 points).

That data is passed to a Compiled Block

with 2 Attribute Wrangles.
The left wrangle operates on the stretch constraints.

image

I'm getting the points on both sides of each constraint, reading their radii,
and setting the rest length of the constraint to the sum of the radii plus an extra 5%.
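
Roughly, that wrangle could look like the VEX sketch below (run over primitives). It assumes the constraint geometry is wired into input 0, the simulated points (with their pscale) into input 1, and that only the stretch constraints reach this wrangle; the original code may differ.

    // Primitive wrangle over the stretch constraint primitives -- a sketch only.
    int pts[] = primpoints(0, @primnum);   // the two points of this constraint

    // read the radii of the corresponding simulation points from input 1
    float r0 = point(1, "pscale", pts[0]);
    float r1 = point(1, "pscale", pts[1]);

    // rest length = sum of the radii + an extra 5%
    f@restlength = (r0 + r1) * 1.05;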

The right wrangle operates on the points.

image
(it may need some cleanup)
Here I'm changing the point radius over time and applying forces (a rough sketch follows the list below):

  • grow with random delay per tentacle
  • if a point is close enough to the victim (y-axis) or close to the tip of the tentacle - curl around the y-axis
  • otherwise, add random velocity
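
A rough VEX sketch of that point wrangle. The structure follows the three bullets above, but all parameter values, the target position, and the way the force is applied (adding to v@v) are assumptions for illustration, not the original code.

    // Point wrangle inside the SOP Solver -- a sketch only.
    float t = @Time;

    float max_radius     = 0.1;        // assumed
    float grow_time      = 3.0;        // seconds to reach full radius (assumed)
    float curl_dist      = 0.5;        // how close to the victim the curl kicks in (assumed)
    float curl_strength  = 2.0;        // assumed
    float noise_strength = 1.0;        // assumed
    vector target        = {0, 0, 0};  // the victim sits on the y-axis (assumed position)

    // grow: ramp the point radius up over time, with a random delay per tentacle
    float delay = rand(i@id) * 2.0;
    float grow  = clamp((t - delay) / grow_time, 0, 1);
    f@pscale    = grow * max_radius;

    // horizontal distance from the y-axis through the victim
    vector to_axis = target - v@P;
    to_axis.y = 0;

    if (length(to_axis) < curl_dist || f@u > 0.9)
    {
        // close enough to the victim, or near the tip: curl around the y-axis (shown red)
        v@v += cross(normalize(to_axis), {0, 1, 0}) * curl_strength * f@TimeInc;
        v@Cd = {1, 0, 0};
    }
    else
    {
        // otherwise add a slowly changing random velocity (shown white)
        v@v += curlnoise(v@P * 0.5 + set(0.0, t * 0.2, 0.0)) * noise_strength * f@TimeInc;
        v@Cd = {1, 1, 1};
    }
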
3 Likes

At this point, it looks like this:

tt-wire-32

The blue lines are the velocity applied to the points;
the red part of a line is where it curls, white is where it moves randomly.

same thing but with point radius visualization:

tt-points-32

2 Likes

Next, I'm sweeping a circle along the lines, converting the swept NURBS to a mesh,
and merging it with the tube that was used as a static collider.

image

I'm saving the resulting geometry to a file cache.

image

4 Likes

Next, in the output context, I'm dropping a GameDev Vertex Animation Texture node

image

and hit Render.
It does a few things:

  • outputs the first frame as a static mesh
  • encodes vertex positions and normals into a texture (.exr, 16 bits per channel)
  • gives me the BBOX numbers and frame count to configure the shader in a realtime engine

After importing those files to Unity,
I changed the position texture settings

image

and created a material.
The bounds and number-of-frames values are the ones provided by Houdini;
note that for Unity the bounds have to be divided by 100.

image

Also, I've added Emission and Local Position to this shader.

4 Likes

This is very interesting work and this effect really deserves 1st place. I like both variants, with small and with big stones. As for me, the option with small stones looks better, because the main focus stays on the tentacles.

1 Like

I totally agree that focus is the most important thing.
Design is mostly about placing the right accents.

In Russian, there is a set expression “Замыленный взгляд”, which literally means “a gaze with soap in the eyes”. While working on this I've watched it too many times, so my eye isn't fresh anymore and I can't clearly see wtf I'm doing. It is great to get feedback from someone with a fresh gaze.

1 Like

One more small update:

I improved the SideFX shader to interpolate positions and normals between the nearest available frames; now there is no need to export that many frames into the texture, and I can do a slow-mo effect.

https://giphy.com/gifs/Ve5EVMUitZcbD7dqsH/html5

HLSL

    void flipHouCoord( inout float3 v ){
        v.xyz = v.xzy; // swizzle y and z because the textures are exported with z-up
        v.x *= -1;     // flip x to account for the difference in handedness between Houdini and Unity
    }

    float3 decodeFloat( float f ){
        // decode float to float2
        float alpha = f * 1024;
        float2 f2;
        f2.x = floor(alpha / 32.0) / 31.5;
        f2.y = (alpha - (floor(alpha / 32.0) * 32.0)) / 31.5;

        // decode float2 to float3 (unpack the encoded normal back to a unit vector)
        float3 f3;
        f2 *= 4;
        f2 -= 2;
        float f2dot = dot(f2, f2);
        f3.xy = sqrt(1 - (f2dot / 4.0)) * f2;
        f3.z = 1 - (f2dot / 2.0);
        f3 = clamp(f3, -1.0, 1.0);
        flipHouCoord(f3);
        return f3;
    }


    // vertex function
    void vert (inout appdata_full v, out Input o) {

        half currentTime  = _T * (_numOfFrames - 1);
        half prevKeyframe = floor(currentTime);   // whole frame index below currentTime
        half nextKeyframe = ceil(currentTime);    // whole frame index above currentTime
        half r = currentTime - prevKeyframe;      // 0-1 blend between the two keyframes

        half prevKeyframeLocation = prevKeyframe / _numOfFrames; // in 0-1 range
        half nextKeyframeLocation = nextKeyframe / _numOfFrames; // in 0-1 range

        // get position and normal from the texture
        float2 uv1 = v.texcoord1.xy;
        float4 prevData = tex2Dlod(_posTex, float4(uv1.x, uv1.y - prevKeyframeLocation, 0, 0));
        float4 nextData = tex2Dlod(_posTex, float4(uv1.x, uv1.y - nextKeyframeLocation, 0, 0));

        float3 pos = lerp(prevData.xyz, nextData.xyz, r);

        // expand normalised position texture values to world space
        pos *= _boundingMax - _boundingMin;
        pos += _boundingMin;
        flipHouCoord(pos);
        v.vertex.xyz += pos;

        // calculate normal
        v.normal = lerp(decodeFloat(prevData.w), decodeFloat(nextData.w), r);

        UNITY_INITIALIZE_OUTPUT(Input, o);
        o.localPos = v.vertex.xyz;
    }
3 Likes

Would love to see the .hip file, if you're up for sharing. This effect is fantastic!