Andy's VFX Sketchbook. Oculus VR, hand tracking, Unreal and VAT

:comet: thumbnail :comet:

move_cam_up_iteration_3


Hey great RTVFX community!

I’m Andy and I am happy to join the party here.

Let me show you the effect I am working on first; after that, those who are interested can find the sequence breakdown and a couple of sentences about me.

Cut when pieces are emitted from the hand:
pillar_mk1_emission

Pillar assembly:
pillar_buildup

The full sequence in good quality

Here is the general overview of the pipeline:

  1. Using Oculus Quest 2 and its hand tracking feature (no controllers), hand and finger movements feel quite natural
  2. Oculus’ hand skeleton is a bit different from Unreal’s standard mannequin, so I created a custom skeletal mesh in Houdini and tweaked an existing inverse kinematics solution (UBIKSolver) to work with it
  3. The pillar assembly is a VAT (vertex animation texture) I made in Houdini. Internally it is an RBD (rigid body dynamics) sim with custom trajectory solvers and timing management
  4. The stream of emitted pieces is driven by Niagara, because I want full control over its trajectory - whenever I move or rotate the hand, newly spawned particles follow the updated path
  5. Stream emission is attached to the event of opening the hand. I’ve added some infrastructure to detect when a finger opens or closes, so I can plug in and swap effects with little overhead
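The open/close detection in step 5 can be sketched in plain Python. This is a hypothetical stand-in, not the actual Unreal implementation: the curl values, thresholds, and class names are all illustrative, and in practice the curl would come from the hand-tracking API. Hysteresis (separate open and closed thresholds) keeps a finger hovering near the boundary from firing events every frame.

```python
# Hypothetical sketch of per-finger open/closed detection with hysteresis.
# Curl is a float in [0, 1]: 0 = fully open, 1 = fully closed.
OPEN_THRESHOLD = 0.3    # finger counts as open again below this curl
CLOSED_THRESHOLD = 0.7  # finger counts as closed above this curl

class FingerStateTracker:
    """Tracks open/closed state per finger and records events on change."""
    def __init__(self):
        self.state = {}   # finger name -> "open" | "closed"
        self.events = []  # (finger, new_state) pairs, in order

    def update(self, finger, curl):
        prev = self.state.get(finger, "open")
        if prev == "open" and curl > CLOSED_THRESHOLD:
            self.state[finger] = "closed"
            self.events.append((finger, "closed"))
        elif prev == "closed" and curl < OPEN_THRESHOLD:
            self.state[finger] = "open"
            self.events.append((finger, "open"))  # e.g. trigger stream emission

tracker = FingerStateTracker()
for curl in [0.1, 0.5, 0.8, 0.6, 0.2]:  # one simulated curl sample per frame
    tracker.update("index", curl)
print(tracker.events)  # [('index', 'closed'), ('index', 'open')]
```

Swapping effects then becomes a matter of subscribing different handlers to these events, which is the "little overhead" part.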

The next learning and building plans are:

  1. Landing the pillar on the floor, accompanied by ground destruction
  2. Adding secondary effects to the pillar construction, like dust coming out when large pieces are put into place
  3. Adding materials around the ice theme
  4. Adding proper accumulation effects around the hands and more nuance to piece casting
  5. Adding sound effects - I want to explore using collision data from Houdini to generate a believable sound layer
  6. And to continue having fun with this :wink:

And here is my story in short - I’d been having a great journey building startups and doing things like data engineering and machine learning when I got the chance to try VR for the first time (the game was Superhot VR) and…

I was hooked and knew what I wanted to work on from then on.

Fast forward a couple of years, and I am happy to start sharing the things I am learning and building.


Thanks to those of you who’ve read this far, and please ask questions if you’d like to know more.


Added pieces accumulation around the hands, followed by emission:

atract_emit

The full sequence in good quality

The effect has two main components:

  • A spline that creates the path for particles, continuously updated to reflect the current hand position and orientation.

  • Particles managed by the Niagara system. I wanted the look of the same pieces being spawned, accumulated, and then cast, so I ended up building a single emitter in which particles transition between those three states when certain criteria are met.

    I also wanted particles to stick to the hand while they are around it, but fly freely along the trajectory once they are away from it. For this, the position is interpolated between the current and cached splines. As there are no arrays of splines in Niagara where I could store past spline trajectories, I cache them into a 1D array within the emitter.

    Add a good number of curves to drive behavior and variability, plus other logic like forward vector calculation, and I ended up with a fairly big emitter. It is maintainable and easy to art direct, but I think it is around the upper limit in complexity.

    One more thing - conditions and loops in Custom HLSL are best avoided where possible, as Unreal advises. I cannot count how many times the editor crashed while I was working with an array inside loops, though that only happens while editing Niagara modules, not in-game.
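The spline-blending idea above can be sketched outside Niagara in plain Python. This is a hedged toy model, not the actual emitter: splines are stored as point lists, `stick_weight` stands in for the distance-based criterion (1 near the hand, 0 once a particle is fully cast), and `cached_spline` stands in for the past trajectory pulled from the 1D cache array.

```python
# Toy stand-in for the Niagara position logic: a particle's position is a
# blend between the current hand spline and the spline cached when it was
# cast, weighted by how strongly it should stick to the hand.

def sample_spline(spline, t):
    """Linear sample of a spline stored as a list of (x, y, z) points, t in [0, 1]."""
    i = min(int(t * (len(spline) - 1)), len(spline) - 2)
    f = t * (len(spline) - 1) - i
    a, b = spline[i], spline[i + 1]
    return tuple(a[k] + (b[k] - a[k]) * f for k in range(3))

def particle_position(particle, current_spline):
    """Blend cached and current spline samples by the particle's stick weight."""
    w = particle["stick_weight"]  # 1 = follow the hand, 0 = keep the old path
    p_cur = sample_spline(current_spline, particle["t"])
    p_old = sample_spline(particle["cached_spline"], particle["t"])
    return tuple(w * c + (1 - w) * o for c, o in zip(p_cur, p_old))

hand_spline = [(0, 0, 0), (1, 0, 0)]  # current path from the hand
particle = {"cached_spline": [(0, 1, 0), (1, 1, 0)],  # path cached at cast time
            "stick_weight": 0.5, "t": 0.5}
print(particle_position(particle, hand_spline))  # (0.5, 0.5, 0.0)
```

In the real emitter the blend weight would be driven by distance to the hand (and the curves mentioned above) rather than stored directly.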


Dear RTVFX community - how would you approach building this kind of effect? Happy to hear your opinions or feedback.


Hey, dear community,

I’m excited to share a recent iteration of the VR experience I’ve been working on. It has several effects and interactions, which you can check out on YouTube; I will post the main one here:

closeup_iteration_3

Wider shot:

move_cam_up_iteration_3

In essence, the effect is built through two Houdini RBD simulations:

Sim 1. Pillar Assembly
I built a hierarchical structure (pillar → large chunks → small chunks) and combined it into a 3-stage animation that defines target positions for each stage, then used forces within the RBD sim to guide the pieces into place.
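The "forces toward staged targets" idea can be sketched as a damped spring pulling each piece toward the target position of the active stage. This is a rough Python illustration, not the Houdini solver: the stiffness, damping, step counts, and target values are all made up for the sketch.

```python
# Toy model of Sim 1: each piece is pulled toward its target for the
# current stage by a damped spring force; once settled, the next stage's
# target takes over. Semi-implicit Euler integration, unit mass.

def step_piece(pos, vel, target, dt, stiffness=20.0, damping=6.0):
    """One integration step of a damped spring toward the target position."""
    force = tuple(stiffness * (t - p) - damping * v
                  for p, v, t in zip(pos, vel, target))
    vel = tuple(v + f * dt for v, f in zip(vel, force))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

# Drive one piece through three staged targets in sequence.
stages = [(1.0, 0.0, 0.0), (1.0, 2.0, 0.0), (1.0, 2.0, 3.0)]
pos, vel = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for target in stages:
    for _ in range(200):  # let the piece settle at each stage target
        pos, vel = step_piece(pos, vel, target, dt=0.016)
# pos is now very close to the final stage target (1.0, 2.0, 3.0)
```

In the actual sim the "settle" condition would come from the RBD solver state and the timing management mentioned above, not a fixed step count.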

Sim 2. Floor Breakdown
The behavior I was striving for: the floor breaks into bigger pieces when the pillar hits it, and those large pieces then break into smaller ones when colliding with walls, the pillar, or the floor.
I took a hierarchical approach to this sim as well, maintaining constraints within large clusters until specific criteria are met.
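The cluster-level breaking criterion can be sketched as follows. Again a hedged toy model rather than the Houdini constraint network: cluster names, the impact threshold, and the dictionary layout are all illustrative.

```python
# Toy model of Sim 2's hierarchical breaking: internal constraints of a
# large cluster stay glued until the peak impact on that cluster exceeds
# a threshold, at which point the whole cluster releases its small pieces.

def update_constraints(clusters, impacts, threshold=50.0):
    """Return the set of cluster ids whose internal constraints break.

    clusters: dict cluster_id -> list of internal constraint ids
    impacts:  dict cluster_id -> peak impact magnitude this frame
    """
    broken = set()
    for cid, strength in impacts.items():
        if strength > threshold and cid in clusters:
            broken.add(cid)
    return broken

clusters = {"slab_A": ["c0", "c1"], "slab_B": ["c2"]}
impacts = {"slab_A": 80.0, "slab_B": 12.0}  # slab_A hits a wall hard
print(update_constraints(clusters, impacts))  # {'slab_A'}
```

In Houdini this maps naturally onto deleting glue constraints per cluster inside the constraint network when an impact attribute crosses the threshold.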

The next step for me is to create a technical breakdown for the entire experience. Please let me know if you’d like more insight into specific aspects.

The full sequence can be checked here:

Feel free to reach out with any questions or share your feedback. I appreciate your engagement!

Thank you,
Andrew


Hi Andy. Thanks for sharing your work. Could you show the workflow for step 1 in more detail? I guess you used 3 stages of point positions, then blended them step by step and baked to VAT. Is this right?
Thank you in advance :slight_smile:

hey @gud22

It is a bit more nuanced, but I encourage you to check the breakdown video - I hope you’ll find the answers there:

If any questions remain, I’d be happy to answer.

And my apologies for the delayed response.