Let me first show you the effect I am working on; after that, those who are interested can find the sequence breakdown and a couple of sentences about me.
Using the Oculus Quest 2 and its hand tracking feature (no controllers), hand and finger movements feel quite natural.
Oculus’ hand skeleton is a bit different from the standard mannequin in Unreal, so I created a custom skeletal mesh in Houdini and tweaked an existing inverse kinematics solution (UBIKSolver) to work with it.
The pillar assembly is a VAT (vertex animation texture) I made in Houdini. Internally it is an RBD (rigid body dynamics) sim with custom trajectory solvers and timing management.
The stream of emitted pieces is driven by Niagara, because I want to fully control its trajectory: whenever I move or rotate the hand, newly spawned particles follow the updated path.
Stream emission is triggered by the event of opening the hand. I’ve added some infrastructure to know when a finger is opened or closed, so I can plug in and swap effects with little overhead.
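To illustrate the finger open/closed infrastructure, here is a minimal sketch in Python (not the actual Unreal setup): it classifies a finger from the tracked tip-to-palm distance, with a hysteresis band so the event does not flicker when the finger hovers near the threshold. The thresholds and the normalization by maximum extension are my assumptions.

```python
import math

OPEN_THRESHOLD = 0.9    # normalized extension above which a finger counts as open (assumed)
CLOSE_THRESHOLD = 0.6   # below which it counts as closed; the gap is the hysteresis band

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FingerState:
    def __init__(self, max_extension):
        self.max_extension = max_extension  # tip-to-palm distance when fully extended
        self.is_open = False

    def update(self, tip_pos, palm_pos):
        """Returns 'opened', 'closed', or None when the state is unchanged."""
        extension = distance(tip_pos, palm_pos) / self.max_extension
        if not self.is_open and extension > OPEN_THRESHOLD:
            self.is_open = True
            return "opened"
        if self.is_open and extension < CLOSE_THRESHOLD:
            self.is_open = False
            return "closed"
        return None
```

With this shape, an effect only needs to subscribe to the "opened"/"closed" events, which is what makes swapping effects cheap.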
My next learning and building plans are:
Landing the pillar on the floor, accompanied by ground destruction
Adding secondary effects for pillar construction, like dust coming out when large pieces are put into place
Adding materials around the Ice theme
Adding proper accumulation effects around the hands and more nuance to the casting of pieces
Adding sound effects; I want to explore using collision data from Houdini to generate a believable sound layer
And to continue having fun with this
And here is my story in short: I was having a great journey building startups and doing things like data engineering and machine learning when I got the chance to try VR for the first time (the game was Superhot VR) and…
I was hooked and knew what I wanted to work on from then on.
Fast-forward a couple of years, and I am happy to start sharing the things I am learning and building.
Thanks to those of you who’ve read this far, and please ask questions if you’d like to know more.
The spline that creates the path for the particles. It is continuously updated to reflect the current hand position and orientation.
Particles are managed by a Niagara system. I wanted the look of the same pieces being spawned, accumulated, and then cast, so I ended up building a single emitter in which particles transition between those three states when certain criteria are met.
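The three-state lifecycle can be sketched as a tiny transition function per particle (my interpretation, not the actual Niagara graph; the accumulate radius and the hand-open trigger are assumptions based on the description above):

```python
# Particle lifecycle: spawned near the hand, accumulating around it, then cast.
SPAWNED, ACCUMULATING, CAST = 0, 1, 2

def next_state(state, dist_to_hand, hand_open, accumulate_radius=0.15):
    """Transition rules for a single particle, evaluated each tick."""
    if state == SPAWNED and dist_to_hand < accumulate_radius:
        return ACCUMULATING          # the piece has reached the hand and sticks
    if state == ACCUMULATING and hand_open:
        return CAST                  # opening the hand releases it onto the spline
    return state                     # otherwise keep the current state
```

Keeping all three states in one emitter, as described above, means a piece keeps its identity (mesh, scale, color) across the whole lifecycle instead of being killed and respawned.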
I also wanted particles to stick to the hand when they are near it, but fly freely along the trajectory once they are away from it. For this, the position is interpolated between the current and cached splines. As there are no arrays of splines in Niagara where I could store past spline trajectories, I am caching those into a 1D array within the emitter.
With a good number of curves driving behavior and variability, plus other logic like forward-vector calculation, I ended up with a fairly big emitter. It is maintainable and easy to art direct, but I think it is around the upper limit in complexity.
Another thing: conditions and loops in Custom HLSL are best avoided where possible, as Unreal advises. I cannot count how many times the editor crashed while I was working with an array in loops, though that only happens while editing Niagara modules, not in-game.
Dear RTVFX community - how would you approach building this kind of effect? Happy to hear your opinions or feedback.
I’m excited to share a recent iteration of the VR experience I’ve been working on. It has several effects and interactions which you can check out on YouTube, while I will post the main one here:
Wider shot:
In essence, the effect is built through two Houdini RBD simulations:
Sim 1. Pillar Assembly
I built a hierarchical structure (pillar → large chunks → small chunks) and combined it into a 3-stage animation that defines target positions for each stage, then used forces within the RBD sim to guide the pieces into place.
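The "forces guide the pieces into place" idea can be sketched as a simple spring-damper (PD) pull toward each piece's current stage target. This is only a model of the control law, under my assumptions; the real thing is a Houdini RBD sim with custom solvers, and the gains here are invented.

```python
def guidance_force(pos, vel, target, stiffness=40.0, damping=8.0):
    """Spring toward the stage target, minus damping on current velocity."""
    return tuple(stiffness * (t - p) - damping * v
                 for p, v, t in zip(pos, vel, target))

def step(pos, vel, target, dt=0.016, mass=1.0):
    """One semi-implicit Euler step of a piece under the guidance force."""
    f = guidance_force(pos, vel, target)
    vel = tuple(v + (fi / mass) * dt for v, fi in zip(vel, f))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

Swapping the target per stage reproduces the 3-stage assembly: each stage just retargets the same spring, and the damping keeps pieces from orbiting their slot forever.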
Sim 2. Floor Breakdown
The behavior I was striving for: the floor breaks into bigger pieces when the pillar hits it, and those large pieces then break into smaller ones when colliding with walls, the pillar, or the floor.
I took a hierarchical approach to this sim as well, maintaining constraints within large clusters until specific criteria are met.
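A minimal sketch of that hierarchical release rule, assuming the breaking criterion is an accumulated impact impulse per cluster (the threshold and the dictionary shapes are mine, not from the actual sim):

```python
def update_constraints(clusters, impacts, break_threshold=50.0):
    """clusters: {name: {'glued': bool}}; impacts: {name: impulse magnitude}.
    Releases a cluster's internal constraints once its impact exceeds the
    threshold; returns the list of clusters broken this step."""
    broken = []
    for name, cluster in clusters.items():
        if cluster["glued"] and impacts.get(name, 0.0) > break_threshold:
            cluster["glued"] = False   # small pieces inside are now free
            broken.append(name)
    return broken
```

Because small-piece constraints only break after their parent cluster does, a wall graze chips off one cluster's worth of debris instead of shattering the whole floor at once.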
The next step for me is to create a technical breakdown for the entire experience. Please let me know if you’d like more insight into specific aspects.
The full sequence can be checked here:
Feel free to reach out with any questions or share your feedback. I appreciate your engagement!
Hi Andy. Thanks for sharing your work. Could you show the workflow for step 1 in more detail? I guess you used 3 stages of point positions, then blended them step by step and baked to VAT. Is this right?
Thanks in advance