Hi Folks,
I’ve been trying to do “EveryDay” VFX / tech art sketches in my spare time since Nov last year. Managed about a two-in-three hit rate, as I just can’t do it every day, but I’m still quite happy with some of the doodlings and experiments that have come out of it.
This month, the experiments have mostly been with LIDAR point cloud data from NYC plus audio-reactive inputs - e.g. today’s is:
Not sure if I’ll be spamming this thread with every daily, or just with ones that I’m particularly chuffed with.
Any feedback, ideas etc. more than welcome! I already have a massive ideas.txt file, but I find it handy to have cool ideas on the backburner for when I get tired of whatever I’m messing about with - so the more the merrier!
Cheers!
#EveryDay 289: Running through the current audio-reactive point cloud experiments. The first 2 are in the most polished state, but there’s still scope for refining colours. Will need to trim out the redundant test variants of the pull/push forms - I’ll keep the “volumetric” ones, and maybe work on nicer colour gradients, as the ones in there are pretty much stock Unity vfxgraph ones with tweaked brightness.
#EveryDay 296: Shiny fish variant. No schooling/boids behaviour, still spawning from fluid sim velocity buffer and then motion from turbulence noise field (modulated by fluid sim speed).
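The “turbulence noise modulated by fluid sim speed” idea can be sketched roughly like this - toy layered-sine noise and made-up function names standing in for the proper curl/turbulence noise a VFX graph would sample:

```python
import math

def pseudo_noise(x, y, z, octaves=3):
    """Cheap layered-sine 'noise' standing in for the turbulence noise
    field a VFX graph would sample -- illustration only, not production noise."""
    v, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        v += amp * math.sin(freq * x + 1.7 * math.sin(freq * y) + 2.3 * math.sin(freq * z))
        amp, freq = amp * 0.5, freq * 2.0
    return v

def turbulence_velocity(pos, fluid_speed, strength=1.0):
    """Per-particle velocity from the noise field, with its magnitude
    modulated by the fluid-sim speed sampled at the particle's position."""
    x, y, z = pos
    n = (pseudo_noise(x, y, z),
         pseudo_noise(y + 31.4, z, x),      # offset inputs to decorrelate axes
         pseudo_noise(z, x + 12.9, y))
    return tuple(strength * fluid_speed * c for c in n)
```

Where the fluid speed is zero the particles sit still, and faster regions of the sim kick the turbulence motion up proportionally - which is the gist of the modulation described above.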
wow, that looks nice! You should make a GIF and add it as the first image in your first post so that it gets picked up as the thumbnail (or manually upload one as a thumbnail). I almost didn’t click your thread because there was no engaging thumbnail, but there are very interesting experiments in here!
I’ve seen those LED walls used for realtime movie VFX production a few times in person. Is the moiré an issue for the person controlling the effects?
Because up close I always found that somewhat disturbing / confusing
For this one it’s not great, because the LED panels are 5mm pixel pitch and designed for a viewing distance of 10+ meters (the section shown is 1% of an LED install going into an indoor theme park as a wraparound screen all the way around the central hub section).
So at the distances the Azure Kinect camera works (max depth about 6 meters, but motion picked up better around 3 meters), the viewing distance isn’t good for the screen. You can even see significant moiré in the recording - not helped by my iPhone 8 being pretty old and low-res now. The iPad Pro sensor, especially using the wide FoV lens/zoom, gives much less moiré when recording from the same distances and closer.
However, with much finer pixel pitch LED panels it’s much better - e.g. IIRC these ones are around 2mm pixel pitch and designed/specced for viewing distances of around 2 meters (used for a body-tracked game install on a cruise ship, with an LED floor in front, so the visuals need to work up close):
There’s loads of interesting info about LED volumes for TV/film these days - AFAIK they use a similar pixel pitch (around 2mm), but the cameras filming have much higher-resolution sensors and sit further away than 2m, so moiré from pixel pitch isn’t an issue; the panels do need to be calibrated to the camera sensor’s response rather than the human eye, though. This is one of the best articles I’d read when ILM/Favreau/Mandalorian first started showing off virtual production: The Mandalorian: This Is the Way - The American Society of Cinematographers
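For a rough feel of why pixel pitch maps to viewing distance, one common rule of thumb (my assumption here, not the vendors’ actual spec process) is the distance at which a single pixel subtends about 1 arcminute - the usual figure for normal human visual acuity. A quick sketch:

```python
import math

def min_viewing_distance(pixel_pitch_mm, acuity_arcmin=1.0):
    """Distance (in meters) at which one pixel subtends `acuity_arcmin`
    arcminutes -- beyond this, individual pixels blend together."""
    theta = math.radians(acuity_arcmin / 60.0)   # arcminutes -> radians
    return (pixel_pitch_mm / 1000.0) / math.tan(theta)

for pitch_mm in (5.0, 2.0):
    print(f"{pitch_mm} mm pitch -> ~{min_viewing_distance(pitch_mm):.1f} m")
```

This acuity-based figure is conservative (~17m for 5mm pitch, ~7m for 2mm), while vendor-quoted minimum distances like the 10m and 2m figures above are usually more aggressive - often closer to “pitch in mm ≈ distance in m”.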
Yes, camera calibration and a minimum distance of ~5m sounds familiar. The setup in that video though - wow, that looks utterly magical
finally feeling like a magician
Messing about with HDRP in the latest Unity alpha, trying to get my lighting, fogging, PPP values etc. all into sensible ranges so I’m not constantly fighting with settings because 6 months ago I randomly made one of the lights twice as bright as the sun as a quick hack to catch the bloom.
Fed some audio into a vfxgraph:
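In Unity this sort of thing typically means sampling the spectrum (e.g. AudioSource.GetSpectrumData) and pushing per-band levels into exposed VFX Graph properties via VisualEffect.SetFloat. As a language-agnostic sketch of just the band-energy step (function and band choices are mine, not from any specific project here):

```python
import math

def band_levels(samples, sample_rate, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Naive DFT, then average magnitude per frequency band --
    values like these could drive exposed VFX graph parameters."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im) / n)
    levels = []
    for lo, hi in bands:
        ks = [k for k in range(1, n // 2) if lo <= k * sample_rate / n < hi]
        levels.append(sum(mags[k] for k in ks) / max(len(ks), 1))
    return levels

# 437.5 Hz sits exactly on DFT bin 14 at this rate/length,
# so its energy lands cleanly in the middle band
sr, n = 8000, 256
tone = [math.sin(2 * math.pi * 437.5 * i / sr) for i in range(n)]
low, mid, high = band_levels(tone, sr)
```

In practice you’d use a real FFT and smooth the levels over a few frames before feeding them into the graph, so the visuals don’t flicker with every transient.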
LED CAVE using an OptiTrack camera rig for head-tracking, running the Rural Australia environment in Unreal nDisplay. Had to capture it through the lens of active shutter glasses as it’s stereoscopic 3D, so the video only shows the left-eye view.
We didn’t realise there was a snake model in the brush there until we had it running in the CAVE - when it was just a single 3D powerwall, we never got to look at the ground
Always fun to see what perspective based visuals look like from the wrong perspective. LED CAVE visuals look correct for the position of the tracked 3D glasses I’m wearing, but wrong as soon as that diverges from the recording camera’s PoV.
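For anyone curious how the “correct only from the tracked glasses” effect works: each frame the projection is rebuilt as an asymmetric (off-axis) frustum from the tracked eye position and the fixed screen plane. A minimal sketch following Kooima’s generalized perspective projection (corner/eye values below are made up for illustration):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def offaxis_frustum(pa, pb, pc, pe, near):
    """Asymmetric frustum extents (left, right, bottom, top at the near
    plane) for a fixed screen and a tracked eye position, following
    Kooima's generalized perspective projection."""
    vr = norm(sub(pb, pa))        # screen right axis
    vu = norm(sub(pc, pa))        # screen up axis
    vn = norm(cross(vr, vu))      # screen normal, pointing at the viewer
    d = -dot(vn, sub(pa, pe))     # perpendicular eye-to-screen distance
    l = dot(vr, sub(pa, pe)) * near / d
    r = dot(vr, sub(pb, pe)) * near / d
    b = dot(vu, sub(pa, pe)) * near / d
    t = dot(vu, sub(pc, pe)) * near / d
    return l, r, b, t

# 2m x 1.5m screen centred on the origin; eye 2m back, 0.5m right of centre
l, r, b, t = offaxis_frustum((-1, -0.75, 0), (1, -0.75, 0), (-1, 0.75, 0),
                             (0.5, 0.0, 2.0), 0.1)
# the frustum skews left (|l| > r) because the eye sits right of centre
```

Swap in the recording camera’s position instead of the glasses and the image only looks right through the camera - which is exactly the divergence visible in the video.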
Testing some of the 2D logo vfx upconverted to more of a 3D treatment:
Went down a rabbit-hole tweaking this underwatery kelp type vfx: