Hello, I haven’t used Unity, so I don’t think I can provide any tips for it. From what I’ve seen online, Unity is generally much easier to work with when it comes to implementing GPU techniques (Unreal is way harder ^^’)
For Niagara, we can implement a custom data interface in code, which can then be used in the Editor. These interfaces are accessible from almost every stage, from EmitterSpawn to ParticleUpdate.
If you watch the GDC talk I referenced, the basic requirements are:
- We should be able to store the rendered pixel data in a buffer (I used a ByteAddressBuffer); I’m sure Unity lets you do this. At the moment I’m storing particle world positions and the number of particles to spawn. We could instead store the ScreenUV and use that to extract data from Niagara’s GBuffer interface, but I store the world position directly so that I can emit particles from transparent objects too.
- We should be able to extract the particle count from GPU buffers (buffer readback). This is needed because Niagara determines how many particles to spawn on the CPU. Reading back from the GPU on the same frame can cause stalls, so I do the readback on the next frame, which adds one frame of delay to particle spawning.
- Once I have the particle data buffer and the particle count to spawn, I can just send that data to Niagara through my custom data interface.
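To make the one-frame-delay readback idea concrete, here is a minimal plain-C++ sketch of the pattern. This is not Unreal code (in Unreal the readback goes through the RHI); the struct name and layout are hypothetical, and a simple double buffer stands in for the GPU buffer so the CPU only ever reads a count the GPU finished writing last frame:

```cpp
#include <array>
#include <cstdint>

// Hypothetical stand-in for the GPU spawn-count buffer.
// Double buffering: the "GPU" writes this frame's slot while the CPU
// reads last frame's slot, so the CPU never waits on in-flight GPU work.
struct SpawnCountReadback {
    std::array<uint32_t, 2> counts{};  // two slots, alternated per frame
    int frame = 0;

    // "GPU" writes this frame's particle count into the current slot.
    void writeFromGpu(uint32_t count) { counts[frame % 2] = count; }

    // CPU reads the count written on the *previous* frame (one frame stale).
    uint32_t readLastFrame() const { return counts[(frame + 1) % 2]; }

    void endFrame() { ++frame; }
};
```

The trade-off is exactly as described above: no pipeline stall, at the cost of particles spawning one frame late.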
Writing to a custom buffer requires jumping through a lot of hoops in Unreal; I think that should be easier in Unity. I do a depth-only pass and output the data into a buffer instead of using render targets.
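For the "output into a buffer instead of a render target" part, the core pattern is an atomic append: each shaded pixel bumps a counter and writes its world position into the reserved slot (on the GPU this would be an InterlockedAdd on a RWByteAddressBuffer). Here is a hedged plain-C++ sketch of that pattern; the struct, names, and layout are all hypothetical illustrations, not the actual shader code:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };

// Hypothetical CPU-side model of the append buffer the depth-only pass
// writes to. std::atomic stands in for the GPU's InterlockedAdd.
struct ParticleAppendBuffer {
    std::atomic<uint32_t> count{0};   // doubles as the "particles to spawn" count
    std::vector<Float3> positions;    // particle world positions

    explicit ParticleAppendBuffer(std::size_t capacity) : positions(capacity) {}

    // Equivalent of the pixel shader body: reserve a slot, write the data.
    void append(Float3 worldPos) {
        uint32_t slot = count.fetch_add(1);
        if (slot < positions.size())
            positions[slot] = worldPos;
    }
};
```

The same counter that reserves slots is what gets read back (with the one-frame delay) to tell Niagara how many particles to spawn.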
Let me know if you need more info