Render target which constantly gets darker. Ping-pong technique?

Hi guys :slight_smile:

I'm a bit stuck. The goal is to take a dynamic texture (canvas render target), draw something on it, but every frame the overall brightness of that texture should drop a little: 1.0, 0.9, 0.8 and so on, as an example.
So clearly we should take our canvas render target, pass it through a simple material that outputs
tex * 0.9 (let's call it the dimmer material), and save the result to another render target.

My problem is that I don't clearly understand how to set up a loop that will take the already-dimmed texture and pass it through another dimmer pass each frame.
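In pseudocode, the loop I'm imagining looks something like this (plain Python, with lists standing in for the two render targets; all names are just for illustration, nothing Unreal-specific):

```python
# Minimal sketch of the ping-pong dimming loop: two grayscale buffers
# (plain lists here) stand in for the two render targets.
SIZE = 4
rt_read = [[0.0] * SIZE for _ in range(SIZE)]
rt_write = [[0.0] * SIZE for _ in range(SIZE)]

rt_read[1][1] = 1.0  # frame 0: something bright drawn on the canvas RT

DIM = 0.9  # the "tex * 0.9" dimmer material

for frame in range(3):
    # Dimmer pass: sample the read RT, write the dimmed result to the other RT.
    for y in range(SIZE):
        for x in range(SIZE):
            rt_write[y][x] = rt_read[y][x] * DIM
    # Swap roles, so next frame dims the already-dimmed result.
    rt_read, rt_write = rt_write, rt_read

# rt_read[1][1] is now 1.0 * 0.9**3, i.e. roughly 0.729
```

The swap at the end of each frame is the whole trick: the target you just wrote becomes the source for the next pass.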

I've not done this myself, but I think there's an example in the Unreal Content Examples which does something similar - they do a fluid calculation so the Render Target water material acts like it has physics. If memory serves they use 3 RTs in a loop rather than 2; not sure if that's necessary or just part of their setup for fluids. Maybe take a look at what they're doing and see if it helps?

The Unreal Content Example is a good starting point, but it does have some flaws.

Essentially, make a shader only for doing the simulation. Give the shader a texture parameter, which you use for creating the next frame, and write that result into the RT, which then feeds the same texture parameter again for the frame after that, and so on.

Not 100% sure why they use 3 RTs, but I guess it's because writing into the RT and then into the texture parameter takes time, and by cycling through a couple of them you make up for it and achieve real-time performance. (I guess :D)

As far as I know, something happens under the hood when you try to read and write just one render target. I read on some OpenGL forum that even if you write the code with just 1 RT for everything, and it works, the GPU will actually create a second RT texture internally for the read/write. And code written that way can cause bugs and errors.

Why not 1 RT?
If I'm not mistaken, most architectures restrict read access and write access to the same resource at the same time.

Unless you are specifically just sampling one pixel and then overwriting it, it just doesn’t make any sense to read the texture you’re writing to.

Imagine a filter running over a texture and changing values as it goes. If it wrote those results into the same texture, the next pixel wouldn't know what was in there before.

Besides that, the GPU runs pixels in batches asynchronously. So from one pixel's point of view, the value of another pixel could change while it is running its code, which is a total no-go.
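You can see the problem even in plain single-threaded CPU code (a sketch, nothing Unreal-specific): a simple 1D box blur run in place gives a different, wrong result, because later pixels read values that were already overwritten.

```python
# A 1D "blur" run correctly (separate output buffer) vs. in place
# (reading the same buffer we're writing to).
src = [0.0, 0.0, 1.0, 0.0, 0.0]

# Correct: read from src, write into a separate buffer.
out = src[:]
for i in range(1, len(src) - 1):
    out[i] = (src[i - 1] + src[i] + src[i + 1]) / 3

# Broken: write back into the buffer we're still reading.
buf = src[:]
for i in range(1, len(buf) - 1):
    buf[i] = (buf[i - 1] + buf[i] + buf[i + 1]) / 3

print(out)  # symmetric: the spike spreads evenly to both neighbours
print(buf)  # smeared to the right: each step reads already-modified data
```

And on a GPU it's worse than this, because the pixels don't even run in a predictable order.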

Why 3 RT’s?
The code for the fluid surface calculates further progression by looking at the change between the two previous frames and projecting it forward.
So you have:

  • [A] front rt for writing your results to.
  • [B] rt for frame - 1
  • [C] rt for frame - 2

Next frame you cycle one forward, so:

  • [C] becomes front rt and gets written into
  • [A] is the result from last frame, so it is frame - 1
  • [B] is the result from two frames ago, so it is frame - 2
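The rotation above can be sketched in plain Python (hypothetical names; the update rule `2*h1 - h2` is just the core of a damped wave step, with the neighbour averaging a real fluid sim would add left out for brevity):

```python
# Sketch of the 3-RT cycle: each "RT" is just a list of floats here.
N = 8
front = [0.0] * N   # [A] written to this frame
prev1 = [0.0] * N   # [B] frame - 1
prev2 = [0.0] * N   # [C] frame - 2

prev1[3] = 1.0      # a ripple that appeared last frame

DAMP = 0.99
for frame in range(3):
    for i in range(N):
        # Project the change between the two previous frames forward.
        front[i] = (2.0 * prev1[i] - prev2[i]) * DAMP
    # Cycle one forward: front becomes frame - 1, frame - 1 becomes
    # frame - 2, and the old frame - 2 buffer is reused as the new front.
    front, prev1, prev2 = prev2, front, prev1

# After the loop, prev1 holds the most recent simulated frame.
```

No buffer is ever read and written in the same pass; the oldest one is simply recycled as the next write target.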

Hopefully I haven’t forgotten anything important :smiley:


Thanks, got it! :smiley:

I got into RTs and simulating stuff lately, and each time I start working I learn something new. It's a very interesting topic and you can do a lot with it; you just have to be careful to optimize it.