You can double click the node to see the implementation.

I'll try to explain for future reference:

Blue:

- Scene Depth is the depth recorded in the depth buffer (translucency doesn't write into depth, so we can be certain this is the closest opaque object)
- Pixel Depth is the depth of the current pixel

Dividing SceneDepth by PixelDepth tells us how many times the current pixel depth fits into the total scene depth — in other words, how much further along the view ray the opaque surface lies compared to our current pixel.
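As a quick sketch of the Blue part (toy numbers standing in for the SceneDepth and PixelDepth node outputs, not actual shader code):

```python
scene_depth = 10.0  # depth of the closest opaque surface (from the depth buffer)
pixel_depth = 5.0   # depth of the translucent pixel currently being shaded

# SceneDepth / PixelDepth: the factor by which the camera-to-pixel
# vector must be scaled to reach the opaque surface.
depth_fraction = scene_depth / pixel_depth
print(depth_fraction)  # 2.0
```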

Yellow:

- Absolute world position is the current pixel in world space.
- CameraPosition is the camera position in world space.

Subtracting the camera position from the world position gives us a vector pointing from the camera to the pixel we're currently dealing with (technically a fragment, I believe, but it's fine to think of it as a pixel).
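The Yellow part as a sketch — hypothetical coordinates, with the AbsoluteWorldPosition and CameraPosition nodes modeled as plain 3-tuples:

```python
world_position = (100.0, 50.0, 25.0)   # current pixel in world space
camera_position = (10.0, 50.0, 5.0)    # camera in world space

# WorldPosition - CameraPosition: a vector pointing from the camera to the pixel.
camera_to_pixel = tuple(w - c for w, c in zip(world_position, camera_position))
print(camera_to_pixel)  # (90.0, 0.0, 20.0)
```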

Red:

First we multiply our camera-to-pixel vector by the scene depth fraction, so now we have a vector that points from the camera in the direction of the current pixel, but with the length of the scene depth.

So now the only thing we need to do to get a world-space position is to add our camera position back onto the vector.
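Putting all three colored parts together — a minimal sketch of the whole reconstruction, assuming the inputs above (the function name and tuple representation are mine, not the material nodes'):

```python
def reconstruct_opaque_world_position(camera_position, pixel_world_position,
                                      scene_depth, pixel_depth):
    """Scale the camera-to-pixel vector by SceneDepth / PixelDepth,
    then add the camera position back to get a world-space position."""
    fraction = scene_depth / pixel_depth
    return tuple(c + (p - c) * fraction
                 for c, p in zip(camera_position, pixel_world_position))

# Camera at the origin, pixel 5 units straight ahead, opaque surface at 10 units.
pos = reconstruct_opaque_world_position((0.0, 0.0, 0.0), (0.0, 5.0, 0.0),
                                        scene_depth=10.0, pixel_depth=5.0)
print(pos)  # (0.0, 10.0, 0.0)
```

The result lands on the opaque surface behind the translucent pixel, which is exactly what the Red part of the graph computes.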

To visualize the depth situation, it can be useful to think in numbers.

Current SceneDepth is 10 m.

Current PixelDepth is 5 m (it's always smaller; if it were bigger, the pixel would be behind the scene depth and therefore not get rendered).

Fraction = 10/5 = 2, i.e. we need to make the vector twice as long to reach the current scene depth.
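Running those numbers through the steps (a 1-D sketch along the view ray, camera placed at 0 so pixel depth equals the pixel's position):

```python
camera = 0.0
pixel = 5.0          # PixelDepth: 5 m along the view ray
scene_depth = 10.0   # SceneDepth

fraction = scene_depth / (pixel - camera)        # 10 / 5 = 2.0
opaque = camera + (pixel - camera) * fraction
print(opaque)  # 10.0 -- the reconstructed point sits exactly at the scene depth
```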

I hope that clears it up