Death Stranding - Odradek Terrain Scanner - UE4 case study

Latest update:

-------------- Original post ---------------

Hello everyone! :smiley:

I finished playing DS recently and thought “how could I get this scanning effect in UE4?”, so I decided to try to reconstruct it myself. I’ll share here how it’s going and what I discovered during this adventure.

For reference, the effect looks like this:

So, breakdown is:

  • Scanning wave effect
  • Outline around scanned objects
  • Little colored symbols overlaid on the terrain to indicate steepness/danger

Ok so, for now, here’s what I got inside UE4:

It’s a start for the scanning wave, but it’s not quite perfect.
First, let me show you how it’s done:

I started with the outline effect:
It’s a simple sobel post-process effect, with some minor tweaks. It’s mostly based on this tutorial video (very interesting stuff) and the UE4 livestream training on Cel-Shading which goes a little bit more in-depth about different techniques and approaches for an outline.
Most of the time, you’ll see outlines generated from a custom depth/stencil pass, but in my case it’s not possible since every object on screen must be outlined the same way.
Hence my take on trying a sobel approach here, but you can clearly see it’s not perfect. In Death Stranding, the outline effect seems to be based on a repeating depth value over distance, without caring much about pixel intensity derivatives at all… so I’ll research this further.

The wave effect is the simple part: I just take the Scene Depth buffer and compare it with a parameter to get a mask:

This mask is useful, since now I can know if a pixel has been scanned (is white) or not yet (is black). So I multiply my sobel by this mask, and the mask itself is multiplied by the divided depth to get a gradient:

This is then just multiplied with a blueish color, and added on top of the game rendering (SceneTexture:PostProcessInput0 node). I also faded the outlines a bit near the camera, to get something less noisy at close distances.
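To make the mask logic concrete, here’s a minimal CPU-side sketch in Python of what the graph does per pixel (the `scanning_range` and `falloff` parameter names are mine, not the actual material’s):

```python
def scan_mask(scene_depth, scanning_range):
    """1.0 where the wave has already passed the pixel (scanned), 0.0 otherwise."""
    return 1.0 if scene_depth < scanning_range else 0.0

def scanned_gradient(scene_depth, scanning_range, falloff=5000.0):
    """Mask multiplied by the divided depth, giving the distance gradient described above."""
    mask = scan_mask(scene_depth, scanning_range)
    gradient = min(scene_depth / falloff, 1.0)  # divided depth, clamped to 1
    return mask * gradient
```

The sobel output is then multiplied by the same mask, so outlines only show up in the scanned zone.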

Next up:

  • find a proper outline computation that matches what Death Stranding actually does
  • add the symbols (this one is gonna be fun. will probably do it with particles).
  • set up a third-person control of this with blueprints

Let me know what you think :wink:

PS: I forgot to mention that the PostProcess Material is set to be rendered Before Tonemapping/Before Translucency, because otherwise you’ll get a lot of jitter due to TAA happening during tonemapping.
Also, terrain/environment assets are taken from A Boy And His Kite demo.



You can see the outlines get sharper/thicker at the end of the gif. This is due to TAA running when I stop moving the material parameter. Once I implement this in BP, this shouldn’t happen.

Now the sobel/edge detection is better. Again, it’s not perfect; I had to tweak and cheat a bit to get this result, but I’m getting closer to what the game actually does with its depth-buffer.
So it’s actually a double-sobel:
first pass, classic sobel

second pass, modified depth with interval

Both buffers are maxed together.
The interval line is done by testing the edge-detection with a modified depth-buffer that is kind of “discretized”, like this:

Here you can see the comparison with the right pixel; I just do this for all four pixels (right, left, top, bottom).
By doing that, I’m doing an edge-detection with the fractional repeating part of the depth, hence this repeating edge effect.
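This first fract-based version could be sketched like this in Python (a per-pixel CPU stand-in; the `interval` value plays the role of my depth divisor, and the names are mine):

```python
import math

def frac_depth(depth, interval):
    """Fractional part of the scaled depth -- the repeating ramp being edge-detected."""
    return math.modf(depth / interval)[0]

def interval_edge(depth_c, depth_r, depth_l, depth_t, depth_b, interval=200.0):
    """Edge response on the fractional depth: fires near each interval boundary."""
    c = frac_depth(depth_c, interval)
    total = sum(abs(c - frac_depth(n, interval))
                for n in (depth_r, depth_l, depth_t, depth_b))
    return min(total, 1.0)  # saturate
```

Because the fractional ramp keeps changing between the lines, this version responds in a “zone” around each boundary rather than on a single crisp line, which is exactly the artifact discussed below.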

Result is this:

Next steps: find a better way to get this “repeating line” pattern, if possible. I’m getting close but you can clearly see it’s not good yet.

  • The main problems are the little artifacts between the lines (mostly on the ground grass). This happens because the sobel fires in a certain “zone” instead of producing a pure, single repeating line.
  • And the general outline of objects against the sky (you can see this clearly in the main effect gif: in the top-right corner, the tree leaves are not outlined correctly against the background sky. This is due to the scanning mask lacking a “one-pixel dilation” to catch the sobel correctly within it).

I did it! :smiley:

Note: TAA is disabled for now, for easier tweaking/debugging.
Now it’s a perfect match of the game: a constant, distance-based, one pixel repeating line.

Turns out it was way easier than I expected. Still the good ol’ sobel work, but instead of edge-detecting pixels with the fractional parts of the depth, it’s just a simple floor process:

Bonus: no need for double-sobel, it just works like this.
The result looks like easy stuff, but it was kind of a pain to wrap my head around. I tried a bunch of different approaches and math before ending up with such a simple graph…
End result of the sobel:

And then for the outline/sky problem, as expected, I just had to dilate the scanner mask by one pixel so that it catches the edge of the sobel (which makes sense: the sobel is kind of an “outer stroke” effect, so if you want it inside your mask, you have to dilate your mask by the thickness of that stroke).
I simply copied and pasted my mask code and, just like for the sobel, modified the UVs to sample the neighboring pixels (right, left, top and bottom), then used a “max” node to combine everything at the end.
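The dilation amounts to a max over the pixel and its four neighbors, something like this sketch (the edge-clamped sampling is my assumption about out-of-bounds behavior):

```python
def dilate_mask(mask, x, y):
    """1-pixel dilation: max of the pixel and its 4 neighbors (clamped at borders).
    `mask` is a 2D list of 0/1 values indexed as mask[y][x]."""
    w, h = len(mask[0]), len(mask)
    samples = [mask[y][x]]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx = min(max(x + dx, 0), w - 1)
        ny = min(max(y + dy, 0), h - 1)
        samples.append(mask[ny][nx])
    return max(samples)
```

Any pixel adjacent to the scanned region now counts as scanned, so the one-pixel-wide sobel stroke along the sky silhouette stays inside the mask.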

Next: setup Blueprint control with TPS controller


Quick and dirty recording to show the draft implementation inside the ThirdPersonBP:

Used a MaterialParameterCollection to drive ScanningRange, ScanningDirection and GlobalOpacity parameters.
Direction is set to the camera forward vector when the scan is activated, and this is then tested in the shader with a dot product (against the shader camera vector).
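That direction test boils down to a dot-product threshold, roughly like this sketch (the threshold value and function names are my assumptions):

```python
def dot3(a, b):
    """Plain 3D dot product."""
    return sum(x * y for x, y in zip(a, b))

def scan_direction_mask(scan_direction, pixel_camera_vector, threshold=0.0):
    """1.0 for pixels roughly in front of the stored scan direction,
    0.0 for pixels behind it (e.g. when the player has turned around)."""
    return 1.0 if dot3(scan_direction, pixel_camera_vector) > threshold else 0.0
```

Storing the direction in the MaterialParameterCollection at activation time means the mask stays locked to where the player was facing, even if the camera moves afterwards.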

A simple timeline in the blueprint controls the animation, and then the postprocess volume is deactivated at the end.

Edit: I forgot to mention that this is not quite good yet: as you can see in the video, my lines are moving along with the camera. This is normal, since I’m doing everything with the current frame depth-buffer. I need to find a way to get a single-frame depth value, or something like that. Something uncorrelated with the current frame camera position…


Fun fact, I took a break from doing shader stuff, to come here, to see your post, which is EXACTLY what I’m working on right now :smiley:
I use Unity, but I want to do exactly what you’ve done, and I’m clearly far behind…
My approach so far is using the depth texture to create some lines with a bit of maths (modulo, subtract, abs, etc.). It works “ok” but I struggle to get close to the look of DS.

I will read your post more precisely for sure, you gave so many details :open_mouth:

For the particles on the ground, I’m using VFXGraph, which allows me to use a splatmap texture as an input, so I can get the current elevation of the terrain, and the surface type by sampling the color of the map. If the same map is used to create the terrain, it’s quite easy to decide which texture will be used.

Crazy to be working on something on your own, and stumbling upon someone else’s work that’s exactly the same, just by browsing randomly on the web…


Haha, that’s quite a nice coincidence indeed :smiley:
I’m curious to see how you approached it, if you can share it with us! Feel free to post here :wink:

For the particles, your approach is pretty much what I had in mind! But I’m planning on using Niagara (should be a good exercise to get into it a bit). I just need to see how I can sample the terrain properly and efficiently. I was also thinking about a render-target approach, I don’t know yet.

I will for sure, once I have something that’s good enough to share.
The only part of your post I don’t understand is how you make the lines on the terrain, since the sobel by itself is a simple “edge detection”. Right now I’m using the depth and depth normals to get a good-looking edge detection, but I don’t understand how I would come up with those lines…

Well, it’s actually not a good solution for what I’m trying to achieve, but basically it’s still a edge-detection but with a “stepped” depth.

When you do a sobel operation with the depth, you compare the current pixel’s depth with the adjacent pixels’ depths (its neighbors). You subtract the two values for every neighbor (current pixel - neighbor pixel, 4 times for top, bottom, right and left) and add up the 4 results together.
In my case, instead of using the raw scene depth as input, I divide the depth by a number and then floor it (round it down to the nearest integer) and use that as input. So, I operate an edge-detection on a “stepped”, “discretized”, “posterized” version of the depth buffer. Most of the pixels will then have the same depth since they are floored, until the depth value passes to the next integer, where I’ll get a “jump” of 1 in my pixel depth. Doing a sobel on this gives me perfect lines.
I hope this explanation is clear enough :sweat_smile:
The number you divide by is just your line interval distance.

But, as I said, in my case this is not good. It’s based on the current depth, so it is changing every frame when my camera moves, and I don’t want that :stuck_out_tongue:

I cracked it! Finally :smiley:

Here’s another quick video recording, where I move around:

sorry for the lossy compression, my OBS recorder sucks up all my memory and CPU, so this is the best I can do without losing all my framerate… I need to find another video recording software.

So… turns out it was really complicated to get here in the end :sweat_smile:
To recap: my problem was that I was using the depth buffer all along to get this effect. Depth is essentially computed from the world position of the rendered pixel in relation to the camera (it’s the famous MVP matrix, transforming any point from a position in the scene (worldspace) to a position on your screen: first “clipspace”, then normalized viewport coordinates). Depth is, by definition, the distance between your eye and the pixel.
This causes a major issue for me, as you can see the lines “moving” when translating/rotating the view:

It’s obvious when you think of it: since depth is computed from the camera position and direction, changing it will change the depth :stuck_out_tongue:

So, I needed to find a way to get this instead:

And so by the look of it, you can probably guess what it is: a world-space distance field.
But the tricky part was applying the sobel filter to worldspace data, because the sobel is a screenspace effect. So I needed to:

  • Get the current pixel worldspace position (it’s the usual AbsoluteWorldPosition node).
  • Convert this PixelWorldPosition to screenspace coordinates (MVP stuff). There’s this TransformToClipSpace material function that exists in UE4 that does just that.
  • Apply a 1-pixel offset to the screen coord in four directions. I was already doing this for the original sobel: just take the inverse view size (1/ViewSize), multiply it by your 2D direction, and add that to your screen coord.
  • This new coord is the neighbor pixel. We have its screenspace position; now we have to convert it back to worldspace coordinates. That part was tricky, because there’s no built-in function to do it for you. More on that below.
  • Now we have the current PixelWorldPosition AND ALSO all the neighbours PixelWorldPosition! yay!
  • Simply calculate the distance field for every pixel: length(PixelWorldPosition - PingWorldPosition).
  • Subtract the neighbor distance field from the current one; do this for every direction
  • Add up everything, and clamp between 0 and 1 (saturate).
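The distance-field edge step at the heart of those bullets could be sketched like this (a CPU stand-in; `ping_wp` is the scan origin, and in the real material the neighbor world positions come from the screen-offset-and-unproject dance described above):

```python
import math

def distance_field_edge(pixel_wp, neighbor_wps, ping_wp):
    """World-space distance-field edge detection: the value being edge-detected
    is each pixel's distance to the ping origin, so the result is anchored to
    the world and does not swim when the camera moves."""
    def df(p):
        return math.dist(p, ping_wp)  # length(PixelWorldPosition - PingWorldPosition)
    c = df(pixel_wp)
    total = sum(abs(c - df(n)) for n in neighbor_wps)
    return min(total, 1.0)  # saturate
```

Running the earlier floor/interval trick on this world-space distance instead of the screen-space depth is what turns it into the stable repeating rings.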

Et voilà :wink:

PS: about the screen-to-world stuff. As some of you may have guessed, it’s an inverse MVP matrix transform that does the trick. It was just a bit of a pain to find the right way to do it in UE4. I searched a lot before stumbling on this thread, which led me onto the right track. I tried copy/pasting the solution (easy lazy solution, I hate matrices) but it wasn’t working (of course); I needed to strip out some calculations in my case.
Here’s the final ScreenPositionToWorldPosition function I made:

Clip Position is the centered-coord screen position (top left pixel is -1,-1 and bottom right is 1,1)
Screen Position is the normalized viewport UV (top left is 0,0 and bottom right is 1,1).
Now that I have written this, I realize I could just do everything with a single input lol…
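With the conventions described above (top-left = -1,-1, bottom-right = 1,1, no Y flip), the clip/screen conversion is just a linear remap; here’s a sketch (the full world reconstruction then multiplies (clip.xy, depth, 1) by the inverse ViewProjection matrix and divides by w, which I can’t reproduce faithfully here):

```python
def screen_uv_to_clip(uv):
    """Viewport UV (0..1, top-left origin) to clip-space XY (-1..1, centered),
    matching the convention in the post above (top-left maps to -1,-1)."""
    u, v = uv
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)

def clip_to_screen_uv(clip_xy):
    """Inverse mapping: clip XY (-1..1) back to viewport UV (0..1)."""
    x, y = clip_xy
    return ((x + 1.0) * 0.5, (y + 1.0) * 0.5)
```

Which is why the two inputs of the function are redundant: either one can be derived from the other with this remap.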

Edit: added a link to a cool drawing made by Anton Gerdelan, of the 3D transformation pipeline that explains how a mesh is converted to screenspace.


Wow, awesome! :wave:

Wow, this is very cool, great work! And thank you sooo much for the detailed breakdown :slight_smile: :sparkling_heart:


New update,

Here’s a draft for the Niagara system I’ll be using for the symbols. Particles are simple plain quads for now, and I’m debugging grid UVs in the particle color.
I’ve managed to setup a simple behavior that does the following:

  • When scanner is activated, spawn a BP Actor containing a SceneCaptureComponent2D
  • Move this actor right above the character position, pointing downward
  • Capture the scene and write depth to a render target
  • In Niagara, spawn particles on a horizontal grid
  • Sample the render target and reproject the camera’s point of view
  • Use this to offset the particles’ Z position, to match the captured depth and adapt to the terrain model
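The steps above could be sketched like this in Python (`sample_depth` stands in for the SceneCaptureComponent2D render target lookup, and the grid math is my assumption about the Niagara placement module):

```python
def grid_particle_positions(capture_origin, grid_size, spacing, sample_depth):
    """Spawn particles on a horizontal grid centered under the capture actor,
    offsetting each particle's Z by the top-down depth capture so the grid
    hugs the terrain. sample_depth(x, y) plays the role of the render target."""
    ox, oy, oz = capture_origin
    positions = []
    for i in range(grid_size):
        for j in range(grid_size):
            x = ox + (i - (grid_size - 1) / 2.0) * spacing
            y = oy + (j - (grid_size - 1) / 2.0) * spacing
            z = oz - sample_depth(x, y)  # capture looks straight down
            positions.append((x, y, z))
    return positions
```

For flat ground at height 0 with the capture 1000 units up, every sampled depth is 1000 and the particles land exactly on the ground plane.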

For now it’s mainly based on this very good training session by Chris Murphy during GDC 2019. It’s a solid base for what I’m after, so it’s perfect!

Next steps:

  • Improve on this system: find a way to ignore stuff in the render target (like character)
  • For now it’s a perspective projection; maybe switch to orthographic.
  • Spawn particles in front of and away from the player.
  • Find a way to detect harsh terrain/colliders somehow (no idea how to do that yet)
  • Polish visuals

I’ve managed to get a good visual result now, despite some Niagara crashes and bugs :sweat_smile:
Here’s some breakdown of what I’ve changed/added:

First, I’ve offseted the RenderTarget camera and the Niagara System spawn a bit, to spawn further in front of the player instead of above him:

I’ve also managed to select some specific rendering components inside the SceneCaptureComponent2D:

For now, I only render Landscape actors, so only the terrain :smiley: I might re-use this technique to render StaticMeshes in another texture, thus giving me the ability to distinguish terrain from objects!

Here’s how the Niagara Emitter is now:

I’ve tweaked the grid placement a bit, and offseted Particles.Position upward by 25cm to get particles just above the terrain.

I’ve created two custom Module Scripts.
Odradek Color is the main one, and controls all the rendering of the particles (fade-in, wave behavior, gradients, fade-out):

It’s quite a spaghetti mess, but it’s pretty simple. It’s mostly distance-field gradients and waves based on distance from User.PlayerPosition to Particles.Position. User.PlayerPosition is set at system spawn inside the BP.
I also take the grid UVs and use a sine wave of them, in both directions, to get a fading gradient at all outer edges.
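That edge fade is essentially a product of two sines over the grid UVs, something like:

```python
import math

def edge_fade(u, v):
    """Fade factor from the grid UVs: a sine in each direction reaches zero
    at all four outer edges of the particle grid and peaks in the center."""
    return math.sin(math.pi * u) * math.sin(math.pi * v)
```

Particles at the rim of the grid get a fade of 0, so the scan never shows a hard square boundary.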

The other custom module is Minimum Screen Size, and it helps reduce the anti-aliasing that happens when particles are too far away from the camera (when their size is smaller than a pixel), by ensuring a minimum pixel size for every particle:

This way, particles will never be smaller than 1 pixel size, thus giving a perfect non-aliased rendering :slight_smile: I didn’t manage to get camera/viewport data inside Niagara though, so I feed everything (Viewport resolution and Camera Position) via the Blueprint:
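The clamp boils down to estimating the particle’s on-screen footprint and growing the sprite when it falls under a pixel; here’s a hedged sketch (the projection formula, FOV handling and parameter names are my assumptions, not the actual module):

```python
import math

def min_screen_size(world_size, distance, viewport_height, fov_deg=90.0, min_pixels=1.0):
    """Estimate how many pixels the particle covers at this distance, and
    return a (possibly enlarged) world size so it never drops below min_pixels."""
    # Pixels per world unit for a vertical FOV projection at this distance.
    pixels_per_unit = viewport_height / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
    pixels = world_size * pixels_per_unit
    if pixels < min_pixels:
        return min_pixels / pixels_per_unit  # grow to exactly min_pixels on screen
    return world_size
```

This is also why the viewport resolution and camera position have to be fed in from the Blueprint: the formula needs both.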

Next steps:

  • A bit of visual polish is needed on the animation side
  • Set up object collision detection (probably a second render target, will see)

Finally got the collision/object detection working. That was really complicated to get :sweat_smile: Not so much on the technical side, but I’ve been experiencing some UE4 and Niagara issues here and there throughout the day, and so I just spent most of the time going back and forth, disabling and enabling things to spot different bugs… It was quite a pain.

So, as I envisioned earlier, it’s just another RenderTarget but this one is captured from the player’s camera point of view, and I render only static meshes and foliage:

From there, I just place particles randomly on a plane facing the player, and offset every particle depending on the captured depth.
Animation/rendering is done the same way: I mask out and change the texture SubUV and size depending on the depth. So particles appear in a wave, then animate their texture from a lens-flare to a cross shape:
It’s not perfect though, as I stumbled upon some limitations inside Niagara:

  • Mainly being forced to use GPU particles in order to sample a render texture. I understand why, and I actually prefer GPU for this kind of effect, but it’s clear the GPU sim still has some major problems that are quite frustrating. Not being able to debug particle values is cumbersome, for example. There are also some recurrent rendering problems, like particles not updating correctly or not rendering at all.
  • It seems to be very dependent on how many objects are rendered in the game at the same time (which is obviously logical). But I didn’t manage to find any possibility to ask the engine to “prioritize particles” when doing calculations on the GPU. Not sure if it’s even possible :stuck_out_tongue:
  • It’s not clear how particle data is carried around between spawn-time and update-time. I tried to set explicit values to be calculated at spawn, for performance, but sometimes it just does not work. And I don’t understand why.
  • It’s very hard to debug particle transforms issues. I spent hours trying to figure out why my particles where spawning in the wrong area, or with a wrong rotation. It would be very useful to be able to manipulate transforms in the Niagara editor itself, to quickly see if transforms are correct.
  • I tried to kill particles that are not used, but it seems buggy for now: any attempt to set DataInstance.Alive in a module script failed, and I’m not sure why. So for now I’m forced to spawn a lot of particles in order to have just a bunch of them scatter on objects, and I just fade out the unused ones. I should probably set their size to 0 to avoid useless overdraw, but the ability to kill them completely would be perfect.

Next steps:

  • Improve the original terrain grid and take steepness into account
  • Add a cooldown and some feedback to the BP controller
  • Visual polish

Everything’s now implemented, along with a proper controller with cooldown and UI feedback (you can see the little progress bar on the bottom left corner).
I’m calling in done for now :upside_down_face: There’s still room for some improvements, especially on visual polish and better particle placement, but I’ll maybe be doing that another time. Time to rest a bit :slight_smile:

Last breakdown, for the steepness detection on terrain, it’s simply yet another RenderTarget but this time it’s capturing the Normal:
Shame there’s no option to capture Normal in RGB and Depth in alpha; that would have saved a capture component and an RT asset…
In Niagara, I just take the normal color value for the particle and extract the Z value, and then compare that to a threshold:

So basically I can know if a particle spawned on a part of the terrain that is more or less perpendicular to the world up-vector, and change its color and texture SubUV index based on that:
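The per-particle test is essentially a threshold on the normal’s Z component, something like this (the threshold value and the state names are my guesses, not the actual module’s):

```python
def steepness_state(normal_z, threshold=0.8):
    """Classify a particle from the captured terrain normal: terrain whose
    normal is close to world up (Z near 1) is walkable, otherwise it's steep.
    The result drives the particle color and SubUV index."""
    return "safe" if normal_z >= threshold else "steep"
```

A threshold of 0.8 corresponds to roughly a 37° slope; the actual cutoff is whatever the game design calls for.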

That’s it for now, thanks for those who read everything and I hope you enjoyed! :smiley:


Awesome work! :smiley:

Really cool with great breakdowns, thanks for those :smiley:

You did this to replicate it visually; do you think they used similar methods to bring the terrain info in visually and procedurally with particles, or are there some hand-placed Actors in the level with some logic behind them, important for gameplay mechanics? (Just a curious question :smiley: )


Thanks :slight_smile:

I have actually no idea how they did it :sweat_smile: But I would bet on a similar approach (with render target) because for me it’s the most efficient method for this. The main difference might be that they “bake” the terrain info somehow instead of using a runtime buffer, which would probably save some memory on the GPU. Again, not sure because I’m not a graphics programmer and I don’t know enough about Decima Engine or the PS4 architecture :stuck_out_tongue:
But the RT approach might save a lot of time when level artists and environment artists are constantly updating/modifying the terrain.

If anyone has another idea or some info, feel free to share!


Wonderfully done! @MrBrouchet
Thanks for the breakdown as well!


Really cool stuff! Thanks a lot! But I have no idea why you call this filter a Sobel. A Sobel has two different matrices, one for each of the two directions, like here. It’s not a Sobel; it’s some kind of custom thing that works in this case. Am I not right?

You are right: a complete Sobel operation takes both directions into account when applying the kernel to the image, so you get the gradient directions as well as the intensity difference :slight_smile:
In my case, I only needed the gradient magnitudes, so there was no need to compute both directions or to calculate the direction. But in my opinion, doing a kernel matrix operation to get pixel intensity derivatives can still be called a “sobel”; it’s just not a “full” sobel, I guess :sweat_smile: