Death Stranding - Odradek Terrain Scanner - UE4 case study

Latest update:

Full video here: https://www.artstation.com/artwork/qAGA5N

-------------- Original post ---------------

Hello everyone! :smiley:

I finished playing DS recently and thought "how could I get this scanning effect in UE4?", so I decided to try to reconstruct it myself. I'll share here how it's going and what I discovered during this adventure.

For reference, the effect looks like this:

So, the breakdown is:

  • Scanning wave effect
  • Outline around scanned objects
  • Little colored symbols appear overlaid on the terrain to indicate steepness/danger

OK, so for now, here's what I've got inside UE4:

It's a start for the scanning wave, but it's not quite perfect.
First, let me show you how it's done:

I started with the outline effect:


It's a simple sobel post-process effect, with some minor tweaks. It's mostly based on this tutorial video (very interesting stuff) and the UE4 livestream training on Cel-Shading, which goes a little more in-depth on different techniques and approaches for an outline.
Most of the time, you'll see outlines generated from a custom depth/stencil pass, but in my case that's not possible, since every object on screen must be outlined the same way.
Hence my take on a sobel approach here, but you can clearly see it's not perfect. In Death Stranding, the outline effect seems to be based on a depth value repeating over distance, not caring much about pixel intensity derivatives at all… so I'll try to dig into this some more.
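If you prefer text to node graphs, here's a minimal sketch of this kind of depth-based edge detection as it could look in a post-process Custom node. This is my reconstruction, not the actual graph: UV, InvSize (1/ViewSize) and EdgeScale are assumed inputs, and SceneTexture index 1 should be SceneDepth.

```hlsl
// Compare the center pixel's depth with its 4 neighbors (a Sobel-style cross).
float dC = SceneTextureLookup(UV, 1, false).r;                        // center
float dR = SceneTextureLookup(UV + float2(InvSize.x, 0), 1, false).r; // right
float dL = SceneTextureLookup(UV - float2(InvSize.x, 0), 1, false).r; // left
float dT = SceneTextureLookup(UV + float2(0, InvSize.y), 1, false).r; // top
float dB = SceneTextureLookup(UV - float2(0, InvSize.y), 1, false).r; // bottom
// Edge response = sum of absolute depth differences, scaled and clamped.
float edge = abs(dC - dR) + abs(dC - dL) + abs(dC - dT) + abs(dC - dB);
return saturate(edge * EdgeScale); // EdgeScale: made-up tuning parameter
```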

The wave effect is the simple part: I just take the Scene Depth buffer and compare it with a parameter to get a mask:
[image: scan mask]

This mask is useful, since now I know whether a pixel has been scanned (white) or not yet (black). So I multiply my sobel by this mask, and multiply the mask itself by the (divided) depth to get a gradient:


This is then just multiplied by a bluish color and added on top of the game rendering (SceneTexture:PostProcessInput0 node). I also faded the outlines a bit near the camera, to reduce noise at close distances.
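Put together, the mask/gradient/composite stage could look roughly like this (same disclaimers: a sketch with made-up parameter names, not the actual nodes; index 14 should be PostProcessInput0):

```hlsl
// ScanningRange is the animated material parameter driving the wave.
float sceneDepth = SceneTextureLookup(UV, 1, false).r;
float mask = sceneDepth < ScanningRange ? 1.0 : 0.0;          // white = already scanned
float gradient = mask * saturate(sceneDepth / ScanningRange); // depth-based gradient
float3 scan = (SobelEdge * mask + gradient) * BlueTint;       // SobelEdge from the pass above
float3 sceneColor = SceneTextureLookup(UV, 14, false).rgb;    // PostProcessInput0
return sceneColor + scan;                                     // added on top of the frame
```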

Next up:

  • find a proper outline computation that matches what Death Stranding actually does
  • add the symbols (this one is going to be fun; I'll probably do it with particles)
  • set up third-person control of this with Blueprints

Let me know what you think :wink:

PS: I forgot to mention that the PostProcess Material is set to be rendered Before Tonemapping/Before Translucency, because otherwise you'll get a lot of jitter due to TAA happening during tonemapping.
Also, the terrain/environment assets are taken from the A Boy And His Kite demo.

55 Likes

Update,

You can see the outlines get sharper/thicker at the end of the gif. This is due to TAA running when I stop animating the material parameter. Once I implement this in BP, this shouldn't happen.

Now the sobel/edge detection is better. Again, it's not perfect; I had to tweak and cheat a bit to get this result, but I'm getting closer to what the game actually does with its depth buffer.
So it's actually a double sobel:
first pass, classic sobel


second pass, modified depth with interval

Both buffers are maxed together.
The interval line is done by running the edge detection on a modified depth buffer that is kind of "discretized", like this:

Here you can see the comparison with the right pixel; I just do this for all four neighbors (right, left, top, bottom).
By doing that, I'm running an edge detection on the fractional, repeating part of the depth, hence this repeating edge effect.
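In the same pseudo-HLSL as the earlier sketch (dC/dR/dL/dT/dB being the center and neighbor depths, LineInterval an assumed parameter for the distance between lines), this frac-based attempt amounts to:

```hlsl
// Edge-detect the repeating, fractional part of the scaled depth.
float c = frac(dC / LineInterval);
float r = frac(dR / LineInterval);
float l = frac(dL / LineInterval);
float t = frac(dT / LineInterval);
float b = frac(dB / LineInterval);
float intervalEdge = saturate(abs(c - r) + abs(c - l) + abs(c - t) + abs(c - b));
float outline = max(classicSobel, intervalEdge); // the two passes maxed together
```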

Result is this:

Next steps: find a better way to get this "repeating line" pattern, if possible. I'm getting close, but you can clearly see it's not good yet.

  • The main problems are the little artifacts between the lines (mostly on the ground grass). This is because the sobel fires over a certain "zone" instead of producing a pure, single repeating line.
  • And the general outline of objects against the sky (you can see this clearly in the main effect gif: in the top-right corner, the tree leaves are not outlined correctly against the background sky. This is due to the scanning mask lacking a "one-pixel dilation" to catch the sobel correctly within it).
7 Likes

Update,
I did it! :smiley:


TAA is disabled for now, for easier tweaking/debugging.
Now it's a perfect match for the game: a constant, distance-based, one-pixel repeating line.

Turns out it was way easier than I expected. Still the good ol' sobel work, but instead of edge-detecting pixels with the fractional part of the depth, it's just a simple floor process:


Bonus: no need for the double sobel, it just works like this.
The result looks like easy stuff, but it was kind of a pain to wrap my head around. I tried a bunch of different approaches and math before ending up with such a simple graph…
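In the same pseudo-HLSL as before, the only change from the frac version is swapping in floor:

```hlsl
// Floored depth: neighbors share the exact same value everywhere except where
// the quotient crosses an integer, so the edge response is a clean one-pixel line.
float c = floor(dC / LineInterval);
float r = floor(dR / LineInterval);
float l = floor(dL / LineInterval);
float t = floor(dT / LineInterval);
float b = floor(dB / LineInterval);
float edge = saturate(abs(c - r) + abs(c - l) + abs(c - t) + abs(c - b));
```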
End result of the sobel:

And then for the outline/sky problem, as expected, I just had to dilate the scanner mask by one pixel in order for it to catch the edge of the sobel (which seems logical: the sobel is kind of an "outer stroke" effect, so if you want it inside your mask, you have to dilate the mask by the thickness of that stroke).
I simply copied and pasted my mask code and, just like for the sobel, modified the UVs to sample the neighboring pixels (right, left, top and bottom). I used a "max" node to combine everything in the end.
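As a sketch, with ScanMask() standing in for the copy/pasted mask code:

```hlsl
// One-pixel dilation: max of the mask with its 4 neighbors.
float m = ScanMask(UV);
m = max(m, ScanMask(UV + float2(InvSize.x, 0)));
m = max(m, ScanMask(UV - float2(InvSize.x, 0)));
m = max(m, ScanMask(UV + float2(0, InvSize.y)));
m = max(m, ScanMask(UV - float2(0, InvSize.y)));
```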

Next: set up Blueprint control with the TPS controller

10 Likes

Quick and dirty recording to show the draft implementation inside the ThirdPersonBP:
https://youtu.be/BVdG1ZoPLFw
I used a MaterialParameterCollection to drive the ScanningRange, ScanningDirection and GlobalOpacity parameters.
The direction is set to the camera's forward vector when the scan is activated, and is then tested in the shader with a dot product (against the per-pixel camera vector).
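My guess at what that test boils down to in shader terms (ConeThreshold and the exact vectors are my assumptions, not the actual graph):

```hlsl
// ScanningDirection comes from the MaterialParameterCollection.
float3 viewRay = normalize(PixelWorldPos - CameraWorldPos); // camera -> pixel direction
float facing = dot(viewRay, ScanningDirection);
float inCone = facing > ConeThreshold ? 1.0 : 0.0; // only pixels in front get scanned
```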

A simple timeline in the blueprint controls the animation, and then the postprocess volume is deactivated at the end.

Edit: I forgot to mention that this is not quite right yet; as you can see in the video, my lines move along with the camera. This is normal, since I'm doing everything with the current frame's depth buffer. I need to find a way to get a single-frame depth value, or something like that. Something uncorrelated with the current frame's camera position…

2 Likes

Fun fact: I took a break from doing shader stuff, came here, and found your post, which is EXACTLY what I'm working on right now :smiley:
I use Unity, but I want to do exactly what you've done, and I'm clearly far behind…
My approach so far is using the depth texture to create some lines with a bit of math (modulo, subtract, abs, etc.). It works "OK", but I struggle to get close to the look of DS.

I will definitely read your post more closely; you gave so many details :open_mouth:

For the particles on the ground, I'm using VFX Graph, which allows me to use a splatmap texture as an input, so I can get the current elevation of the terrain, and the surface type by sampling the color of the map. If the same map is used to create the terrain, it's quite easy to decide which texture should be used.

Crazy to be working on something on your own and then stumble upon someone else's work that's exactly the same, just by browsing randomly on the web…
Cheers!

1 Like

Haha, that's quite a nice coincidence indeed :smiley:
I'm curious to see how you approached it, if you can share it with us! Feel free to post here :wink:

For the particles, your approach is pretty much what I had in mind! But I'm planning on using Niagara (it should be a good exercise to get into it a bit). I just need to see how I can sample the terrain properly and efficiently. I was also thinking about a render-target approach; I don't know yet.

I will for sure, once I have something that's good enough to share.
The only part of your post I don't understand is how you make the lines on the terrain, as the sobel by itself is a simple "edge detection". Right now I'm using the depth and depth normals to get a good-looking edge detection, but I don't understand how I would come up with those lines…

Well, it's actually not a good solution for what I'm trying to achieve, but basically it's still an edge detection, just with a "stepped" depth.

When you do a sobel operation with the depth, you compare the current pixel's depth with the depths of its adjacent pixels, its neighbors. You take the difference for each of the four neighbors (current pixel minus neighbor pixel, for top, bottom, right and left) and add the four results together.
In my case, instead of using the raw scene depth as input, I divide the depth by a number, floor it (round it down to the nearest integer), and use that as the input. So I run an edge detection on a "stepped", "discretized", "posterized" version of the depth buffer. Most pixels will have the same depth since they are floored, until the depth value passes to the next integer, where I get a "jump" of 1 in my pixel depth. Doing a sobel on this gives me perfect lines.
I hope this explanation is clear enough :sweat_smile:
"A number" here is just your line interval distance.

But, as I said, this is not good in my case. It's based on the current depth, so it changes every frame when my camera moves, and I don't want that :stuck_out_tongue:

I cracked it! Finally :smiley:

Here's another quick video recording, where I move around:

Sorry for the lossy compression; my OBS recorder eats up all my memory and CPU, so this is the best I can do without losing all my framerate… I need to find another video recording tool.

So… it turns out it was really complicated to get here in the end :sweat_smile:
To recap: my problem was that I was using the depth buffer all along to get this effect. Depth is essentially computed from the world position of the rendered pixel in relation to the camera (it's the famous MVP matrix, which transforms any point from a position in the scene (worldspace) to a position on your screen: "clipspace" first, then converted into normalized viewport coordinates). Depth is, by definition, the distance between your eye and the pixel.
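In code form, that standard pipeline is (generic, not UE4-specific):

```hlsl
// model space -> world -> view -> clip, then the perspective divide to NDC
float4 clipPos = mul(Projection, mul(View, mul(Model, float4(localPos, 1.0))));
float3 ndc = clipPos.xyz / clipPos.w;
// The depth buffer stores a value derived from this transform,
// which is why it changes whenever the camera (the View matrix) moves.
```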
This causes a major issue for me, as you can see the lines "moving" when translating/rotating the view:

It's obvious when you think about it: since depth is computed from the camera position and direction, changing them will change the depth :stuck_out_tongue:

So, I needed to find a way to get this instead:

And by the look of it, you can probably guess what it is: a world-space distance field.
But the tricky part was applying the sobel filter to world-space data, because the sobel is a screenspace effect. So I needed to do the following (a sketch of the whole chain follows the list):

  • Get the current pixel's worldspace position (it's the usual AbsoluteWorldPosition node).
  • Convert this PixelWorldPosition to screenspace coordinates (MVP stuff). There's a TransformToClipSpace material function in UE4 that does just that.
  • Apply a 1-pixel offset to the screen coord in four directions. I was already doing this for the original sobel: just take the inverse view size (1/ViewSize), multiply it by your 2D direction, and add that to your screen coord.
  • This new coord is the neighbor pixel. We have its screenspace position; now we have to convert it back to worldspace coordinates. That part was tricky, because there's no built-in function that does it for you. More on that below.
  • Now we have the current PixelWorldPosition AND ALSO all the neighbors' PixelWorldPositions! Yay!
  • Simply calculate the distance field for every pixel: length(PixelWorldPosition - PingWorldPosition).
  • Subtract the neighbor distance field from the current one; do this for every direction.
  • Add up everything, and clamp between 0 and 1 (saturate).
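Here's the whole chain as one pseudo-HLSL sketch. The helper names (ClipSpaceUV, WorldFromScreen) are mine, standing in for TransformToClipSpace and the ScreenPositionToWorldPosition function below; I also assume the same floor/LineInterval banding as in the depth version, which the list above leaves implicit.

```hlsl
float3 pC = PixelWorldPosition;                          // AbsoluteWorldPosition node
float2 suv = ClipSpaceUV(pC);                            // world -> screen coords
float3 pR = WorldFromScreen(suv + float2(InvSize.x, 0)); // neighbors, back to world
float3 pL = WorldFromScreen(suv - float2(InvSize.x, 0));
float3 pT = WorldFromScreen(suv + float2(0, InvSize.y));
float3 pB = WorldFromScreen(suv - float2(0, InvSize.y));
// World-space distance field from the scan origin ("ping"), floored into bands.
float dC = floor(length(pC - PingWorldPosition) / LineInterval);
float dR = floor(length(pR - PingWorldPosition) / LineInterval);
float dL = floor(length(pL - PingWorldPosition) / LineInterval);
float dT = floor(length(pT - PingWorldPosition) / LineInterval);
float dB = floor(length(pB - PingWorldPosition) / LineInterval);
float edge = saturate(abs(dC - dR) + abs(dC - dL) + abs(dC - dT) + abs(dC - dB));
```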

Et voilà :wink:

PS: about the screen-to-world stuff. As some of you may have guessed, it's an inverse MVP matrix transform that does the trick. It was just a bit of a pain to find the right way to do it in UE4. I searched a lot before stumbling on this thread, which led me down the right track. I tried copy/pasting the solution (the easy, lazy option: I hate matrices), but it wasn't working (of course); I needed to strip out some of the calculations in my case.
Here's the final ScreenPositionToWorldPosition function I made:


Clip Position is the centered-coordinate screen position (the top-left pixel is -1,-1 and the bottom-right is 1,1).
Screen Position is the normalized viewport UV (top-left is 0,0 and bottom-right is 1,1).
Now that I have written this, I realize I could just do everything with a single input lol…
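For reference, here's roughly what such a screen-to-world function does, as a sketch (the inverse view-projection matrix is assumed to be available; the thread's material function builds the same thing out of nodes):

```hlsl
// screenUV in [0,1], deviceZ sampled from the depth buffer at that UV.
float2 clipXY = float2(screenUV.x * 2.0 - 1.0, 1.0 - screenUV.y * 2.0); // note the Y flip
float4 clipPos = float4(clipXY, deviceZ, 1.0);
float4 homog = mul(InvViewProjection, clipPos); // inverse of the VP part of MVP
float3 worldPos = homog.xyz / homog.w;          // undo the perspective divide
```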

Edit: added a link to a cool drawing by Anton Gerdelan of the 3D transformation pipeline, which explains how a mesh is converted to screenspace.

20 Likes

Wow, awesome! :wave: :wave: :wave:

Wow, this is very cool, great work! And thank you sooo much for the detailed breakdown :slight_smile: :sparkling_heart:

1 Like

New update,

Here's a draft of the Niagara system I'll be using for the symbols. Particles are plain quads for now, and I'm visualizing the grid UVs in the particle color for debugging.
I've managed to set up a simple behavior that does the following:

  • When the scanner is activated, spawn a BP Actor containing a SceneCaptureComponent2D
  • Move this actor right above the character position, pointing downward
  • Capture the scene and write depth to a render target
  • In Niagara, spawn particles on a horizontal grid
  • Sample the render target and reproject it from the capture camera's point of view
  • Use this to offset the particles' Z position, so they match the captured depth and adapt to the terrain (see the sketch after this list)
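A rough sketch of that reprojection step, assuming an orthographic top-down capture to keep the math short (the draft actually uses a perspective capture for now, see the next steps below; all names here, including SampleDepthRT, are mine):

```hlsl
// CaptureOrigin: world position of the downward-pointing SceneCaptureComponent2D;
// OrthoWidth: world-space width covered by the render target.
float2 uv = (ParticleXY - CaptureOrigin.xy) / OrthoWidth + 0.5; // grid pos -> RT UV
float capturedDepth = SampleDepthRT(uv);                        // hypothetical RT sample
float particleZ = CaptureOrigin.z - capturedDepth;              // depth back to world height
```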

For now it's mainly based on this very good training session by Chris Murphy during GDC 2019. It's a solid base for what I'm after, so it's perfect!

Next steps:

  • Improve on this system: find a way to ignore certain things in the render target (like the character)
  • For now it's a perspective projection; maybe switch to orthographic.
  • Spawn particles in front of and away from the player.
  • Find a way to detect harsh terrain/colliders somehow (no idea how to do that yet)
  • Polish visuals
7 Likes

I've managed to get a good visual result now, despite some Niagara crashes and bugs :sweat_smile:
Here's a breakdown of what I've changed/added:

First, I've offset the RenderTarget camera and the Niagara system spawn position a bit, so they sit further in front of the player instead of right above him:

I've also managed to restrict the SceneCaptureComponent2D to rendering only specific components:


For now, I only render Landscape actors, so only the terrain :smiley: I might re-use this technique to render StaticMeshes in another texture, thus giving me the ability to distinguish terrain from objects!

Here's how the Niagara emitter looks now:


I've tweaked the grid placement a bit, and offset Particles.Position upward by 25 cm to keep the particles just above the terrain.

I've created two custom Module Scripts.
Odradek Color is the main one, and controls all the rendering of the particles (fade-in, wave behavior, gradients, fade-out):


It's quite a spaghetti mess, but it's pretty simple: mostly distance-field gradients and waves based on the distance from User.PlayerPosition to Particles.Position. User.PlayerPosition is set at system spawn inside the BP.
I also take the grid UVs and run a sine wave over them, in both directions, to get a fading gradient at all the outer edges.

The other custom module is Minimum Screen Size, and it helps reduce the aliasing that happens when particles are too far from the camera (when their size drops below a pixel), by enforcing a minimum pixel size for every particle:


This way, particles will never be smaller than one pixel, giving a clean, non-aliased rendering :slight_smile: I didn't manage to get camera/viewport data inside Niagara though, so I feed everything (viewport resolution and camera position) in via the Blueprint:
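A sketch of what that clamp computes (pseudo-HLSL; in Niagara it's a module script, with ViewportHeight and CameraPosition fed in from the Blueprint as described, and the FOV assumed):

```hlsl
float dist = length(ParticlePosition - CameraPosition);
// World-space size covered by one pixel at this distance (vertical FOV in radians).
float worldPerPixel = 2.0 * dist * tan(0.5 * FOV) / ViewportHeight;
SpriteSize = max(SpriteSize, MinPixels * worldPerPixel); // never below MinPixels on screen
```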

Next steps:

  • A bit of visual polish is needed on the animation side
  • Set up object collision detection (probably a second render target, we'll see)
6 Likes


Finally got the collision/object detection working. That was really complicated to get right :sweat_smile: Not so much on the technical side, but I've been hitting UE4 and Niagara issues here and there throughout the day, so I spent most of the time going back and forth, disabling and enabling things to isolate different bugs… It was quite a pain.

So, as I envisioned earlier, it's just another RenderTarget, but this one is captured from the player's camera point of view, and I render only static meshes and foliage:


From there, I just place particles randomly on a plane facing the player, and offset every particle depending on the captured depth.
Animation/rendering is done the same way: I mask out the particles and change the texture SubUV and size depending on the depth. So particles appear in a wave, then animate to change their texture from a lens flare to a cross shape:
[image: particle texture]
It's not perfect though, as I stumbled upon some limitations inside Niagara:

  • Mainly, being forced to use GPU particles in order to sample a render texture. I understand why, and I actually prefer to use the GPU for this kind of effect, but it's clear that the GPU sim still has some major problems that are quite frustrating. Not being able to debug particle values is cumbersome, for example. There are also some recurring rendering problems, like particles not updating correctly or not rendering at all.
  • It seems to be very dependent on how many objects are rendered in the game at the same time (which is logical, obviously). But I didn't manage to find any way to ask the engine to "prioritize particles" when doing calculations on the GPU. Not sure it's even possible :stuck_out_tongue:
  • It's not clear how particle data is carried between spawn time and update time. I tried to set explicit values to be calculated at spawn, for performance, but sometimes it just does not work, and I don't understand why.
  • It's very hard to debug particle transform issues. I spent hours trying to figure out why my particles were spawning in the wrong area, or with a wrong rotation. It would be very useful to be able to manipulate transforms in the Niagara editor itself, to quickly check that they are correct.
  • I tried to kill particles that are not used, but it seems buggy for now. Every attempt to set DataInstance.Alive in a module script failed, and I'm not sure why. So for now, I'm forced to spawn a lot of particles just to have a handful of them scatter onto objects, and I simply fade out the unused ones. I should probably set their size to 0 to avoid useless overdraw, but being able to kill them completely would be perfect.

Next steps:

  • Improve the original terrain grid and take steepness into account
  • Add a cooldown and some feedback to the BP controller
  • Visual polish
6 Likes

Everything's now implemented, along with a proper controller with a cooldown and UI feedback (you can see the little progress bar in the bottom-left corner).
I'm calling it done for now :upside_down_face: There's still room for improvement, especially on visual polish and better particle placement, but I'll maybe do that another time. Time to rest a bit :slight_smile:

Last breakdown: for the steepness detection on terrain, it's simply yet another RenderTarget, but this time it captures the normals:
[image: normal capture]
Shame there's no option to capture the normal in RGB and the depth in the alpha; that would have saved a capture component and an RT asset…
In Niagara, I just take the normal color value for the particle, extract the Z value, and compare it to a threshold:


So basically I can know whether a particle spawned on a part of the terrain that is more or less perpendicular to the world up-vector, and change its color and texture SubUV index based on that:
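As a sketch, with assumed names, and assuming the normal is packed into the usual [0,1] color range:

```hlsl
float3 n = normalize(NormalSample.rgb * 2.0 - 1.0); // unpack RT color to a unit normal
bool steep = n.z < SlopeThreshold;                   // e.g. cos(35deg) ~ 0.82
ParticleColor = steep ? DangerColor : SafeColor;
ParticleSubUV = steep ? CrossFrame : DotFrame;       // cross = danger, dot = safe
```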

That's it for now. Thanks to those who read everything, and I hope you enjoyed it! :smiley:

12 Likes

Awesome work! :smiley:

Really cool with great breakdowns, thanks for those :smiley:

You did this to replicate it visually; do you think they used similar methods, just bringing the terrain info in visually and procedurally with particles, or are these hand-placed Actors in the level with some logic behind them that matters for gameplay mechanics? (Just a curious question :smiley: )

1 Like

Thanks :slight_smile:

I actually have no idea how they did it :sweat_smile: But I would bet on a similar approach (with render targets), because to me it's the most efficient method for this. The main difference might be that they "bake" the terrain info somehow instead of using a runtime buffer, which would probably save some memory on the GPU. Again, I'm not sure, because I'm not a graphics programmer and I don't know enough about the Decima Engine or the PS4 architecture :stuck_out_tongue:
But the RT approach might save a lot of time when level artists and environment artists are constantly updating/modifying the terrain.

If anyone has another idea or some info, feel free to share!

1 Like

Wonderfully done! @MrBrouchet
Thanks for the breakdown as well!

1 Like

Really cool stuff! Thanks a lot! But I have no idea why you call this filter a Sobel. A Sobel has two different matrices, one for each of the two directions, like here: Sobel operator - Wikipedia. It's not a Sobel; it's some kind of custom thing that works in this case. Am I right?

You are right: a complete Sobel operation takes both directions into account when applying the kernel to the image, so you get the gradient directions as well as the intensity difference :slight_smile:
In my case, I needed only the gradient magnitudes, so there was no need to compute both directions or calculate the angles. But in my opinion, a kernel operation that extracts pixel intensity derivatives can still be called a "sobel"; it's just not a "full" sobel, I guess :sweat_smile: