Why are additive shaders so heavy on phones?

I was creating some additive shaders with Unity's Shader Graph. Some of them were simple shaders with just a transparent texture. However, I realised that when I set the transparency mode to Additive, if the objects using them get really close to the camera, so that they cover a big portion of the screen, the FPS drops A LOT on low-end and mid-range (or older high-end) mobile phones. Just setting the shader to Alpha makes a big difference. I really wonder why. I mean… an alpha-blended texture has to compute a weighted average of every overlapping color, while an additive-blended texture just has to sum them. It doesn't seem like much of a difference…

So why is this happening? And… are there any tricks or advice to make additive materials more performant?


The thing is that mobiles have very weak GPUs, so the biggest problem you have to deal with is overdraw (the number of pixels that get rendered, i.e. a shader's screen coverage times the number of layers). This means that using either a transparent or an additive shader is very bad, because the GPU has to shade not only the shader's own layer but also everything behind it (so with a single layer, each pixel is already rendered twice). Now, if you stack up 10 smoke particles that use a transparent shader, it has to render 11 layers per pixel, and if they are big on screen then it's a huuuuuuuuge cost.
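As a rough sketch of the cost described above (illustrative numbers only, assuming a hypothetical 1280×720 mobile screen and full-screen particles):

```python
# Back-of-the-envelope overdraw estimate: pixels shaded grows with
# the number of stacked transparent layers times their coverage.
width, height = 1280, 720      # assumed screen resolution
opaque_layers = 1              # the background behind the particles
particle_layers = 10           # stacked transparent smoke quads
coverage = 1.0                 # fraction of the screen each quad covers

pixels_shaded = int(width * height * (opaque_layers + particle_layers * coverage))
print(pixels_shaded)           # 11x the cost of filling the screen once
```

Halving the on-screen size of each particle (coverage 0.25, since both width and height halve) cuts the transparent part of the cost to a quarter, which is why keeping such effects small matters so much.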

That's why in mobile games most effects are made with opaque or masked shaders (sometimes using dithering, as it's a much cheaper way to fake transparency). After a couple of years on mobile I got PTSD from using transparency. You can still use it, but usually as single layers that are very small on screen.
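The dithering trick mentioned above is often called "screen-door" transparency: instead of blending, each pixel is either fully kept or discarded based on a threshold pattern, so the surface can stay on the cheap opaque/masked path. A minimal sketch of the idea with a classic 4×4 Bayer matrix (pure-Python illustration, not Unity shader code):

```python
# Ordered-dither alpha cutoff: compare the pixel's alpha against a
# per-pixel threshold from a tiled 4x4 Bayer matrix.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_keep(x, y, alpha):
    """Return True if the pixel at (x, y) survives the alpha cutoff."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold

# At alpha 0.5, half of the pixels in a 4x4 tile survive, which reads
# as 50% transparency from a distance:
kept = sum(dither_keep(x, y, 0.5) for y in range(4) for x in range(4))
print(kept)  # 8
```

The surviving pixels are opaque and the discarded ones cost nothing to blend, so there is no per-pixel layering cost at all.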

Thanks for the response! My intuition was telling me that too. But still, allow me to be more specific:

I understand that transparency has this problem, and I would expect a large difference between using transparent vs opaque materials. However, let me present the following sample scene:

  • 1 transparent plane.
  • Behind it, a second transparent plane.
  • Behind that, a skybox or an opaque object.
  • A camera looking through all three, very close to the first transparent plane.

Now take 2 variations:

  1. both planes use regular (alpha-blended) transparency.
  2. both planes are transparent and additive.

In my experience the first is a little heavy, but not that heavy.
The second absolutely destroys performance. And I wonder why additive layers have that effect, since both examples have the same layers, same size, same pixels and same textures.
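To make the puzzle concrete, here is the per-pixel math for the two variations, assuming standard "over" alpha blending (SrcAlpha, OneMinusSrcAlpha) and plain additive blending (One, One). The color and alpha values are made up for illustration:

```python
# One color channel per pixel; the only difference between the two
# variations is which blend equation the hardware applies.
def alpha_blend(dst, src, src_alpha):
    """'Over' blending: lerp the framebuffer toward the source color."""
    return dst * (1.0 - src_alpha) + src * src_alpha

def additive_blend(dst, src):
    """Additive blending: just sum into the framebuffer."""
    return dst + src

background = 0.2           # the opaque layer behind both planes
plane_color, plane_alpha = 0.4, 0.5

# Variation 1: two alpha-blended planes over the background.
v1 = alpha_blend(alpha_blend(background, plane_color, plane_alpha),
                 plane_color, plane_alpha)

# Variation 2: the same two planes, blended additively.
v2 = additive_blend(additive_blend(background, plane_color), plane_color)

print(v1, v2)
```

Both variations read and write the framebuffer the same number of times, which is why, on paper, the additive version should not be dramatically more expensive.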


Ah ok, hmm, I didn't really dig too much into transparent vs additive unfortunately :c (never really had to, as I didn't use either). Though maybe someone else will know.

Thanks for the response! I hope someone knows why this happens with additive materials!

That sounds very weird; as far as I know, the only difference between the two options is the blend mode, i.e. just the math operation used to combine the material's result with the buffer. By the way, keep in mind that alpha blending doesn't average all the overlapping colors: each plane "lerps" between the screen color and its own color. If you have 3 planes, all with an alpha of 0.5, whichever plane is drawn on top affects the final color far more than the ones behind it.
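You can see this top-plane dominance by measuring how much weight each of the three 0.5-alpha planes contributes to the final pixel (a small sketch with made-up values, composited back-to-front over black):

```python
def over(dst, src, a):
    """Standard alpha 'over' blending for one color channel."""
    return dst * (1.0 - a) + src * a

def weight_of(plane_index, alpha=0.5, planes=3):
    """Give one plane a color of 1.0 and the rest 0.0, then composite
    back-to-front; the result is that plane's weight in the final pixel."""
    color = 0.0
    for i in range(planes):  # i == 0 is the back plane, i == 2 the front
        color = over(color, 1.0 if i == plane_index else 0.0, alpha)
    return color

weights = [weight_of(i) for i in range(3)]
print(weights)  # [0.125, 0.25, 0.5]
```

Each plane behind the front one has its contribution halved again by every plane drawn after it, so the weights fall off geometrically instead of averaging out.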

The only other difference I can think of is that with additive blending you will often get color values greater than 1, which wouldn't happen with alpha blending if all your color values were in the 0–1 range. Maybe that has some performance implications in Unity? You could test this by setting the additive color to a very dark gray and seeing whether there's still a performance difference…
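A quick numeric illustration of that point, with made-up channel values: additive layers can push a channel well past 1.0, while "over" blending of 0–1 inputs always stays in range:

```python
def over(dst, src, a):
    """Standard alpha 'over' blending for one color channel."""
    return dst * (1.0 - a) + src * a

dst = 0.8                          # a bright background channel

# Two additive layers of 0.6 each: the sum blows past 1.0.
additive = dst + 0.6 + 0.6

# The same two layers with over blending (color 0.6, alpha 0.6):
blended = over(over(dst, 0.6, 0.6), 0.6, 0.6)

print(additive, blended)           # additive ends up around 2.0
```

Whether out-of-range values actually cost anything would depend on the render target format and pipeline, which is exactly what the dark-gray test above would help rule in or out.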

Other than that I'm as lost as you are, so I'm curious to see if anyone knows something about this haha
