What happened to Forward+ rendering path?

Nowadays the deferred rendering path is considered the de facto standard for AAA titles. Compared to forward rendering it gives more freedom in lighting configuration, but it has drawbacks in the form of large memory consumption and bandwidth/IO cost.

Algorithmically, forward rendering is more efficient, as it relies on computation rather than on reading and writing large memory buffers.
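
To make the memory point concrete, here is a rough back-of-the-envelope sketch; the resolution, render target count and formats are assumptions chosen for illustration, not taken from any particular engine:

```cpp
#include <cstdio>

// Rough, illustrative estimate of G-buffer size for a deferred renderer.
// Target count and format are assumptions, not any specific engine's layout.
int main() {
    const int width  = 1920;
    const int height = 1080;
    const int targets = 4;                 // e.g. albedo, normals, material params, emissive
    const int bytesPerPixelPerTarget = 8;  // assuming RGBA16F

    const double gbufferMB =
        double(width) * height * targets * bytesPerPixelPerTarget / (1024.0 * 1024.0);

    // Every pixel is written in the geometry pass and read again in the
    // lighting pass, so the per-frame bandwidth is at least double this figure.
    std::printf("G-buffer size: %.1f MB\n", gbufferMB);   // ~63 MB at 1080p
    return 0;
}
```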

Back in 2012 AMD published a paper called “Forward+: Bringing Deferred Lighting to the Next Level”. It introduced an extension of the forward rendering path that allows a large number of lights.
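
For context on how the technique works: the key addition is a light-culling pass that bins lights into small screen-space tiles, so the subsequent forward shading pass only loops over the lights relevant to each pixel. Below is a simplified CPU-side sketch of that culling step; the data layout and the plain circle-vs-tile test are my own simplifications, not the paper’s actual compute-shader implementation:

```cpp
#include <vector>
#include <cmath>

// Simplified CPU-side sketch of tiled light culling (the real version runs
// as a compute shader). Structures and names here are illustrative only.
struct Light { float x, y, radius; };        // light already projected to screen space

constexpr int TILE = 16;                      // 16x16 pixel tiles, as in the AMD paper

std::vector<std::vector<int>> cullLights(const std::vector<Light>& lights,
                                         int screenW, int screenH) {
    const int tilesX = (screenW + TILE - 1) / TILE;
    const int tilesY = (screenH + TILE - 1) / TILE;
    std::vector<std::vector<int>> tileLists(tilesX * tilesY);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            const float minX = float(tx * TILE), maxX = minX + TILE;
            const float minY = float(ty * TILE), maxY = minY + TILE;
            for (int i = 0; i < int(lights.size()); ++i) {
                // Closest point on the tile rectangle to the light centre.
                const float cx = std::fmax(minX, std::fmin(lights[i].x, maxX));
                const float cy = std::fmax(minY, std::fmin(lights[i].y, maxY));
                const float dx = lights[i].x - cx, dy = lights[i].y - cy;
                if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
                    tileLists[ty * tilesX + tx].push_back(i);   // light affects this tile
            }
        }
    }
    // The forward pass then shades each pixel using only its tile's list,
    // instead of looping over every light in the scene.
    return tileLists;
}
```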

As far as I know, some projects have implemented the idea:
- Forza Horizon 2 uses the Forward+ rendering technique.
- Some VR demos adopted it because deferred was too slow to meet frame-rate requirements.

My question is: why is Forward+ not more widely used? It seems to remain in the underground of the real-time graphics community, used only by rebels and geeks.

I wouldn’t say that’s completely true. I’ve seen it excel at stuff like character rendering, making it very viable for e.g. cinematic work, so it’s definitely being used.

Like you said, Forward+ has pros and cons. I can’t think of any game that uses Forward+ exclusively, but I can’t really imagine a case where that would be beneficial either.

Could you name some titles you’ve seen Forward+ used in?

I am thinking about using it myself, so more examples would be useful.

It’s not really that cut and dried any more. Many renderers use a mix of deferred, forward and clustered (aka forward+) depending on what works best. See Doom’s Siggraph 2016 talk for example.

On Far Cry Primal we used a form of clustered for the volumetric fog, which already splits the frustum into cells and so was a good fit. But given our scene lighting is primarily directional, we stuck with traditional deferred for the rest of the opaque object lighting. That’s not to say we won’t revisit that decision in the future, though; requirements are always changing.
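
To make “splits the frustum into cells” concrete, here is a rough sketch of typical froxel addressing: screen-space tiles in X/Y plus exponential slices in depth. The grid dimensions and depth range are generic assumptions, not Far Cry Primal’s actual values:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative froxel (frustum voxel) addressing for a clustered volume:
// screen-space tiles in X/Y plus exponential slices in depth. Dimensions
// and depth range are assumptions, not values from any shipped title.
struct ClusterGrid {
    int dimX = 16, dimY = 8, dimZ = 64;
    float nearZ = 0.1f, farZ = 1000.0f;
};

int clusterIndex(const ClusterGrid& g, float u, float v, float viewZ) {
    // u, v: normalized screen coordinates in [0, 1); viewZ: linear view-space depth.
    const int cx = std::min(int(u * g.dimX), g.dimX - 1);
    const int cy = std::min(int(v * g.dimY), g.dimY - 1);
    // Exponential depth slicing keeps cells a similar size in log-depth,
    // which matches how lighting/fog detail is usually distributed.
    const float slice = std::log(viewZ / g.nearZ) / std::log(g.farZ / g.nearZ);
    const int cz = std::min(std::max(int(slice * g.dimZ), 0), g.dimZ - 1);
    return (cz * g.dimY + cy) * g.dimX + cx;
}
```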

Angelo Pesce wrote a good blog post that gives a run-down of all the different approaches you can take, and the benefits/drawbacks of each one.


Oculus made a forward renderer branch of Unreal. Unreal already supports forward rendering for transparency, I believe, so it’s not like they rewrote it from scratch. But yeah, it’s a big improvement for VR applications. I wonder if Unreal will make it just a checkbox in the render settings in the future. What all will that support, and will they add features specifically for it? I doubt it, but who knows.

Also, Valve released a similar renderer for Unity: Valve Releases 'The Lab' Unity Renderer for Free – Road to VR
I haven’t checked out the Oculus UE4 implementation yet, but Valve’s is unfortunately limited to win64 and DirectX 11, so it can’t be used as a full-featured replacement.

Oculus’ single pass forward renderer for Unreal is a clustered forward renderer, which falls into the “Forward+” type of renderer. Single pass refers to stereo instanced rendering, where a single draw call renders to both eyes, and to rendering all lights in a single shader pass for each eye. This obviously differs from deferred, where each light is rendered separately.
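
As a rough illustration of “all lights in a single shader pass”: in a clustered forward shader each fragment fetches its cluster’s light list and accumulates every light in one loop, instead of being touched once per light. The sketch below is CPU-side C++ with placeholder types and a toy lighting model, not Oculus’ actual shader code:

```cpp
#include <vector>
#include <cmath>

// Placeholder types and a toy point-light model standing in for shader-side
// code; this illustrates the "all lights in one pass" structure only.
struct Vec3 { float x, y, z; };
struct Light { Vec3 position; Vec3 color; float radius; };
struct Surface { Vec3 position; Vec3 normal; Vec3 albedo; };

static Vec3 evaluateLight(const Light& l, const Surface& s) {
    // Toy Lambert term with linear distance falloff; a real shader uses a full BRDF.
    const Vec3 d{l.position.x - s.position.x,
                 l.position.y - s.position.y,
                 l.position.z - s.position.z};
    const float dist  = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) + 1e-4f;
    const float ndotl = std::fmax(0.0f, (s.normal.x * d.x + s.normal.y * d.y + s.normal.z * d.z) / dist);
    const float atten = std::fmax(0.0f, 1.0f - dist / l.radius);
    return {s.albedo.x * l.color.x * ndotl * atten,
            s.albedo.y * l.color.y * ndotl * atten,
            s.albedo.z * l.color.z * ndotl * atten};
}

// Single-pass forward shading: one loop over the lights assigned to this
// fragment's cluster, accumulating everything in registers. Contrast with
// deferred, where each light typically reads the G-buffer in its own pass.
Vec3 shadeFragment(const Surface& surf,
                   const std::vector<Light>& lights,
                   const std::vector<int>& clusterLightList) {
    Vec3 result{0.0f, 0.0f, 0.0f};
    for (int idx : clusterLightList) {
        const Vec3 c = evaluateLight(lights[idx], surf);
        result.x += c.x; result.y += c.y; result.z += c.z;
    }
    return result;
}
```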

An official forward renderer will be coming to the mainline of the Unreal Engine soon, though I believe written internally rather than directly using Oculus’ implementation. I have no information on the implementation of the official forward path renderer, but I would assume it’ll be a similar single pass clustered forward renderer.

Valve’s Lab Renderer is not a Forward+ renderer, but it is single pass in that it renders all lights in a single shader pass per eye. I don’t believe it’s stereo instanced though. By contrast, Unity’s default forward renderer renders the main directional light and ambient lighting / lightmaps in a single pass, and then each additional light is an extra render pass on top of that. Also, curiously, there doesn’t seem to be anything preventing the techniques Valve is using from working on win32 or non-DX11 apart from code they added to disable those options, probably just because they didn’t want to support them more than anything else.
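
To put rough numbers on the multi-pass cost, here is a back-of-the-envelope comparison of pass counts; the object and light counts are made up for illustration:

```cpp
#include <cstdio>

// Back-of-the-envelope pass-count comparison; object and light counts are
// invented for illustration, not measured from any project.
int main() {
    const int objects        = 200;
    const int perPixelLights = 4;   // per-pixel lights touching each object

    // Default multi-pass forward: base pass + one additive pass per extra light.
    const int multiPass  = objects * (1 + (perPixelLights - 1));
    // Single-pass forward (Lab-renderer-style or clustered): one pass per object.
    const int singlePass = objects;

    std::printf("multi-pass forward:  %d passes\n", multiPass);   // 800
    std::printf("single-pass forward: %d passes\n", singlePass);  // 200
    return 0;
}
```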

Really though, there’s not much point in using these styles of renderer unless you’re doing desktop VR, in which case you’ll want win64 and DX11 anyway. If you’re doing mobile VR you’re probably using 100% baked lighting and light probes, or otherwise single-light scenes, which for Unity end up being equivalent to a single pass with the default forward renderer.

Came across this today: the new Doom uses a fancy forward renderer.