Future of rendering paths and SSRR

Hi, great info regarding 5.2.

A few quick questions:

  1. What is the future of the different rendering paths in Unity? What do you think about the forward rendering path? Is it going to stay?
    As of today I need the forward rendering path because it provides high-quality anti-aliasing (I use 8x).
    Is the forward rendering path going to disappear in the future? If so, what would the anti-aliasing solution be?

  2. Is it really impossible to apply SSRR to the forward rendering path?
    I’m confused on this point, because on Unity 4.6 I was using Livenda’s solution (Candela SSRR), which worked magically on any rendering path, forward included.
    Now I have read the 5.2 docs, and you state that SSRR will only be supported on deferred because it needs depth buffers and other data that are not available in the forward rendering path. So the obvious question is: how does Candela work, then? Obviously I’m not trying to disclose any of Candela’s IP here, just to make sure that I understand: is it possible or not, in terms of the data needed/available?

  3. I’ll give SSRR on deferred a try when the first beta is available. Which anti-aliasing solution do you recommend that works well with SSRR on deferred and provides sharp lines and edges?

  4. Will SSRR work reasonably well on high-end devices like the newest iPad/iPhone or the Samsung S6, for instance, or will it be a desktop-class-only effect?

Thank you in advance.

I think so. In some fancy future I’d like us to have a more efficient forward rendering (tiled/clustered forward etc.).

Other people need it in more cases, I guess, e.g. for transparencies or different shading models.

That’s a good question – I don’t know. My guess would be that Candela does not do “physically based SSRR”; it just approximates “something”.

With deferred your only options are the postprocessing-based anti-aliasing solutions (FXAA etc.). We are working on a temporal postprocessing AA solution now, but no timelines.

I would expect it to be “too slow generally”. That said, I think deferred shading in general is always disabled on iOS since it’s “usually too slow”.

Thank you Aras.

SSRR is possible in forward rendering, but it relies either on using the depth pass and writing extra data (normals, plus using alpha to represent reflectivity) or on an extra render-texture pass. I have a working model. The thing is, once you realise the limitations, and the fact that you want forward rendering for low-end mobiles, SSRR becomes worthless.

I’d advocate using old-school reflections, i.e. mirrored objects and a transparent floor, or just box-projection reflection probes for mobile…

Agreed, Mobile is about faking everything you can.

Well yeah, but once you render out depth, normals, smoothness and whatnot... you're pretty much at the "I have a G-buffer" stage, so you might just as well do deferred by then.


Why is it worthless on mobile? I would like to see a generic option, knowing its limitations. For architectural and visualisation projects, performance is not as important as in games, I think.

I guess @LIVENDA found a sweet spot in this space by providing such a product. Sad that they have discontinued it in Unity 5.

See what Aras put above: the extra data you need means you may as well use deferred. I’ll post a working model at some point when I get time, so people can see why it’s such a limitation. The problem is that the cheapest way is:

Main render = RGB, A = reflectiveness
Depth texture = depth with normals

But this creates other limitations and issues. Anyway, if I get time I’ll post it. Personally I wouldn’t use SSRR on mobile yet; it’s too expensive. It’s a desktop solution. I mean, mobile can only just do PBS (the Nexus 7 runs terribly with PBS; the Samsung S6 rocks it, but it’s pretty much the best mobile you can get).
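
For context on the “Depth texture = depth with normals” packing above: everything has to fit in 8-bit channels, which is exactly where the precision limitations come from. Here is a minimal sketch in Python (purely illustrative, not Unity code) of splitting a [0, 1) depth value across two 8-bit channels, similar in spirit to Unity’s EncodeFloatRG/DecodeFloatRG shader helpers:

```python
def encode_depth_rg(depth):
    """Pack a depth value in [0, 1) into two 8-bit channels (high, low)."""
    assert 0.0 <= depth < 1.0
    scaled = depth * 255.0
    high = int(scaled)                   # coarse 8 bits
    low = int((scaled - high) * 255.0)   # fine 8 bits
    return high, low

def decode_depth_rg(high, low):
    """Reconstruct the depth value from the two channels."""
    return (high + low / 255.0) / 255.0

d = 0.73519
h, l = encode_depth_rg(d)
assert abs(decode_depth_rg(h, l) - d) < 1.0 / (255 * 255)  # ~16 bits of precision
```

Two channels give you roughly 16 bits of depth precision; once normals have to share the same texture, you quickly hit the limitations described above.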

Ok. I'm starting to look at deferred...

If aliasing is an issue, consider implementing a real AA solution like supersampling. This is what we do, and it really shines. Sure, you need a good machine to handle it, but for those who have one it is almost flawless. It looks great in trailers and on the show floor.

The idea is to render your scene on a RenderTexture that is X times as big as your resolution and then downscale it to the native resolution of the screen.
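
For the curious, the downscale step is conceptually just a box filter: each native-resolution pixel is the average of a k×k block of the supersampled image. A minimal Python sketch of that averaging (illustrative only; in Unity the GPU does this in a shader during the blit):

```python
def downscale(img, k):
    """Box-filter an (h*k) x (w*k) grayscale image (list of rows) down to h x w
    by averaging each k x k block into one output pixel."""
    hk, wk = len(img), len(img[0])
    assert hk % k == 0 and wk % k == 0
    h, w = hk // k, wk // k
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block_sum = sum(img[y * k + dy][x * k + dx]
                            for dy in range(k) for dx in range(k))
            out[y][x] = block_sum / (k * k)
    return out

# A hard black/white edge rendered at 2x supersampling...
hi_res = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 1],
]
# ...becomes smooth intermediate coverage values at native resolution.
lo_res = downscale(hi_res, 2)  # [[0.0, 1.0], [0.5, 1.0]]
```

Note that at k = 2 you are shading 4× the pixels, which is why this raises the minimum spec so much.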

Reserving such a block of contiguous memory will be very problematic on mobile, I think. I had lots of problems trying to render to a 1024x1024 texture on a first-generation iPad Mini, let alone anything bigger…

Do you downscale it via script/shader, or do you use a second camera? I tried the second camera, but I got the same aliasing as before (I expected that, but I wanted to try it anyway; with deferred, a sun and buildings tend to flicker like a disco).

Right now I use this solution from the asset store, but it has some limitations, so I'm working on my own system now.

The way I'm planning to do it is to create a RenderTexture that is X times the size of the screen and then use Graphics.Blit to downscale it to screen size.

Hi there,
I wonder if there is any “future proof” concept for adding different shading models to Unity’s deferred rendering pipeline and its current G-buffer layout.
Right now I use gbuffer2.a to distinguish between 4 different materials, which works quite well and does not break the Standard shader.

Any thoughts?
@Aras: any advice?
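
For anyone wondering how 4 material IDs fit into gbuffer2.a: the usual trick is to quantise the ID into the 8-bit channel on write and round it back on read. A hedged Python sketch of that round trip (the shader-side equivalent would be `id / 3.0` on write and `round(a * 3.0)` on read; the helper names here are made up for illustration):

```python
NUM_IDS = 4  # four shading models, as in the post above

def encode_material_id(mat_id):
    """Store an integer ID in [0, NUM_IDS) as a normalised value,
    quantised the way writing to an 8-bit channel would quantise it."""
    assert 0 <= mat_id < NUM_IDS
    return round(mat_id / (NUM_IDS - 1) * 255) / 255.0

def decode_material_id(alpha):
    """Recover the integer ID from the stored channel value."""
    return round(alpha * (NUM_IDS - 1))

# The round trip survives 8-bit quantisation for all four IDs.
assert all(decode_material_id(encode_material_id(i)) == i
           for i in range(NUM_IDS))
```

With only 4 IDs the values are far enough apart that 8-bit quantisation never flips them; the scheme breaks down once you want many more IDs (or need the alpha channel for something else, like occlusion).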


Supersampling is not a very realistic solution; it raises your minimum target spec by a large amount.

What’s great with PC gaming is that you can activate and deactivate any effect depending on the power of the user machine. There is no downside in providing an option that works beautifully when the system can handle it.


Sure, provide it as an option, but let’s implement something that works and looks good on a wider range of machines and devices first.

Guess we'll have to wait for 5.3

come on! what the hell is this now?