How to Mask Custom Renderer Features correctly?

Lately, I've been learning about shaders and Screen Space effects.

Despite being able to create cool full-screen effects with my knowledge of the rendering pipeline, I wondered if there was some way of only applying certain effects to certain groups of objects while keeping everything screen-space.

Disappointed by the lack of in-depth guides to renderer features and how to mask them, I tried to mimic what I could see in some videos/resources that apply effects only to certain layers.

Right now, my process for creating such an effect is:

  • Create 3 render target identifiers: a filter one, a temporary one, and the camera color one.
  • Call the draw renderers method with the filter target as the render target, using filtering settings defined by a layer mask and drawing settings whose material is overridden by one that outputs plain white, so that the pass writes a mask for the shader.
  • Blit back and forth between the color target and the temporary target using a material that reads the mask written to the filter target and computes my desired effect.
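
For reference, the three steps above could be sketched roughly like this in a URP ScriptableRenderPass. This is a minimal, hedged sketch, not my exact code: `_maskTarget`, `_tempTarget`, `_MaskTex` and the two materials are placeholder names, and the older RenderTargetIdentifier/`cmd.Blit` API is assumed:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

class MaskedEffectPass : ScriptableRenderPass
{
    RenderTargetIdentifier _maskTarget, _tempTarget, _cameraColor;
    Material _maskMaterial;   // placeholder: outputs plain white
    Material _effectMaterial; // placeholder: reads the mask, applies the effect
    FilteringSettings _filtering;

    public MaskedEffectPass(LayerMask layerMask, Material white, Material effect)
    {
        _filtering = new FilteringSettings(RenderQueueRange.opaque, layerMask);
        _maskMaterial = white;
        _effectMaterial = effect;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get("MaskedEffect");

        // 1) Draw only the layer-masked renderers into the mask target,
        //    overriding their material so they come out white.
        var drawing = CreateDrawingSettings(new ShaderTagId("UniversalForward"),
                                            ref renderingData, SortingCriteria.CommonOpaque);
        drawing.overrideMaterial = _maskMaterial;
        cmd.SetRenderTarget(_maskTarget);
        cmd.ClearRenderTarget(false, true, Color.clear);
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();
        context.DrawRenderers(renderingData.cullResults, ref drawing, ref _filtering);

        // 2) Blit camera color -> temp with the effect material (which samples
        //    the mask), then copy temp back into the camera color target.
        cmd.SetGlobalTexture("_MaskTex", _maskTarget);
        cmd.Blit(_cameraColor, _tempTarget, _effectMaterial);
        cmd.Blit(_tempTarget, _cameraColor);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```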

I thought this simple method could work, but while debugging I found out that when calling draw renderers with a given layer mask set, it renders all objects in that layer regardless of whether they are occluded, which makes the effect leak if an object outside the layer mask is in front of one inside it. By "occluded" I mean the case where an object I do not want the effect applied to sits in front of an object that belongs to the layer mask, causing the front object to receive the effect that should have been applied to the object behind it.

Later on I tried the following:

  • Create 4 render target identifiers: a filtering one, an occlusion one, a temporary one, and the camera one.
  • Using draw renderers as before, write a mask into the filtering target and another into the occlusion target. This time, however, I invert the layer mask for the occlusion target's filtering settings, and instead of overriding the material with a plain white one I use one that outputs eye depth for later use in the blit material.
  • Blit as before, but in the material I lerp between the original camera color and the processed one by using a step function to determine whether the depth in the occlusion mask is greater than the depth in the desired mask (these come from the occlusion target identifier and the filtering target identifier respectively).
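
In the blit material, that depth comparison might look something like this (a hedged HLSL sketch; `_FilterDepthTex`, `_OccluderDepthTex`, and `ApplyEffect` are placeholder names, and the occlusion target is assumed to be cleared to a very large depth so empty pixels never count as occluders):

```hlsl
float4 frag(Varyings input) : SV_Target
{
    float2 uv = input.uv;
    float4 original  = SAMPLE_TEXTURE2D(_CameraColorTex, sampler_CameraColorTex, uv);
    float4 processed = ApplyEffect(original, uv); // the actual screen-space effect

    float maskDepth     = SAMPLE_TEXTURE2D(_FilterDepthTex, sampler_FilterDepthTex, uv).r;
    float occluderDepth = SAMPLE_TEXTURE2D(_OccluderDepthTex, sampler_OccluderDepthTex, uv).r;

    // step(a, x) returns 1 when x >= a: keep the effect only where the masked
    // object's eye depth is smaller (closer) than any occluder's depth.
    float visible = step(maskDepth, occluderDepth);
    return lerp(original, processed, visible);
}
```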

However, this solution does not work either: even though I can isolate what is an occluder and what is not, I get weird issues with the depth output from the override material's shader. That shader is actually just a simple Shader Graph where the output color comes directly from the Scene Depth node set to Eye mode.
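
One possible explanation (an assumption on my part, not something confirmed in the thread) is that the Scene Depth node samples the camera depth texture, i.e. whatever depth was previously rendered at that pixel, rather than the depth of the fragment the override material is currently drawing. A custom shader that outputs the fragment's own view-space depth would sidestep that, roughly:

```hlsl
// Hypothetical override-material fragment: output this fragment's own linear
// eye depth instead of sampling the camera depth texture. Assumes
// input.positionWS is passed from the vertex stage; URP view space looks
// down -Z, so the z coordinate is negated.
float frag(Varyings input) : SV_Target
{
    float3 positionVS = TransformWorldToView(input.positionWS);
    return -positionVS.z; // linear eye depth, comparable across both masks
}
```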

As an extra, I'd like to mention that most of the effects I have downloaded to reverse engineer and learn from struggle quite a lot with transparents, sorting, masking and so on, which basically rules out filtering for 2D projects, since those mainly use transparent objects coming from sprites.

It would be great if someone, or even Unity, could shed some light on the optimal way to apply layer masking/rendering-layer masking to custom renderer features, as they are super powerful but not something you always want applied to every kind of object.

In case the question is not clear, sorry: what general steps/considerations should one keep in mind when coding a custom renderer feature so that it can be masked to affect only the desired objects?

I'd really like to know how you solved your problem.

Turns out I solved this issue a few days after I first ran into it, and I have since made a GitHub repo with the solution I now use every time I want to mask one of these screen-space effects.

Good to know! I solved a similar issue, but in 2D. In case somebody is stuck with this...