Assign Post Process Volume to objects on a layer

Is there a way to assign a Post Process Volume to affect objects assigned to a specific layer?

For example, I want to use a Post Processing Volume to boost the saturation of a few objects in the scene, not all the objects in the entire scene. Can I use Layers to control what the Post Process Volume affects?

Then use a different Post Processing Volume to boost the saturation of other objects in a different way. Can I use different layers to control what each Post Process Volume affects?

Sean

I’m quite sure it doesn’t work this way.
(related: https://answers.unity.com/questions/1714118/how-to-use-the-new-post-processing-to-make-a-speci.html )

See, the post-processing volume doesn’t affect objects, it affects the camera. The image the camera sees is affected by the volume. Not the objects.

To boost individual objects, you’d have to use some masking magic or override the objects’ materials.

Interesting, what kind of “masking magic” or “overriding the object’s materials”? Any YouTube vids you can post?

I do not normally watch YouTube tutorials, because I prefer text.

You can replace a material and its shader at runtime. You can override material parameters at runtime using a MaterialPropertyBlock or through a script.
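For instance, a minimal sketch of a per-object override with a MaterialPropertyBlock, assuming the object’s shader exposes a float property (the “_Saturation” name here is made up):

using UnityEngine;

public class SaturationOverride : MonoBehaviour
{
    // Hypothetical property name; use whatever your shader actually exposes.
    static readonly int SaturationId = Shader.PropertyToID("_Saturation");

    [SerializeField] float saturation = 1.5f;

    void OnEnable()
    {
        var rend = GetComponent<Renderer>();
        var block = new MaterialPropertyBlock();
        rend.GetPropertyBlock(block);          // keep any existing overrides
        block.SetFloat(SaturationId, saturation);
        rend.SetPropertyBlock(block);          // no material instance is created
    }
}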

Masking magic would be needed to single out a specific RASTERIZED object and apply the effect only to it. This can possibly be done via the stencil buffer, but there’s more than one way to do it.

The easiest way to do this is with camera stacking. Put all of the objects that you want the post-processing on in a separate layer. Have the camera with that post-processing effect render that layer first. Then use a second camera without the post-processing to render all of the other layers.
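Roughly, the camera setup could look like this (a sketch, assuming the boosted objects sit on a layer named “Boosted”, which is a made-up name; clear flags may need tweaking depending on whether you want to keep the first camera’s depth):

using UnityEngine;

public class CameraStackSetup : MonoBehaviour
{
    public Camera postCamera;   // has the post-processing layer/volume on it
    public Camera mainCamera;   // renders everything else, no post-processing

    void Start()
    {
        int boosted = LayerMask.GetMask("Boosted");   // assumed layer name

        postCamera.cullingMask = boosted;
        postCamera.depth = 0;                             // renders first
        postCamera.clearFlags = CameraClearFlags.Skybox;

        mainCamera.cullingMask = ~boosted;                // everything except "Boosted"
        mainCamera.depth = 1;                             // renders second
        mainCamera.clearFlags = CameraClearFlags.Nothing; // keep the first camera's color and depth
    }
}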

That’s an interesting idea, but there’s a good chance that you’ll screw up the lighting on the highlighted object, even if you retain the z-buffer.

You can accomplish this using command buffers. There’s a pretty simple intro here that I’ve used as a base for various effects.
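As a rough sketch of that route (the effect material and the CameraEvent chosen here are assumptions, not necessarily what the linked intro uses):

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class HighlightWithCommandBuffer : MonoBehaviour
{
    public Renderer target;          // the object to single out
    public Material effectMaterial;  // e.g. a saturation-boost material (assumed)

    CommandBuffer buffer;
    Camera cam;

    void OnEnable()
    {
        cam = GetComponent<Camera>();
        buffer = new CommandBuffer { name = "Highlight object" };
        buffer.DrawRenderer(target, effectMaterial);  // re-draw the target with the effect
        cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }

    void OnDisable()
    {
        if (cam != null && buffer != null)
            cam.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }
}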

I’m curious about this. Point lights and shadows can be set to cull or include whatever layers you choose. So if you exclude objects from post-processing with a camera stack, why would that override the light/shadow rendering layers?

It’s not the layers it’ll screw up, it’s things like shadowing and effects from other geometry that is excluded from the layer.

It’s entirely possible to do, it’s just that in the general case there are a huge number of edge cases to work through. Depending on what’s needed here that might not matter. Often for a special effect you don’t need generalised, physically correct rendering.

Personally, I’d pick an unused part of the stencil buffer, have the relevant objects write to it, and then have a post effect which is masked by that part of the stencil buffer.

I can get an object to render to the stencil mask, but what’s the best way to get specific effects from the post-processing stack to react to a stencil mask value?

I don’t want to edit a copy of the postprocessing stack and treat it as a custom package.

I can’t find whether there’s some exposed way to do this, or whether I just have to extend/edit e.g. the bloom shader to read from the stencil mask.

It feels like the post-processing stack, since it supports volumes and layers, should have (and probably has) also thought about object masking.

You can also look into custom shaders and global variables; this would have a very small footprint on your project if you’re comfortable with Shader Graph at all. If all the materials on these objects share the same shader, and you have a global float variable controlling saturation within that shader, you could achieve these global effects with very little headache. There are a lot of small pitfalls along the way, though: some shader node editors add an “_” to the internal naming of global variables specifically and others don’t, among other little things that, like anything in gamedev, you’d have to figure out along the way.

The code would look something like:

var globalSaturationString = "globalSaturation";
// Some shader node editors add an underscore to the internal name of global variables;
// if yours does, use "_globalSaturation" instead.

Shader.SetGlobalFloat(globalSaturationString, desiredSaturationValue);

Edit: I’ll go ahead and link what your variable setup would need to be in Shader Graph since I’ve already gone this far:


Note: The REFERENCE name is what matters, not the default Name itself.

So any object using a material that’s derived from this shader would have this saturation. Any object using a standard shader, or a different custom shader with a different global variable for its saturation, would have a different saturation.

If you wanted to get tricky with this shader, and create custom switches so that you could have one uber shader, and several global variables affecting several “layers” of objects, that would be entirely possible as well.

It sounds like for what you’re wanting to do with custom visuals for certain objects, learning some shader tricks would go a long way.

By enabling bloom and adding unique global variable multipliers to the diffuse color, you could obtain custom bloom settings for layers and custom opacity settings; you could even move objects or get wavy distortions with vertex offsets, so long as it didn’t pertain to object collision.