What does this mean exactly? I have checked all my lights, and none of them use more than 4 layers in the culling mask. I don’t understand the “too many layers to EXCLUDE”. Can someone from Unity please translate this?
I believe the restriction is global, not per light. I took a look at the warning, and it seems to be fired when the renderer would have to track more than 4 layers to render the scene correctly, i.e. more than 4 of the layers actually used by loaded objects are excluded by at least one light.
I guess I still don’t understand the wording. “more than 4 of those layers are actually excluded by at least one light”. It’s like some kind of weird double-negative or something. Can it be explained in a way that is more clear?
I don’t understand the idea of a limitation on how many layers you EXCLUDE from lights. Shouldn’t the limitation be on how many layers are INCLUDED?
Could someone please chime in here if you know what I need to do to make this thing happy? I think it has something to do with layer culling masks for lights and cameras, but I already have them all set to only affect the layers needed. I don’t understand the “too many layers to exclude objects”.
I found a GameObject in the scene using a layer that is excluded from lights and rendering, which happened to have a MeshRenderer on it (even though it doesn’t need it since it’s only for a hidden collider). I removed the MeshRenderer and the complaining stopped.
I still don’t understand why this Renderer component caused the deferred renderer so much strife. The layer it was on is excluded from all lights and rendering, so it should just be ignored by the renderer. If this is by design, I don’t get it. If it’s a bug, I hope it gets fixed. Odds are nobody from Unity will read this and do anything about it though.
What I gather from my reading through the code supplemented with my own knowledge:
The old deferred renderer uses a few bits of the stencil buffer to handle light layers; that’s where the limit of 4 layers comes from - 4 stencil buffer bits dedicated to light layers. The renderer counts the number of layers where both of the following are true:
- the layer is used by at least 1 GameObject
- the layer is excluded by at least 1 light
These are the layers the renderer needs to consider to properly render the scene. If that number exceeds 4, you get the error.
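In pseudocode, my reading of that check looks roughly like this (a sketch only, not Unity’s actual code - the function name and the example layer setup are mine):

```python
def layers_triggering_warning(object_layers, light_culling_masks):
    """Return the set of layer indices the renderer must track.

    object_layers: set of layer indices used by at least one GameObject.
    light_culling_masks: 32-bit culling masks, one per light
        (bit i set = layer i is included by that light).
    """
    tracked = set()
    for layer in object_layers:
        # A layer only needs tracking if at least one light excludes it.
        if any(not (mask >> layer) & 1 for mask in light_culling_masks):
            tracked.add(layer)
    return tracked

# Example: objects on layers 0-5, one light that excludes layers 1-5.
used = {0, 1, 2, 3, 4, 5}
masks = [0xFFFFFFFF & ~0b111110]  # bits 1..5 cleared = layers 1-5 excluded
needs = layers_triggering_warning(used, masks)
print(len(needs))  # 5 tracked layers, which is > 4, so the warning fires
```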
AFAIK, the reason for counting exclusion and not inclusion is that the base assumption/behavior is that a light should be applied.
I’m not sure what you mean by “old deferred renderer”. I’m using Unity 6 preview, so it shouldn’t be old at all. Unless by “old” you mean “built in render pipeline”.
It still doesn’t make sense to me that it has to consider any layers that are excluded. But I don’t know crap about how the renderer works.
Yes, sorry for the confusion. I’m talking about the deferred renderer in the built-in pipeline, as opposed to the newer deferred paths in URP and HDRP. It’s very old at this point - the focus has been on the SRPs for a long time.
If a layer is not excluded by any light, then all lights should affect it. In that case there is no reason to store any information about the layer in the stencil buffer. If it is excluded by at least 1 light, we actually need to store info about the layer in the stencil buffer to apply lighting for the layer on a per-light basis.
The logic could have been inverted, allowing max 4 inclusions rather than max 4 exclusions, but the ‘base assumption’ is that lights should be applied. If you are going to have a limit, limiting exclusions seems like the better option.
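To illustrate why exclusions are the cheaper thing to limit, here’s a rough Python sketch (my own illustration, not Unity source - the names are made up): a light whose culling mask includes everything consumes none of the stencil budget, and only layers that some light excludes use up one of the 4 bits.

```python
STENCIL_BITS_FOR_LAYERS = 4  # the 4 stencil bits mentioned above

def assign_stencil_bits(used_layers, light_culling_masks):
    """Give a stencil bit to each used layer that some light excludes.

    used_layers: set of layer indices used by at least one GameObject.
    light_culling_masks: 32-bit masks, one per light (bit set = included).
    """
    bits = {}
    for layer in sorted(used_layers):
        if any(not (mask >> layer) & 1 for mask in light_culling_masks):
            if len(bits) == STENCIL_BITS_FOR_LAYERS:
                # roughly where the "too many layers" warning would fire
                raise RuntimeError("too many layers to exclude")
            bits[layer] = len(bits)  # this layer gets its own stencil bit
    return bits

# A light that affects every layer costs nothing, no matter how many
# layers are in use:
print(assign_stencil_bits(set(range(32)), [0xFFFFFFFF]))  # {}
```

Had the logic been inverted (counting inclusions), the common case of a light with a full culling mask would immediately exhaust the budget.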
Basically what you’re saying is I should stop worrying about trying to optimize lighting by only turning on layers that it cares about, and have it affect everything (except for the specific things I NEED it to not affect), and somehow that’s more optimal than excluding layers that I don’t need.
I’m only trying to explain the limitations of the current implementation; apologies if any of this came off as condescending. Given the limitations of said implementation, it’s not so much that one approach is more optimal than the other, but that only one of the approaches will actually work correctly, so you don’t have much of a choice.
Since the renderer can only consider 4 layers at a time, a decision needed to be made about the behavior of the rest of the layers - whether to allow max 4 inclusions or max 4 exclusions. I’m not sure what exactly went into that decision, as I wasn’t involved in making it, so I won’t try to defend it too much. It’s unlikely to change at this point, since as I mentioned, the focus has been on the SRPs for a long time now.
Admittedly, this limitation is a bit obscure. It made me curious whether this is documented anywhere, and it turns out it is, on this page: Unity - Manual: Deferred rendering path in the Built-In Render Pipeline
“culling masks are only supported in a limited way. You can only use up to four culling masks. That is, your culling layer mask must at least contain all layers minus four arbitrary layers, so 28 of the 32 layers must be set. Otherwise you get graphical artifacts.”
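If anyone wants a quick way to sanity-check a mask against that rule, the arithmetic is just a popcount on the excluded bits (a sketch only, the function name is mine): a mask passes if it excludes at most 4 of the 32 layers.

```python
def culling_mask_ok(mask):
    """True if a 32-bit culling mask excludes at most 4 layers,
    i.e. at least 28 of the 32 layer bits are set."""
    excluded = bin(~mask & 0xFFFFFFFF).count("1")
    return excluded <= 4

print(culling_mask_ok(0xFFFFFFFF))       # everything included -> True
print(culling_mask_ok(0xFFFFFFFF >> 5))  # top 5 layers excluded -> False
```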
That last bit is actually the most helpful. Don’t worry, you haven’t come off as condescending. The whole thing is just so unintuitive and backwards compared to how people normally think about optimizations with layer masks.
It’s terrible grammar - like it was written by the same people who write math problems.