What is the algorithm used behind projectors?

I’m wondering which algorithm Unity uses for its projectors. It seems that an arbitrary number of projectors is supported, even on mobile. Since I mainly develop for mobile platforms, I would like to understand how the algorithm works so I can judge how it would impact performance if used excessively.

I’ve already done some research, and these are the approaches I have found so far:

  1. use a deferred renderer and the G-buffer (I guess that wouldn’t work on mobile?)
  2. create a triangle overlay and use alpha blending
  3. combine the surface texture with the projected texture, precalculating the result every frame
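For context, all three approaches boil down to the same projective-texture math: transform the receiving surface’s world position by the projector’s view and projection matrices, perspective-divide, and remap the result to a [0,1] UV used to sample the projected texture. A minimal sketch in plain Python (the names and matrix conventions are mine, not Unity’s):

```python
import math

def mat_mul(a, b):
    # 4x4 row-major matrix product.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    # 4x4 row-major matrix times 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix for the projector frustum.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

IDENTITY = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def projector_uv(world_pos, view, proj):
    # clip = P * V * world, then perspective divide and remap to [0,1];
    # this is the per-vertex/per-fragment work of projective texturing.
    clip = mat_vec(mat_mul(proj, view), list(world_pos) + [1.0])
    ndc = [clip[i] / clip[3] for i in range(3)]
    return ((ndc[0] + 1) / 2, (ndc[1] + 1) / 2)

# A point straight ahead of the projector lands in the texture centre.
print(projector_uv((0, 0, -5), IDENTITY, perspective(60, 1.0, 0.1, 100)))
# → (0.5, 0.5)
```

The approaches above differ only in *where* this math runs: against the G-buffer, on an overlay mesh, or in a texture precalculation step.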

Is Unity’s algorithm one of those approaches or something completely different?


I have added some screenshots from a simple scene containing one terrain and 33 projectors that use 3 different projection materials/textures.

The scene without projectors:


The scene with projectors:


So it looks to me like the geometry that receives the projections is rendered once per projector (33 times in this case). Am I seeing this right? That could have a serious performance impact, especially on mobile platforms, I guess.
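If that reading is right, a back-of-the-envelope cost model would be: one pass per receiver for the normal draw, plus one extra pass for every projector/receiver pair whose frustums intersect. A toy sketch (illustrative only, not Unity’s actual scheduling):

```python
def total_passes(num_receivers, hits_per_projector):
    # hits_per_projector[i] = number of receivers inside projector i's
    # frustum; each such pair costs one extra rendering pass.
    return num_receivers + sum(hits_per_projector)

# The scene above: 1 terrain, 33 projectors, each touching the terrain.
print(total_passes(1, [1] * 33))  # → 34
```

With a single large receiver like a terrain, every projector tends to hit it, so the pass count grows linearly with the projector count.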

All a “projector” is, is another rendering pass over the affected geometry. You can do whatever you want with it, but Unity sets up some special shader variables for it.


Projectors actually work similarly to the way post-processing works. A projector simply re-renders whatever geometry falls inside its frustum and projects the texture onto it from that point of view, in projector (camera) space. I’m not sure whether the projection happens in the same pass (while rendering the depth information) or in an extra pass, but what I do know is that they are extremely heavy.
It is possible to do deferred decals in forward rendering using a depth+normals texture (well, at least I have some ideas on how to do it). ^^
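To illustrate the deferred-decal idea: once a fragment’s world position has been reconstructed from the depth texture, it can be transformed into the decal’s local unit cube; anything outside the cube is discarded, and the local x/y become the decal UV, with no re-rendering of the receiving geometry. A hedged sketch in Python (all names and conventions are mine, and the depth reconstruction step is assumed to have already happened):

```python
def mat_vec(m, v):
    # 4x4 row-major matrix times 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

IDENTITY = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def decal_uv(world_pos, world_to_decal):
    # Transform the reconstructed world position into the decal's local
    # unit cube; fragments outside the cube are not covered by the decal.
    local = mat_vec(world_to_decal, list(world_pos) + [1.0])
    if any(abs(local[i]) > 0.5 for i in range(3)):
        return None  # discard: outside the decal volume
    return (local[0] + 0.5, local[1] + 0.5)

# Decal centred at the origin (world_to_decal = identity for simplicity):
print(decal_uv((0.25, -0.25, 0.0), IDENTITY))  # → (0.75, 0.25)
print(decal_uv((2.0, 0.0, 0.0), IDENTITY))     # → None
```

The appeal over projectors is that the cost is one draw per decal box, independent of how much scene geometry lies underneath it.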