Hi,
I have a render texture, and a camera. The camera is disabled, so it will only render if I call cam.Render();
With this in mind I tried the following:
- Render the scene into a render texture with the clear flags set to solid color.
- Change the camera's culling mask to the ball's layer, set the ball's material to use the render texture from the first pass, then render again into that same render texture.
Now this causes a feedback loop in the visuals (like pointing a mirror at a mirror). You might say "hey, this is expected", but it shouldn't be, because I am doing this:
- Camera renders into the texture
- Camera's culling mask changes to a completely different layer
- Camera renders into the texture again with clear flags set to none (so it doesn't clear)
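For reference, this is roughly what I'm calling, as a minimal sketch (the field names, the component itself, and calling it from LateUpdate are just how my test is set up, nothing special):

```csharp
using UnityEngine;

public class TwoPassCapture : MonoBehaviour
{
    public Camera cam;              // disabled in the Inspector, rendered manually
    public RenderTexture rt;
    public LayerMask sceneMask;     // everything except the ball's layer
    public LayerMask ballMask;      // only the ball's layer
    public Material ballMaterial;   // the ball's material, which samples rt in its shader

    void LateUpdate()
    {
        cam.targetTexture = rt;

        // Pass 1: clear to a solid color and render the scene (without the ball).
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.cullingMask = sceneMask;
        cam.Render();

        // Give the ball's material the result of pass 1.
        ballMaterial.mainTexture = rt;

        // Pass 2: don't clear, render only the ball's layer on top of pass 1.
        cam.clearFlags = CameraClearFlags.Nothing;
        cam.cullingMask = ballMask;
        cam.Render();
    }
}
```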
In theory this should work, but I still get the two-mirrors feedback effect. Any idea why? What I'm trying to achieve is to use a shader to render some fancy special effects on some objects (think Predator) without rendering the level twice.
Is there something I don't know about Unity's render order? Am I taking deep enough control of the process to pull this trick off without too many passes?
This worked in an older engine I used. It seems that, regardless of when I tell the cameras to render, Unity still decides to render everything at once, later. How can I fix this?