I’m rendering an object with a different shader depending on which camera is rendering it. In my example, a cube renders with the “Standard” shader when the primary camera draws it, the “Black” shader when the second camera draws it, and the “White” shader when the third camera draws it.
What I would like to know is how the two functions OnWillRenderObject() and OnRenderObject() operate and why I’m getting the results I am. The shaders are the built-in Standard shader and two simple unlit shaders that output solid black or solid white.
Below are the results I got (Scene and Game views both included).
These three screenshots (desired result) show the results of using OnWillRenderObject():
These three screenshots (anomalous result) show the results of using OnRenderObject():
The script below is attached to the cube (for the second set of results, OnWillRenderObject is replaced with OnRenderObject):
using UnityEngine;

public class Cube : MonoBehaviour
{
    public Shader _shaderBlack;
    public Shader _shaderWhite;

    void OnWillRenderObject()
    {
        // Camera.current is the camera currently rendering this object.
        if (Camera.current.name == "CamB")
        {
            GetComponent<Renderer>().sharedMaterial.shader = _shaderBlack;
        }
        else if (Camera.current.name == "CamC")
        {
            GetComponent<Renderer>().sharedMaterial.shader = _shaderWhite;
        }
        else
        {
            GetComponent<Renderer>().sharedMaterial.shader = Shader.Find("Standard");
        }
    }
}
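For reference, the variant used for the second set of screenshots differs only in the callback name; the body is unchanged. A sketch (per the Unity docs, OnWillRenderObject runs before a camera renders a visible object, while OnRenderObject runs after a camera has finished rendering the scene, which is why the swap shows up one draw “late”):

```csharp
using UnityEngine;

public class CubeLate : MonoBehaviour
{
    public Shader _shaderBlack;
    public Shader _shaderWhite;

    // Called after a camera has rendered the scene, so a shader change
    // here only takes effect for the next camera/view that draws the cube.
    void OnRenderObject()
    {
        if (Camera.current.name == "CamB")
        {
            GetComponent<Renderer>().sharedMaterial.shader = _shaderBlack;
        }
        else if (Camera.current.name == "CamC")
        {
            GetComponent<Renderer>().sharedMaterial.shader = _shaderWhite;
        }
        else
        {
            GetComponent<Renderer>().sharedMaterial.shader = Shader.Find("Standard");
        }
    }
}
```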
As you can see, which shader ends up applied in the Scene view versus the Game view flips depending on which function is used.
As stated, I have the results I need, but I’d like to understand the difference between the two functions in relation to the results I’m receiving, so that I can better understand how they operate differently.
Thanks!