So, I've come across three different ways to write shaders in Unity.
First, there's the fixed function pipeline, where you use only ShaderLab code like
...
Pass {
    SetTexture [_MainTex] {
        constantColor [_Color]
        combine texture * constant
    }
}
...
Next, there's the custom surface and lighting function approach using Cg, which looks like this:
...
#pragma surface mySurf MyLight
struct Input { ... };
void mySurf (Input IN, inout SurfaceOutput o) { ... }
half4 LightingMyLight (SurfaceOutput s, half3 lightDir, half atten) { ... }
...
And finally, you can write your very own vertex and fragment programs:
...
#pragma vertex vert
#pragma fragment frag
struct v2f { ... };
v2f vert (appdata_base v) { ... }
half4 frag (v2f i) : COLOR { ... }
...
If I understand the documentation correctly, these approaches can be combined in different ways, but each can also be used (more or less) on its own to achieve a given effect.
My question now is: if I'm only interested in simple shaders that could be achieved with any of the above methods (simple lighting models or unlit, various texture combines, and so on), which one should I choose?
Most of the time, I've found myself using the surface and lighting function method, because I've had previous experience with Cg. But I was wondering whether there is perhaps a performance gain to be had with the fixed-function pipeline, or something along those lines.
Thank you!
Sorry to bring back an old question; I just noticed it while looking for something else. The benefits of each:
Fixed-function shaders: This is the fastest method. It provides vertex lighting and simple texture combiners at the cheapest cost. If you are worried about performance, you should use fixed-function shaders. They are also the easiest to write, since Unity handles the transformations and math for you.
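For instance, the snippet from the question expands into a complete unlit fixed-function shader roughly like this (a minimal sketch; the shader and property display names are placeholders):
Shader "Custom/TintedUnlit" {
    Properties {
        _Color ("Tint", Color) = (1,1,1,1)
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            // No lighting: just the texture multiplied by a constant tint color
            SetTexture [_MainTex] {
                constantColor [_Color]
                combine texture * constant
            }
        }
    }
}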
Surface shaders: Surface shaders are really just vertex and fragment programs at a higher abstraction level; AFAIK, Unity compiles them into standard Cg code. They let you determine the look of the surface without having to mess with the low-level details. As a simple example, think of how often you would put this into a Cg program: `float4 pos = mul(UNITY_MATRIX_MVP, input.vertex);`. It is only one line of code, but it gets the point across: it has to be typed in 99% of shaders just to position the object properly. That raises the question: why can't the compiler put it in for me? The problem is emphasized even more with shadow mapping. So, as Aras points out in his blog posts 1 and 2, that abstraction level forces programmers to write many of the same statements over and over.
Surface shaders handle that boilerplate, including transforming coordinates, so that the programmer can focus on the look rather than the repetitive chunks of code. They should compile down into code identical to (if not more optimized than) what you would have written by hand.
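To see how little you actually write, here is a minimal complete surface shader using Unity's built-in Lambert lighting model (a sketch; the shader name is a placeholder):
Shader "Custom/SimpleSurface" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Unity generates the vertex/fragment programs, the MVP transform,
        // and the per-light code from this single surface function
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex; // filled in automatically from the mesh UVs
        };

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}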
Cg programs: Raw vertex and fragment programs give you lower-level control than surface shaders. Since the introduction of surface shaders, though, they are really only needed for special effects. It is usually a hassle to recreate lighting and shadowing by hand when Unity could do it for you with the technique above. You generally only want to write your own vertex and fragment programs if you are creating a special effect, possibly some sort of GUI shader.
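For comparison, here is the skeleton from the question filled out into a minimal complete unlit vertex/fragment shader (a sketch; the shader name is a placeholder), including the transformation boilerplate that surface shaders would otherwise write for you:
Shader "Custom/UnlitVertFrag" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            // the one line of boilerplate that surface shaders write for you
            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            half4 frag (v2f i) : COLOR {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}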