Reconciling different shader writing styles

I’m trying to put an outline around my sprites using a shader. I found one that sort of works already and am trying to modify it to fix a problem with it. I’ve read the shader documentation in the manual and watched several tutorials on writing shaders, and I’d like help understanding why the syntax of the shader I’m modifying doesn’t match the syntax the tutorial author uses. I assume they’re just two ways of doing the same thing. Here’s the shader I’m modifying:

Shader "Outlined/Silhouetted Diffuse" {
    Properties {
        _EmisColor ("Emissive Color", Color) = (.2,.2,.2,0)
        _MainTex ("Particle Texture", 2D) = "white" {}
    }

    SubShader {
           
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off
        ZWrite Off
        Fog { Color (0,0,0,0) }
        Lighting Off
        Material { Emission [_EmisColor] }
        ColorMaterial AmbientAndDiffuse

        Pass {
            AlphaTest Equal 1
            SetTexture [_MainTex]
            {   
                combine texture * primary
            }
        }

        Pass {
            ZTest Less
            AlphaTest NotEqual 1

            SetTexture [_MainTex]
            {   
                combine texture * primary
            }
            SetTexture [_MainTex]
            {
                constantColor [_EmisColor]
                combine previous * constant
            }
        }

    }

    Fallback "Diffuse"
}

And here is one of the shader samples from the video tutorials:

Shader "_Shaders/JustColor"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
    }
   
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
       
        Pass
        {
            CGPROGRAM
                #pragma exclude_renderers ps3 xbox360
                #pragma fragmentoption ARB_precision_hint_fastest
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                // uniforms
                uniform fixed4 _Color;
               
                struct vertexInput
                {
                    float4 vertex : POSITION;
                };
               
                struct fragmentInput
                {
                    float4 pos : SV_POSITION;
                    float4 color : COLOR0;
                };
               
                fragmentInput vert( vertexInput i )
                {
                    fragmentInput o;
                    o.pos = mul( UNITY_MATRIX_MVP, i.vertex );
                    o.color = _Color;
                   
                    return o;
                }
               
                half4 frag( fragmentInput i ) : COLOR
                {
                    return i.color;
                }
            ENDCG
        }
    }
}

The big difference is that in the video tutorial version, the author never uses SetTexture in any of his passes. Instead he uses a CGPROGRAM block, sets up a vertex and a fragment input struct, and defines his own vert and frag functions.

The shader I’m modifying, on the other hand, uses SetTexture and combine to change the color output.
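For anyone comparing the two styles: each SetTexture stage maps fairly directly onto a line or two of fragment-program code. Here’s a rough sketch (my own, not taken from either shader) of what the first pass’s combine texture * primary could look like as a CGPROGRAM pass. With Lighting Off and ColorMaterial AmbientAndDiffuse, "primary" is effectively the interpolated per-vertex color:

Pass
{
    CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        uniform sampler2D _MainTex;
        uniform float4 _MainTex_ST;

        struct v2f
        {
            float4 pos : SV_POSITION;
            float2 uv : TEXCOORD0;
            fixed4 color : COLOR0;
        };

        v2f vert( appdata_full v )
        {
            v2f o;
            o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
            o.uv = TRANSFORM_TEX( v.texcoord, _MainTex );
            o.color = v.color; // "primary": the per-vertex color
            return o;
        }

        fixed4 frag( v2f i ) : COLOR
        {
            // equivalent of "combine texture * primary"
            return tex2D( _MainTex, i.uv ) * i.color;
        }
    ENDCG
}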

I think I understand the video tutorial workflow pretty well; I just could use some context for applying what I learned there to the shader I’m modifying.

Appreciate the help!

Never mind: in my continued effort to get the shader I’ve been working on correct, I came across information that helps me understand what’s happening.

It looks like SetTexture is a legacy, layered way of writing a shader, intended to support older graphics cards. I’m not sure exactly how old a card has to be to support only this approach, but it sounds like it has to be pretty outdated not to support the “fragment programs” approach.

SetTexture is part of the fixed-function pipeline, whereas CGPROGRAM is the programmable pipeline.
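Following that mapping, the chained combiners in the second pass (combine texture * primary, then combine previous * constant with constantColor [_EmisColor]) collapse into a single fragment expression. A sketch, assuming a v2f struct that carries the texture uv and the per-vertex color, plus a declared uniform fixed4 _EmisColor:

fixed4 frag( v2f i ) : COLOR
{
    // stage 1: texture * primary
    // stage 2: previous * constant (constantColor [_EmisColor])
    return tex2D( _MainTex, i.uv ) * i.color * _EmisColor;
}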