I have a shader that derives its screen coordinates from vertex coordinates in a custom way, without using the standard UnityObjectToClipPos method.
When applying it to a quad, the results differ from what I get when rendering it to a render texture via Graphics.Blit.
When using the same shader (below) a) to render to a Quad directly and b) to Graphics.Blit to a render texture, this is what I see:
Left: shader applied to a standard Unity Quad; right: shader applied via Graphics.Blit to a render texture.
This is the shader:
Shader "Unlit/Blittest"
Shader "Unlit/Blittest"
{
    Properties {}
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float4 origpos : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.origpos = v.vertex;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(frac(i.origpos.x), frac(i.origpos.y), 0, 1);
            }
            ENDCG
        }
    }
}
So upon further inspection, it seems that the quad drawn by Graphics.Blit has vertex coordinates ranging from 0 to 1, whereas the standard Unity Quad ranges from -0.5 to 0.5.
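For what it's worth, a workaround that seems to give consistent results in both cases is to read the mesh UVs instead of the raw vertex positions, since both the built-in Quad and the quad drawn by Blit appear to provide texture coordinates in the 0 to 1 range. A minimal sketch of the changed structs and functions (only tested in my setup; the uv/origuv names are just mine):

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 origuv : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                // Use the mesh UVs (0..1 on both quads) instead of object-space positions
                o.origuv = v.uv;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(frac(i.origuv.x), frac(i.origuv.y), 0, 1);
            }

With that change the direct Quad rendering and the Blit output look the same for me, but my original question about the vertex ranges still stands.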
Could this be correct or am I missing something? The docs say:
Blit sets dest as the render target, sets source _MainTex property on the material, and draws a full-screen quad.
This doesn’t mention that the quad drawn by Blit is a different quad than the standard one. If I’m correct, it might be useful to add that info there.