From my understanding, a vertex shader is applied to each vertex in its local (object) space. I tried adding a simple post-processing shader and removing the 'UnityObjectToClipPos' line from the vertex shader. This resulted in the scene being rendered at 1/4 of its original size. Does anyone know why this happens? What exactly are the vertices that get passed to the vertex shader in this context? I've included the shader code, the script I've attached to the camera, and before/after screenshots of removing the MVP multiplication.
// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'
Shader "Custom/DepthGrayscale" {
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = v.vertex; // UnityObjectToClipPos removed here
                o.uv = v.uv;
                return o;
            }

            sampler2D _MainTex;

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                // just invert the colors
                col.rgb = 1 - col.rgb;
                return col;
            }
            ENDCG
        }
    }
}
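For reference, this is what the vertex function looked like before I removed the line (the transform is the one the Unity upgrade note substituted for the old MVP multiplication):

```hlsl
// original vertex function, with the clip-space transform still in place
v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex); // object space -> clip space
    o.uv = v.uv;
    return o;
}
```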
Script attached to camera object:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PostProcessDepthGrayscale : MonoBehaviour
{
    public Material mat;

    void Start()
    {
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, mat);
    }
}
Before removing the MVP multiplication:
After: