Obtaining relative speed of vertex in a shader

Hello everyone. I am trying to create a particular shader effect.

Basically, I want the emissivity of each vertex to scale with its speed relative to the camera.

So far, I've managed to scale the emissivity of a vertex with its distance from the camera, using the following shader:

Shader "Unlit/EmissiveSpeed Shader" {
	Properties {
		_Color("Color", Color) = (1,1,1,1)
		_MainTex("Albedo (RGB)", 2D) = "white" {}
		_Dist("Distance", Float) = 1
		_Glossiness("Smoothness", Range(0,1)) = 0.5
		_Metallic("Metallic", Range(0,1)) = 0.0

		_EmissionColor("Color", Color) = (0,0,0)
		_EmissionMap("Emission", 2D) = "white" {}
		_Emission("Bloom", Float) = 1
	}
	SubShader {
		Tags { "RenderType" = "Opaque" }
		LOD 200

		CGPROGRAM
		// Physically based Standard lighting model, and enable shadows on all light types
		#pragma surface surf Standard vertex:vert fullforwardshadows
		//#pragma surface surf Lambert vertex:vert

		// Use shader model 3.0 target, to get nicer looking lighting
		#pragma target 3.0

		sampler2D _MainTex;
		sampler2D _EmissionMap;
		half _Dist;
		half _Emission;
		half _Glossiness;
		half _Metallic;
		fixed4 _Color;

		struct Input {
			float2 uv_MainTex, uv_EmissionMap;
			float3 viewDir;
			half relativeSpeed;
		};

		void vert(inout appdata_full v, out Input o) {
			UNITY_INITIALIZE_OUTPUT(Input, o);
			// Vector from the vertex (in world space) to the camera.
			half3 viewDirW = _WorldSpaceCameraPos - mul(unity_ObjectToWorld, v.vertex).xyz;
			half viewDist = length(viewDirW);
			o.relativeSpeed = saturate(viewDist / 1000);
		}

		void surf(in Input IN, inout SurfaceOutputStandard o) {
			fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
			o.Albedo = c.rgb;
			o.Metallic = _Metallic;
			o.Smoothness = _Glossiness;
			o.Emission = tex2D(_EmissionMap, IN.uv_EmissionMap).rgb * _Color.rgb * _Emission * IN.relativeSpeed;
			o.Alpha = c.a;
		}
		ENDCG
	}
	FallBack "Diffuse"
}

This creates the following, already kinda neat effect:

[screenshot: 87757-neat.png]

Now, my plan is to somehow store the vertex-camera distance for a frame, then use it on the next frame to calculate the head-on relative speed between vertex and camera, while also updating the stored distance.
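For what it's worth, the numeric core of that plan is just a finite difference over the frame time. Here is a tiny Python sketch of it (function name and all numbers are made up for illustration; this is not Unity API):

```python
# Finite-difference sketch of the plan: the head-on relative speed is the change
# in camera-vertex distance divided by the frame time.

def head_on_speed(dist_prev, dist_curr, dt):
    """Positive when the vertex is approaching the camera."""
    return (dist_prev - dist_curr) / dt

# A vertex that was 10.0 units away last frame and 9.7 this frame, at 60 fps:
speed = head_on_speed(10.0, 9.7, 1.0 / 60.0)  # roughly 18 units/second, toward the camera
```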

Does anyone have pointers on how I might do that? Or a better idea for achieving the effect altogether?

That’s similar to how per-object motion blur is done.

You store the M (Model) matrix (16 floats) from the previous frame, one such matrix per mesh object. The model matrix takes points from local space and transforms them into world space (relative to the world origin).

As you draw the mesh, in addition to computing the usual vertex coordinate in MVP space, you also compute two world-space positions: one using the old M matrix and one using the current M matrix.

Therefore, during the fragment stage you can see where each fragment was in the previous frame and calculate the difference vector. You then use its length (and perhaps its direction) as the coefficient for whatever you need.

That’s it.
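Here is the idea in miniature, as plain Python rather than shader code (the matrices and the vertex position are made up for illustration; no Unity or GPU API involved):

```python
# Previous-frame vs current-frame model matrix: transforming the same local-space
# vertex by both gives two world positions, whose difference is the per-frame motion.
# Matrices are row-major 4x4, stored as flat lists of 16 floats.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r * 4 + c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """A model matrix that only translates."""
    return [1, 0, 0, tx,
            0, 1, 0, ty,
            0, 0, 1, tz,
            0, 0, 0, 1]

local_pos = [1.0, 0.0, 0.0, 1.0]     # a vertex in object space (w = 1)
m_prev = translation(0.0, 0.0, 0.0)  # model matrix stored from the previous frame
m_curr = translation(0.5, 0.0, 0.0)  # the object moved +0.5 on x this frame

world_prev = mat_vec(m_prev, local_pos)
world_curr = mat_vec(m_curr, local_pos)

# The per-frame world-space displacement of this vertex:
motion = [c - p for c, p in zip(world_curr, world_prev)]
```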


Additionally, if you are doing any further post-processing, you could store the result in a texture.

You could even pack the velocity direction and magnitude into a so-called "velocity texture", whose RGB stores the packed direction and whose A stores the length for each fragment.

You do the packing like this (the code is in GLSL because I don't know CG):

float inv_max_length = 1.0 / max_allowed_length;

vec3 myWorldVec = vec3(4, -70, 40);
vec3 myWorldDir = normalize(myWorldVec); // normalized direction in world space, roughly (0.05, -0.87, 0.5)
float myWorldVecLength = length(myWorldVec); // length of the original vector (notice it's larger than 1, won't fit into a texture!)

Notice that the components of myWorldDir can be both negative and positive (though their magnitude never exceeds 1), while a texture can only store values between 0 and 1. Hence we need to do packing:

// Multiplication is roughly 3 times quicker than division, hence we divided once
// above and now just multiply by the inverse.
vec4 myPackedRGBA = vec4(myWorldDir * 0.5 + 0.5, myWorldVecLength * inv_max_length);

Now you can shove all the values into the texture, since the components of myPackedRGBA are all between 0 and 1. You just need to reverse the process correctly, once you are in the fragment shader, to get back your values:

vec4 myPackedRGBA = texture(myPackedTexture, uv);
vec4 myUnpackedRGBA = vec4(myPackedRGBA.rgb * 2.0 - 1.0,            // get back world direction (unit length)
                           myPackedRGBA.a * max_allowed_length);    // get back length of the world vector
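If it helps, here is the same pack/unpack round trip checked in Python (the max_allowed_length of 100 is an arbitrary choice for the example):

```python
import math

# Round trip of the RGBA packing described above: direction components in [-1, 1]
# are remapped to [0, 1], and the length is scaled down by max_allowed_length.

max_allowed_length = 100.0
inv_max_length = 1.0 / max_allowed_length

vec = (4.0, -70.0, 40.0)
length = math.sqrt(sum(c * c for c in vec))    # about 80.7
direction = tuple(c / length for c in vec)     # unit length, components in [-1, 1]

packed = tuple(c * 0.5 + 0.5 for c in direction) + (length * inv_max_length,)
assert all(0.0 <= c <= 1.0 for c in packed)    # every component now fits in a texture

# Unpacking reverses both remappings:
unpacked_dir = tuple(c * 2.0 - 1.0 for c in packed[:3])
unpacked_len = packed[3] * max_allowed_length
```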

The tricky part is constantly updating the old matrix for each object; I don't know how to do that efficiently in Unity. The good news is that you only need one old M matrix per object, and all of its vertices can use that single matrix.
Remember that a matrix is just 16 float values, so you could in theory store several such matrices in a texture or a buffer of some sort. Just make sure it's efficient, since it has to be written to every frame.

Also, if you want the speed relative to the camera, use the Model-View matrix instead of the Model matrix.
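To see why Model-View matters, here is a pure-Python sketch (all matrices and positions are made up for illustration): with only M, a static object shows zero displacement even while the camera moves; with MV, the camera's motion shows up in the difference.

```python
# Why Model-View instead of Model for camera-relative speed. Matrices are
# row-major 4x4, stored as flat lists of 16 floats; no Unity or GPU API involved.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r * 4 + c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [1, 0, 0, tx,
            0, 1, 0, ty,
            0, 0, 1, tz,
            0, 0, 0, 1]

identity = translation(0.0, 0.0, 0.0)

# A static object: its model matrix is the same on both frames, so its
# world-space position never changes and M alone reports zero motion.
world_pos = mat_vec(identity, [0.0, 0.0, 5.0, 1.0])

# But the camera moved forward by 1 unit, so the view matrix changed
# (camera +1 on z means the world shifts -1 on z in view space):
v_prev = identity
v_curr = translation(0.0, 0.0, -1.0)

view_prev = mat_vec(v_prev, world_pos)
view_curr = mat_vec(v_curr, world_pos)

# Nonzero camera-relative displacement, despite the object being static:
relative_motion = [c - p for c, p in zip(view_curr, view_prev)]
```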

A word of caution: make sure nothing happens to the alpha component of the velocity texture (no alpha blending, etc.), or the packed length will get corrupted.