Mobile: Shader stepping and pixellation with _Time input

Hello there!
First post :slight_smile:

We’ve got many shaders in our mobile game that use the _Time input in surface shaders for various scrolling effects. The shaders render correctly when testing in the editor, but while testing on mobile (Galaxy S II) I’ve noticed that after 2-3 minutes the scrolling starts stepping and becomes very pixelated.

Here is what it looks like after 3 minutes:
(screenshot attachment — not displayed)

This shader uses two textures: a star overlay that doesn’t scroll, and a cloud overlay that scrolls using the _Time variable. Here is the shader code:

Shader "Rage/Scroll Overlay" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_Overlay ("Base (RGB)", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma surface surf Lambert

		sampler2D _MainTex;
		sampler2D _Overlay;

		struct Input {
			float2 uv_MainTex;
			float2 uv_Overlay;
			float4 _Time;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			
			float2 overlaymove = IN.uv_Overlay.xy;
			overlaymove.x = overlaymove.x + _Time;
			
			half4 c = tex2D (_MainTex, IN.uv_MainTex);
			half4 ov = tex2D (_Overlay, overlaymove);
			o.Albedo = c.rgb + ov.rgb;
			o.Alpha = c.a;
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

Thanks for your time!

The problem is related to the internal precision of mobile GPUs. Many mobile chips don’t offer full 32-bit floating-point precision in the fragment shader, but only 16 bits or even less, so a steadily growing value like _Time eventually loses the fractional accuracy needed for smooth UV scrolling.

One way to regain precision is to add a vertex modifier, do the time calculation there, wrap the result to the 0-1 range, and then add it to the existing UVs. The fragment shader then just reads the modified UVs. This also saves you some clock cycles in the fragment shader.

The reason to do it in the vertex shader is that the precision is higher in the vertex shader than in the fragment shader.
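Here is a sketch of what that could look like for your shader. The shader name and the `scrollUV` member are my own illustrative choices; the key points are the `vertex:vert` directive, wrapping with `frac()` so the value stays small, and passing the result to the fragment stage through a custom Input member:

```
Shader "Rage/Scroll Overlay Precise" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_Overlay ("Base (RGB)", 2D) = "white" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		#pragma surface surf Lambert vertex:vert

		sampler2D _MainTex;
		sampler2D _Overlay;

		struct Input {
			float2 uv_MainTex;
			float2 scrollUV; // custom member, filled in the vertex modifier
		};

		void vert (inout appdata_full v, out Input o) {
			UNITY_INITIALIZE_OUTPUT(Input, o);
			// Compute the scroll offset at full vertex-shader precision
			// and wrap it to 0-1 so it never grows into a large number.
			o.scrollUV = v.texcoord.xy;
			o.scrollUV.x += frac(_Time.y);
		}

		void surf (Input IN, inout SurfaceOutput o) {
			half4 c = tex2D (_MainTex, IN.uv_MainTex);
			half4 ov = tex2D (_Overlay, IN.scrollUV);
			o.Albedo = c.rgb + ov.rgb;
			o.Alpha = c.a;
		}
		ENDCG
	}
	FallBack "Diffuse"
}
```

Note this sketch ignores the overlay’s tiling/offset settings; apply them in `vert` if you need them.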

Somehow, your picture isn’t visible.

Just a guess, though: maybe you want to use _Time.y (in the line `overlaymove.x = overlaymove.x + _Time;`) instead of the full vector.

ShaderLab builtin values

I know this is old, but it is still very much an issue. So for anyone else who encounters this: as @LyveIG pointed out, this is a 16-bit-or-less GPU precision issue, not just an Android issue — I have encountered it on iOS as well. My solution was to write a very simple singleton that updates a global shader value with the time each frame. This way I can use one CPU-calculated value across all my shaders.
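A minimal sketch of such a singleton, assuming a global property named `_GlobalScrollTime` (the class and property names here are my own — adapt them to your project):

```csharp
using UnityEngine;

// Updates a global shader property once per frame from the CPU,
// wrapping the value so it never grows large enough to lose
// precision on low-precision mobile GPUs.
public class GlobalShaderTime : MonoBehaviour
{
    static GlobalShaderTime instance;

    void Awake ()
    {
        // Simple singleton: keep exactly one instance alive across scene loads.
        if (instance != null) { Destroy(gameObject); return; }
        instance = this;
        DontDestroyOnLoad(gameObject);
    }

    void Update ()
    {
        // Mathf.Repeat keeps the value in [0,1); UV scrolling only
        // needs the fractional part anyway.
        Shader.SetGlobalFloat("_GlobalScrollTime", Mathf.Repeat(Time.time, 1f));
    }
}
```

In the shader, declare `float _GlobalScrollTime;` and use it in place of _Time.y.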