Writing Native GLSL

Hi

I have some fairly complex shaders in Cg that get compiled for iOS.

I set their pragmas to:

#pragma target 3.0
#pragma glsl
#pragma only_renderers gles d3d9

(Fragment and Vertex shaders, no surface shaders)

They run well so far on iOS. My question: would they be better optimized if I wrote them in native GLSL?

I assume they go through the same GLSL optimizer either way, but you could do a test and find out.

Sometimes the built-in shader converter generates unnecessarily inefficient code, such as dependent texture reads (texture coordinates calculated in the fragment shader), which are slow on iOS.
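To illustrate the difference (a minimal sketch, not code from the converter itself), here are two fragment-shader variants sampling a hypothetical `_MainTex`. In the first, the coordinate is modified inside the fragment shader, so the GPU cannot prefetch the texel; in the second, the adjusted coordinate is computed in the vertex shader and passed through a varying, which older PowerVR hardware handles much faster:

```glsl
// Variant 1 - dependent texture read (slow on older PowerVR GPUs):
// the coordinate handed to texture2D() is computed in the fragment shader.
varying mediump vec2 uv;
uniform lowp sampler2D _MainTex;
void main()
{
    gl_FragColor = texture2D(_MainTex, uv + vec2(0.01, 0.0)); // offset computed per-fragment
}

// Variant 2 - non-dependent read: do the same offset in the vertex shader
// and pass the final coordinate down, so the sampler sees an unmodified varying.
//
// Vertex shader:
//     varying mediump vec2 uvShifted;
//     void main()
//     {
//         gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
//         uvShifted = gl_MultiTexCoord0.xy + vec2(0.01, 0.0);
//     }
//
// Fragment shader:
//     varying mediump vec2 uvShifted;
//     uniform lowp sampler2D _MainTex;
//     void main()
//     {
//         gl_FragColor = texture2D(_MainTex, uvShifted); // coordinate used as-is
//     }
```

The `vec2(0.01, 0.0)` offset is just a placeholder for whatever per-pixel coordinate math the converter might emit.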

Thanks for the answers.

Mhm… I don’t know any GLSL yet, so it would take quite an effort to translate it.

Is it possible to see somewhere what GLSL code Unity generates?

And, if I have GLSL code, will that translate to something D3D can use?

You can write native GLSL shaders, yeah.

But Unity will not convert those to Cg for you; it only converts Cg over to the other platforms.

Here’s a really simple shader I wrote for a project I’m working on. It has a GLSL subshader first, then a Cg equivalent afterwards (the hardware will run the first subshader it comes to that it can render).

However, Unity’s Cg-to-GLSL conversion is, for the most part, pretty good. I’d only resort to writing raw GLSL if you really need to (i.e. you know the converter is generating unoptimised code that you’re certain is slowing down your framerate).

Shader "BeforeDeath/Unlit Outline" {
	Properties {
		_MainTex ("Diffuse (RGB)", 2D) = "white" {}
	}

	Category {
		Tags { "RenderType"="Opaque" "IgnoreProjector"="True" "Queue"="Geometry" }
		Lighting Off
		LOD 200

		SubShader {
			Pass {
				Tags { "LightMode" = "Always" }
				GLSLPROGRAM
					#pragma glsl_no_auto_normalization

					#ifdef VERTEX
					varying lowp vec2 uv;
					varying lowp float o;
					void main()
					{
						gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
						uv.xy = gl_MultiTexCoord0.xy;
						lowp vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex);
						lowp vec3 normal = vec3(gl_NormalMatrix * gl_Normal);
						o = dot(eyeVec, -normal) / gl_Position.w;
					}
					#endif

					#ifdef FRAGMENT
					uniform lowp sampler2D _MainTex;
					varying lowp vec2 uv;
					varying lowp float o;
					void main()
					{
						gl_FragColor = texture2D(_MainTex, uv) * vec4(step (0.15, o));
					}
					#endif
				ENDGLSL
			}
		}

		SubShader {
			Pass {
				Tags { "LightMode" = "Always" }
				CGPROGRAM
					#pragma vertex vert
					#pragma fragment frag
					#pragma fragmentoption ARB_precision_hint_fastest
					#pragma only_renderers d3d9 opengl

					#include "UnityCG.cginc"

					struct v2f {
						float4	pos : SV_POSITION;
						fixed3	uv : TEXCOORD0;
					};

					v2f vert (appdata_base v) {
						v2f o;

						o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
						o.uv.xy = v.texcoord.xy;
						o.uv.z = dot(ObjSpaceViewDir(v.vertex), v.normal) / o.pos.w;

						return o;
					}

					sampler2D _MainTex;

					fixed4 frag(v2f i) : COLOR {
						return tex2D(_MainTex, i.uv.xy) * step(0.15, i.uv.z); // Colour outline.
					}
				ENDCG
			}
		}
	}
	FallBack "BeforeDeath/Unlit"
}

That’s not an iOS problem; that’s an old PowerVR GPU problem.

https://developer.apple.com/library/ios/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/OpenGLESPlatforms/OpenGLESPlatforms.html