Double precision floating point in Cg code?

Does Unity's Cg support double data types?

I tested this Mandelbrot code with double hamburgerwithcheese = 1.0; added, and just declaring that variable broke the shader.

Shader "Fractal" {
    Properties {
       zoom ("zoom", Float) = 1
	   hz ("hz", Float) = 20000
	   }
	SubShader {
    Pass {

		CGPROGRAM
		#pragma vertex vert
		#pragma fragment frag
		#pragma target 3.0
		#include "UnityCG.cginc"

		uniform float zoom;
		uniform float hz;
		struct v2f {
		    float4 pos : SV_POSITION;
		    float2  uv : TEXCOORD0;
		};

		v2f vert (appdata_base v)
		{
		    v2f o;
		    o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
		    o.uv = v.texcoord;
		    return o;
		}

		half4 frag (v2f i) : COLOR
		{
		//double hamburgerwithcheese = 1.0;
				float2 mcoord;
				float2 coord = float2(0.0,0.0);
				mcoord.x = (((1.0-i.uv.x)*3.5)-hz/zoom)*zoom*.0001;
				mcoord.y = ((i.uv.y*2.0)-1.0)*zoom*.0001;
				float iteration = 0.0;
				const float _MaxIter = 29.0;
				const float PI = 3.14159;
				float xtemp;
				for ( iteration = 0.0; iteration < _MaxIter; iteration += 1.0) {
					if ( coord.x*coord.x + coord.y*coord.y > 2.0*(cos(fmod(0,2.0*PI))+1.0) )
					break;
					xtemp = coord.x*coord.x - coord.y*coord.y + mcoord.x;
					coord.y = 2.0*coord.x*coord.y + mcoord.y;
					coord.x = xtemp;
				}
				float val = fmod(((iteration/_MaxIter)),1.0);
				float4 color;

				color.r = clamp((3.0*abs(fmod(2.0*val,1.0)-0.5)),0.0,1.0);
				color.g = clamp((3.0*abs(fmod(2.0*val+(1.0/3.0),1.0)-0.5)),0.0,1.0);
				color.b = clamp((3.0*abs(fmod(2.0*val-(1.0/3.0),1.0)-0.5)),0.0,1.0);
				color.a = 1.0;

				return color;

		}
		ENDCG

	    }
	}
	Fallback "VertexLit"
}

I have to confess that I didn't think Cg supported double datatypes at all, quite apart from Unity's implementation of it.

Certainly, double is not mentioned in the “Types” section of the Cg language specification, although it does appear in the NVIDIA Cg tutorial.

Either way, since adding that single variable declaration causes the shader to fail, I think it’s fair to say that Unity doesn’t support double datatypes in Cg :slight_smile:
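
For reference, here’s roughly the minimal repro I mean, assuming the failure isn’t specific to the Mandelbrot code (the fragment body here is a made-up example):

    // Hypothetical minimal test: the declaration alone should trigger the failure.
    half4 frag (v2f i) : COLOR
    {
        double d = 1.0; // this line is enough to break compilation
        return half4(0.0, 0.0, 0.0, 1.0);
    }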

It goes further than just Cg. Most GPUs don’t support such formats in hardware. There are ways to emulate it in code alone, but I’d expect it to be awfully slow.
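
For the curious, the usual emulation trick is “double-float” (sometimes called df64) arithmetic, which stores one value as an unevaluated sum of two floats. Here is a minimal sketch in Cg based on the well-known Knuth two-sum and Dekker split transformations; the ds_* function names are my own, and this is an illustration rather than production code:

    // Double-float sketch: a value is the unevaluated sum hi + lo of two
    // floats, giving roughly twice the mantissa bits of a single float.
    // Caveat: this relies on exact float rounding; fast-math style
    // optimization of expressions like (a + b) - a will silently break it.

    float2 ds_set(float a)              // promote a float to double-float
    {
        return float2(a, 0.0);
    }

    float2 ds_add(float2 a, float2 b)   // addition via Knuth two-sum
    {
        float t1 = a.x + b.x;
        float e  = t1 - a.x;
        float t2 = ((b.x - e) + (a.x - (t1 - e))) + a.y + b.y;
        float2 c;
        c.x = t1 + t2;                  // renormalize into (hi, lo)
        c.y = t2 - (c.x - t1);
        return c;
    }

    float2 ds_mul(float2 a, float2 b)   // multiplication via Dekker split
    {
        const float split = 8193.0;     // 2^13 + 1, splits a 24-bit mantissa
        float cona = a.x * split;
        float conb = b.x * split;
        float a1 = cona - (cona - a.x);
        float a2 = a.x - a1;
        float b1 = conb - (conb - b.x);
        float b2 = b.x - b1;
        float c11 = a.x * b.x;          // high-order product
        float c21 = a2 * b2 + (a2 * b1 + (a1 * b2 + (a1 * b1 - c11)));
        float c2  = a.x * b.y + a.y * b.x; // cross terms
        float t1 = c11 + c2;
        float e  = t1 - c11;
        float t2 = a.y * b.y + ((c2 - e) + (c11 - (t1 - e))) + c21;
        float2 c;
        c.x = t1 + t2;                  // renormalize into (hi, lo)
        c.y = t2 - (c.x - t1);
        return c;
    }

In the Mandelbrot shader above you’d hold coord and mcoord as these float2 double-float values and run the iteration through ds_add/ds_mul instead of the native operators, buying extra zoom depth at a significant per-pixel cost.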