Get unclamped floats as shader output

I want to render high-precision float values to a texture, but the values I read are clamped to [0, 1]. Is there any way to get unclamped values?

Here is the fragment portion of the shader code:

float4 frag (v2f i) : SV_TARGET
{
    return float4(0.098, 0.698, 200, 100000); // Obviously a contrived example.
}

Here is the code creating the texture to render to:

RenderTextureDescriptor rtd = new RenderTextureDescriptor(1, 1, RenderTextureFormat.ARGBFloat);
RenderTexture rt = RenderTexture.GetTemporary(rtd);
RenderTexture.active = rt;
  
mainCamera.SetReplacementShader(selectionShader, "");
mainCamera.Render(); // OnPostRender will be called
mainCamera.ResetReplacementShader();

Read code (in OnPostRender):

Texture2D hitTexture = new Texture2D(1, 1, TextureFormat.RGBAFloat, false);
Rect rect = new Rect(0, 0, 1, 1);
hitTexture.ReadPixels(rect, 0, 0, false);
     
Color pixelSample = hitTexture.GetPixel(0, 0);
Debug.Log("Sample: " + pixelSample);

The output is “RGBA(0.098, 0.698, 1.000, 1.000)”, rather than “RGBA(0.098, 0.698, 200, 100000)” as hoped/expected.

Here is the complete shader code:

Shader "Unlit/NewUnlitShader"
{
    SubShader
    {
        Tags{ "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
            };

            struct FragOut
            {
                float value : SV_TARGET;
            };

            v2f vert(appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            float4 frag(v2f i) : SV_TARGET
            {
                return float4(0.098, 0.698, 200, 100000);
            }
            ENDCG
        }
    }
}

I just threw together a quick test case in my testing project. Since I'm still using Unity 5.6.1f1 I can't use the "RenderTextureDescriptor" class, as it was introduced in 2017.1, I guess.

I create my render texture like this:

rt = new RenderTexture(1, 1, 0, RenderTextureFormat.ARGBFloat, RenderTextureReadWrite.Linear);

Note that the chosen color space is linear. I'm not sure if sRGB is true by default. You can try setting RenderTextureDescriptor.sRGB to false. Maybe there is a color space conversion going on.
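
For example, something like this (a minimal sketch; RenderTextureDescriptor.sRGB exists on the descriptor and the last Texture2D constructor argument marks the readback texture as linear, but check what your Unity version offers):

RenderTextureDescriptor rtd = new RenderTextureDescriptor(1, 1, RenderTextureFormat.ARGBFloat);
rtd.sRGB = false; // no sRGB conversion when writing to the target
RenderTexture rt = RenderTexture.GetTemporary(rtd);

// Mark the CPU-side texture as linear too (the last constructor argument).
Texture2D hitTexture = new Texture2D(1, 1, TextureFormat.RGBAFloat, false, true);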

Everything else is pretty much the same as your posted code. I print the color channels manually like this:

    Color p = hitTexture.GetPixel(0, 0);
    Debug.Log("r: " + p.r + " g:"+p.g+" b:"+p.b+" a:"+p.a);

Even though I don't get the desired values, I still get these values:

r: 0.09796143 g:0.6977539 b:200 a:65504

Any value larger than 65504 comes out as 65504. However, 65503 comes out as 65472. So it looks like somewhere along the line a half-precision float is used, which has an 11-bit significand (10 explicit mantissa bits plus the implicit leading 1).

65504 in binary is 1111111111100000. That's 11 significant bits. So the smallest change possible at this magnitude is 32, which is exactly what we see here.

This could be a general limitation, or maybe a limitation of my hardware (NVIDIA GeForce GTX 750 Ti).
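
If you want to check the half-precision behaviour on the C# side, here is a small sketch (assuming your Unity version has Mathf.FloatToHalf and Mathf.HalfToFloat):

// For values in [32768, 65536) the half exponent is 15, and with 10 explicit
// mantissa bits the spacing between representable values is 2^(15 - 10) = 32.
Debug.Log("half spacing near 65504: " + Mathf.Pow(2f, 15 - 10)); // 32

// Round-tripping through half shows the quantization directly. Depending on the
// rounding mode, 65503 comes back as either 65472 (truncation) or 65504 (nearest).
Debug.Log(Mathf.HalfToFloat(Mathf.FloatToHalf(65503f)));
Debug.Log(Mathf.HalfToFloat(Mathf.FloatToHalf(65504f))); // 65504, the largest finite half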

Keep in mind that you only render a single pixel. If your camera's pixel rectangle is the “normal” screen rect, you actually sample the top-left pixel. If the camera's pixel rect has a width and height of 1, it actually samples the center of the screen. Maybe you aren't sampling the pixel you expect?
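
You can log the camera's pixel rect to check which region is actually being rendered (using the mainCamera field from the original code), e.g.:

Debug.Log("pixelRect: " + mainCamera.pixelRect + " (" + mainCamera.pixelWidth + "x" + mainCamera.pixelHeight + ")");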

I came back to this issue and solved it by using RenderWithShader:

RenderTextureDescriptor rtd = new RenderTextureDescriptor(camera.pixelWidth, camera.pixelHeight, RenderTextureFormat.ARGBFloat, 24);

RenderTexture selectionTexture = RenderTexture.GetTemporary(rtd);

camera.SetTargetBuffers(selectionTexture.colorBuffer, selectionTexture.depthBuffer);
camera.RenderWithShader(selectionShader, "");
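
To get the values out afterwards, one option is still ReadPixels on the float target (a sketch using the names above; x and y are placeholders for the pixel you want, and as noted earlier this path may cap values at half precision on some setups, in which case the compute-shader approach below avoids it):

RenderTexture.active = selectionTexture;
Texture2D hitTexture = new Texture2D(1, 1, TextureFormat.RGBAFloat, false);
hitTexture.ReadPixels(new Rect(x, y, 1, 1), 0, 0, false);
hitTexture.Apply();
Color pixelSample = hitTexture.GetPixel(0, 0);
RenderTexture.active = null;
RenderTexture.ReleaseTemporary(selectionTexture);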

In case you still haven't found a way to get the full-precision pixels out of the RenderTexture: I figured out that you can use a compute shader to do it.

I'll paste my code here just in case someone needs it.

In a MonoBehaviour:

public Vector4[] GetPixels(RenderTexture texture, ComputeBuffer buffer, RectInt roi)
{
    // Tell the compute shader which region of the texture to copy.
    MapGeneratorShader.SetInts("GetPixelsStartPixel", ToIntArray(roi.position));
    MapGeneratorShader.SetInts("GetPixelsSize", ToIntArray(roi.size));

    var getPixelKernel = MapGeneratorShader.FindKernel("GetPixels");
    MapGeneratorShader.SetTexture(getPixelKernel, "GetPixelsTexture", texture);
    MapGeneratorShader.SetBuffer(getPixelKernel, "GetPixelsPixels", buffer);

    // One 8x8 thread group per 8x8 block of the ROI (width and height separately).
    MapGeneratorShader.Dispatch(getPixelKernel, Mathf.CeilToInt((float)roi.width / 8), Mathf.CeilToInt((float)roi.height / 8), 1);

    // Read the results back to the CPU from the buffer we dispatched into.
    var pixelsFloats = new Vector4[roi.width * roi.height];
    buffer.GetData(pixelsFloats);
    return pixelsFloats;
}

In a ComputeShader:

#pragma kernel GetPixels

// Source texture and destination buffer; the ROI is given by its start pixel and size.
RWTexture2D<float4> GetPixelsTexture;
RWStructuredBuffer<float4> GetPixelsPixels;
int2 GetPixelsStartPixel;
int2 GetPixelsSize;

[numthreads(8,8,1)]
void GetPixels (uint3 id : SV_DispatchThreadID)
{
    int2 bufferPos = id.xy;

    // Threads outside the ROI (from rounding the dispatch size up) must not write.
    if (bufferPos.x >= GetPixelsSize.x || bufferPos.y >= GetPixelsSize.y)
        return;

    // Copy one texel of the ROI into the linear output buffer (row-major).
    int bufferIdx = bufferPos.x + bufferPos.y * GetPixelsSize.x;
    GetPixelsPixels[bufferIdx] = GetPixelsTexture[id.xy + GetPixelsStartPixel];
}

Note that the ComputeBuffer needs to be created beforehand with one element per pixel of the ROI (width × height). This assumes that the texture is in RGBAFloat format. You can change the element type of the ComputeBuffer to match your pixel format and it should just work.
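
For completeness, creating the buffer and calling the method could look roughly like this (a sketch; renderTexture stands for whatever float render texture you rendered into, the 16-byte stride matches float4/Vector4, and because the compute shader declares the texture as RWTexture2D the render texture may need enableRandomWrite set before it is created):

// One float4 (16 bytes) per pixel of the region of interest.
RectInt roi = new RectInt(0, 0, 256, 256);
ComputeBuffer buffer = new ComputeBuffer(roi.width * roi.height, sizeof(float) * 4);
Vector4[] pixels = GetPixels(renderTexture, buffer, roi);
buffer.Release();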