How do I render scalars to a Floating Point RenderTexture?

Hi all,

I’m interested in transferring floats to a shader by rendering them into a RenderTexture using the RFloat RenderTextureFormat. But I find the documentation for this somewhat sparse, and it’s unclear to me how I’m supposed to use this format on both the CPU and the GPU side.

My goal is to give an object a color that corresponds to the float I need and then render this object using a camera that writes to the RFloat RenderTexture. Does anyone know how the RFloat format works when a camera is rendering into it?

Suppose I simply make a color out of the four 8-bit components stored in a 32-bit float by bit-shifting each color channel out of the float, i.e. Color.r becomes the first 8 bits of the float, Color.g the next 8 bits, and so on. If I use the camera to render an object with that color into the RFloat RenderTexture, will the texture then end up storing the original float? And if so, how do I get it back out of the texture in Cg?
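
In C#, the packing I have in mind would look roughly like this (just a sketch of the idea with a made-up method name; I haven’t verified it against RFloat):

private Color32 PackFloatBits(float value)
{
    // Reinterpret the float's bits as an int, then shift one byte into each channel.
    uint bits = (uint)System.BitConverter.ToInt32(System.BitConverter.GetBytes(value), 0);
    return new Color32(
        (byte)(bits >> 24), // r = first (most significant) 8 bits
        (byte)(bits >> 16), // g = next 8 bits
        (byte)(bits >> 8),  // b = next 8 bits
        (byte)(bits));      // a = last (least significant) 8 bits
}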

Normally, when you read from a texture, you store the result in a float4, e.g. something like this:

float4 color = tex2D(sampler, uv_coords);

But in the shader, I don’t want a 4-component color vector when reading from this texture, I just want a float; it’s a floating-point texture, after all. Is there a version of tex2D that just returns a float, or am I supposed to repack the float4 into the original float? How does this work?

[PS: I also posted this identical question on the forum.]

I have recently solved this problem in a very ugly and very low-level fashion myself. To get the floats properly transferred to the GPU, I encoded them into colors in big-endian order like this:

// Requires: using System; using UnityEngine;
private Color EncodeFloatInColor(float scalar)
{
    // BitConverter.GetBytes returns the float's bytes in machine order
    // (little-endian on common Unity targets), so writing them in reverse
    // puts the most significant byte into the red channel.
    byte[] floatBytes = BitConverter.GetBytes(scalar);
    return new Color32(floatBytes[3], floatBytes[2], floatBytes[1], floatBytes[0]);
}
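
To give an idea of how this can be used on the CPU side, here is a minimal sketch that packs an array of floats into one pixel each of a Texture2D; the method name and texture settings are my assumptions, not part of the original setup. Point filtering and a linear (non-sRGB), uncompressed format matter, otherwise the bit patterns get mangled before the shader sees them.

private Texture2D PackFloatsIntoTexture(float[] values)
{
    // Uncompressed ARGB32, no mipmaps, linear color space.
    Texture2D packed = new Texture2D(values.Length, 1, TextureFormat.ARGB32, false, true);
    packed.filterMode = FilterMode.Point;

    for (int i = 0; i < values.Length; i++)
        packed.SetPixel(i, 0, EncodeFloatInColor(values[i]));

    packed.Apply();
    return packed;
}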

I then spent an excruciating amount of time studying the bit pattern of 32-bit floats in big-endian notation using the Wikipedia article on the IEEE 754 single-precision floating-point format. The layout is 1 sign bit, 8 exponent bits and 23 significand bits, and a normal float’s value is (-1)^sign * (1 + significand / 2^23) * 2^(exponent - 127), which is exactly what this decoder function for shaders, implemented in Cg/HLSL, reconstructs:

inline float UnpackFloatRGBA(float4 c)
{
    // Convert the sampled color back to its byte values. Sampling an 8-bit
    // channel yields n/255, which does not always round-trip exactly through
    // float math, so round before converting to int instead of truncating.
    int4 bytes = (int4) round(c * 255.0);

    // Extract the sign of the float, i.e. the most significant bit of the
    // red channel (and of the overall float structure).
    int sign = (bytes.r & 128) > 0 ? -1 : 1;

    // The 8 exponent bits are spread across the red and green channels:
    // the lower 7 bits of red, followed by the top bit of green.
    int expR = (bytes.r & 127) << 1;
    int expG = bytes.g >> 7;

    int exponent = expR + expG;

    // The remaining 23 bits constitute the float's significand. They are
    // spread across the lower 7 bits of green, all of blue and all of alpha.
    int signifG = (bytes.g & 127) << 16;
    int signifB = bytes.b << 8;

    float significand = (signifG + signifB + bytes.a) / pow(2, 23);

    // Add the implicit leading 1 of a normalized float.
    // (Zero, denormals, Inf and NaN are not handled here.)
    significand += 1;

    // We now know the sign, the exponent and the significand of the float
    // and can reconstruct it as (-1)^sign * 1.significand * 2^(exponent - 127):
    return sign * significand * pow(2, exponent - 127);
}

The idea is to assign the Color generated by EncodeFloatInColor to some object or series of objects (in my case I render them into individual pixels of a texture) and then read the color back with a call to tex2D in the shader. The float4 returned in the shader will contain the float data represented as an RGBA color, with its 4x8 bits stored in the four channels of the color. The Cg function UnpackFloatRGBA takes that ‘color’ and restores the original float from it without any precision loss, provided the texture is an uncompressed 32-bit RGBA texture sampled with point filtering and no sRGB conversion.
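
For illustration, reading it back in a fragment shader could look something like this; the sampler name _PackedTex and the v2f layout are made-up names, not part of my actual setup:

sampler2D _PackedTex; // assumed name of the texture holding the encoded colors

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

float4 frag(v2f i) : SV_Target
{
    // Sample the packed color and restore the original float.
    float4 packedColor = tex2D(_PackedTex, i.uv);
    float value = UnpackFloatRGBA(packedColor);

    // Use the value however you need; here it just tints the output.
    return float4(value, value, value, 1);
}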

I don’t know if that’s useful to anyone else but me, but it was a total pain in the ass to write, so I felt like sharing it. :wink: I will close this question now.

What’s the range of your floating-point values? If it’s [0…1], then you can use a 32-bit ARGB texture and the EncodeFloatRGBA / DecodeFloatRGBA helpers from UnityCG.cginc in your shaders.
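
Roughly like this (a minimal sketch only; _ValueTex is a made-up sampler name, and EncodeFloatRGBA expects a value in [0..1)):

#include "UnityCG.cginc"

sampler2D _ValueTex; // made-up name: ARGB32 texture holding the encoded values

// Write side: encode a [0..1) value into an RGBA color before outputting it.
float4 EncodeValue(float v)
{
    return EncodeFloatRGBA(v);
}

// Read side: sample the color and decode it back into the original value.
float DecodeValue(float2 uv)
{
    return DecodeFloatRGBA(tex2D(_ValueTex, uv));
}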