Need help baking very precise values into just about any format I can get away with

Hi there! I’ve been pulling my hair out all day trying to get a bunch of objects in HDRP to bake their fragments’ world positions into a texture, or a float4[,] array. I believe I have most of this working, but I need to be able to bake large numbers - specifically in the range of -360 to +360, plus a fair number of decimal places, for X, Y, and Z.

I can get the objects I want to render properly in UV space (shader below). However, when they get to the texture and I read them back, it appears that Unity is clamping the output values between 0 and 59. I’m not sure where this is happening - whether it’s the rendering process or the limits of the RenderTexture itself. I honestly suspect it’s my rendering process, because this fails even when I’m using an R32G32B32_SInt texture, and supporting that format only to truncate the values would make no sense whatsoever.

Any idea?

The shader I’m using to render objects to a texture is below. I’m basically putting each object, one by one, on an exclusive layer, making sure it’s visible inside the frustum of a camera that has a RenderTexture as its target, and then calling Camera.Render() manually. If there’s a better way, I’m all ears. As far as I can tell this is just about the only way, because I need the rasterizer to get at the fragment positions… it doesn’t seem possible to do this from within a compute shader…??
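For context, the C# side driving the bake looks roughly like this (a simplified sketch, not my exact code - layer 30 and the method name are placeholders):

// Simplified sketch of the per-object bake (layer 30 is an arbitrary exclusive layer).
void BakeObject(Renderer renderer, Camera bakeCamera, Material worldPositionMaterial, RenderTexture targetTexture)
{
    int oldLayer = renderer.gameObject.layer;
    Material oldMat = renderer.sharedMaterial;

    renderer.gameObject.layer = 30;                  // isolate the object on its own layer
    renderer.sharedMaterial = worldPositionMaterial; // swap in the world-position shader
    bakeCamera.cullingMask = 1 << 30;                // the camera only sees that layer
    bakeCamera.targetTexture = targetTexture;        // render into the RenderTexture
    bakeCamera.Render();                             // manual render

    renderer.sharedMaterial = oldMat;
    renderer.gameObject.layer = oldLayer;
}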

v2f vert(appdata v){
    v2f o;
    // Object space -> world space position, passed out through the color.
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    o.color = float4(worldPos, 1);
    // Remap the lightmap UVs from [0,1] to clip space [-1,1], so the mesh is
    // rasterized in its lightmap-UV layout instead of its on-screen position.
    float2 realUVBroh = v.uv * unity_LightmapST.xy + unity_LightmapST.zw;
    o.pos = float4(realUVBroh * 2.0 - 1.0, 1.0, 1.0);
    return o;
}

float4 frag(v2f i) : SV_Target{
    // I'm using this weird code as a test: just rendering out an absurd value.
    // As the most recent attempt, I'm rendering to an SInt render texture,
    // hence the *1000 below, which of course is to preserve decimal places.
    float4 outputColor = float4(300, -40.523, 355.23, 1);
    return outputColor * 1000;
}

Does it work with a floating point format like R32G32B32_SFloat or R16G16B16_SFloat?

No, I’m afraid not - the first one is actually the first thing I tried!
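For reference, this is roughly how I was creating the target texture (a sketch, not my exact code; the size is arbitrary, and GraphicsFormat lives in UnityEngine.Experimental.Rendering):

// Sketch: creating the RenderTexture with an explicit 32-bit float format, no depth buffer.
RenderTexture targetTexture = new RenderTexture(1024, 1024, 0, GraphicsFormat.R32G32B32_SFloat);
targetTexture.Create();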

I would debug it with RenderDoc. Don’t see why this wouldn’t work.

Oh damn, I wasn’t aware. Thanks, I’ll try it right now.

Update from me. First of all, it turns out RenderDoc didn’t help much, because as far as I can tell I can’t get it to capture frame data for a camera that renders to a RenderTexture (though I’m most likely missing something).

However, I did use Unity’s Frame Debugger and found something odd: the pipeline renders first into a B10G11R11 buffer, and then, further down the frame, draws THAT into my render texture. B10G11R11 is an unsigned packed-float format, so that honestly 100% explains what’s going on - but it doesn’t help me too much.

I’ll be honest, this seems like a broken pipeline. What’s the point of supporting R32G32B32 if, in order to actually RENDER to it, the pipeline will lose all the data? I must be missing something. I suppose there are compute shaders, but unfortunately they won’t help me in this case.

I think… my next attempt will be to try command buffers. Maybe I can use them to bypass the camera system entirely?

Update from me. Good LORD this has been a brutal process.

Command buffers work! For anyone who finds this thread while searching, here’s my Command Buffer code:

CommandBuffer commandBuffer = new CommandBuffer();
commandBuffer.name = "Bake World Positions";
//this is a R32G32B32 RenderTexture
commandBuffer.SetRenderTarget(targetTexture);
commandBuffer.ClearRenderTarget(true, true, Color.clear);

//I have no idea if this code is correct - I do not yet understand matrices.
//However, I'm 99% sure it doesn't matter, because my shader transforms the
//vertices straight to clip space itself. ...I think.
commandBuffer.SetViewMatrix(Matrix4x4.TRS(new Vector3(0,0,-10), Quaternion.identity, Vector3.one));
commandBuffer.SetProjectionMatrix(Matrix4x4.Ortho(-1, 1, -1, 1, 0.1f, 1000));

Matrix4x4 renderMatrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, Vector3.one);

int safetyCount = 0;
foreach (MeshFilter meshFilter in meshFilters)
{
    //I was encountering a situation where the editor was crashing, so I started doing this in passes.
    safetyCount++;
    if (safetyCount > 30)
    {
        Debug.LogError("Safety count exceeded. Aborting.");
        break;
    }
    // Note: swapping sharedMaterial is leftover from the camera approach;
    // DrawMesh renders with the material you pass it, so the swap isn't strictly needed here.
    Material oldMat = meshFilter.GetComponent<Renderer>().sharedMaterial;
    meshFilter.GetComponent<Renderer>().sharedMaterial = worldPositionMaterial;
    commandBuffer.DrawMesh(meshFilter.sharedMesh, renderMatrix, worldPositionMaterial);
    meshFilter.GetComponent<Renderer>().sharedMaterial = oldMat;

}
Graphics.ExecuteCommandBuffer(commandBuffer);

HOWEVER - now the command buffer is rendering every object on top of itself. It would appear that while rendering from a command buffer, the built-in shader variable unity_LightmapST is not being set. I think.

Now to figure out if I can get that #%@# data and pass it into the command buffer.

OKAY! It works. Good god, that was long and painful.

for anyone who finds this, the answer to the above is, I assume, that built-in shader variables like unity_LightmapST and even unity_ObjectToWorld are set by the rendering pipeline, so if you’re using command buffers, they don’t exist, because you’re circumventing that pipeline. However, you can simply pass the data into the material manually: define your own Matrix4x4 and Vector4 properties in the shader, then set them with Material.SetMatrix and Material.SetVector.
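Concretely, the shader side ends up looking something like this (a sketch reusing the appdata/v2f structs from my earlier shader; the property names match the SetVector/SetMatrix calls in the script further down):

// Manually-fed replacements for the built-in variables:
float4 _LightmapPosition;      // xy = scale, zw = offset (Renderer.lightmapScaleOffset)
float4x4 _localToWorldMatrix;  // the renderer's transform.localToWorldMatrix

v2f vert(appdata v){
    v2f o;
    float3 worldPos = mul(_localToWorldMatrix, v.vertex).xyz;
    o.color = float4(worldPos, 1);
    float2 uv = v.uv * _LightmapPosition.xy + _LightmapPosition.zw;
    o.pos = float4(uv * 2.0 - 1.0, 1.0, 1.0);
    return o;
}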

However, there’s one more issue - you can’t chain a bunch of DrawMesh calls this way, because the material exists outside the command buffer: by the time the buffer executes, the material holds whatever values you set last, and every draw would use those. So you have to execute the command buffer once per draw call.
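(Side note for anyone who finds this later: CommandBuffer.DrawMesh has an overload that takes a MaterialPropertyBlock, and property-block values are captured per draw when the buffer is recorded, so something like the sketch below should let you keep all the draws in one buffer. I haven’t verified it in this setup.)

// Untested sketch: per-draw values via MaterialPropertyBlock instead of the shared material.
Renderer renderer = meshFilter.GetComponent<Renderer>();
MaterialPropertyBlock props = new MaterialPropertyBlock();
props.SetVector("_LightmapPosition", renderer.lightmapScaleOffset);
props.SetMatrix("_localToWorldMatrix", renderer.transform.localToWorldMatrix);
// Overload: (mesh, matrix, material, submeshIndex, shaderPass, properties)
commandBuffer.DrawMesh(meshFilter.sharedMesh, renderMatrix, worldPositionMaterial, 0, 0, props);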

Lastly, I had some weirdness with ClearRenderTarget - I moved it to Start(), but even so, the RT was being cleared each time the command buffer executed. I don’t fully understand why, but the setup below works 100% of the time:

// Needed at the top of the file: using UnityEngine; using UnityEngine.Rendering;
// using System.Collections.Generic; and using UnityEditor (for GameObjectUtility).
public RenderTexture targetTexture;
public Material worldPositionMaterial;

int meshIndex = 0;
List<MeshFilter> meshFilters = new List<MeshFilter>();
public Matrix4x4 muhViewMatrix = Matrix4x4.TRS(new Vector3(0, 0, -10), Quaternion.identity, Vector3.one);
public Matrix4x4 muhProjectionMatrix = Matrix4x4.Ortho(-1, 1, -1, 1, 0.1f, 1000);
// Start is called before the first frame update
void Start()
{
    ClearTex();

    // Find all static GameObjects that have a MeshFilter + Renderer and contribute to GI
    meshIndex = 0;
    GameObject[] allObjects = FindObjectsByType<GameObject>(FindObjectsSortMode.None);
    meshFilters = new List<MeshFilter>();
    //Debug.Log("Got this many objects: " + allObjects.Length);
    foreach (GameObject obj in allObjects)
    {
        if (obj.isStatic && obj.GetComponent<MeshFilter>() != null && obj.GetComponent<Renderer>() != null)
        {
            if ((GameObjectUtility.GetStaticEditorFlags(obj) & StaticEditorFlags.ContributeGI) != 0)
            {
                meshFilters.Add(obj.GetComponent<MeshFilter>());
            }
        }
    }

    commandBuffer = new CommandBuffer();
    commandBuffer.name = "Bake World Positions";
    commandBuffer.SetRenderTarget(targetTexture);
    //commandBuffer.ClearRenderTarget(false, false, Color.clear);

    commandBuffer.SetViewMatrix(muhViewMatrix);
    commandBuffer.SetProjectionMatrix(muhProjectionMatrix);
    //Graphics.ExecuteCommandBuffer(commandBuffer);
}
void ClearTex()
{
    RenderTexture.active = targetTexture;
    GL.Clear(true, true, Color.clear);
    RenderTexture.active = null;
}
bool hasExecuted = false;
CommandBuffer commandBuffer;
// Update is called once per frame
void Update()
{
  
    //commandBuffer.Clear();
    Matrix4x4 renderMatrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, Vector3.one);


    if(meshIndex < meshFilters.Count)
    {
        MeshFilter meshFilter = meshFilters[meshIndex];
        Material oldMat = meshFilter.GetComponent<Renderer>().sharedMaterial;
        meshFilter.GetComponent<Renderer>().sharedMaterial = worldPositionMaterial;

        // lightmapScaleOffset is a Vector4 (float4 is a shader-side type)
        Vector4 offset = meshFilter.GetComponent<Renderer>().lightmapScaleOffset;
        worldPositionMaterial.SetVector("_LightmapPosition", offset);
        worldPositionMaterial.SetMatrix("_localToWorldMatrix", meshFilter.transform.localToWorldMatrix);

        //int oldLayer = meshFilter.gameObject.layer;
        //meshFilter.gameObject.layer = 30;
        Debug.Log("Drawing mesh: " + meshFilter.name + "with offset " + offset);
      
        commandBuffer.DrawMesh(meshFilter.sharedMesh, renderMatrix, worldPositionMaterial);
        meshFilter.GetComponent<Renderer>().sharedMaterial = oldMat;
        //meshFilter.gameObject.layer = oldLayer;
        meshIndex++;

        Graphics.ExecuteCommandBuffer(commandBuffer);
    }
    if(meshIndex == meshFilters.Count && !hasExecuted)
    {
        hasExecuted = true;

        RenderTexture.active = targetTexture;
        // Create a new Texture2D to read the RenderTexture data.
        Texture2D texture2D = new Texture2D(targetTexture.width, targetTexture.height, TextureFormat.RGBAFloat, false);

        // Read the RenderTexture contents to the Texture2D.
        texture2D.ReadPixels(new Rect(0, 0, targetTexture.width, targetTexture.height), 0, 0);
        texture2D.Apply();

        Color testColorValue = texture2D.GetPixel(510, 535);
        Debug.Log("Color value at 510, 535: " + testColorValue);
        Destroy(texture2D);
    }

}

Edit: Nope, sorry! That command buffer initialization needs to be moved down into Update(), so a new command buffer is created (and presumably destroyed) each frame. There’s probably a more optimized way of doing that, but it works.
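In other words, something like this inside Update() (a sketch using the same fields as above; the using block disposes the buffer each frame):

// Sketch: build, execute, and dispose a fresh command buffer every frame.
using (CommandBuffer cb = new CommandBuffer { name = "Bake World Positions" })
{
    cb.SetRenderTarget(targetTexture);
    cb.SetViewMatrix(muhViewMatrix);
    cb.SetProjectionMatrix(muhProjectionMatrix);
    cb.DrawMesh(meshFilter.sharedMesh, renderMatrix, worldPositionMaterial);
    Graphics.ExecuteCommandBuffer(cb);
}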