Project Blender 3D Mesh to RenderTexture?

I have a sculpted 3D object from Blender and I would like to project it into a texture for further use. What would be your approach to solving this problem? Some use cases might be creating a map, or getting a heightmap data set for further processing (e.g. finding the highest point).

I was able to create a render texture and an orthographic camera to produce what I was looking for as a starting point; however, that doesn't resolve how I might get the top layer of height data for processing. It seems like I would need some sort of sampling & projection to get this grid of data into a heightmap (i.e. a two-dimensional matrix with height as the value).
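For the sampling step, once the render has been read back into a Texture2D, converting it to that matrix is just a pixel loop. A minimal sketch of what I mean (heightScale is an assumed calibration factor from 0..1 grayscale to world units, not something Unity provides):

// Sketch: turn a grayscale Texture2D (as read back from the render texture)
// into a 2D height matrix. Assumes brightness encodes height.
float[,] ToHeightmap(Texture2D tex, float heightScale)
{
    float[,] heights = new float[tex.width, tex.height];
    Color[] pixels = tex.GetPixels(); // row-major, bottom-left origin
    for (int y = 0; y < tex.height; y++)
        for (int x = 0; x < tex.width; x++)
            heights[x, y] = pixels[y * tex.width + x].grayscale * heightScale;
    return heights;
}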


I was able to get an image rendered in Blender, which is getting me closer.

After further research, it looks like I could utilize the camera's depth texture capabilities:
Unity - Manual: Cameras and depth textures (unity3d.com)
How to view Camera's Depth Texture? - Unity Forum
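For reference, that approach boils down to turning on the camera's depth texture from script (cam.depthTextureMode = DepthTextureMode.Depth;) and then sampling _CameraDepthTexture in a shader. A minimal sketch of the sampling side, assuming the built-in render pipeline and an image-effect style v2f carrying a uv coordinate:

sampler2D _CameraDepthTexture; // filled in by Unity once depthTextureMode is set

fixed4 frag(v2f i) : SV_Target
{
    // Raw hardware depth, then linearized to 0 (near plane) .. 1 (far plane).
    // Note: Linear01Depth assumes a perspective camera; with an orthographic
    // camera the raw value is already linear between the clip planes.
    float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    float d = Linear01Depth(raw);
    return fixed4(d, d, d, 1);
}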

I have something close, but I'm running into issues: the depth/height values aren't correct, and overlapping geometry isn't handled. Anyone with shader experience able to help? Really I'm just trying to output a grayscale heightmap to SV_Target and view the texture in the editor.

I was able to get this reference code working, but have since struggled to apply it properly to my use case: William Chyr | Unity Shaders - Depth and Normal Textures (Part 1)

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class DepthTextureGenerator : MonoBehaviour
{
    public int textureResolution = 512;
    public Shader depthShader;

    [HideInInspector]
    public Texture2D InspectorDepthTexture;

    private Camera depthCamera;

    public void RenderDepthTexture()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        if (mesh == null)
            return;

        if (depthCamera == null)
            depthCamera = CreateDepthCamera();

        RenderTexture previousActiveTexture = RenderTexture.active;
        // A non-zero depth buffer (24 bits here, not 0) is required so z-testing
        // resolves overlapping geometry to the surface nearest the camera.
        RenderTexture depthRenderTexture = new RenderTexture(textureResolution, textureResolution, 24, RenderTextureFormat.ARGB32);

        depthCamera.targetTexture = depthRenderTexture;
        depthCamera.RenderWithShader(depthShader, "");

        // Read the rendered image back into a CPU-side Texture2D.
        RenderTexture.active = depthRenderTexture;
        InspectorDepthTexture = new Texture2D(textureResolution, textureResolution, TextureFormat.ARGB32, false);
        InspectorDepthTexture.ReadPixels(new Rect(0, 0, textureResolution, textureResolution), 0, 0);
        InspectorDepthTexture.Apply();

        RenderTexture.active = previousActiveTexture;
        depthCamera.targetTexture = null; // detach before destroying the RT
        DestroyImmediate(depthRenderTexture);
    }

    private Camera CreateDepthCamera()
    {
        // Place an orthographic camera above the mesh, looking straight down.
        GameObject depthCameraObject = new GameObject("DepthCamera");
        depthCameraObject.transform.parent = transform;
        depthCameraObject.transform.localPosition = new Vector3(0f, CalculateOrthographicSize() * 2f, 0f);
        depthCameraObject.transform.rotation = Quaternion.Euler(90f, 0f, 0f);

        Camera depthCam = depthCameraObject.AddComponent<Camera>();
        depthCam.depthTextureMode = DepthTextureMode.Depth;
        depthCam.enabled = false; // rendered manually via RenderWithShader
        depthCam.orthographic = true;
        depthCam.orthographicSize = CalculateOrthographicSize();
        depthCam.clearFlags = CameraClearFlags.SolidColor;
        depthCam.backgroundColor = Color.cyan; // marks pixels the mesh doesn't cover

        return depthCam;
    }

    private float CalculateOrthographicSize()
    {
        // Half of the larger planar extent, so the mesh fills the ortho view.
        // Note: sharedMesh.bounds is in local space and ignores transform scale.
        Bounds bounds = GetComponent<MeshFilter>().sharedMesh.bounds;
        float maxBoundsSize = Mathf.Max(bounds.size.x, bounds.size.z);
        return maxBoundsSize * 0.5f;
    }
}
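And the replacement shader it renders with: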
Shader "Custom/DepthShader" {
    SubShader{
        Tags { "RenderType" = "Opaque" }
        Pass {
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float2 depth : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            half4 frag(v2f i) : SV_Target {
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }
}
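The shader gets assigned to the component's depthShader slot in the Inspector. One simple way to trigger RenderDepthTexture from the editor (my addition, not from the reference article) is a ContextMenu entry on the component:

// Adds a "Render Depth Texture" item to the component's context menu
// (gear icon / right-click on the component header in the Inspector).
[ContextMenu("Render Depth Texture")]
private void RenderDepthTextureFromMenu() => RenderDepthTexture();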


In the end I went with a physics raycasting solution. I created a plane with the same planar bounds as my Blender mesh and then cast rays down at the mesh to get the resulting two-dimensional grid I was looking for. The other approaches were too complex or didn't provide the accuracy needed. Now onto the fun part of using this grid of data; first stop, creating a map!
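In case it helps anyone, here is a minimal sketch of that sampling loop (grid resolution and cast height are parameters you'd tune for your mesh, and the sculpt needs a MeshCollider for the raycasts to hit):

using UnityEngine;

public static class HeightGridSampler
{
    // Casts one ray straight down per grid cell across the collider's
    // planar bounds and records the first hit's height.
    public static float[,] Sample(Collider meshCollider, int resolution, float castHeight)
    {
        Bounds b = meshCollider.bounds;
        float[,] heights = new float[resolution, resolution];

        for (int x = 0; x < resolution; x++)
        {
            for (int z = 0; z < resolution; z++)
            {
                Vector3 origin = new Vector3(
                    Mathf.Lerp(b.min.x, b.max.x, x / (resolution - 1f)),
                    castHeight, // must start above the mesh's highest point
                    Mathf.Lerp(b.min.z, b.max.z, z / (resolution - 1f)));

                // The first hit going down is the top surface at this cell.
                if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit))
                    heights[x, z] = hit.point.y;
            }
        }
        return heights;
    }
}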

I was also able to get the depth-based solution to work, after tweaking the min/max range of the camera's depth (the near/far clip planes) to get a gradient range.
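Concretely, that meant bracketing the mesh between the near and far planes so the full 0..1 depth range maps onto the mesh's vertical extent. Something along these lines (exact offsets depend on your setup; this assumes the camera position from CreateDepthCamera and an unscaled transform):

// Fit the clip planes tightly around the mesh's vertical extent so the
// depth output spans a full gradient instead of a narrow band.
Bounds b = GetComponent<MeshFilter>().sharedMesh.bounds;
float camHeight = depthCamera.transform.localPosition.y; // camera looks down -Y
depthCamera.nearClipPlane = camHeight - b.max.y; // just at the highest point
depthCamera.farClipPlane  = camHeight - b.min.y; // just at the lowest point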