Rendering directly from compute buffers

We generate a bunch of vertex and index data via compute, then copy it back to the CPU side to create Meshes, just so we can render them.

I’ve been looking into the Graphics.RenderXXXX methods to see whether it is possible to avoid all this needless copying.

I found two dead-end topics, one of which looked promising but then fizzled out in 2021:

GraphicsBuffer, Mesh vertices and Compute shaders - Unity Engine - Unity Discussions

GraphicsBuffer and Mesh - Unity Engine - Unity Discussions

Is there any way to render from GraphicsBuffers directly?

RenderPrimitivesIndirect
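For reference, a minimal, untested sketch of what a Graphics.RenderPrimitivesIndirect setup can look like. The material, the `_VertexBuffer` property name, and the assumption that the shader fetches vertices itself via SV_VertexID are all mine, not from this thread:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class IndirectDraw : MonoBehaviour
{
    [SerializeField] Material material;  // shader must read _VertexBuffer itself
    GraphicsBuffer vertexBuffer;         // assumed to be filled by a compute shader elsewhere
    GraphicsBuffer commandBuffer;

    void Start()
    {
        // One indirect draw command; a compute shader could also write this.
        commandBuffer = new GraphicsBuffer(GraphicsBuffer.Target.IndirectArguments,
            1, GraphicsBuffer.IndirectDrawArgs.size);
        var args = new GraphicsBuffer.IndirectDrawArgs
        {
            vertexCountPerInstance = 3,
            instanceCount = 1
        };
        commandBuffer.SetData(new[] { args });
    }

    void Update()
    {
        var rp = new RenderParams(material)
        {
            // No mesh means no automatic bounds; supply them manually.
            worldBounds = new Bounds(Vector3.zero, Vector3.one * 1000f),
            matProps = new MaterialPropertyBlock()
        };
        rp.matProps.SetBuffer("_VertexBuffer", vertexBuffer);
        Graphics.RenderPrimitivesIndirect(rp, MeshTopology.Triangles, commandBuffer);
    }

    void OnDestroy() => commandBuffer?.Dispose();
}
```

The catch, as discussed below, is that the material's shader has to pull vertex data out of the buffer itself, which rules out stock Shader Graph materials.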

Ah right… do these work with Shader Graph materials, i.e. anything that would work with regular meshes? Making custom shaders that work with these buffers won’t be feasible in our case.

Create graphics buffers using Mesh.SetIndexBufferParams, Mesh.SetIndexBufferData, Mesh.SetVertexBufferParams, and Mesh.SetVertexBufferData. Then, write to the graphics buffers obtained via Mesh.GetIndexBuffer and Mesh.GetVertexBuffer from a compute shader.
Please note that you need to manually update the bounds.


Thanks for this outline of the approach. I’ve spent a bunch of time this morning attempting to implement things this way.

Unfortunately I’m stuck with a situation where the final MeshRenderer just renders… nothing.

Maybe I’m missing some small but crucial step or setting, so any help would be greatly appreciated.

The rough outline of what I’m doing is as follows…

  • Create a mesh from code (new Mesh()). Set up a new game object with a mesh filter, mesh renderer, etc. This part is nothing special and worked before today’s change-over.
  • Set the mesh vertex / index buffer params and get the buffers.
  • Pass the buffers to a compute shader, which generates all the vertex/index data.
  • Debug - do a readback from the GPU to confirm that the buffers indeed exist and are filled with good data.

So all this works, including the readback I do to validate the data for my own sanity. No errors. The GameObject, the MeshFilter with its assigned mesh, and the MeshRenderer all exist. Just… nothing is rendered.
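For anyone following along, the kind of sanity-check readback mentioned above can be sketched roughly like this (untested; the generic `T` is assumed to match the buffer's stride exactly, e.g. your vertex struct):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

static class BufferDebug
{
    // Asynchronously read a GraphicsBuffer back to the CPU and log
    // the first few entries, reinterpreted as T.
    public static void Dump<T>(GraphicsBuffer buffer, int logCount) where T : struct
    {
        AsyncGPUReadback.Request(buffer, request =>
        {
            if (request.hasError) { Debug.LogError("GPU readback failed"); return; }
            NativeArray<T> data = request.GetData<T>();
            for (int i = 0; i < Mathf.Min(logCount, data.Length); i++)
                Debug.Log($"[{i}] {data[i]}");
        });
    }
}
```

The callback fires a few frames after the request, so the log lags the dispatch; that's fine for a sanity check like this.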

The mesh object itself reports vertices, normals, etc. when I put a breakpoint and inspect those items. It matches the length of the graphics buffers.

However… the mesh.triangles property is always an empty array. Not sure if this is relevant or not.

BTW, I’m not calling SetVertexBufferData() and SetIndexBufferData() from code, because all the data is actually created and written by the compute shader. The buffers exist after SetIndexBufferParams and SetVertexBufferParams; I can inspect them and, as mentioned, do a readback to confirm they contain valid data.

Any ideas?

Going crazy here 🙂

When creating a Mesh from a script, you must always set up a SubMesh.
Additionally, if you access it as a StructuredBuffer from a ComputeShader, make sure to set GraphicsBuffer.Target.Structured.
As for SetVertexBufferData and SetIndexBufferData, you probably don’t need to call them.
Here is some sample code for your reference.

using System.Runtime.InteropServices;
using UnityEngine;
using UnityEngine.Rendering;

public class SampleComputeMesh : MonoBehaviour
{
    [StructLayout(LayoutKind.Sequential)]
    struct Vertex
    {
        public Vector3 position;
        public Vector3 normal;
        public Vector4 color;
        public Vector2 uv0;
    }

    [SerializeField] Material material;
    [SerializeField] ComputeShader computeShader;
    [SerializeField] Bounds bounds = new Bounds(Vector3.zero, new Vector3(1000, 1000, 1000)); // set your value

    Mesh mesh;
    GraphicsBuffer indexBuffer;
    GraphicsBuffer vertexBuffer;
    float simulationTime;

    const int vertexCount = 3;
    const int indexCount = 3;

    void Start()
    {
        var meshFilter = this.gameObject.AddComponent<MeshFilter>();
        var meshRenderer = this.gameObject.AddComponent<MeshRenderer>();

        mesh = CreateMesh();
        meshFilter.sharedMesh = mesh;
        meshRenderer.sharedMaterial = material;

        indexBuffer = mesh.GetIndexBuffer();
        vertexBuffer = mesh.GetVertexBuffer(0);
    }

    void OnDestroy()
    {
        indexBuffer.Dispose();
        indexBuffer = null; // for GC

        vertexBuffer.Dispose();
        vertexBuffer = null; // for GC

        Destroy(mesh);
        mesh = null; // for GC
    }

    private void Update()
    {
        var kernel = computeShader.FindKernel("CSMain");
        computeShader.GetKernelThreadGroupSizes(kernel, out var x, out _, out _);
        var groups = (indexCount + (int)x - 1) / (int)x;

        computeShader.SetFloat("Time", simulationTime);
        computeShader.SetBuffer(kernel, "IndexBuffer", indexBuffer);
        computeShader.SetBuffer(kernel, "VertexBuffer", vertexBuffer);
        computeShader.Dispatch(kernel, groups, 1, 1);

        simulationTime += Time.deltaTime;
    }

    Mesh CreateMesh()
    {
        var mesh = new Mesh();
        mesh.name = "TestComputeMesh";

        mesh.vertexBufferTarget |= GraphicsBuffer.Target.Structured; // for access as StructuredBuffer from compute shaders
        mesh.indexBufferTarget |= GraphicsBuffer.Target.Structured; // for access as StructuredBuffer from compute shaders

        mesh.bounds = bounds;

        var vertexLayout = new[]
        {
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
            new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3),
            new VertexAttributeDescriptor(VertexAttribute.Color, VertexAttributeFormat.Float32, 4),
            new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2)
        };
        mesh.SetVertexBufferParams(vertexCount, vertexLayout);

        var initialVertices = new Vertex[vertexCount] {
            new Vertex { position = new Vector3(0f, 0f, 0f), normal = new Vector3(0f, 0f, -1f), color = Vector4.one, uv0 = new Vector2(0, 0) },
            new Vertex { position = new Vector3(1f, 1f, 0f), normal = new Vector3(0f, 0f, -1f), color = Vector4.one, uv0 = new Vector2(1, 1) },
            new Vertex { position = new Vector3(1f, 0f, 0f), normal = new Vector3(0f, 0f, -1f), color = Vector4.one, uv0 = new Vector2(1, 0) }
        };
        mesh.SetVertexBufferData(initialVertices, 0, 0, vertexCount);

        mesh.SetIndexBufferParams(indexCount, IndexFormat.UInt32);
        var indices = new int[indexCount] { 0, 1, 2 };
        mesh.SetIndexBufferData(indices, 0, 0, indexCount);

        mesh.subMeshCount = 1;
        mesh.SetSubMesh(0, new SubMeshDescriptor(0, indexCount), MeshUpdateFlags.DontRecalculateBounds);

        return mesh;
    }
}

#pragma kernel CSMain

struct Vertex
{
    float3 pos;
    float3 norm;
    float4 color;
    float2 uv0;
};

float Time;
RWStructuredBuffer<uint> IndexBuffer;
RWStructuredBuffer<Vertex> VertexBuffer;

[numthreads(32,1,1)]
void CSMain (uint id : SV_DispatchThreadID)
{
    if (id >= 3) return; // guard: only 3 vertices/indices exist, but 32 threads run

    float offset = sin(Time * 2) * 0.01;
    VertexBuffer[id].pos = VertexBuffer[id].pos + float3(offset, offset, offset);
    IndexBuffer[id] = id;
}


Thanks! Creating the submesh was indeed one issue.

But I’ve run into another hurdle. I’m sensitive to the fact that I can’t expect others to debug my code for me, but being stuck for 5 hours on some random hurdle when 99.9% of the code is in place… well, let’s just say it is beyond frustrating and stressful.

After adding a lot of debug buttons and breakpoints and GPU readbacks… I’ve come to the conclusion that the Mesh is overwriting (re-creating) the GPU buffers after the compute shader has done its job of generating the vertex/index data directly into the buffers.

To summarise my test/debug steps (simplified)…

  • Create a GameObject, Mesh, MeshRenderer, MeshFilter, etc.
  • Use the “advanced” mesh API to set the buffer layouts and fill the buffers with 1 test triangle (3 verts).
  • Wait for a debug keypress, while rendering to the screen so we can see the test triangle.
  • Upon pressing the debug key, ask for the mesh buffers, send them off to the compute shader.
  • Wait for another keypress.
  • Do a readback from the buffers to see that the compute shader wrote good data.
  • Fetch the buffers again from the mesh via GetVertexBuffer() and do a readback.

On this last step, the freshly fetched buffers have totally different buffer handles and contain data matching the original test triangle that was added when the mesh/GameObject/etc. was first created.

So it seems no matter how much delay I put between the original creation of the objects and the compute shader part… after the compute shader runs, the buffers used by the shader are stale, and the mesh has a new set of buffers containing data matching the CPU-side of the mesh.
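To make the handle comparison above reproducible, here's a rough helper. It's my own assumption (untested) that comparing `GetNativeBufferPtr()` values is a valid way to spot the buffer being recreated behind your back:

```csharp
using System;
using UnityEngine;

static class MeshBufferCheck
{
    // Returns true if the mesh's current vertex buffer still appears to be
    // the same underlying GPU resource as the one captured earlier.
    public static bool SameUnderlyingBuffer(Mesh mesh, IntPtr previousPtr)
    {
        // GetVertexBuffer returns a new GraphicsBuffer wrapper each call,
        // which the caller owns and must dispose.
        using (var vb = mesh.GetVertexBuffer(0))
        {
            return vb.GetNativeBufferPtr() == previousPtr;
        }
    }
}
```

Capture `GetNativeBufferPtr()` right after the compute dispatch, then call this after the suspect frame boundary; a mismatch would confirm the recreation described above.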

Oh well… thanks for listening, this is therapy 🙂

I don’t know where to go next with this… maybe an entirely different approach, but I will need to do some more research.

EDIT:

Some additional info… in all cases where I ask for the buffers via the mesh, I’m tracking the mesh’s instance ID, so I’m pretty sure I’ve avoided the obvious pitfalls here, like making sure we are indeed fetching buffers from the expected mesh instance.

UPDATE:

Finally figured out that something I was doing (I have a few candidates) was causing the Mesh to upload to the GPU side at a bad moment. The prime candidate is that I was calling UploadMeshData() manually just before fetching the buffers, ironically to ensure I had the right ones… but I guess at the end of that frame, or soon after, this caused the GPU buffers to be recreated, leaving the ones I use for the compute shader stale.
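In other words, the ordering seems to be what matters. A sketch of my reading of the fix (reusing names like `computeShader` and `kernel` from the earlier sample; not verified beyond this thread):

```csharp
// Do ALL CPU-side mesh setup first — any of these calls
// (including UploadMeshData) can cause the GPU buffers to be recreated.
mesh.SetVertexBufferParams(vertexCount, vertexLayout);
mesh.SetIndexBufferParams(indexCount, IndexFormat.UInt32);
mesh.subMeshCount = 1;
mesh.SetSubMesh(0, new SubMeshDescriptor(0, indexCount), MeshUpdateFlags.DontRecalculateBounds);
mesh.bounds = bounds;
// No UploadMeshData() after this point!

// Fetch the buffers last, then only the compute shader touches them.
var vb = mesh.GetVertexBuffer(0);
var ib = mesh.GetIndexBuffer();
computeShader.SetBuffer(kernel, "VertexBuffer", vb);
computeShader.SetBuffer(kernel, "IndexBuffer", ib);
computeShader.Dispatch(kernel, groups, 1, 1);
```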

Everything is working now!
