Drawing geometry from an interleaved buffer

Hi!
I need to draw geometries using an indexed draw call with an interleaved buffer that I am given.

To begin with, I would like to just draw a single triangle on the screen. From there, the rest should be simple.
Currently, I am setting up my buffers and issuing the draw call like this:

using UnityEngine;

public class Test : MonoBehaviour
{
    // Define the structure of your vertex data.
    // Its layout must match the VertexData struct in the shader byte for byte.
    private struct VertexData
    {
        public Vector3 position; // 12 bytes
        public Color color;      // 16 bytes (float4 in the shader)
        public Vector2 uv;       // 8 bytes
    };

    private GraphicsBuffer vertexBuffer;
    private GraphicsBuffer indexBuffer;
    private Material material;

    void Start()
    {
        int vertexCount = 3;
        vertexBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, vertexCount, System.Runtime.InteropServices.Marshal.SizeOf(typeof(VertexData)));
        VertexData[] vertices = new VertexData[vertexCount];
        vertices[0] = new VertexData { position = new Vector3(0, 0, 0), color = new Color(0, 0, 1, 1), uv = new Vector2(0, 0) };
        vertices[1] = new VertexData { position = new Vector3(1, 0, 0), color = new Color(0, 1, 0, 1), uv = new Vector2(1, 0) };
        vertices[2] = new VertexData { position = new Vector3(0, 1, 0), color = new Color(1, 0, 0, 1), uv = new Vector2(0, 1) };
        vertexBuffer.SetData(vertices);

        int indexCount = 3;
        indexBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Index, indexCount, sizeof(int));
        int[] indices = new int[indexCount];
        indices[0] = 0;
        indices[1] = 1;
        indices[2] = 2;
        indexBuffer.SetData(indices);

        material = new Material(Shader.Find("Test/TestShader"));
    }

    private void OnDestroy()
    {
        indexBuffer?.Dispose();
        vertexBuffer?.Dispose();
    }

    void Update()
    {
        RenderParams rp = new RenderParams(material);
        rp.matProps = new MaterialPropertyBlock();
        rp.matProps.SetBuffer("vertexBuffer", vertexBuffer);
        // indexCount = 3, startIndex = 0, instanceCount = 1
        Graphics.RenderPrimitivesIndexed(rp, MeshTopology.Triangles, indexBuffer, 3, 0, 1);
    }
}

and have written the corresponding shader:

Shader"Test/TestShader"
{
SubShader
{
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"

struct VertexData
{
    float3 pos   : POSITION;
    float4 color : COLOR;
    float2 uv    : TEXCOORD0;
};

struct v2f
{
    float4 pos   : SV_POSITION;
    float4 color : COLOR0;
    float2 uv    : TEXCOORD0;
};

StructuredBuffer<VertexData> vertexBuffer;

v2f vert(uint vertexID : SV_VertexID)
{
    v2f o;
    VertexData vertexData = vertexBuffer[vertexID];
    o.pos = float4(vertexData.pos, 1);
    o.color = vertexData.color;
    o.uv = vertexData.uv;
    return o;
}

float4 frag(v2f i) : SV_Target
{  
    return i.color;
}
ENDCG
}
}
}

The shader seems to compile fine, but I am not seeing the triangle rendered on the screen. I suppose I am forgetting something. From what I have read, if I do not set a render target explicitly, the screen is used as the default render target.

Does anyone know where I am going wrong?

I’d try checking your winding order. Unity uses clockwise winding for front-facing polygons, and at first sight your triangle seems to be wound counter-clockwise (CCW).

Also, your vertex shader is not transforming the vertex coordinates at all (I don’t know if this is intentional, just pointing it out). Typically you want your mesh vertices expressed in object space and then converted to clip space in the vertex shader.
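
For reference, a rough sketch of both suggestions applied to your pass (Cull Off simply disables backface culling so winding stops mattering while you debug; alternatively, reorder the indices to 0, 2, 1 on the C# side to make the winding clockwise):

// Inside the Pass, before CGPROGRAM:
//     Cull Off   // ignore winding order while debugging
//
// And in the vertex shader, convert object space to clip space:
v2f vert(uint vertexID : SV_VertexID)
{
    v2f o;
    VertexData vertexData = vertexBuffer[vertexID];
    o.pos = UnityObjectToClipPos(vertexData.pos); // object space -> clip space
    o.color = vertexData.color;
    o.uv = vertexData.uv;
    return o;
}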

Thanks for the answer!
You are right: I am not doing any transformations. That is because I want to draw in screen space. I couldn’t find a good way to draw a fullscreen quad in Unity, so I figured I’d simply draw into the camera’s render target directly.

Unfortunately, that doesn’t seem to work. I got to a point where I can draw the triangle in the scene using the UNITY_MATRIX_VP built-in matrix. Without the transformation, though, nothing is rendered.

FYI, drawing a fullscreen quad in Unity is typically done using Blit(). It renders a fullscreen quad with a custom material/texture onto a RenderTexture or the screen.
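
A minimal sketch of what that looks like, assuming the built-in render pipeline (FullscreenBlit and fullscreenMaterial are placeholder names):

using UnityEngine;

// Minimal sketch for the built-in render pipeline: blit the camera image through
// a custom material, which effectively draws a fullscreen quad with that material.
[RequireComponent(typeof(Camera))]
public class FullscreenBlit : MonoBehaviour
{
    public Material fullscreenMaterial; // material whose shader runs over the whole screen

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (fullscreenMaterial != null)
            Graphics.Blit(source, destination, fullscreenMaterial); // fullscreen pass
        else
            Graphics.Blit(source, destination); // passthrough
    }
}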

The vertex range in clip space varies depending on the graphics API used. Unity abstracts this away for you when you use UNITY_MATRIX_P (or _VP, or _MVP) or UnityObjectToClipPos(pos); however, if you’re not going to transform the vertex, then you must account for this yourself.

What’s probably happening is that your triangle is outside of, or right at the limits of, clip space and is getting culled by the GPU.

See: https://docs.unity3d.com/2019.1/Documentation/Manual/SL-PlatformDifferences.html
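
One way to account for it without transforming, as a sketch against your current vert function: take the hard-coded depth from Unity’s UNITY_NEAR_CLIP_VALUE macro instead of using 0 (IIRC it is -1.0 on OpenGL-like APIs, 0.0 on older D3D targets and 1.0 where reversed Z is used), e.g.

// Sketch: keep x/y as given (assumed to already be in the -1..1 range) and
// pick a near-plane depth that is valid for the current graphics API.
o.pos = float4(vertexData.pos.xy, UNITY_NEAR_CLIP_VALUE, 1.0);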

Here is a macro to build a fullscreen triangle: positions (-1,-1,z,1), (3,-1,z,1), (-1,3,z,1) with UVs (0,0), (2,0), (0,2). The z is 1.0 for most APIs but -1.0 on GL. You also need to flip v if the API is not GL and the render target is the screen backbuffer.
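
Written out as a vertex function (a sketch only, reusing the v2f struct from the shader above; the depth and v-flip follow the per-API values just described):

// Fullscreen triangle built from SV_VertexID alone - no vertex buffer needed.
v2f FullscreenTriangleVert(uint vertexID : SV_VertexID)
{
    v2f o;

    // uv = (0,0), (2,0), (0,2) for vertexID 0, 1, 2
    float2 uv = float2((vertexID << 1) & 2, vertexID & 2);

    // positions = (-1,-1), (3,-1), (-1,3); z is 1.0 on most APIs, -1.0 on GL
#if defined(SHADER_API_GLCORE) || defined(SHADER_API_GLES) || defined(SHADER_API_GLES3)
    float z = -1.0;
#else
    float z = 1.0;
#endif
    o.pos = float4(uv * 2.0 - 1.0, z, 1.0);

    // flip v on non-GL APIs (needed when the render target is the screen backbuffer)
#if UNITY_UV_STARTS_AT_TOP
    uv.y = 1.0 - uv.y;
#endif
    o.uv = uv;

    o.color = float4(1, 1, 1, 1); // unused for a fullscreen pass
    return o;
}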