DX11 PSIZE Bug, PLEASE HELP!

I think I have found a bug in DX11 in Unity and would like your advice on whether this is a bug and how to work around it.

What I have working so far is a point cloud renderer that estimates the size of each point based on its distance to the camera: points close to the camera are rendered larger, and points further away are rendered smaller. This works perfectly in DX9 and also in OpenGL (v2 & v4). As I move around, the PSIZE value of each vertex is set and DX/GL uses this value to draw each point.

However, when I use DX11 it ignores the PSIZE value of each vertex, and all points are rendered with a size of 1 (very small). Unfortunately, another component of the project needs DX11, so I cannot simply switch back.

Can you suggest a workaround where each point is rendered according to its PSIZE in DX11? It needs to run FAST so I can render all of the content. Thanks in advance for any help.

Mesh Renderer Setup Code:

mesh.vertices = points;
mesh.colors = colours;
mesh.SetIndices(indicies, MeshTopology.Points, 0);
mesh.uv = new Vector2[pointCount];
mesh.normals = new Vector3[pointCount];

Shader Code:

Shader "Custom/PointCloudEx"
{
    Properties
    {
        _PointSize("Point Size", Float) = 10.0
    }

    SubShader
    {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}

        Pass
        {
            cull Off ZWrite On Blend SrcAlpha OneMinusSrcAlpha

            LOD 200

            CGPROGRAM
            #pragma exclude_renderers flash

            //tells the renderer which function to use to process the rendering of each point (vertex) in the mesh
            #pragma vertex vert                

            //tells the renderer which function to use to process the rendering of each pixel (fragment) in the scene
            #pragma fragment frag            

            #include "UnityCG.cginc"

            float _PointSize;

            struct VertexInput
            {
                float4 v : POSITION;
                float4 color: COLOR;
            };

            struct VertexOutput
            {
                float4 pos : SV_POSITION;
                float4 col : COLOR;
                float size : PSIZE;
            };

            VertexOutput vert(VertexInput v)
            {
                VertexOutput o;
                o.pos = mul(UNITY_MATRIX_MVP, v.v);
                o.col = v.color;
                //Set the point size to be relative to the vertex's distance from the camera
                o.size = (1.0/ length(WorldSpaceViewDir(v.v))) * _PointSize;

                return o;
            }

            float4 frag(VertexOutput o) : COLOR
            {
                return o.col;
            }

            ENDCG
        }
    }
}

DX11 doesn’t support point mode rendering anymore according to the documentation, so I’m surprised it works at all. The idea is that it has been replaced by geometry shaders, so you’ll have to generate a quad at every vertex yourself using a geometry shader.

Thanks for the response jvo3dc. Ignoring my misgivings about DX dropping this really fast and useful feature (boo Microsoft), can you give more detail on a workaround? I’ve looked into geometry shaders myself and my latest working prototype is below.

However, this shader makes Unity 5.3 run REALLY slowly (2 fps on my machine), and when I stop the running scene Unity crashes (it is solid until I press stop).

I seem to have found the issue: if I remove the line labelled “TROUBLE LINE” it runs fast and doesn’t crash, but I need that line to scale the points. Help?

Shader "Custom/PointCloudEx2"
{
    Properties
    {
        _PointSize("Point Size", Range(0.001, 1)) = 0.005
    }

    SubShader
    {
        Pass
        {
            Tags{ "RenderType" = "Opaque" }
            LOD 200

            CGPROGRAM

            #pragma vertex VERT
            #pragma fragment FRAG
            #pragma geometry GEO

            #include "UnityCG.cginc"

            struct VERT_INPUT
            {
                float4 pos : POSITION;
                float4 color : COLOR;
            };

            struct GEO_INPUT
            {
                float4 pos : POSITION;
                fixed4 color : COLOR;
            };

            struct FRAG_INPUT
            {
                float4 pos : POSITION;
                fixed4 color : COLOR;
            };

            float _PointSize;

            GEO_INPUT VERT(VERT_INPUT v)
            {
                GEO_INPUT o = (GEO_INPUT)0;
                o.pos = v.pos;
                o.color = v.color;
                return o;
            }

            [maxvertexcount(4)]
            void GEO(point GEO_INPUT p[1], inout TriangleStream<FRAG_INPUT> triStream)
            {
                float3 cameraUp = UNITY_MATRIX_IT_MV[1].xyz;
                float3 cameraForward = normalize(UNITY_MATRIX_IT_MV[2].xyz);
                float3 right = cross(cameraUp, cameraForward);

                float4 v[4];
                /*TROUBLE LINE*/
                float size = (1.0 / length(WorldSpaceViewDir(v.pos))) * _PointSize;
                v[0] = float4(p[0].pos.xyz + size * right - size * cameraUp, 1.0f);
                v[1] = float4(p[0].pos.xyz + size * right + size * cameraUp, 1.0f);
                v[2] = float4(p[0].pos.xyz - size * right - size * cameraUp, 1.0f);
                v[3] = float4(p[0].pos.xyz - size * right + size * cameraUp, 1.0f);

                float4x4 vp = mul(UNITY_MATRIX_MVP, _World2Object);

                FRAG_INPUT newVert;

                newVert.pos = mul(vp, v[0]);
                newVert.color = p[0].color;
                triStream.Append(newVert);

                newVert.pos = mul(vp, v[1]);
                newVert.color = p[0].color;
                triStream.Append(newVert);

                newVert.pos = mul(vp, v[2]);
                newVert.color = p[0].color;
                triStream.Append(newVert);

                newVert.pos = mul(vp, v[3]);
                newVert.color = p[0].color;
                triStream.Append(newVert);
            }

            fixed4 FRAG(FRAG_INPUT input) : COLOR
            {
                return input.color;
            }

            ENDCG
        }
    }
}

I’ve also found lots of references to point rendering in the DX11 documentation, so I wouldn’t say they’ve dropped it altogether. One of Microsoft’s DX11 render modes in D3D11_PRIMITIVE_TOPOLOGY is POINTLIST.

Yes, it is a bit confusing. A similar discussion is going on here. So let’s say that, as far as I’ve heard, they dropped point mode rendering. But that does not seem 100% true: it seems they just dropped point sprites, which is probably exactly why PSIZE no longer works.

When it comes to your shader, you should probably still generate the quads in the geometry shader, but compute the size in the vertex shader, pass it along to the geometry shader, and move the vertices to the right position there.
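A minimal sketch of that idea, reusing the struct and function names from the shader above (the `size` field and its `TEXCOORD0` semantic are additions): the distance is computed once per point in the vertex shader, where the object-space position is actually available, and the geometry shader just reads it.

```
// Sketch: carry the per-point size from the vertex stage to the geometry stage.
struct GEO_INPUT
{
    float4 pos   : POSITION;
    fixed4 color : COLOR;
    float  size  : TEXCOORD0; // added: per-point size
};

GEO_INPUT VERT(VERT_INPUT v)
{
    GEO_INPUT o = (GEO_INPUT)0;
    o.pos = v.pos;
    o.color = v.color;
    // Here v.pos is the object-space vertex position. Note that the
    // TROUBLE LINE in GEO instead read the freshly declared, still
    // uninitialised v[] array, which is likely part of the problem.
    o.size = (1.0 / length(WorldSpaceViewDir(v.pos))) * _PointSize;
    return o;
}
```

In GEO, the TROUBLE LINE then simply becomes `float size = p[0].size;`.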

Yeah, I had that exact same thought, but it had the same effect. Can you replicate it at your end?

I haven’t tried to replicate it, but 2 fps does sound really slow. You could also try not using WorldSpaceViewDir and instead directly take the distance between the world space position of the vertex and the camera. So something like:

float3 pos_world = mul(_Object2World, v.pos).xyz;
float size = _PointSize / distance(pos_world, _WorldSpaceCameraPos);

Though this should not make that much of a difference, really.