Point cloud too slow: is it possible to pass an array of vector positions to a shader without a mesh?

I have been using a custom mesh with a custom-crafted shader to render a very efficient point cloud in my game (I only use that custom mesh as a vehicle to get the positions of the points to the shader; they are assigned to the mesh vertices).

However, whenever my point cloud grows in size and I have to update the position of any of the points, a huge performance bottleneck arises, because to the best of my knowledge one cannot update a single vertex of a mesh without updating the whole mesh - which is pretty expensive.

So, I was wondering if there is a way of passing directly to the shader the information I am currently routing through the custom mesh: basically, passing it an array of Vector3 for the positions and another array with the color information for the points.

That way, in my understanding (correct me if I am wrong), it would become cheaper to update the points in my shader-based point cloud. Otherwise, does anyone have a better suggestion for how I could update the positions of the points in the point cloud without incurring huge mesh-update costs?

Thanks in advance for your time and for any ideas you might have.

This could maybe be done more efficiently with a geometry shader, but what I have done previously is to use intermediate rendertextures.

  • First I put all the information that varies for each particle into a simple mesh. That would typically be position, color, and size (which you can fit into vertex position, and either tangent or color). The uv coordinates are fixed and point to fixed positions in the rendertexture. The index array for the mesh is always the same 0,1,2,3,4, etc. sequence (in point mode).
  • I then render the mesh (in point mode) to a couple of intermediate rendertextures (SetRenderTarget with all target textures). The uv of the mesh is used for the position, while the vertex position of the mesh is used as an output value. So each pixel in one texture will store the position, and the corresponding pixel in the other will store colour and size.
  • I then perform the actual rendering with one or more meshes that are completely static. They just read the variable info for each particle from the intermediate rendertextures.
    If you are only updating a few points, you could in principle perform a render of just those points to the intermediate rendertexture to update their information.
    Remember to MarkRestoreExpected() on the rendertextures to preserve the contents from frame to frame (a rough Unity-side sketch of this setup follows below).
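
Purely as an illustration (this is not the original code; the class, field, and material names and the texture size are placeholders), the Unity-side setup could look roughly like this:

using UnityEngine;

// Hypothetical sketch: two float rendertextures acting as per-particle storage,
// plus the MRT render of the point-topology mesh that writes into them.
public class PointCloudBuffers : MonoBehaviour
{
    public Material writeMaterial;   // the shader that outputs position/extra info per pixel
    public Mesh pointMesh;           // point-topology mesh carrying the per-particle data
    RenderTexture positionRT, infoRT;

    void Start()
    {
        // One pixel per particle; ARGBFloat so a pixel can hold a full position.
        positionRT = new RenderTexture(512, 512, 0, RenderTextureFormat.ARGBFloat);
        infoRT     = new RenderTexture(512, 512, 0, RenderTextureFormat.ARGBFloat);
        positionRT.filterMode = FilterMode.Point;
        infoRT.filterMode     = FilterMode.Point;
        positionRT.Create();
        infoRT.Create();
    }

    void WriteParticleData()
    {
        // We rely on the textures keeping last frame's contents, so flag the expected restore.
        positionRT.MarkRestoreExpected();
        infoRT.MarkRestoreExpected();

        // Bind both textures as simultaneous render targets (MRT) and draw the point mesh.
        RenderBuffer[] colorBuffers = { positionRT.colorBuffer, infoRT.colorBuffer };
        Graphics.SetRenderTarget(colorBuffers, positionRT.depthBuffer);
        writeMaterial.SetPass(0);
        Graphics.DrawMeshNow(pointMesh, Matrix4x4.identity);
        RenderTexture.active = null;
    }
}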

Hi there, @cblarsen. Thanks for this insightful reply! So, what I have been doing is precisely to build quads on the fly in a geometry shader, using the position of each vertex of the mesh the shader is attached to as a basis. That way, each vertex ends up representing a point in the point cloud. So:

  1. you said that a geometry shader could maybe be used to do this efficiently, but while I am using a geometry shader, I still can’t see how to pass the new positions to the shader efficiently (i.e. updating the mesh vertices is super costly);

  2. now turning to your implementation using rendertextures: conceptually it sounds like a great idea, but I am confused about how exactly you are passing the positions of the ‘particles’ to the shader if not through the mesh vertices. Is the information on their positions somehow stored in the rendertextures instead of in the mesh vertices?

Would you be able to share a snippet of your implementation for the sake of illustration (especially the part about reading from the textures, if I understood it correctly)? Because what I am trying to do is exactly what you focused on in your last sentences: updating just a few points every once in a while.

Many thanks

I don’t own the rights to the code I wrote (my employer does), so sorry, I can’t show it, but I think I can explain.

You are correct that I do have to use mesh vertices. The point is that, if only some particle positions change, I can keep the information on the graphics card in rendertextures and only update those pixels in the rendertextures for which I have a change each frame.
If all particles change position every frame, then you have to use mesh vertices, or do the particle math in a compute shader (I know nothing about compute shaders in practice).
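
To sketch that partial-update idea (illustrative only; the helper, its parameters, and the packing follow the description above, not my actual code):

using UnityEngine;

// Hypothetical sketch: rebuild a small scratch mesh containing only the particles that
// changed, then render it into the persistent rendertextures with the same write material
// as the full pass, so only those pixels get overwritten.
public static class PartialPointUpdate
{
    public static void FillUpdateMesh(
        Mesh updateMesh,            // small reusable scratch mesh
        Vector3[] changedPositions, // new positions of the changed particles
        Vector2[] changedPixelUvs,  // their fixed pixel coordinates in the rendertextures
        Vector4[] changedInfo)      // colour/size packed the same way as in the full mesh
    {
        updateMesh.Clear();
        updateMesh.vertices = changedPositions;
        updateMesh.uv       = changedPixelUvs;
        updateMesh.tangents = changedInfo;

        // Point topology: one index per changed particle.
        var indices = new int[changedPositions.Length];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;
        updateMesh.SetIndices(indices, MeshTopology.Points, 0);

        // Rendering this mesh touches only the pixels addressed by changedPixelUvs;
        // every other particle keeps its previous value in the rendertextures.
    }
}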

Below I will give some more details about my RenderTexture algorithm, but I also wonder if part of your performance problem comes from your scripting. Do you, for example, use generic lists when building your vertex lists, or reallocate arrays every frame? If you do, that is probably going to hurt on top of the time it takes to send the vertices to the graphics card.
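
As an aside, avoiding those per-frame allocations looks roughly like this (a generic illustration, not code from my project):

using UnityEngine;

// Generic illustration: keep one Vector3 array around and overwrite it in place,
// instead of rebuilding a List<Vector3> or allocating a new array every frame.
public class PointCloudUpdater : MonoBehaviour
{
    public Mesh cloudMesh;
    Vector3[] positions;   // allocated once, reused every frame

    void Start()
    {
        positions = cloudMesh.vertices;   // one copy up front (the getter allocates)
        cloudMesh.MarkDynamic();          // hint to Unity that this mesh changes often
    }

    void Update()
    {
        for (int i = 0; i < positions.Length; i++)
            positions[i] += Vector3.up * Time.deltaTime;  // whatever your update is
        cloudMesh.vertices = positions;   // upload, with no extra managed allocations
    }
}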

Part of my algorithm in pseudocode:
Allocate a RenderTexture of type ARGBFloat, so each pixel can hold a position; it should have FilterMode.Point.
Allocate another RenderTexture of whatever type can hold your additionalInfo.

Then for every frame

mesh.vertices = vertexPositions; // this array holds particle positions as usual
mesh.tangents = additionalInfo; // Could be colors, scale, etc.
mesh.uv = pixelPositions; // The uvs say where in the renderTexture I want to write this info. For an ordinary shader, this information would have to go into mesh.vertices, but I am saving a coordinate here :)
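
For concreteness, one hypothetical way to compute those pixelPositions is below (the helper name and layout are made up). Note that the write shader further down outputs this coordinate directly as a clip-space position, so in practice it either needs a uv * 2 - 1 remap (minding the platform's y flip) or the stored values have to be in clip space already; the pseudocode glosses over that detail.

using UnityEngine;

// Hypothetical helper: give each particle a fixed pixel of its own in a
// texWidth x texHeight rendertexture, addressed at the pixel centre.
public static class PixelAddressing
{
    public static Vector2[] BuildPixelPositions(int particleCount, int texWidth, int texHeight)
    {
        var uvs = new Vector2[particleCount];
        for (int i = 0; i < particleCount; i++)
        {
            int x = i % texWidth;
            int y = i / texWidth;
            uvs[i] = new Vector2((x + 0.5f) / texWidth, (y + 0.5f) / texHeight);
        }
        return uvs;
    }
}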

Then my first shader looks something like

// Still pseudocode, haven't tested this

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
    float4 tan : TANGENT;
};

struct v2f
{
    float4 vertex : POSITION;
    float3 particlePos : TEXCOORD0;
    float4 extraInfo : TEXCOORD1;
};

v2f vert( appdata v )
{
    v2f o;
    o.vertex = float4( v.uv, 0, 1);  // Write all the info to this pixel position in the rendertexture
    o.particlePos = v.vertex.xyz;
    o.extraInfo = v.tan;
    return o;
}

struct perPixelOutput
{
    float4 position : COLOR;
    float4 extraInfo : COLOR1;
};

perPixelOutput frag( v2f i )
{
    perPixelOutput result;
    result.position = float4( o.particlePos, 1);
    result.extraInfo = i.extraInfo;
    return result;
}

Once I have rendered the mesh to these two rendertextures, the positions and any other info are now on the graphics card.
I can then use a previously created mesh that never changes to actually render.
That mesh is structured like this:
drawMesh.vertices = pixelPositions; // These are the same positions that went into the uv of the original mesh. However, these are constant, so there is no need to change them every frame
// Since I didn’t use a geometry shader, I actually had to duplicate each vertex 4 times and also add uvs like this
drawMesh.uv = quadCornerUvs; // a repeating series of (0,0), (0,1), (1,1), (1,0)
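
Purely for illustration, building such a static quad mesh could look roughly like this (the helper and its details are made up; the vertex shader still has to expand the quad from the corner uv):

using UnityEngine;

// Hypothetical sketch of the static draw mesh: four vertices per particle, all four
// carrying the same pixel position, plus corner uvs that tell the shader which way to expand.
public static class DrawMeshBuilder
{
    static readonly Vector2[] corners =
        { new Vector2(0, 0), new Vector2(0, 1), new Vector2(1, 1), new Vector2(1, 0) };

    public static Mesh Build(Vector2[] pixelPositions)
    {
        int n = pixelPositions.Length;
        var vertices = new Vector3[n * 4];
        var uvs      = new Vector2[n * 4];
        var indices  = new int[n * 6];   // two triangles per quad

        for (int i = 0; i < n; i++)
        {
            for (int c = 0; c < 4; c++)
            {
                vertices[i * 4 + c] = pixelPositions[i];   // same lookup coordinate for all corners
                uvs[i * 4 + c]      = corners[c];          // which corner of the quad this vertex is
            }
            int v = i * 4, t = i * 6;
            indices[t]     = v;     indices[t + 1] = v + 1; indices[t + 2] = v + 2;
            indices[t + 3] = v;     indices[t + 4] = v + 2; indices[t + 5] = v + 3;
        }

        var mesh = new Mesh();
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;  // allow more than 65k vertices
        mesh.vertices  = vertices;
        mesh.uv        = uvs;
        mesh.triangles = indices;
        // Real positions come from the rendertexture, so give generous bounds to avoid culling.
        mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 10000f);
        return mesh;
    }
}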

and the vertex shader for the final render goes something like

v2f vert( drawAppData v )
{
    v2f o;
    o.position = tex2Dlod( _MyFirstRenderTexture, float4( v.vertex.xy, 0, 0)); // Use the stored pixel position as the uv
    o.uv = v.uv; // This is only one corner of a quad
    o.extraInfo = tex2Dlod( _MySecondRenderTexture, float4( v.vertex.xy, 0, 0)); // tex2Dlod again, since we are in the vertex stage
    return o;
}

Hope that made it a little clearer

Ah, many thanks for such a detailed example. Very cool idea, and yes, it got much clearer! I think I got the concept and will try to implement it soon, before jumping into any broader questions. The only thing that caught my attention while reading and thinking through the design is this line in the first snippet of your pseudo-ish code:

result.position = float4( o.particlePos, 1);

I think it should actually be:

result.position = float4( i.position, 1);

Besides that, I will go ahead and play around with the whole thing before asking any further questions. Thanks again!

You were right that it was wrong, but it should actually be

result.position = float4( i.particlePos, 1);

Happy coding!