I don’t own the rights to the code I wrote (my employer does), so sorry, I can’t show it, but I think I can explain.
You are correct that I do have to use mesh vertices. The point is that if only some particle positions change, I can keep the information on the graphics card in RenderTextures and only update those pixels in the RenderTextures for which I have a change each frame.
If all particles change position per frame, then you have to use mesh vertices or do the particle math in a compute shader (I know nothing about compute shaders in practice)
Below I will give some more details about my RenderTexture algorithm, but I also wonder if part of your performance problem comes from your scripting. Do you, for example, use generic lists when building your vertex lists, or reallocate arrays every frame? If you do, that is probably going to hurt on top of the time to send the vertices to the graphics card.
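For comparison, a minimal sketch of what I mean by keeping the arrays alive between frames (the class and field names are just made up for illustration, not from my actual code):
using UnityEngine;

public class ParticleMeshUpdater : MonoBehaviour
{
    public int maxParticles = 10000;
    Mesh mesh;
    Vector3[] vertexPositions; // allocated once, reused every frame

    void Awake()
    {
        mesh = new Mesh();
        vertexPositions = new Vector3[maxParticles];
    }

    void Update()
    {
        // ... write the changed positions straight into vertexPositions ...
        mesh.vertices = vertexPositions; // no List<Vector3>, no ToArray(), no per-frame new[]
    }
}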
Part of my algorithm in pseudocode:
Allocate a RenderTexture of type ARGBFloat, so each pixel can hold a position
It should have FilterMode.Point
and another RenderTexture of whatever format can hold the additionalInfo
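In C# that allocation could look roughly like this (texture size and variable names are just examples):
int texSize = 256; // enough pixels to give every particle its own pixel
RenderTexture positionRT = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
positionRT.filterMode = FilterMode.Point;
positionRT.Create();

RenderTexture infoRT = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBHalf); // or whatever format fits the extra info
infoRT.filterMode = FilterMode.Point;
infoRT.Create();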
Then for every frame
mesh.vertices = vertexPositions; // this array holds particle positions as usual
mesh.tangents = additionalInfo; // Could be colors, scale, etc.
mesh.uv = pixelPositions; // The uvs say where in the RenderTexture I want to write this info. For an ordinary shader, this information would have to go into mesh.vertices, but I am saving a coordinate here
Then my first shader looks something like
// Still pseudocode, haven't tested this
struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
    float4 tan : TANGENT;
};
struct v2f
{
    float4 vertex : SV_POSITION;
    float3 particlePos : TEXCOORD0;
    float4 extraInfo : TEXCOORD1;
};
v2f vert( appdata v )
{
    v2f o;
    // Write all the info to this pixel position in the rendertexture:
    // remap the [0,1] uv to [-1,1] clip space so it lands exactly on that pixel
    // (depending on the platform you may also need to flip y)
    o.vertex = float4( v.uv * 2 - 1, 0, 1);
    o.particlePos = v.vertex.xyz;
    o.extraInfo = v.tan;
    return o;
}
struct perPixelOutput
{
    float4 position : SV_Target0;
    float4 extraInfo : SV_Target1;
};
perPixelOutput frag( v2f i )
{
    perPixelOutput result;
    result.position = float4( i.particlePos, 1);
    result.extraInfo = i.extraInfo;
    return result;
}
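Rendering the update mesh into both rendertextures at once is a multiple-render-target draw. On the C# side that could look roughly like this (updateMaterial would be a material made from the shader above, positionRT and infoRT the rendertextures from before):
Graphics.SetRenderTarget(
    new RenderBuffer[] { positionRT.colorBuffer, infoRT.colorBuffer },
    positionRT.depthBuffer);
// note: no clear here, so pixels that aren't written this frame keep their old values
updateMaterial.SetPass(0);
Graphics.DrawMeshNow(mesh, Matrix4x4.identity);
Graphics.SetRenderTarget(null);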
Once I have rendered the mesh to these two rendertextures, the positions and any other info are now on the graphics card.
I can then use a previously created mesh that never changes to actually render.
That mesh is structured like this:
drawMesh.vertices = pixelPositions; // The same positions that went into the uv of the original mesh (as Vector3s). These are constant, so no need to change them every frame
// Since I didn't use a geometry shader, I actually had to duplicate each vertex 4 times and also add uvs like this:
drawMesh.uv = quadCornerUvs; // a repeating series of (0,0), (0,1), (1,1), (1,0)
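To make the duplication concrete, building that constant drawMesh could look roughly like this (particleCount is illustrative; pixelPositions is the same Vector2 array that went into mesh.uv above):
Vector3[] verts = new Vector3[particleCount * 4];
Vector2[] uvs = new Vector2[particleCount * 4];
int[] indices = new int[particleCount * 6];
Vector2[] corners = { new Vector2(0, 0), new Vector2(0, 1), new Vector2(1, 1), new Vector2(1, 0) };
for (int p = 0; p < particleCount; p++)
{
    for (int c = 0; c < 4; c++)
    {
        verts[p * 4 + c] = pixelPositions[p]; // same pixel coordinate for all four corners
        uvs[p * 4 + c] = corners[c];          // which corner of the quad this vertex is
    }
    // two triangles per quad
    indices[p * 6 + 0] = p * 4 + 0;
    indices[p * 6 + 1] = p * 4 + 1;
    indices[p * 6 + 2] = p * 4 + 2;
    indices[p * 6 + 3] = p * 4 + 0;
    indices[p * 6 + 4] = p * 4 + 2;
    indices[p * 6 + 5] = p * 4 + 3;
}
drawMesh.vertices = verts;
drawMesh.uv = uvs;
drawMesh.SetIndices(indices, MeshTopology.Triangles, 0);
drawMesh.bounds = new Bounds(Vector3.zero, Vector3.one * 1000f); // real positions come from the texture, so give generous bounds to avoid culling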
and the vertex shader for the final render goes something like
v2f vert( drawAppData v )
{
    v2f o;
    // v.vertex.xy holds the pixel position, so use it as the uv to fetch the particle data
    float4 particlePos = tex2Dlod( _MyFirstRenderTexture, float4( v.vertex.xy, 0, 0));
    // expand this corner of the quad around the particle and transform to clip space
    // (simple offset assuming positions are stored in world space; _ParticleSize is a size property you would expose)
    float3 corner = particlePos.xyz + float3( v.uv - 0.5, 0) * _ParticleSize;
    o.position = mul( UNITY_MATRIX_VP, float4( corner, 1));
    o.uv = v.uv; // This is only one corner of a quad
    o.extraInfo = tex2Dlod( _MySecondRenderTexture, float4( v.vertex.xy, 0, 0)); // tex2Dlod, since this is a vertex shader
    return o;
}
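The final draw on the C# side is then just that constant mesh with the two rendertextures bound to a material (drawMaterial here is illustrative), something like:
drawMaterial.SetTexture("_MyFirstRenderTexture", positionRT);
drawMaterial.SetTexture("_MySecondRenderTexture", infoRT);
Graphics.DrawMesh(drawMesh, Matrix4x4.identity, drawMaterial, 0);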
Hope that made it a little clearer