Am I correct in assuming that the mesh recalculations immediately apply to the updated part of the collider? There’s no mention of this in the document. If so, it would also be nice to see these optimizations ported to the Terrain collider.
Though with the:
SetFoo(List<T>, int start, int length)
linear methods it wouldn’t be enough. If possible, it’d be nice if the methods were extended to follow the SetHeights(int xBase, int yBase, float[,] heights) functionality, where only the changed patch would be updated.
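To illustrate the idea, a hypothetical rectangular-patch overload might look like the sketch below. None of this exists in Unity today; the names and signature are my own invention, modeled on TerrainData.SetHeights:

```csharp
using UnityEngine;

// Hypothetical API sketch — not a real Unity interface.
// The point: update only a 2D patch of a grid-shaped mesh,
// the way TerrainData.SetHeights(xBase, yBase, heights) does for terrain.
public interface IPatchedMeshData
{
    // Copy a (width x height) block of positions into the vertex buffer,
    // starting at grid coordinate (xBase, yBase), leaving the rest untouched.
    void SetVertexPatch(int xBase, int yBase, Vector3[,] positions);
}
```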
I’ve been doing some procedural animated meshes, and the need to copy/set the entire mesh data every frame kills performance. From tests with my scenario, it’s completely memory-access (copy) bound. Having such “low-level” access would be “game changing”, pun intended.
Not sure what the use case for generating a mesh from an SDF on the scripting side would be, as an SDF is usually generated from geometry you already had. Generally you simply render the SDF in a shader in a way that makes it look like a geometric object, without having to process any significant geometry. The only scripting-side use I could see is generating a collider from it at runtime, but that’s a complicated feature to develop for very niche utility. What would be cool is if PhysX, or the DOTS-based physics Unity is working on, could use SDFs as collider data directly for physics operations…
I want it to be easy to modify meshes on the GPU with compute shaders, without unnecessary memory transfer to the CPU. It’s critical that this is possible.
I haven’t looked into it, but I think this might already be possible with Graphics.DrawProcedural. If not, that definitely should be in there.
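For what it’s worth, the DrawProcedural path looks roughly like this. A sketch, assuming a material whose custom vertex shader indexes a `_Positions` StructuredBuffer by SV_VertexID (the buffer/property names are placeholders of mine):

```csharp
using UnityEngine;

public class BufferDraw : MonoBehaviour
{
    public Material material; // custom vertex shader reading _Positions by SV_VertexID
    const int vertexCount = 3 * 1024;
    ComputeBuffer positions;

    void Start()
    {
        // Vertex data lives only on the GPU; a compute shader can write it in place.
        positions = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        material.SetBuffer("_Positions", positions);
    }

    void Update()
    {
        // No Mesh object and no CPU-side copy: the draw call just reads the buffer.
        Graphics.DrawProcedural(material, new Bounds(Vector3.zero, Vector3.one * 100f),
            MeshTopology.Triangles, vertexCount);
    }

    void OnDestroy() => positions.Release();
}
```

The catch, as discussed further down the thread, is that built-in and SRP shaders don’t know how to read such a buffer, so this only works with hand-written shaders.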
Graphics: Ability to specify custom Mesh vertex data formats, and set vertex buffer data from NativeArrays. See VertexAttributeFormat, VertexAttributeDescriptor, Mesh.SetVertexBufferParams, Mesh.SetVertexBufferData.
Graphics: Ability to specify Mesh submesh information directly, and set index buffer data from NativeArrays. See SubMeshDescriptor, Mesh.SetIndexBufferParams, Mesh.SetIndexBufferData.
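Putting those changelog entries together, a minimal sketch of the new flow (a two-triangle quad; the vertex struct layout and class name are my own, but the Mesh calls are the ones listed above):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public class QuadBuilder : MonoBehaviour
{
    struct Vertex
    {
        public Vector3 pos;    // Position: Float32 x3 (12 bytes)
        public Color32 color;  // Color: UNorm8 x4 (4 bytes) -> 16-byte stride
    }

    void Start()
    {
        var mesh = new Mesh();

        // Declare the custom vertex layout up front...
        mesh.SetVertexBufferParams(4,
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
            new VertexAttributeDescriptor(VertexAttribute.Color, VertexAttributeFormat.UNorm8, 4));

        // ...then upload vertex data straight from a NativeArray.
        var verts = new NativeArray<Vertex>(4, Allocator.Temp);
        verts[0] = new Vertex { pos = new Vector3(0, 0, 0), color = new Color32(255, 0, 0, 255) };
        verts[1] = new Vertex { pos = new Vector3(1, 0, 0), color = new Color32(0, 255, 0, 255) };
        verts[2] = new Vertex { pos = new Vector3(0, 1, 0), color = new Color32(0, 0, 255, 255) };
        verts[3] = new Vertex { pos = new Vector3(1, 1, 0), color = new Color32(255, 255, 255, 255) };
        mesh.SetVertexBufferData(verts, 0, 0, 4);
        verts.Dispose();

        // Same pattern for the index buffer and submesh info.
        mesh.SetIndexBufferParams(6, IndexFormat.UInt16);
        var indices = new NativeArray<ushort>(6, Allocator.Temp);
        indices.CopyFrom(new ushort[] { 0, 1, 2, 2, 1, 3 });
        mesh.SetIndexBufferData(indices, 0, 0, 6);
        indices.Dispose();

        mesh.SetSubMesh(0, new SubMeshDescriptor(0, 6, MeshTopology.Triangles));
        mesh.RecalculateBounds();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```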
I would like to have a static custom data chunk in a mesh, and then be able to read/write it from a vertex function of a shader. These data chunks should be independent of the current vertex or UV index, as global info.
The most important thing for me is performance. I am not interested in naming conventions or high-level apis with hidden costs. Just give us low-level stuff to let us write better games.
Ideally, we can write directly to vertex streams in uncached/write combined/video memory/whatever using an appropriate platform format from another thread (I don’t care if we can or have to use DOTS or whatever).
Same for indexes
Any existing APIs that could be made thread-safe would be nice.
Bonus points if there were a way to blit vertex/index streams to readable memory asynchronously on write-only meshes, for those rare cases where you need access to vertex information from one mesh (but not the entire FBX) and can afford to wait for it. For example: your artists have 300 meshes within an FBX, and occasionally you want vertex information from one of them for some UI work, but you don’t know which ahead of time and your target platform doesn’t always support compute shaders. We could do the translation from platform formats like F16 to CPU-friendly formats ourselves and deinterleave the data to pick what we want.
This is possible with DrawProceduralIndirect, which is how I’m handling it currently, but you have to use a custom vertex shader to read the data back out, which means you can’t combine it with the HDRP shaders or shader graph. You used to be able to with custom master nodes, but you can’t any more. The ability to assign a buffer to a mesh or otherwise tell the standard shaders “this is where your vertex data is” without sending the data back to the GPU would be a godsend.
The good news - I was able to modify the HDRP shaders to optionally take a compute buffer for vertex data without breaking anything else (as far as I could tell).
The bad news - shader graph is totally independent of that and has some crazy C# shader string builder that I’ll need to figure out.
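For anyone else going down this road, the DrawProceduralIndirect approach mentioned above looks roughly like this. A sketch, assuming a material whose custom vertex shader reads a `_Vertices` buffer by SV_VertexID (buffer and property names are placeholders of mine):

```csharp
using UnityEngine;

public class IndirectProceduralDraw : MonoBehaviour
{
    public Material material;   // custom vertex shader reading _Vertices by SV_VertexID
    const int vertexCount = 3 * 1024;
    ComputeBuffer vertexBuffer; // typically filled in place by a compute shader
    ComputeBuffer argsBuffer;

    void Start()
    {
        vertexBuffer = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        argsBuffer = new ComputeBuffer(1, sizeof(uint) * 4, ComputeBufferType.IndirectArguments);
        // Args: vertex count, instance count, start vertex, start instance.
        // A compute shader could also write the count here, keeping it GPU-side.
        argsBuffer.SetData(new uint[] { vertexCount, 1, 0, 0 });
        material.SetBuffer("_Vertices", vertexBuffer);
    }

    void Update()
    {
        Graphics.DrawProceduralIndirect(material, new Bounds(Vector3.zero, Vector3.one * 100f),
            MeshTopology.Triangles, argsBuffer);
    }

    void OnDestroy()
    {
        vertexBuffer.Release();
        argsBuffer.Release();
    }
}
```

The limitation stands as described: because the data never goes through a Mesh, only a shader written to read the buffer can draw it, which is what breaks HDRP/Shader Graph compatibility.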
I’ve been testing with the new mesh slice set/get and the results are not good. (2019.3.0a5)
My case is very simple: every frame I generate the “skeleton” of the mesh (variable number of vertices) and expand it in a geometry shader.
Using the Mesh class, it takes ~16ms per frame to set up the data. Skipping the Mesh class entirely and passing the same data through compute buffers with DrawProcedural takes ~0.7ms per frame.
I have no idea how much copying, reallocating, and checking there is inside the Mesh class, but in its current form it is far from the performance it should have.
Looking forward to SetIndexBufferData & SetVertexBufferData.
Is there no version of SetTriangles that takes a NativeArray of ushort or int?
I’m using filter.sharedMesh.SetIndices(Triangles, MeshTopology.Triangles, 0); to work around this for now, but it seems like an oversight.
I modified the Shadergraph package to inject my custom vertex code into the generated shaders (right now, it’s just a hardcoded triangle for testing). It works in the preview windows, but the main preview doesn’t reflect it, and neither does using the shader in a scene. Does the master node get its vertex information from somewhere else? What is going on here?
Oh crap, I know EXACTLY what’s going on here. That “position” input is just completely replacing anything I do with vertex shaders. I’m a dummy.
Double Edit:
It should be possible to do all of this with just a custom function node if I could get
SV_VertexID into it. I don’t think that’s possible right now, but I’m going to keep digging through the code. Might be a fairly easy request for the shadergraph team.
I figured it out - Shader Graph does use the HDRP shaders, but ONLY for the actual output, not the preview nodes. I didn’t actually have to modify the Shader Graph package at all. I just had to turn on the define I used to turn the procedural vertex piece on. I created a dummy custom function that didn’t do anything except sneak a #define in there.
This API is very useful for me. In my game I serialize flat mesh data (Vector2[] vertices and ushort[] triangles) to a BlobAsset, then at runtime cast BlobReference → NativeArray and set it on the mesh. Is it already possible to set the vertex dimension to 2? Right now I always get this error:
ArgumentException: SetVertices with NativeArray should use struct type that is 12 bytes (3x float) in size
Will it be possible in the future? It would be wrong to serialize flat mesh vertices as Vector3 with z always 0.
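As a stopgap while the 12-byte requirement stands, one option is a one-off native→native widen from Vector2 to Vector3 just before the SetVertices call. A sketch (the helper name is mine); it costs one extra copy but avoids managed allocations:

```csharp
using Unity.Collections;
using UnityEngine;

public static class MeshUtil2D
{
    // Widens 2D positions to 12-byte Vector3s so Mesh.SetVertices accepts them.
    public static void SetVertices2D(Mesh mesh, NativeArray<Vector2> verts2D)
    {
        var verts3D = new NativeArray<Vector3>(verts2D.Length, Allocator.Temp,
            NativeArrayOptions.UninitializedMemory);
        for (int i = 0; i < verts2D.Length; i++)
            verts3D[i] = new Vector3(verts2D[i].x, verts2D[i].y, 0f); // z always 0
        mesh.SetVertices(verts3D);
        verts3D.Dispose();
    }
}
```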
As soon as we get SetFoo, can we please have GetFoo(NativeArray)? Including GetTriangles(NativeArray) and GetColors(NativeArray), pretty please?
Having SetTriangles(NativeArray) and SetColors(NativeArray) would also be awesome.
It’s a bit hard to count bytes and pointers.
+1 for the above mention of unmanaged getters - GetXXX(NativeArray).
The copy-free internal data getters would be ideal (mentioned as “under consideration” in the doc), but if it turns out they are not feasible, a way to still access data without going through managed memory (i.e. just native->native copy) would be highly appreciated.