I’m looking for a way to access the per-vertex bone positions and weights in either the vertex shader or, ideally, directly in the fragment shader.
Long story short, I want to compute a spheroid normal from the vertex position to its associated bone(s) as center.
How can I get the per-vertex bone info in the shader? I suppose the data can be accessed through some magic variable name.
If the above is not possible, what would actually be the best way to get the data?
This is not data that’s exposed to Unity shaders by default. Skinned meshes are pre-transformed on either the CPU or GPU before the shader gets access to them, so the bone weight and bone position data is stripped from the meshes before they’re ever seen.
The “best” way to do this is to implement mesh skinning on your own from scratch… which also means not using Unity’s built-in Skinned Mesh Renderer component or animation systems. But at least then you’ll have access to all the skinning data you’re looking for.
Yeah, not exactly simple.
The next option would be to copy the bone indices and weights into an unused UV set on your mesh via C#, and then iterate over all of the bone game objects and copy their transforms into an array to pass to your material at runtime via another C# script. If you only care about the bone with the highest weight, then you only need that one bone’s index. But you’ll still need to manually store it in some unused component somewhere.
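For illustration, a minimal C# sketch of that approach might look like this (component and shader property names are placeholders, not anything Unity requires):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: bake bone indices/weights into spare UV channels once, then push
// the current bone positions to the material every frame.
public class BakeBoneDataIntoUVs : MonoBehaviour
{
    SkinnedMeshRenderer _smr;
    Vector4[] _bonePositions;

    void Start()
    {
        _smr = GetComponent<SkinnedMeshRenderer>();
        Mesh mesh = _smr.sharedMesh; // consider cloning if you don't want to touch the shared asset

        // Copy the four bone indices and weights per vertex into UV2/UV3.
        BoneWeight[] weights = mesh.boneWeights;
        var indices = new List<Vector4>(weights.Length);
        var blend = new List<Vector4>(weights.Length);
        foreach (var w in weights)
        {
            indices.Add(new Vector4(w.boneIndex0, w.boneIndex1, w.boneIndex2, w.boneIndex3));
            blend.Add(new Vector4(w.weight0, w.weight1, w.weight2, w.weight3));
        }
        mesh.SetUVs(2, indices); // TEXCOORD2 in the shader
        mesh.SetUVs(3, blend);   // TEXCOORD3 in the shader

        _bonePositions = new Vector4[_smr.bones.Length];
    }

    void Update()
    {
        // Copy the current world-space bone positions into an array the shader can index.
        Transform[] bones = _smr.bones;
        for (int i = 0; i < bones.Length; i++)
            _bonePositions[i] = bones[i].position;
        _smr.sharedMaterial.SetVectorArray("_BonePositions", _bonePositions);
    }
}
```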
Lastly, you will need to do this in the vertex shader and not the fragment shader. The reason is fairly straightforward. The fragment shader gets an interpolated value from the vertex shader, and if you try to pass the singular bone index from the vertex to the fragment you’ll get an interpolated “index”: the index gets treated as a floating point value, which makes it entirely useless as an index once it reaches the fragment shader. You might get a value of “20” in the fragment shader, but that might be because it was 20 on all 3 vertices of the triangle, or because all 3 vertices have completely different indices that happen to interpolate to 20 at the current pixel. You could use nointerpolation or an integer interpolator, which would keep the value constant across the entire triangle, but then you have a new issue: you won’t really know beforehand which vertex’s value gets used across the triangle (and it can change on different platforms), and there’s no way to smoothly blend between different normals.
There are ways around this, but they’re complicated, and in this case just passing the one normal value is likely more than good enough for your use case.
As for getting the interpolated position: I was planning on doing this in the vertex shader and transmitting only the interpolated position, so I can use a simple varying/interpolator and get a per-pixel adjusted position in the fragment shader.
You recommend using Mesh.SetUVs(). Is there anything, with respect to lightmaps/GI etc., I should take into account when selecting the UV slot?
@bgolus: Thanks for your help. I actually managed to implement this today (see attachment). It’s not exactly optimized, nor performant for that matter, but the basic idea works.
Obviously, I could optimize the computational part using Burst and Unity.Mathematics, but I guess uploading vertex data every frame is not recommended.
→ What would a more optimal solution look like? Perform the calculation in a compute shader and write directly into the vertex buffer? Upload the bone transforms/positions into an SSBO (shader storage buffer) and access it from the vertex shader?
How would I do either of these?
The more optimal solution would be to encode the bone indices & weights into the vertex data, and update the bone transforms in a vector array or structured buffer you assign onto the material every frame.
You could also have the indices and weights in a structured buffer you read from using the vertex ID instead of modifying the mesh at all. That extra indirection might not be very fast on mobile hardware, but should be fine on desktop & consoles.
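A minimal sketch of that buffer-based variant, assuming the vertex shader declares a matching StructuredBuffer and indexes it with the SV_VertexID semantic (all names illustrative):

```csharp
using UnityEngine;

// Sketch: per-vertex bone indices/weights live in a structured buffer that the
// vertex shader indexes by vertex ID, so the mesh itself is never modified.
struct VertexBoneData
{
    public Vector4 indices; // boneIndex0..3, stored as floats
    public Vector4 weights; // weight0..3
}

public class BindBoneDataBuffer : MonoBehaviour
{
    ComputeBuffer _vertexBones;

    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        BoneWeight[] bw = smr.sharedMesh.boneWeights;

        var data = new VertexBoneData[bw.Length];
        for (int i = 0; i < bw.Length; i++)
        {
            data[i].indices = new Vector4(bw[i].boneIndex0, bw[i].boneIndex1, bw[i].boneIndex2, bw[i].boneIndex3);
            data[i].weights = new Vector4(bw[i].weight0, bw[i].weight1, bw[i].weight2, bw[i].weight3);
        }

        _vertexBones = new ComputeBuffer(bw.Length, 32); // two float4s = 32 bytes
        _vertexBones.SetData(data);
        // Shader side: StructuredBuffer<VertexBoneData> _VertexBones;
        // read as _VertexBones[vertexID] with uint vertexID : SV_VertexID.
        smr.sharedMaterial.SetBuffer("_VertexBones", _vertexBones);
    }

    void OnDestroy() => _vertexBones?.Dispose();
}
```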
“The more optimal solution would be to encode the bone indices & weights into the vertex data, and update the bone transforms in a vector array or structured buffer you assign onto the material every frame.”
Good, this is exactly what I started doing.
In the long run, I might just ask the artists to add the bone indices/weights as UVs directly, instead of patching them in, but for now this seems fine.
Question about StructuredBuffers: do I pass them to the material as a GraphicsBuffer or a ComputeBuffer?
Generally you want to use a ComputeBuffer. A GraphicsBuffer is mainly for letting you access built-in vertex data arrays.
Some general thoughts:
You probably want to use a material property block to set the buffer on the renderer rather than modifying the materials directly. Create a MaterialPropertyBlock in Start() and reuse it in Update(). https://docs.unity3d.com/ScriptReference/Renderer.SetPropertyBlock.html
Don’t create a new buffer every Update(). You only need to create one per renderer component, once in Start(), and then update the data inside it. You also only really need to assign it once, apart from issues where buffers can become “detached” when alt-tabbing or changing resolutions. Otherwise, just updating the data in the buffer with SetData() will update what the shader sees.
Calling buffer.Dispose() means you’re destroying the buffer and flushing it from the GPU. So yeah, it’s missing, because you’re nuking it.
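Putting those three points together, a minimal sketch of the pattern, assuming a shader that reads a StructuredBuffer named _BonePositions (all names illustrative):

```csharp
using UnityEngine;

// Sketch: one buffer and one property block, both created once in Start(),
// with only the buffer contents updated per frame.
public class BonePositionUploader : MonoBehaviour
{
    SkinnedMeshRenderer _smr;
    MaterialPropertyBlock _mpb;
    ComputeBuffer _buffer;
    Vector4[] _bonePositions;

    void Start()
    {
        _smr = GetComponent<SkinnedMeshRenderer>();
        _mpb = new MaterialPropertyBlock();
        _bonePositions = new Vector4[_smr.bones.Length];
        _buffer = new ComputeBuffer(_bonePositions.Length, 16); // float4 stride
        _mpb.SetBuffer("_BonePositions", _buffer);               // assign once
        _smr.SetPropertyBlock(_mpb);
    }

    void Update()
    {
        Transform[] bones = _smr.bones;
        for (int i = 0; i < bones.Length; i++)
            _bonePositions[i] = bones[i].position;
        _buffer.SetData(_bonePositions); // updating the data updates the shader
    }

    void OnDestroy() => _buffer.Dispose(); // only when done with it for good
}
```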
Going further, I shifted the responsibilities so that a single MonoBehaviour per SkinnedMeshRenderer handles the data setting. That makes the whole code simpler.
So, question time: I updated UpdateBonePositionsJobController.cs to use the job system because, I figured, copying the data could be done in parallel in a Burst-compiled job.
(Also this would be the first step to move to ECS if DOTS were working correctly in 2021.1, but let’s leave this issue aside for now.)
The attached code compiles, but I get the following runtime error:
InvalidOperationException: UpdateBonePositionsJob.transformAccessArray.m_TransformArray uses unsafe Pointers which is not allowed. Unsafe Pointers can lead to crashes and no safety against race conditions can be provided.
If you really need to use unsafe pointers, you can disable this check using [NativeDisableUnsafePtrRestriction].
Unity.Jobs.LowLevel.Unsafe.JobsUtility.ScheduleParallelForTransform (Unity.Jobs.LowLevel.Unsafe.JobsUtility+JobScheduleParameters& parameters, System.IntPtr transfromAccesssArray) <0x16c7a3010 + 0x00072> in <dc812d4b5c74494fae1251d76d294592>:0
UnityEngine.Jobs.IJobParallelForTransformExtensions.Schedule[T] (T jobData, UnityEngine.Jobs.TransformAccessArray transforms, Unity.Jobs.JobHandle dependsOn) (at /Users/bokken/buildslave/unity/build/Runtime/Jobs/Managed/IJobParallelForTransform.cs:86)
UpdateBonePositionsJobController.Update () (at Assets/Scripts/Jobs.v1/UpdateBonePositionsJobController.cs:68)
An IJobParallelForTransform can’t hold a TransformAccessArray as a field, it seems. The attached version now has no runtime errors.
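For reference, a minimal sketch of what the fixed setup might look like, with the TransformAccessArray passed to Schedule() instead of being stored on the job (names are illustrative, not the actual attachment):

```csharp
using Unity.Burst;
using Unity.Collections;
using UnityEngine;
using UnityEngine.Jobs;

// Sketch: the job only holds the output NativeArray; the TransformAccessArray
// is handed to Schedule() as a separate argument.
[BurstCompile]
struct UpdateBonePositionsJob : IJobParallelForTransform
{
    public NativeArray<Vector4> bonePositions;

    public void Execute(int index, TransformAccess transform)
    {
        bonePositions[index] = transform.position; // world-space bone position
    }
}

public class UpdateBonePositionsJobController : MonoBehaviour
{
    TransformAccessArray _bones;
    NativeArray<Vector4> _bonePositions;
    ComputeBuffer _buffer;

    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        _bones = new TransformAccessArray(smr.bones);
        _bonePositions = new NativeArray<Vector4>(smr.bones.Length, Allocator.Persistent);
        _buffer = new ComputeBuffer(smr.bones.Length, 16); // float4 stride
    }

    void Update()
    {
        var job = new UpdateBonePositionsJob { bonePositions = _bonePositions };
        job.Schedule(_bones).Complete(); // TransformAccessArray goes here, not on the job
        _buffer.SetData(_bonePositions); // push the results to the GPU
    }

    void OnDestroy()
    {
        _bones.Dispose();
        _bonePositions.Dispose();
        _buffer.Dispose();
    }
}
```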
Questions, though:
TransformAccessArray: do I need to re-set the transforms each frame (in Update()) before launching the job, or does it hold references to the actual transforms (like a typical C# list of references), i.e. once the animation system has updated them, the job sees the current data?
The NativeArray (_bonePositions in my case) that I give the job each frame: is it just a reference/pointer to the same data, or do I need to copy it back / set it directly from the job data before calling ComputeBuffer.SetData()?
Additional questions about GraphicsBuffers and ComputeBuffers
From the docs, I take it I can set named GraphicsBuffers per MaterialPropertyBlock. Does that mean I could set fixed vertex attributes (i.e. bone indices and weights) through a GraphicsBuffer attached to a MaterialPropertyBlock, which in turn can be set per SkinnedMeshRenderer, meaning I would not have to modify the mesh?
Would this idea work, or if not, what would work?
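For what it’s worth, a hypothetical sketch of the idea, assuming MaterialPropertyBlock.SetBuffer() accepts a GraphicsBuffer (which the docs suggest; names illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: fixed per-vertex data in a structured GraphicsBuffer,
// bound per renderer via a MaterialPropertyBlock, leaving the mesh untouched.
public class PerRendererBoneBuffer : MonoBehaviour
{
    GraphicsBuffer _vertexBones;

    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        int count = smr.sharedMesh.vertexCount;

        _vertexBones = new GraphicsBuffer(GraphicsBuffer.Target.Structured, count, 32);
        // ... fill with per-vertex bone indices/weights via SetData() ...

        var mpb = new MaterialPropertyBlock();
        mpb.SetBuffer("_VertexBones", _vertexBones); // per-renderer binding
        smr.SetPropertyBlock(mpb);
    }

    void OnDestroy() => _vertexBones.Dispose();
}
```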
Additional questions about MaterialPropertyBlock.SetConstantBuffer() and MaterialPropertyBlock.SetBuffer()
What’s the difference? What kinds of parameters are set differently? What does each function entail in the engine code?
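To make the question concrete, here’s how I currently understand the two calls would be used, going by the scripting docs (buffer sizes and names are made up):

```csharp
using UnityEngine;

// My current understanding: SetBuffer() binds a buffer the shader reads as a
// StructuredBuffer<T> / ByteAddressBuffer, while SetConstantBuffer() binds a
// whole HLSL cbuffer block (the buffer must be created with
// ComputeBufferType.Constant, and offset/size are in bytes).
public class BufferBindingContrast : MonoBehaviour
{
    ComputeBuffer _structured;
    ComputeBuffer _constants;

    void Start()
    {
        var mpb = new MaterialPropertyBlock();

        // Shader side: StructuredBuffer<float4> _BonePositions;
        _structured = new ComputeBuffer(64, 16);
        mpb.SetBuffer("_BonePositions", _structured);

        // Shader side: cbuffer BoneConstants { float4 _Bones[64]; };
        _constants = new ComputeBuffer(64, 16, ComputeBufferType.Constant);
        mpb.SetConstantBuffer("BoneConstants", _constants, 0, 64 * 16);

        GetComponent<Renderer>().SetPropertyBlock(mpb);
    }

    void OnDestroy()
    {
        _structured.Dispose();
        _constants.Dispose();
    }
}
```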
Thanks for the info - this is working pretty well for me. There are some small issues, but they may be blending discrepancies. This is the per-bone matrix I’m sending to Shader Graph for rotating stored deformation vectors: