Lots of tiny kernel invocations in Burst with a lot of routing setup/teardown
Rendering issues:
Skin matrices are not shared between unique skinned meshes using the same rig
Skin matrices are uploaded regardless of whether the skinned mesh is culled
Skin matrices are uploaded via memcpy on the main thread rather than being written to the compute buffer in parallel, the way instanced properties are
The skinning compute shader may have cache coherency issues
Has anybody ever tried storing bone positions/rotations in textures instead of vertex positions? You would have to do the bone/skinning calculations on the GPU, I think, and I assume that extra work would lower the total number of characters you could render. However, it seems like it would allow far more animations per character than storing vertex positions in the texture. I remember reading a blog post or paper about it but have never seen it done in practice. I was eventually going to try it myself, but right now I lack the knowledge to make it work (mostly skinning, compute shaders, etc.). Still, I'm curious how well it would work.
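Just to make the idea concrete, here's a rough CPU-side sketch in Python of the math I imagine the vertex shader would do, assuming the texture stores one rotation quaternion plus one translation per bone per frame. The layout and names are my own assumptions, not from any real implementation, and I'm glossing over the inverse bind pose by pretending the bind-pose offsets are precomputed:

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (x, y, z, w).
    u, w = np.asarray(q[:3]), q[3]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def skin_vertex(bone_tex, frame, influences):
    # influences: list of (bone_index, weight, bind_offset), where
    # bind_offset is the vertex position expressed in the bone's
    # bind-pose space (precomputed, as in normal linear-blend skinning).
    out = np.zeros(3)
    for bone, weight, bind_offset in influences:
        rot = bone_tex[frame, 2 * bone]           # quaternion texel (x, y, z, w)
        pos = bone_tex[frame, 2 * bone + 1][:3]   # translation texel
        out += weight * (quat_rotate(rot, bind_offset) + pos)
    return out

# Toy "texture": 1 frame, 1 bone at the origin, rotated 90 degrees around Z.
bone_tex = np.zeros((1, 2, 4))
bone_tex[0, 0] = [0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]  # rotation
bone_tex[0, 1] = [0.0, 0.0, 0.0, 0.0]                              # translation

print(skin_vertex(bone_tex, 0, [(0, 1.0, np.array([1.0, 0.0, 0.0]))]))
# -> roughly [0, 1, 0]: the vertex swings around Z with the bone.
```

The appeal is that per frame you only fetch a couple of texels per influencing bone, instead of one texel per vertex.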
@Bivens32 I remember seeing a discussion about it a while ago on the main DOTS forum.
And there was also a git repo.
I'm not sure, but wasn't it even made by Joachim himself?
I actually used Joachim’s solution in one of my first game jams using DOTS, back in Entities 0.1.0.
It is not immediately obvious because it was meant to be an easter egg, but the sheep kick their legs occasionally. That solution scaled a lot better than what Unity has now. Unfortunately, that repo hasn't been updated for the latest hybrid renderer. It probably wouldn't take very long to update, though, since the latest hybrid renderer removed the need for many of the repo's hacks, which are currently broken.
Counting sheep before sleep?
I noticed two that kicked their legs.
So why is the kicking so rare?
Why couldn't they all kick their legs?
By design, not because of technical issues, as I understand it?
I just looked it up on GitHub. I think the one Joachim posted just stores the xyz position of each vertex inside the texture. Then at runtime it reads the positions from the texture and assigns them to the vertices in the shader. It should be the same technique as the videos you posted with the skeletons. What I'm talking about is storing only the bone positions/rotations inside the texture instead. This would save a ton of memory at the cost of more computation inside the shader for all the local/world transformations, etc. It would still have the same limitations as the videos you posted as far as blending animations goes, but you could probably easily store something like 100 animations inside a single 2k texture.
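Quick back-of-envelope math on that claim, with numbers I'm just assuming (50 bones, 30 fps, 2-second clips, two RGBA texels per bone per frame):

```python
# All numbers here are assumptions for illustration, not measurements.
bones = 50
texels_per_bone = 2                          # rotation quaternion + translation
texels_per_frame = bones * texels_per_bone   # 100 texels per frame
frames_per_clip = 30 * 2                     # 30 fps, 2-second clips

frames_per_row = 2048 // texels_per_frame    # 20 frames fit in one 2048-wide row
rows_per_clip = -(-frames_per_clip // frames_per_row)  # ceil -> 3 rows per clip
clips_in_2k_texture = 2048 // rows_per_clip
print(clips_in_2k_texture)  # -> 682, so ~100 clips is comfortably within budget
```

Even with a fatter format (say, full 3x4 matrices at three texels per bone) you'd still land in the hundreds of clips.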
Because I was too tired to make a proper looping animation. So I just copied the rest pose keyframe and pasted it like 30 seconds out. So all the sheep are technically playing the animation (and paying the performance cost). Whether you see anything interesting is based on a random playback offset set during instantiation.
I made a vertex animation texture solution a while back for a project, but I never considered doing it for the bones. That could probably save a ton of storage in the long run, considering you’d 1) no longer need a new texture for every different mesh that uses the same rig and 2) not need to store every single vertex in the texture
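To put rough numbers on the savings (again, made-up but plausible figures):

```python
# Per-frame texel cost, vertex VAT vs bone VAT, using assumed numbers.
verts_per_mesh = 10_000
meshes_sharing_rig = 5     # each needs its own vertex texture
bones = 50

vertex_vat_texels = verts_per_mesh * meshes_sharing_rig   # 50,000 per frame
bone_vat_texels = bones * 2                                # 100 per frame, shared
print(vertex_vat_texels // bone_vat_texels)  # -> 500x fewer texels per frame
```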