One of the strict requirements of the project I’m experimenting with porting to ECS is deriving object positions from animation-driven Transforms. Before, this was simple: I could read the Transform positions directly. In ECS Hybrid, however, Entities drive the positions of Transforms, not the other way around. So far I’ve found several ideas for how to do this:
- Use DOTS Animation to sample the animation, copy the data into Translation/Rotation/Scale on Entities, and use hybrid links to write the results to the Transform components that drive the SkinnedMeshRenderer. (I still need cloth physics, which neither DOTS Animation nor DOTS Physics seems to support right now.)
- Use a custom job to sample a Playable on the main thread, then use a separate job with a TransformAccessArray to copy the sampled data back into the linked Entities. Compute the locations for the simulation objects, then use the hybrid link to write the same data back into the Transforms.
- Bake the transform data into a BlobAsset, either at edit time or at startup, to skip runtime sampling entirely and just read the per-game-tick transform data from the BlobAsset. The associated GameObject then only follows the animation sample rate as a view of the underlying simulation data.
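For the first idea, the write-back step could look something like the sketch below: a main-thread system that pushes the entity-side Translation/Rotation onto the linked managed Transform. This assumes the classic hybrid setup (Entities 0.x, `ComponentSystem` with managed-component `ForEach`); the system and its name are illustrative, not an existing API — though Unity.Transforms does ship a similar built-in (`CopyTransformToGameObject`).

```csharp
using Unity.Entities;
using Unity.Transforms;
using UnityEngine;

// Hypothetical sketch: copy ECS transform data to the linked GameObject
// Transform, which drives the SkinnedMeshRenderer. Managed Transform access
// means this runs on the main thread.
public class WriteToHybridTransformSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        Entities.ForEach((Transform transform, ref Translation translation, ref Rotation rotation) =>
        {
            transform.localPosition = translation.Value;
            transform.localRotation = rotation.Value;
        });
    }
}
```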
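For the second idea, the scatter-back step after sampling the PlayableGraph might be an `IJobParallelForTransform`, which Unity schedules against a TransformAccessArray. The entity array and lookup wiring here are an assumption about how the Transform-to-Entity link is stored; `ComponentDataFromEntity` is the Entities 0.x name for the lookup type.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Transforms;
using UnityEngine.Jobs;

// Sketch: copy freshly sampled Transform data back onto linked entities.
// Entities[i] is assumed to be the entity linked to transform i in the
// TransformAccessArray, so each index writes to a unique entity.
struct CopyTransformsToEntitiesJob : IJobParallelForTransform
{
    [ReadOnly] public NativeArray<Entity> Entities;
    [NativeDisableParallelForRestriction] public ComponentDataFromEntity<Translation> Translations;
    [NativeDisableParallelForRestriction] public ComponentDataFromEntity<Rotation> Rotations;

    public void Execute(int index, TransformAccess transform)
    {
        var entity = Entities[index];
        Translations[entity] = new Translation { Value = transform.position };
        Rotations[entity] = new Rotation { Value = transform.rotation };
    }
}
```

The job would be scheduled with `job.Schedule(transformAccessArray)` after the main-thread `PlayableGraph.Evaluate()` call completes.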
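For the third idea, a minimal baking shape could be a BlobAsset holding one frame per simulation tick, built with `BlobBuilder` at conversion/edit time and indexed by tick at runtime. All type and member names below are illustrative assumptions; the appeal is that a pure array lookup per tick is trivially deterministic.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;

// Hypothetical baked-pose data: one frame per simulation tick.
public struct BakedTransformFrame
{
    public float3 Position;
    public quaternion Rotation;
}

public struct BakedAnimation
{
    public float SampleRate; // ticks per second the data was baked at
    public BlobArray<BakedTransformFrame> Frames;
}

public static class BakedAnimationUtil
{
    // Build the blob once (e.g. during conversion) from pre-sampled frames.
    public static BlobAssetReference<BakedAnimation> Build(BakedTransformFrame[] frames, float sampleRate)
    {
        using (var builder = new BlobBuilder(Allocator.Temp))
        {
            ref var root = ref builder.ConstructRoot<BakedAnimation>();
            root.SampleRate = sampleRate;
            var array = builder.Allocate(ref root.Frames, frames.Length);
            for (int i = 0; i < frames.Length; i++)
                array[i] = frames[i];
            return builder.CreateBlobAssetReference<BakedAnimation>(Allocator.Persistent);
        }
    }

    // Deterministic per-tick lookup: same tick in, same pose out.
    public static BakedTransformFrame Sample(ref BakedAnimation anim, int tick)
    {
        return anim.Frames[tick % anim.Frames.Length];
    }
}
```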
Until DOTS Animation is in a more usable state, what’s the suggested way to approach this problem? Related to two of the ideas listed above: are the TransformStream computations in Animation Jobs and Playables done with FloatMode.Strict or FloatMode.Fast? Determinism is one of the big reasons we chose to start experimenting with DOTS, and if sampling Playables or using Animation Jobs breaks determinism, they’re no longer viable solutions for us.