Proposal for hybrid interoperability

I’ve seen a lot of discussions on the forums and the Unity Discord about hybrid workflows in Entities 1.0. Without runtime conversion or companion links, our options are pretty limited. Several people (myself included) have come up with various solutions to bridge the gap. Most of the solutions involve one of the following:

  • Runtime instantiation of a game object prefab. This means the prefab cannot have any scene references.

  • Runtime instantiation of an entity from a MonoBehaviour in the scene. This cannot use baking to populate the non-managed parts of the entity.

  • Runtime setup of static variables. This works for singleton-like data, but is hard to map to corresponding entities.

  • Assembly reference hacks to expose the old companion link workflow in a Baker.

In my opinion, none of these options are great. They all have downsides and most of them are non-trivial to create.
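For reference, the second option usually ends up looking something like this. This is only a sketch of the pattern, not anyone’s actual implementation; the class name is made up, and any unmanaged data would have to be filled in by hand since no baker runs:

```csharp
using Unity.Entities;
using UnityEngine;

// Sketch of the "runtime instantiation of an entity from a MonoBehaviour"
// workaround. Because this runs at runtime, no Baker is involved, so the
// non-managed parts of the entity must be populated manually.
public class RuntimeEntityBridge : MonoBehaviour
{
    void Start()
    {
        var entityManager = World.DefaultGameObjectInjectionWorld.EntityManager;
        var entity = entityManager.CreateEntity();

        // Manually attach this game object's components as managed components.
        entityManager.AddComponentObject(entity, transform);
        entityManager.AddComponentObject(entity, GetComponent<Animator>());
    }
}
```

This works, but every component the entity needs has to be wired up in code like this, which is exactly the boilerplate the proposal below tries to eliminate.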

After some discussions with other folks and Unity devs on the Unity Discord, I came up with a mockup that shows what a possible hybrid workflow could look like.

When a game object is placed inside a subscene, the inspector will show an extra “Bake Managed” checkbox for each UnityEngine.Component attached to the game object. By default, all of these checkboxes are unchecked. When they are all unchecked, baking works exactly as it does now. That is, for each UnityEngine.Component, it looks for bakers of that type and bakes the authoring component into the entity data. However, if any UnityEngine.Component has zero bakers implemented, it will show a warning in the inspector. This is mainly to prevent confusion when a MonoBehaviour is added to a game object in a subscene but has no effect at runtime because the baker is missing.

You can see this warning in the inspector under the Animator component. The warning also says that the “Bake Managed” checkbox can be checked. When it is checked, Unity will keep the authoring game object alive at runtime instead of deleting it at the end of baking. Any UnityEngine.Components with “Bake Managed” checked will stay on the game object at runtime. Additionally, each such UnityEngine.Component will automatically be added to the baked entity as a managed component (using AddComponentObject()). Since a game object must have a Transform, checking any of the “Bake Managed” checkboxes will force the Transform to also be added as a managed component.
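Conceptually, checking “Bake Managed” on, say, the Animator would behave roughly like the hand-written baker below. This is just my sketch of the equivalent behavior using the Entities 1.0 Baker API, not how Unity would necessarily implement it:

```csharp
using Unity.Entities;
using UnityEngine;

// Rough hand-written equivalent of checking "Bake Managed" on an Animator:
// the component object itself is added to the baked entity so it is still
// reachable from the entity in the runtime world.
public class AnimatorBakeManagedBaker : Baker<Animator>
{
    public override void Bake(Animator authoring)
    {
        var entity = GetEntity(TransformUsageFlags.Dynamic);

        // Attach the authoring Animator to the entity as a managed component.
        AddComponentObject(entity, authoring);
    }
}
```

The difference from today is that the proposal would also keep the authoring game object itself alive at runtime, which a baker alone cannot do.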

Here is what it looks like with the Animator’s “Bake Managed” checked (and the Transform implicitly):

Personally, I think this would be a big improvement to the user experience for a hybrid workflow. It is entirely opt-in and gives plenty of control and flexibility.

But there are still some things I’m not sure about. The name of the “Bake Managed” checkbox could probably be improved. Unfortunately, the word “component” is very overloaded, so it’s hard to come up with a succinct name to describe what the checkbox will do. Maybe a good tooltip is sufficient? I’m also not sure if that is the best place for the checkbox. It feels like it shouldn’t crowd the row with the component name.

I’m also not sure if the underlying companion game object should be hidden in the hierarchy or not. At first, I thought it should be, similar to the old companion links. But thinking on it more, I kind of like the idea that the game object in authoring IS the same as the game object at runtime. The only difference is that some MonoBehaviours were baked into entity data and removed from the game object. I also think it is better to show the user that there is still this managed game object floating around, instead of tricking the user into thinking everything is fast, baked entity data.

You may have also noticed that I changed the “Entity Baking Preview” window in the screenshots above. Personally, I find the current layout very hard to parse and not at all helpful. Prior to 1.0, conversion allowed multiple MonoBehaviours to contribute to the same set of components. But in 1.0, bakers cannot modify components added by other bakers. This means that baking is always one-to-many between MonoBehaviours and IComponentDatas.

By changing the layout to break it down by the different bakers, it makes it easy to understand what is adding each component. It can also be used to show the managed components added by the “Bake Managed” checkboxes, as well as the companion game object that may persist at runtime.

After talking to some of the Unity devs on Discord, I understand that there may be technical reasons why this is infeasible. However, I do hope that this idea, or some other improvement to hybrid, is considered. I think that would go a long way towards improving the onboarding experience for new ECS users.

Regardless of what happens with hybrid, I think the warning about a missing baker is needed. It is too easy to make something that seems like it should work but is silently not doing what you expect. I also think some love should be given to the Entity Baking Preview window in the inspector.

Let me know what you think or how this could be improved. Cheers!


Wait, we are not supposed to use even companion link anymore?

Very nice mockup @scottjdaley. I strongly agree that companion or hybrid components will be here to stay for a long time.

Currently, it is very confusing what works and what doesn’t, what is supported out of the box, and what needs some custom magic to make it work in a hybrid way (and lifetime management, etc.)

Ideally, we would have everything working directly with Entities in a fast and reliable way, but we are many years away from that, and embracing a hybrid approach that is convenient to use is the way to go until that is possible.

I really hope that Unity will provide a solution similar to what you proposed (one that also works with prefabs) before 1.0 is officially released.


It is still there, but it is locked down to a specific set of types (found in CompanionComponentSupportedTypes.cs). I’m not sure how well it works or if they plan to keep it for the final 1.0 release. I also can’t find anything about hybrid or companion links in the 1.0 docs, so it would be good to get confirmation one way or another.

Thinking on this a bit more, I would also be fine with something like this:
[screenshot: mockup of the proposed component in the inspector]

This MonoBehaviour could be manually added to a game object in a subscene, and it would populate itself with a list of all the components on the game object. Each entry has a checkbox to either include or exclude it from the companion game object at runtime. Just like before, clicking any checkbox would auto-check the Transform checkbox and set up the companion game object behind the scenes.

The main advantage of this approach is that it doesn’t require any custom editor integration or crowd the component name row, like in my initial mockup. Otherwise, it is functionally identical.

One of the things that confuses me about the current companion link implementation is why everything is done on a clone of the authoring game object. It seems like a more straightforward setup would be to simply keep the authoring game object alive in the runtime world (minus any MonoBehaviours that aren’t needed at runtime, like authoring-only stuff).

I’m tempted to try implementing this myself. However, baking deletes the authoring game object in the runtime world. If there is a way around this, it seems possible to implement today. Although, I’m sure there are lots of complications with other parts of the editor, like the entity hierarchy, data modes, scene view picking, etc.


I just want to throw out a crazy idea I’ve considered trying.

So what if we had a BakeHybridHierarchy component we could attach to GameObjects? It contains a byte[] and a (GameObject, Entity)[]. It has a function called BakeHierarchy() which does the following:

  1. Gets all components on all children
  2. Runs baking in an on-the-fly created world
  3. Captures the primary baked entity and stores it in the (GameObject, Entity) array
  4. Strips all baking types

There is also a RemoveBakedComponents() which removes all components on the GameObjects that have a baker associated with them.

Then there’s an EmitBakedHierarchy() which does the following:

  1. On the baked World, adds a component to each Entity in the (GameObject, Entity) array containing the instance ID of the associated GameObject.
  2. Merges the World into the destination World.
  3. Using the instance IDs of entities, adds ComponentObject components for each component on the associated GameObject.

Now one of the big issues is that bakers only work in the Editor. So here’s how we get around that:
In the Editor Play Mode, Awake does these:

  1. BakeHierarchy()
  2. RemoveBakedComponents()
  3. EmitBakedHierarchy()

In builds, all instances of BakeHybridHierarchy are gathered from scenes and prefabs, and BakeHierarchy(), RemoveBakedComponents(), and a serializer of the world into the byte[] are run. Then Awake at runtime deserializes the world and runs EmitBakedHierarchy().

Lastly, a BakeHybridHierarchy should have a drop-down that specifies whether GameObjects or Entities should control the transform hierarchy. In the former, no bakers run for Transforms and Transform ComponentObjects are added to the entities. In the latter, modifications to Transforms by components are always overwritten by the Entities transforms just after the transform system runs.
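To make the shape of the idea concrete, here is a skeleton of what such a component could look like. This is purely illustrative; none of these members exist in the Entities package, the method bodies are hand-waved, and raw Entity values would need some extra work to survive serialization:

```csharp
using System;
using Unity.Entities;
using UnityEngine;

// Illustrative skeleton of the proposed BakeHybridHierarchy component.
// All names and signatures here are hypothetical.
public class BakeHybridHierarchy : MonoBehaviour
{
    // Serialized world produced by BakeHierarchy() in builds.
    [SerializeField] byte[] _bakedWorldBytes;

    // Primary baked entity captured for each GameObject in the hierarchy.
    [SerializeField] GameObjectEntityPair[] _pairs;

    public enum TransformAuthority { GameObjects, Entities }
    public TransformAuthority transformAuthority;

    public void BakeHierarchy()        { /* run bakers in a temporary world, capture entities */ }
    public void RemoveBakedComponents(){ /* strip components that had bakers */ }
    public void EmitBakedHierarchy()   { /* merge temp world, add ComponentObjects via instance IDs */ }
}

[Serializable]
public struct GameObjectEntityPair
{
    public GameObject GameObject;
    public Entity Entity; // note: a raw Entity is not stable across worlds/serialization
}
```

The interesting (and hard) parts are entirely inside those three method bodies, which is where the editor-only baker restriction bites.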

Thoughts?


This is insanely convoluted.

Just go for EntityArchetype → CreateEntity → Add data via ECB.
Playback is pretty fast, and with hybrid components you’ll get stuck with MonoBehaviours in runtime anyway.
Nobody cares about world bs after you’re done spawning entities.
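The suggested flow, sketched out. The component set here is just a placeholder example, not anything from the posts above:

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// Minimal sketch of archetype creation + data writes via an ECB.
public static class RuntimeSpawner
{
    public static Entity Spawn(EntityManager em)
    {
        // Define the archetype up front so the entity is created in one shot.
        EntityArchetype archetype = em.CreateArchetype(
            typeof(LocalTransform),
            typeof(LocalToWorld));

        Entity entity = em.CreateEntity(archetype);

        // Batch the data writes through a command buffer, then play it back.
        var ecb = new EntityCommandBuffer(Allocator.Temp);
        ecb.SetComponent(entity, LocalTransform.FromPosition(new float3(0f, 1f, 0f)));
        ecb.Playback(em);
        ecb.Dispose();

        return entity;
    }
}
```

Creating by archetype avoids the per-AddComponent structural changes, which is most of why playback stays fast.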

Almost makes me want to drop my runtime solution on GitHub, but then again, doing tests for 1.0 exp compatibility on a Friday night? Nah.

>> Don’t overengineer temporary solution that will be replaced in future. <<

In an ideal world, Entities will be compatible with managed types & the hybrid workflow.
I’d assume it would just take some time for UT to figure out.


How is that different from this?

It’s not, really. The workflow is generalized, though, and components are visible from the inspector.

In the end, you don’t care about checkmarks (which are error-prone for users).

All that matters is what data is written by the implementation. Just like it used to be.
But the simulation speed is different (way faster).

End results would be MonoBehaviour(s) [that cannot be converted] + Entity + Data.

So it doesn’t really matter whether the [unmanaged struct] data gets stripped and added back later on, since you’d need a managed component anyway. And an implementation of the above would probably be slower than authoring from MonoBehaviours directly. Plus more of a pain to maintain.

Are you proposing to just not use baking at all? If not, how does your approach generalize the workflow?

Would this work on game objects in the regular scene, or also on game objects in a subscene? Are you proposing making bakers work on regular scene game objects?

I actually think that transform syncing should be done separately from this hybrid setup. There are too many cases to account for with a single drop-down on the top-level game object.

I’ll give a real example from the game I’m making. I have a main character game object that has a skinned mesh and an animator. I want the game object’s transform to be driven by an entity. But the character also has a hierarchy of bones. These bones are transformed by the animator. On some of these bones, I want to have entities follow the bone positions. This allows me to attach pure entity objects to various points on the animated character.

Currently, the way I solve this is by having the character game object hierarchy in the regular scene and the character entity and bone entities in a subscene. At runtime, I look up the game objects by tag and add managed components to the entities with the various transform syncing components I need.
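That workaround looks roughly like the following. This is a sketch of the pattern, not my actual code: the class, the tag string, and the entity lookup are all illustrative placeholders:

```csharp
using Unity.Entities;
using UnityEngine;

// Sketch of the tag-based linking workaround: find a bone GameObject in the
// regular scene and attach its Transform to an entity from the subscene so a
// sync system can follow it. Names here are made up for illustration.
public class CharacterBoneLinker : MonoBehaviour
{
    void Start()
    {
        var em = World.DefaultGameObjectInjectionWorld.EntityManager;

        // The "ugly tag thing": locate the bone across the scene boundary.
        GameObject bone = GameObject.FindWithTag("LeftHandBone");

        // The bone entity comes from the subscene; how it is found depends on
        // how the project tags its entities (placeholder below).
        Entity boneEntity = FindBoneEntity(em);

        // Attach the bone's Transform as a managed component for syncing.
        em.AddComponentObject(boneEntity, bone.transform);
    }

    Entity FindBoneEntity(EntityManager em)
    {
        // Placeholder: a real implementation would query for a marker component.
        return Entity.Null;
    }
}
```

The whole point of the proposal is that this cross-scene glue code would disappear if both halves were authored in one place.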

Ideally, I would have the game objects and entities authored from the same place. That way I wouldn’t need some ugly tag thing to get around cross-scene references.

My point is, regardless of the hybrid solution, I think it would be best to have separate controls for transform syncing per game object.

In case it wasn’t clear, my original proposal was a suggestion for how Unity could provide this functionality. While a custom version of it might be possible with the current state of 1.0, I think an official solution would be best.

How is non-managed component data authored for the entity?

This actually sounds very similar to something I wrote for scene game objects that only need a few MonoBehaviours added to a runtime-instantiated entity. I guess the same thing could be used to add non-managed data to the entity, but now it sounds like you’ve basically reinvented runtime conversion (not that that is bad, but it was removed because Unity had trouble getting it to work reliably).

TL;DR, it works like this:

  • The MonoBehaviour implements an interface:
    → GatherEntityTypes(HashSet<Type> types), to which the types that should be added to the resulting Entity are written. This call is editor-only.
    → SetupEntity(Entity entity, EntityCommandBuffer ecb), which is called on [all] MonoBehaviour(s) on the GameObject after the entity has been created with the specified archetype at runtime. The ecb can then be used to apply data in any way.

Managed components can be set by accessing the “main” script that handles entity creation → e.g. EntityBehaviour.Add(someManagedRef) from SetupEntity.

Nothing else is required. There are a bunch of extras, such as .Add / .Set / .XBuffer / .Remove equivalents for manipulating entities from the MonoBehaviour side.

Basically, it works like conversion, but it does not store data in subscenes.
That was my initial goal, since subscenes never worked on Unity Cloud Build. But it turns out to be fast enough to use for any project I’ve done thus far for mobile, and I’m currently using it for a standalone PC hobby project. Plus, it’s way simpler to write & maintain, since you don’t need a bunch of extra classes, separate authoring scripts, ticking checkboxes, etc.

In my experience, you can’t go faster than Bursted playback in terms of setting data.
Unless you’re willing to hack your way into the Entities package.

And the really slow part isn’t the entity authoring, actually. It’s MonoBehaviour instantiation.
Which cannot be avoided as long as hybrid components are used.

I’ll look into posting this solution to GitHub. I just need some time to set up the 2022 beta, run basic tests, and write a manual & examples.

Yes, this is what you want. And this is what I’m doing with hybrid transformations: using two jobs to sync data via TransformAccessArray before the simulation runs and after it is done. There’s no point in maintaining shallow hierarchies when you actually only need some of the data. The rest is handled by Unity’s GameObject transforms. It’s pretty fast; the actual cost is how slow passing the TAA to the job & scheduling is.
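One half of that sync (entity data → GameObject transforms) can be sketched like this. This is my own minimal version of the pattern described above, assuming the positions array has already been gathered from an entity query before scheduling:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Mathematics;
using UnityEngine.Jobs;

// Copy entity positions out to GameObject transforms through a
// TransformAccessArray. Positions would be filled from an entity query
// (e.g. LocalToWorld translations) before this job is scheduled.
[BurstCompile]
public struct CopyPositionsToTransformsJob : IJobParallelForTransform
{
    [ReadOnly] public NativeArray<float3> Positions;

    public void Execute(int index, TransformAccess transform)
    {
        // float3 converts implicitly to Vector3 for the transform setter.
        transform.position = Positions[index];
    }
}

// Usage sketch (transformAccessArray maps 1:1 onto Positions):
//   var job = new CopyPositionsToTransformsJob { Positions = positions };
//   JobHandle handle = job.Schedule(transformAccessArray);
```

The reverse direction (GameObject → entity) is the same shape with the read/write roles swapped.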

The Transforms package shouldn’t exist, actually. There should be a base transforms system underneath both GameObjects & Entities. That way, data could be accessed from both freely. But alas, Unity does not want to go this route. [Backwards compatibility, probably]

Yeah, it just went overboard a little :stuck_out_tongue:

You can though. It’s called baking. No runtime adding/setting of data. Just streaming the raw data from the already-baked subscene.

But I don’t think speed is the main issue with hybrid workflows. Runtime instantiation and playback on an entity works totally fine for the vast majority of hybrid use cases. The problem is the inconsistent authoring workflow. We have Baker<T>s for MonoBehaviours in subscenes, and we have something like your custom interface for MonoBehaviours on regular scene objects.

Clearly, Unity doesn’t want to continue to support runtime conversion. We can implement it ourselves, but I don’t think they will re-add an official solution for it. So, as an alternative, I’m hoping they provide some way to attach UnityEngine.Components to entities at bake time.

Not for hybrid / +MonoBehaviours setup. Context here is important.

Managed component instantiation is slow. So unless you’re doing a pooling technique and are smart about object management, performance will be abysmal. Been there.

Conversion worked by going in reverse:
maintaining a GO / prefab reference & adding the corresponding data to manage it. But still, it is slow.
And poor Object management makes it worse somehow.

Adding types is easy. Storing and setting up the managed data, deserializing & creating the actual Object: that’s the tricky [and slow] part.

I’m proposing making bakers work on regular scene game objects with hybrid augmentation. This aspect would not work in subscenes. I don’t think there is any sane way to preserve references between game objects inside a subscene. Maybe those with more experience serializing game objects could chime in.

There are also a lot of dangerous cases when you try to mix and match. The checkbox would just be a quick way to make the entity hierarchy authoritative and let the built-in baking do its thing. If you uncheck it, you can add the individual entity transform components yourself. I would imagine synchronization would use an existence-based model where, if components exist on the entity, they always “win”. Copying game object transforms to entities should always be done in user code. Otherwise you’ll have circular logic, which ECS struggles with.

But doesn’t this require you to reimplement and maintain conversion for built-in Unity features? Transforms, physics, graphics, etc.?

It’s possible to attach components to an Entity directly, via EntityManager.SetComponentObject.

So… no? You can access those directly (either via EntityManager.GetComponentObject or via a system query).
That’s the whole point of using a hybrid approach. Those components and logic can be used as-is. No conversion required.

Here’s an example:

using System;
using System.Collections.Generic;
using EntitiesExt;
using EntitiesExt.Contracts;
using Unity.Entities;
using UnityEngine;

namespace Examples {
   public class HybridExample : MonoBehaviour, IEntitySupplier {
      [SerializeField]
      private float _force = 13.37f;

      [SerializeField]
      private float _rotation = 69.420f;
 
      [SerializeField]
      private Rigidbody _rgb = default;

      [SerializeField]
      private EntityBehaviour _entityBehaviour = default;

      public void SetupEntity(Entity entity, EntityCommandBuffer ecb) {
         _entityBehaviour.Add(this);
         _entityBehaviour.Add(_rgb);

         ecb.SetComponent(entity,
                          new ForceTest
                          {
                             Value = _force,
                          });

         var buffer = ecb.SetBuffer<RotationTest>(entity);
         buffer.Add(new RotationTest
                    {
                       Value = _rotation
                    });
      }

#if UNITY_EDITOR
      public void GatherEntityTypes(HashSet<Type> types) {
         // Used as an example, for querying over specific MonoBehaviour, HybridExample type not actually required
         types.Add<HybridExample>();
         types.Add<Rigidbody, ForceTest, RotationTest>();
      }

      protected virtual void OnValidate() {
         gameObject.SetupEntityBehaviour(ref _entityBehaviour);
         if (_rgb == null) _rgb = GetComponentInChildren<Rigidbody>();
      }
#endif
   }

   [Serializable]
   public struct ForceTest : IComponentData {
      public float Value;
   }

   [Serializable]
   public struct RotationTest : IBufferElementData {
      public float Value;
   }
}

That’s 62 LOC for everything. Compare it to the conversion / baker flow.

And an example system:

using Unity.Entities;
using UnityEngine;

namespace Examples {
   public partial class HybridExampleSystem : SystemBase {
      protected override void OnUpdate() {
         Entities.ForEach((Rigidbody rgb, // managed components are passed without in/ref
                           DynamicBuffer<RotationTest> rotationTestBuffer,
                           in ForceTest forceTestData) => {
                    float rnd = Random.value;
                    Vector3 dir = rnd <= 0.5f ? Vector3.up : Vector3.down;
           
                    rgb.AddForce(forceTestData.Value * dir);
                    rgb.useGravity = false;

                    for (int i = 0; i < rotationTestBuffer.Length; i++) {
                       var rotationTestData = rotationTestBuffer[i];
                       rgb.AddTorque(rotationTestData.Value * Vector3.up);
                    }
                 })
                 .WithAll<HybridExample>()
                 .WithoutBurst()
                 .Run();
      }
   }
}

Closest to the MonoBehaviour workflow as it gets.

Worth noting that the interface can be implemented on anything, e.g. ScriptableObjects, and data can be separated away from the MonoBehaviour & kept inside the SO instead.

Edit:
Also, it seems like everything works just fine except for the API changes done to system / buffer access.
So the custom solution wins over the conversion-to-bakers flow :smile:

So you are using Mesh Renderers and PhysX?

For the visualization layer, yes: HDRP / built-in renderers.
For some titles that require optimizations → Graphics API / CommandBuffers.

And for physics, it depends. Some things are pure entities (such as trigger interactions → quad/octrees + custom queries), and some are hybrid PhysX + jobs (RaycastCommand / SphereCastCommand / potentially OverlapSphereCommand once it’s stable).

I honestly don’t know what would be best. For the record, converting to baking was trivial for us. But we are totally reliant on object pooling and runtime instantiation.

If anyone’s interested, I’ve added the solution to GitHub (see my signature for the link).
