Since ECS is only half-ready, developers have to use a hybrid approach. So now I have to bake half of my content and then, later in the systems, connect it to entities created during level load on the pure-ECS side.
Creating all those one-time-fired systems just to find baked and non-baked stuff and connect things together - is this really how Unity wants developers to approach this?
I mean, look at the prefabs-in-ECS workflow - it's ridiculous. And this is a 1.0-pre version, not an alpha.
Why not have a clean API like Entity EntityManager.Convert(GameObject, ComponentType[]) which would solve every problem both for those who use the Editor/Scene workflow and for those who use Resources/prefabs? Then conversion could be written in a single place, in a defined and expected order. I mean code like:
var entityA = entityManager.Convert(gameObjectA, typeof(LocalTransform), ...);
var entityB = entityManager.Convert(gameObjectB, ...);
...
entityManager.AddComponentData(entityY, new Parent { Value = entityB });
// etc.
Which software/system/API design principles were followed when designing those subscenes and Bakers? Unnecessary complexity and unpredictability? So far this is not data-oriented design; this is workaround-oriented design that everybody has to follow, because there is no alternative.
The key factor here is performance.
Simplified:
So when you do “Convert”, you’re expecting Bakers to run at runtime. That’s not Unity’s intention.
Under the hood, everything you write in Bakers is actually stored as one “blob” of data. Adding that blob of data is fairly trivial in terms of CPU operations, and it ends up being memcopied into its destination. This is fast. Really fast.
Moreover, this ensures no editor-only data is added to the build. That’s extra memory saved and fewer operations to perform.
This is DOD. CPU does only stuff that CPU has to do.
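To make that concrete, here is a minimal sketch of the 1.0 baker pattern being described (the component and field names are made up for illustration): the MonoBehaviour exists only at authoring time, Bake runs in the editor, and only the resulting unmanaged component is serialized into the subscene.

```csharp
using Unity.Entities;
using UnityEngine;

// Authoring-only data: this MonoBehaviour never ships as entity data.
public class SpeedAuthoring : MonoBehaviour
{
    public float MetersPerSecond = 5f;
}

// Runtime-only data: this is what gets serialized into the subscene.
public struct Speed : IComponentData
{
    public float Value;
}

// Runs in the editor at bake time, never at runtime.
public class SpeedBaker : Baker<SpeedAuthoring>
{
    public override void Bake(SpeedAuthoring authoring)
    {
        var entity = GetEntity(TransformUsageFlags.Dynamic);
        // Any expensive or editor-only work happens here, once;
        // only the end result is memcopied in at load time.
        AddComponent(entity, new Speed { Value = authoring.MetersPerSecond });
    }
}
```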
When you do “Convert” at runtime, you’re mixing editor-only data with runtime data, and editor-only logic with runtime-only logic. That means each Baker has to run at least once to convert the editor-only data (which is now included in the build), make a copy of that data, and perform extra operations to get the final result.
This is OOP style.
CPU though does not care about game designer shenanigans.
But this still wastes both CPU time during load and instantiation, as well as memory.
It’s flexibility vs. performance. You have to pick one. Unity chose hard performance.
In an ideal world, though, there are no “GameObjects” in the build. There are only prototype entities with blobs of end-result data.
The problem lies in the fact that conversion logic got phased out before the hybrid flow itself got phased out.
So now we have a hybrid flow without proper engine support. Which is fine if you don’t use it, or if you use a custom solution.
There’s no good solution [from Unity’s perspective] for this though, short of rewriting the whole engine in a single iteration. Which is not realistic.
Let’s completely sacrifice any sign of convenience then! They could still have their “blob”, but give us the ability to convert data (using the blob) at the moment the developer needs it, not at some arbitrary point in time, through several boilerplate structs and classes and tons of redundant code that does practically nothing.
Ideally, yes: no GameObjects, just prototype entities stored in a blob. But the thing is, the only way to create a prototype right now is to create a GameObject, and that doesn’t seem likely to change any time soon.
A subscene is essentially an automatic conversion from GO authoring components to baked runtime entities.
Runtime conversion was deprecated because it was slow. (And unnecessary.)
What’s the problem with prefabs? As long as a prefab is referenced and you call GetEntity(prefab) in a baker, it will be converted and stored in the subscene. The prefabs you reference can live in any folder under Assets.
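As a sketch of what that looks like in practice (the names here are illustrative, not from any shipping project): an authoring component references the prefab, and the Baker’s GetEntity call causes the whole prefab to be baked into a prefab entity in the same subscene.

```csharp
using Unity.Entities;
using UnityEngine;

public class SpawnerAuthoring : MonoBehaviour
{
    // Can reference a prefab from any folder under Assets.
    public GameObject Prefab;
}

public struct Spawner : IComponentData
{
    public Entity PrefabEntity;
}

public class SpawnerBaker : Baker<SpawnerAuthoring>
{
    public override void Bake(SpawnerAuthoring authoring)
    {
        var entity = GetEntity(TransformUsageFlags.None);
        AddComponent(entity, new Spawner
        {
            // Bakes the referenced prefab and stores its prefab entity;
            // it can later be instantiated with EntityManager.Instantiate.
            PrefabEntity = GetEntity(authoring.Prefab, TransformUsageFlags.Dynamic)
        });
    }
}
```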
I have no issues with the baker workflow and think it’s one of the better things Unity designed. (Well corrupted subscenes aside and a serialization bug -.-)
Many people, and that’s very apparent, can’t wrap their heads around it at first, and I can’t pinpoint whether that’s just a lack of understanding or experience, or whether I’m simply not seeing what’s wrong with the workflow.
Some people have a hard time understanding why there is so much going on with bakers and the baking workflow, yet to get hybrid scene Entity/GameObjects, they either need to make their GameObject a prefab or do some runtime hookup with separate authoring GameObjects for each “half”.
Baking is doing a lot! Separation of authoring and runtime data is not a new concept, but it is new to Unity. In other engines that use this paradigm (mostly the in-house engines of larger studios), iteration time is notoriously bad. However, Unity’s baking workflow is fully incremental, which mostly solves the iteration time problem (they still have some issues to work out, but those aren’t fundamental design issues). With that said, your frustration is certainly warranted, because the hybrid situation is awful. But don’t blame the car when the previous driver never filled up the tank.
This API won’t work for me. How could I possibly get complex data layouts that are totally different from the layouts our game designer and level designer work with? These complex layouts would not be trivial for Unity to convert automatically. So we would have to attach custom MonoBehaviours implementing interfaces that this API could pick up, and have it invoke methods so we could build the data layout we want. That code also couldn’t be stripped from the final build, because the runtime would require it to do the conversion. But at the end of the day, the ECS world only works with the converted data, disregarding any human-readable data from authoring time. So if we went your way, we would have a lot of redundant code lingering at runtime without doing anything useful for the machine. It would also increase build size unnecessarily.
Let’s be clear: the “editor baking” approach covers fewer use cases than the “runtime baking/conversion” approach. That’s a fact, not an opinion.
Also, I’d kindly ask the people saying “conversion” is “slow” and “unnecessary” to stop issuing such self-centered statements…
“Slow” is absolutely relative; it might be slow for your use case, but it was never slow for mine.
“Unnecessary” is relative again; it might not be necessary for your use cases, but it was for mine.
Increased build size? Well, Unity has always had code and shader stripping. Add a flag to manually strip these oh-so-big-and-nasty runtime bakers (aka conversion). You can even have it set to true by default.
Sub-scenes? From a usability and workflow point of view, they are one of the worst things I’ve seen in Unity… introducing such a Frankenstein new engine concept just to store serialized entities, breaking 10+ years of workflows and conventions? Yeah, sounds perfect…
Of course you can’t store 10,000 entities directly as YAML in a scene file, but why break existing workflows? Introduce a new GO component, an “entities container”, and have all your scene entities be descendants of that object… the engine can then editor-bake the entities and serialize the whole bunch however it wants under the hood.
But hey, nothing prevents sub-scenes as they are from living alongside runtime baking/conversion, so I’m just ranting here.
Sorry for the long text. I don’t want to be belligerent; I just don’t like that every time someone expresses discontent with the “editor-only baking approach”, a bunch of people come in with the same “true but narrow-minded reasons” to say the current situation is undeniably better than before, ignoring (or choosing to ignore) the use cases Unity threw out the window at the last minute.
Those are Unity’s words. They deemed it slow, problematic, buggy, and prone to messing up archetype variations. I summed that up as “unnecessary” because runtime conversion was simply deleted in 1.0.
I haven’t heard a good proposal for how it should work instead, and I also don’t understand what’s holding devs back from still creating entities/archetypes at runtime.
I’m OK with the baking workflow. I’ve been following Entities from the beginning and, to be honest, I don’t regret the time when we had to create entities from scratch. The separation between authoring and runtime data is, in my opinion, very important.
I suspect most people getting into ECS don’t see the point because they start with simpler use cases that result in a one-to-one mapping between runtime and authoring data. So they don’t get why it’s important to have the separation.
I do regret that we don’t have built-in conversion of GOs to entities, because it could be useful together with Addressables.
That being said, I think Unity is working on a DOTS version of Addressables (Content Management), so that would solve “my” issue.
And one more thing about ECS being half-ready: I’d like to remind everyone that ECS is not DOTS.
Yes, DOTS does not yet support animation, audio, VFX, and many other things that are available to MonoBehaviour, but those won’t be in the ECS package.
They will be part of DOTS and will probably take time to implement and release.
In the meantime, we can benefit from the performance provided by ECS, Jobs, and Burst in the areas they cover.
It isn’t so much the code as it is the authoring content itself, which tends to be a lot larger than the runtime content. As an example, my animation system uses a lossy compression technique that can guarantee a specific error bound. To reap those benefits, it needs to operate on uncompressed animation clips. With baking, I can compress those clips and ship only the compressed versions. With runtime conversion, I would also have to ship the uncompressed clips. And yes, with this specific compression implementation, it is faster at runtime to sample a compressed clip than an uncompressed one.
That’s what subscenes are.
And this is the crucial piece. Runtime conversion is actually fairly easy to implement when you aren’t worried about serialization. But there are edge cases, and each person who wants runtime conversion will likely want those edge cases handled differently. That makes this a great candidate for third-party solutions on GitHub or the Asset Store. If you want one right now, this one exists: https://github.com/VergilUa/EntitiesExt
I’m sorry that you don’t like my response, but that’s how I see the current state of bakers. I’ve done many experiments with entity creation at both runtime and authoring time, and in the end I chose bakers. However, I can say that I do have disappointments with other features Unity hasn’t found a good solution for yet, namely 2D and generics. But I do understand their situation and the hardship they have been through to provide us this (somewhat unstable) foundation. Things might change in the future once they have solidified this foundation and can move on to other features. Or the community will pick up the work (as we usually do) to support the use cases Unity doesn’t want to handle. And that has already started; just look at the EntitiesExt repo above!
Creating entities from scratch was never the problem. But with conversion you could take any prefab from an external source (asset package, Addressable, or just a binary) and convert it to entities. Can I just manually recurse over the GO prefab and create an equivalent entity? Yeah, of course, but that’s quite a lot of extra work for something that was already there (with or without the corner cases).
Isn’t that true for each and every workflow? I have a BMP texture in the editor, but the compiled game gets a nice, platform-compliant format. I don’t see why that merits removing runtime conversion. By that logic, loading a texture at runtime from an image file should also be removed from the engine.
I don’t want to take editor baking away; I just wanted them to not remove the conversion code from the runtime.
That’s the point, so why not have them be exactly that? Instead they get to be this “special” thing that’s not a GO and not a prefab, with special loading and unloading rules and messy (at least for now) referencing…
Thanks for the link, I’ll check it out.
But after Unity’s long track record of half-assed and semi-abandoned features that have to be extended by third parties, I wanted the initial release to cover as many use cases as possible.
(I’m still waiting for the GO UI system introduced in Unity 5 to natively implement basic widgets… but that won’t happen; the feature was “soft dropped” rather quickly after release so the team could work on something new and (not) better T_T)
Oh yeah, I remember this; it was stated on some page in some version of the docs or something…
I’ll take a look. Same as above, I have zero faith in Unity teams adding features after a 1.0 release; their reputation precedes them T_T
I’m not aware of the corner cases or problems with the conversion approach, but I’d rather have them document the limitations and leave it as an “experimental”, code-only API, or some “use at your own risk” code-only API, than remove it entirely.
Anyway, at this point I’m just hoping they don’t take another year to add support for 2023, since 2023.1 is not far off and I really want to use the new async stuff.
Edit: I checked out the GitHub repo; it’s nice, but not what I’m looking for. I personally miss the ability to convert prefabs, Addressables, and external binaries into a base entity with one or two lines of code, to then efficiently instantiate as many copies as necessary.
(Well… I could try adding this script to a GO prefab, instantiating a copy, letting it create an entity, and then grabbing it for further use.)
If you don’t need the hybrid [GameObject] behaviour, you can always store an EntityArchetype as a SerializedArchetype in the editor.
How to:
Use ArchetypeLookup at runtime and it will give you an EntityArchetype, without duplicates, that matches that hash (e.g. via SerializedArchetype.AsArchetype).
You can also combine multiple component suppliers via EntitiesBridge.GenerateArchetype.
Technically, you can author entities from ScriptableObjects if you want.
Or from literally anywhere, if you’ve got a SerializedArchetype.
Alternatively, grab the SerializedArchetype from an EntityBehaviour
(it already contains the necessary component hashes at editor time).
And to write data to the entity/buffer, you’d do something like:
// Grab the prefab's main EntityBehaviour
prefab.TryGetComponent(out EntityBehaviour entityBeh);
// Grab an EntityManager from somewhere, depending on the context
EntityManager em = ...;
SerializedArchetype arch = entityBeh.Archetype;
// ArchetypeLookup is accessed as a managed system in the latest version;
// in previous versions, you'd use:
EntityArchetype entityArch = arch.AsArchetype(em);
// otherwise use ArchetypeLookup.GetCreateArchetype(arch);
// Write data to the buffer
Entity entity = ecb.CreateEntity(entityArch);
entityBeh.WriteDataTo(entity, ecb);
As long as you don’t modify the prefab instance inside IEntitySupplier.SetupEntity declarations, you should be good.
P.S.
I’ve updated the repo. In case you need some feature, you can always ask on GitHub or here in the forum thread. (See my signature for the link.)
If that runtime texture loading were built on a third-party library that stopped supporting newer platforms, and the method were broken more often than not, then yes, Unity should remove it from the engine until they can put a proper team together to implement a solution that actually works. The removal of the feature wasn’t out of principle, but because the implementation was incompatible with the new 1.0 baking workflows. And Unity has chosen to deprioritize implementing a replacement because they would rather focus on stability for specific use cases. They may revisit it later.
Do you have to agree with their decision? Of course not. But that’s why they did it, which is the topic of this thread.
The reason they went with nested scenes over prefabs is likely that nested scenes support a few things that are more useful for the use cases they are trying to solve; lightmapping is probably the biggest one. All the other custom workflow stuff has nothing to do with subscenes versus prefabs and has everything to do with handling the baked serialized data.
That’s not Unity’s goal and they have been very clear about that in their marketing materials since 0.50 dropped last year. If this is what you want, look elsewhere. Or wait until they get things stable and start working on new features again.
That’s my point. Unity doesn’t know what you are looking for either. A much more valuable discussion would be discussing the particular edge cases of any runtime conversion solution, and the ways to fit such a solution in the broader 1.0 ecosystem that minimizes boilerplate, redundancy, and runtime performance costs.
The better defined a problem is, the easier it is to prioritize.
Well, that’s the key: I know Unity’s “stated” reasons; I’m saying half of them are b…s… coming from a billion-dollar company. And sorry, but the topic of this thread is what we think of the current editor-time-baker-only situation, not the officially issued reasons.
Let me rephrase part of my previous statements in case it wasn’t clear enough.
I’m not asking for “new” things, I’m not asking for extra support, and I’m not asking for editor tooling to support the “conversion with serialization” approach.
All that I (and people with similar use cases) needed was for them to leave the existing runtime conversion code somewhere, so that we could grab a prefab from an asset bundle and turn it into a base entity for further instantiation with the two or three lines of code that were required in the original versions.
That’s it: just don’t delete the code from the package, and make the minimum adjustments to account for the transform component overhaul.
The package already had code that perfectly accommodated the kind of use case I needed. That code was entirely removed for reasons that have nothing to do with those use cases, and you are telling me I should stop complaining about that and instead discuss how to reimplement the thing that was already working (for me)…
They could have at least released the removed code to the general public so we could add it back ourselves…
Just to make it clear, one last time, the kind of use case I’m talking about:
Empty scene, no actual gameplay GOs or entities.
From a script, download an asset bundle or Addressable from a remote source and get a bunch of prefab references.
Turn the prefabs into “base” entities in memory; just one entity per prefab.
From those “base” entities, instantiate 50,000+ copies to build a “terrain”.
It was really easy to do when runtime conversion was a couple of lines of code. It’s a royal pain in the arse now…
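For context, the instantiation half of this use case is still cheap once a base entity exists; it’s obtaining that base entity from an external prefab that got harder. Here is a sketch of the instantiation step, assuming a hypothetical TerrainPrefab singleton that already holds the base entity (however it was obtained):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// Hypothetical singleton holding the "base" entity for one prefab.
public struct TerrainPrefab : IComponentData
{
    public Entity Value;
}

public partial struct TerrainSpawnSystem : ISystem
{
    public void OnCreate(ref SystemState state)
    {
        // Don't run until someone has created the singleton.
        state.RequireForUpdate<TerrainPrefab>();
    }

    public void OnUpdate(ref SystemState state)
    {
        var prefab = SystemAPI.GetSingleton<TerrainPrefab>().Value;

        // Stamping out 50,000+ copies of one base entity is a bulk,
        // chunk-level copy; this part was never the pain point.
        var instances = state.EntityManager.Instantiate(prefab, 50_000, Allocator.Temp);
        for (int i = 0; i < instances.Length; i++)
        {
            state.EntityManager.SetComponentData(instances[i],
                LocalTransform.FromPosition(new float3(i % 250, 0f, i / 250)));
        }

        state.Enabled = false; // one-shot system
    }
}
```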
I think a lot of people here are mixing concepts (well, Unity mixed them to begin with…)
“Runtime conversion of GOs to entities” should mean that everything is serialized as GameObjects and only turned into entities and entity-compatible formats at runtime. Trying to do fancy stuff to avoid long loading/transformation times is not a “conversion” workflow but rather some hybrid approach, and from what I’ve been reading over the past year, that’s where 99% of the “problems” were.
I get that most people understand both things as the “conversion workflow”, but they are actually two completely different problems. Trying to hybridize and tie “baking/serialization” into the “conversion” idea was a silly approach, as history demonstrates.
But again, that’s not “runtime conversion”; that’s a “hybrid, half-baked, half-runtime approach”. The pure “conversion” aspect shouldn’t have to “pay” for the sins of the clunky attempt to “serialize” stuff at editor time to save on performance.
Bakers introduce a proper way to turn editor elements (they are not GOs anymore) into a serialized stream and load them back. Perfect, good job.
But that has nothing to do with a true GO-to-entity transformation at runtime (let me run bakers at runtime??), which covers entirely different use cases and was working reasonably well.
They did, or rather they released a version of 1.0 that still had runtime conversion in exp.8 and exp.12. But…
It is not just Transforms. Entities Graphics saw a major redesign targeting 2022.2. And I don’t think there was ever a version where that redesign worked with conversion. It was likely developed alongside bakers from the start.
What about Physics? What about the Character Controller? What about NetCode? Do you really expect Unity to maintain two separate authoring → runtime workflows when they can barely keep one afloat? What happens if Unity comes up with a better design in the future? Do they abandon the old one? Unfortunately, Unity is bound by professional expectations which they already often struggle to meet. Restoring runtime conversion is a way more difficult problem for them than it is for some random member of the community who can move fast and isn’t being held to any expectations.
Don’t get me wrong, I would love a runtime conversion workflow for things like cameras and whatnot. All I’m trying to do here is shed light on the reality of the situation. Maybe you already knew this. Maybe you just don’t care because big companies blah blah blah. But the more you understand about the problem, the better you can formulate your requests so that they seem reasonable to those at Unity making the decisions. Or you can just do your own thing, because I’ve been very wrong about Unity doing “reasonable” things on occasion.
Yeah I agree that there are several different concepts that are all talked about under the umbrella of “conversion”. I think it’s useful to break the problem down even further by considering what the desired output is. I find that there are three main categories of “objects” in the game:
Pure entities. The set of ECS components on those entities entirely represents the objects.
Pure game objects. The set of Monobehaviours on those game objects entirely represents the objects.
Hybrid objects. The object requires both ECS components and Monobehaviours to represent the object. This means the object is represented by both an entity and a game object.
You could also argue for a fourth category: a game object and/or set of MonoBehaviours that you want to be accessible from an ECS world but that has no unmanaged ECS components. For example, you may want an entity with a single managed component that provides access to the Camera MonoBehaviour so that you can access it in ECS systems. However, I group this up with “hybrid” objects since it requires both a game object and an entity.
Separately, there is the distinction of how you create and set up these objects:
Authored in the editor
Loaded at runtime
Default Unity already handles pure game objects well: author prefabs or scene game objects in the editor, or load them at runtime using Addressables.
For pure entities authored in the editor, we have the current Baker and subscene workflow. In my opinion, this also works great. However, my main problem with it is that there is nothing stopping a user from forgetting to define a Baker for a particular authoring MonoBehaviour on a subscene game object. Without a Baker, the game object is baked to an entity and that MonoBehaviour is lost at runtime. While this is “by design”, the user should be warned that the MonoBehaviour has no effect and will not exist at runtime. In general, I think the editor/inspector should very clearly communicate what the output of baking looks like for a particular subscene game object.
For pure entities loaded at runtime, my understanding is that the new “Content Management” aka “DOTS Addressables” is supposed to help with this. But at the moment, this is not a very well supported use case. I don’t currently have a need for it, so I can’t really speak to what it could or should look like.
Hybrid objects, whether authored in the editor or loaded at runtime, are difficult to do in the current version of ECS. It is possible to author some hybrid objects using the current Baker/subscene workflow, but only for a small set of supported MonoBehaviours. This is the old “companion” workflow. It allows you to have an authoring game object in a subscene with components such as “AudioSource”, “Light”, or “VisualEffect”. Internally, Unity will instantiate a “companion” game object that is tied to the lifetime of the baked entity. The entity will have managed components that reference the MonoBehaviours on the companion game object.
For any MonoBehaviour that is not in this special set of supported companion components, we have to use other solutions. This is where we get into the realm of “half baked, half runtime” hacks in order to get the output we want. As I brought up in this thread a while back, we have the following options:
Runtime instantiation of a game object prefab. This means the prefab cannot have any scene references.
Runtime instantiation of an entity from a MonoBehaviour in the scene. Cannot use Baking to populate the non-managed parts of the entity.
Runtime “linking” of a game object and entity. Either a MonoBehaviour searches for an entity and attaches managed components to it, or a system searches for game objects or monobehaviours to attach to particular entities. Both require a custom and brittle tagging solution that works across the scene/subscene boundary.
Runtime setup of static variables. Works for singleton-like data, but is hard to map to corresponding entities.
Assembly ref hacks to expose the old companion link workflow in a Baker.
All of these options have drawbacks and are not well supported workflows. I know there were problems with the old “companion” workflow, but I still think it would be the best option if opened up to all MonoBehaviour types. Specifically, subscene authoring game objects would support baking pure entity data as well as setting up a companion game object to hold the MonoBehaviours that could not be baked. In other words, for every MonoBehaviour on an authoring game object, the user could choose to either define a Baker or add it as a managed component to the companion game object. And unlike the current editor integration, the output of baking would be clearly shown to the user so that they could easily figure out what is getting baked and what is being linked to a companion object.
Somewhat related, but I also think that there should be built-in ways to do transform syncing for hybrid objects (either making a game object follow an entity or an entity follow a game object). This would be optional since hybrid objects don’t always need this, but it doesn’t make sense for everyone to reinvent this.
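A rough sketch of what such built-in syncing could replace, assuming a hypothetical managed TransformLink component that holds the GameObject half of the pair (managed components can’t be used in Burst-compiled jobs, so this runs on the main thread in a SystemBase):

```csharp
using Unity.Entities;
using Unity.Transforms;
using UnityEngine;

// Hypothetical managed component linking an entity to its GameObject.
public class TransformLink : IComponentData
{
    public Transform Target;
}

// Make the GameObject follow the entity every frame.
public partial class SyncGameObjectTransformSystem : SystemBase
{
    protected override void OnUpdate()
    {
        foreach (var (ltw, link) in
                 SystemAPI.Query<RefRO<LocalToWorld>, TransformLink>())
        {
            // LocalToWorld exposes the entity's world-space pose.
            link.Target.SetPositionAndRotation(
                ltw.ValueRO.Position, ltw.ValueRO.Rotation);
        }
    }
}
```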
In short, I think the current baking workflow is great, but I think there needs to be better hybrid and runtime-loading workflows that can work in tandem with the baking workflows.
Btw, when I say “MonoBehaviour”, I really mean UnityEngine.Component. The word “Component” is way too overloaded at this point so I just avoided it.
I have been working with DOTS and Unity ECS since its early days. I have seen the evolution of the entity creation API. I was building moddable systems using DOTS, with many entity variants.
The current project I am working on, a moddable RTS, also uses Entities heavily.
I have seen various posts discussing the lack of runtime entity creation. Perhaps the problem is different from my use cases, but I personally never had a problem with the lack of “runtime baking”.
So how do I tackle runtime creation of prefab entities?
First, narrow the problem down to the basic entity structure for your prefabs. Create these at game start; it doesn’t matter whether you use the baking workflow or not.
Now, having the base prefabs, I can create prefab variations at runtime and populate them with components and initial values.
This way, baking is decoupled from the data structure.
This works with both MonoBehaviours and systems.
The rest is just system-based entity creation and related logic.
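A minimal sketch of the pattern described above (the helper names are mine, not from any package): build base prefab entities purely in code at startup, then derive variants by instantiating and specializing them.

```csharp
using Unity.Entities;
using Unity.Transforms;

public static class RuntimePrefabs
{
    // Create a "base" prefab entity purely in code, no baking involved.
    public static Entity CreateBasePrefab(EntityManager em)
    {
        var archetype = em.CreateArchetype(
            typeof(LocalTransform),
            typeof(LocalToWorld),
            typeof(Prefab)); // Prefab tag keeps it out of normal queries

        var basePrefab = em.CreateEntity(archetype);
        em.SetComponentData(basePrefab, LocalTransform.Identity);
        return basePrefab;
    }

    // Derive a variant: copy the base, then add/override components.
    public static Entity CreateVariant(EntityManager em, Entity basePrefab)
    {
        var variant = em.Instantiate(basePrefab);
        // Instantiate strips the Prefab tag; re-add it so the variant
        // itself behaves as a prefab for further instantiation.
        em.AddComponent<Prefab>(variant);
        return variant;
    }
}
```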
These are good reasons to use baking. Baking will reduce storage space (if done correctly), and the best compression algorithm for real-time usage is one where the data is already pre-compressed. The actual use case for runtime conversion is projects which still rely heavily on GameObjects and Addressables. But if you are starting a new project, I’d strongly recommend avoiding that. Just because Unity is missing some out-of-the-box ECS implementations for certain features doesn’t mean that implementations for those features don’t exist.