Premise
Since Time.time began, Awake, Start and OnEnable have been the place to find GameObjects, cache references, add listeners and do runtime setup. While in a pure ECS project such MonoBehaviour methods are a thing of the past, we have many years of a hybrid workflow ahead of us where both approaches will be mixed.
Observations
Entities from a converted subscene ‘exist’ on the first frame the subscene is in the scene… except… only when the subscene is open. When the subscene is closed, I believe the entities are unavailable until the second frame. e.g. caching a reference to an entity in Start() works fine until you close the subscene.
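To make this concrete, here is a minimal sketch of the workaround the current behaviour forces on a MonoBehaviour (TreeTag and the class names are made up for illustration; the one-frame delay is the point):

```csharp
using System.Collections;
using Unity.Entities;
using UnityEngine;

// Hypothetical tag component on an entity authored in the subscene.
public struct TreeTag : IComponentData { }

public class TreeFinder : MonoBehaviour
{
    Entity cachedTree;

    // Caching directly in Start() works while the subscene is open in the
    // editor, but with a closed subscene the query is empty on frame one.
    IEnumerator Start()
    {
        // Workaround: wait a frame so closed-subscene entities have streamed in.
        yield return null;

        var em = World.DefaultGameObjectInjectionWorld.EntityManager;
        var query = em.CreateEntityQuery(typeof(TreeTag));
        cachedTree = query.GetSingletonEntity();
    }
}
```

Turning every Start() that touches the entity world into a coroutine like this is exactly the verbosity I’m complaining about below.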
Opinion
This is unexpected, frustrating to deal with, and I believe it substantially contributes to the ambient ‘bugginess’ feel of working with subscenes at the moment. It is also unintuitive to me that a subscene takes “longer” to load than regular GameObjects. I’m concerned that this takes the ‘in practice nobody will notice a one-frame delay’ philosophy a step too far, particularly when we have so many convenient hooks for the first frame and none for the second - code that would be simple to write (especially given some other utilities or tools) becomes very verbose. In fact I’m not sure what good code looks like in this situation, other than writing it as a System (which is not the focus of this post).
Request
At a minimum, make the open/closed behaviour consistent - at worst by artificially delaying the entity load when the subscene is open.
A much preferred solution, imo, would be for subscenes that are part of a scene to have their entities loaded before Start.
Clarity around timing: where Awake, Start, LateUpdate and InitializationSystemGroup fall relative to when a subscene is loaded, and how this changes when a) the subscene is open, b) the subscene is closed, c) in builds, d) under any other conditions.
Many thanks in advance for giving this a read, and for the team’s hard work. I suspect there’s very little appetite for my suggestion, though I believe it would have a disproportionately large positive effect on the experience of a hybrid workflow.
(Observations as of Unity 2020.1.0f1, Entities 0.13)
Taking that comment in good faith, my aim here is to give feedback about what I perceive to be a UX issue that many will encounter. I’d hope you would agree (given the context of many long-standing Unity users) that if/when this happens, it’s currently less than ideal. As a second caveat: this post is about when MonoBehaviours are mixed with Systems, not whether they should be.
I would hope we’re on the same page that the ideal way of working with Entities is to use Systems throughout.
I believe though that there are many occasions when this won’t exclusively be the case. Especially if it’s relatively subtle issues like these that are the main barrier.
Sorry I didn’t mean to sound so combative, however I believe that MonoBehaviours should never access the entity world.
I’m not saying that they shouldn’t co-exist, but when they do the GameObject world should be driven by the Entity world, not the other way around. i.e. Systems should write to GameObjects but never in reverse.
If not, it causes a huge range of issues (e.g. this post) that I could write a paper on from experience.
I use GameObjects and MonoBehaviours nearly exclusively as a presentation layer, or at the very least standalone objects.
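As a sketch of what I mean by one-way flow (all names here are illustrative - a hypothetical Health component mirrored onto a hypothetical HealthBar MonoBehaviour, using the ComponentSystem-style hybrid ForEach available around Entities 0.13):

```csharp
using Unity.Entities;
using UnityEngine;

// Simulation-side data, owned by the entity world.
public struct Health : IComponentData { public float Value; }

// Presentation-side MonoBehaviour; it only ever receives data.
public class HealthBar : MonoBehaviour { public float Fill; }

public class HealthPresentationSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        // The system writes entity data out to the GameObject -
        // never the reverse.
        Entities.ForEach((HealthBar bar, ref Health health) =>
        {
            bar.Fill = health.Value;
        });
    }
}
```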
I agree with some of the above though I don’t see everyone using DOTS this way.
Trying to make this a little more concrete by using an imaginary scenario:
MonoBehaviour + traditional UI menu
On button press, play audio and trigger an animation (let’s say sprouting trees in a forest)
Trees, animations etc. all in a subscene or spawned entity prefabs (perhaps there’s a simulation aspect)
Pressing a button in MB land adds a tag or data to a singleton and the simulation & animation systems kick off
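A rough sketch of what that button handler might look like (SproutRequest and the field names are made up for illustration; the singleton write is the only crossing into the entity world):

```csharp
using Unity.Entities;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical singleton component that the simulation systems react to.
public struct SproutRequest : IComponentData { public bool Pressed; }

public class SproutButton : MonoBehaviour
{
    public Button button;        // assigned in the inspector
    public AudioSource sprout;   // audio stays on the GameObject side

    void Awake()
    {
        button.onClick.AddListener(() =>
        {
            sprout.Play();

            var em = World.DefaultGameObjectInjectionWorld.EntityManager;
            var query = em.CreateEntityQuery(typeof(SproutRequest));
            // Flag the request; simulation & animation systems pick it up.
            query.SetSingleton(new SproutRequest { Pressed = true });
        });
    }
}
```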
In that example I’d guess there are already 10 decisions you wouldn’t have made. My belief though is that a) this will be quite common, b) I appreciate it’s early, but I don’t see much in the way of Unity clearly communicating or limiting this possibility space, so it feels somewhat intended, and c) generally, actually, this probably works?
You could also consider asset store DOTS packages - say a path-finding lib with static methods you call to update obstacles or whatever, which then update the entity world. I think MonoBehaviours will potentially be doing quite a lot of this kind of indirect access, and with that degree of indirection it may even feel quite clean.
If issues/inconsistencies like the above could be resolved, I think that would go a long way to improving the workflow - it’s hard for me to see how it would subtract. Appreciate the thoughts.
Thanks, OP!!! This starts to explain the awkward buginess when I was trying to place entities using the editor (placing them directly in this case; no subscenes involved, but likely the same root cause). They would spawn at 0,0,0 and then jump to their location at some future time when moving the camera or manipulating things. I guess entities just can’t be used in the edit world reliably right now.
Hey, no problem. Generally though I’ve found the conversion workflow fairly reliable. I wonder if your issue might be that your spawn system runs after the transform system, so the entity is at 0,0,0 for the first frame? I mention it because it’s a fairly common problem, and it’s resolved by manually setting the LocalToWorld as well as the Translation when you spawn, or else by making sure your spawn system updates before the transform system. It might be unrelated to your problem but I thought I’d mention it just in case.
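For reference, a sketch of the fix I mean (SpawnAt is a hypothetical helper; the LocalToWorld line is the important part):

```csharp
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

public static class SpawnUtil
{
    public static Entity SpawnAt(EntityManager em, Entity prefab, float3 position)
    {
        var e = em.Instantiate(prefab);
        em.SetComponentData(e, new Translation { Value = position });
        // Without this, the entity renders at 0,0,0 until the transform
        // system next rebuilds LocalToWorld from Translation.
        em.SetComponentData(e, new LocalToWorld { Value = float4x4.Translate(position) });
        return e;
    }
}
```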
They were being spawned from a MonoBehaviour, not a system. The MB has [ExecuteAlways], so it executed, but the transform system didn’t execute on it until the editor ticked again.