I’m looking for some pointers as reading the documentation and various repos did not enlighten me.
Context: I’m doing R&D for a simulation/RTS game.
I would like to create two worlds: one with the model/simulation, ticking slowly and deterministically, and a rendering world.
I’ve read and understood the bootstrap and already have two worlds going, but I’m struggling to find details on a couple of topics:
Can I control during baking which world an entity and its components go to? Does it have to be World.DefaultGameObjectInjectionWorld? Ideally I would have loved to control during baking which authoring GameObject goes to which world.
How can a system know that baking is done and all entities from the scene have been created?
Bonus: is there any example code showing how to copy an entity over from one world to another? The API does not provide a CopyEntity, or Add/GetComponentData overloads without generics.
Typical loading is handled by the SubScene component, which requests that subscenes be loaded in all worlds via a SceneSystem. Configuring this boils down to disabling AutoLoadScene on the SubScene and using SceneSystem’s static methods to drive loading yourself. Systems can also query whether a subscene, or a specific section of one, is loaded. You can add a SceneSectionComponent to assign the section an entity will belong to, and use RequestSceneLoaded to control which sections are loaded in each world.
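A minimal sketch of that flow, assuming Entities 1.x (the EntitySceneReference would normally come from a baked component or authoring data; here it’s just a field):

```csharp
using Unity.Entities;
using Unity.Entities.Serialization;
using Unity.Scenes;

// Runs in the custom world; streams one subscene into that world only and
// detects when loading has finished (which also answers "when is baking done").
public partial class ModelSceneLoader : SystemBase
{
    public EntitySceneReference SceneToLoad; // assumed supplied externally
    Entity _sceneEntity;
    bool _requested;

    protected override void OnUpdate()
    {
        if (!_requested)
        {
            // Kick off streaming into *this* system's world only.
            _sceneEntity = SceneSystem.LoadSceneAsync(World.Unmanaged, SceneToLoad);
            _requested = true;
        }
        else if (SceneSystem.IsSceneLoaded(World.Unmanaged, _sceneEntity))
        {
            // All baked entities from the subscene now exist in this world.
        }
    }
}
```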
There are options to move entities matching a query from one world to another, specifically the overloads of EntityManager.MoveEntitiesFrom that accept an EntityQuery, and EntityManager.CopyEntitiesFrom allows copying entities from another world. EntityManager also has SwapComponents, which could conceivably be used as a somewhat dodgy way to transfer specific entities (instantiate the entity in the source world to create a copy, create an entity with the same archetype in the destination world, swap components between the copy and the destination entity, then delete the instantiated copy in the source world). Entity references wouldn’t survive the transfer if that’s done.
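A quick sketch of the query-based move (SomeTag is a placeholder component; note that moving removes the entities from the source world, unlike the copy variants):

```csharp
using Unity.Entities;

public static class WorldTransfer
{
    public static void MoveMatching(World source, World destination)
    {
        // Select which entities to transfer.
        var query = source.EntityManager.CreateEntityQuery(
            ComponentType.ReadOnly<SomeTag>());

        // Entities and their component data are transferred wholesale;
        // Entity references held by anything left behind will not remap.
        destination.EntityManager.MoveEntitiesFrom(source.EntityManager, query);
    }
}
```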
which requests subscenes be loaded in all worlds with a SceneSystem.
Thank you !
That led me in the right direction; I just did not realise there are a fair few systems required to get the scene baking/loading happening in the new world I created. I’ve got it now : )
For the bonus part, I’m more after the ability to copy components from the model world over to the matching entities in the view world.
The model world would run its simulation at, let’s say, 10 Hz, so I have plenty of time to modify my data. When the time is up, I would like to copy that data to the rendering world (hopefully super fast).
So let’s say I have component A on entity1 in the model world: I can take up to 0.1 s to finish writing it in the model world, then copy that same component A onto the matching entity in the rendering world.
I already have the remapping data, by using a unique ID on entities, so I can figure out that part. But the copying over in my naive implementation is utterly slow:
foreach (var (componentA, uniqueID) in SystemAPI.Query<RefRO<ComponentA>, RefRO<UniqueID>>())
{
    if (!_modelToView.TryGetValue(uniqueID.ValueRO.ID, out Entity viewEntity))
    {
        // Get the matching entity in the view world
        //_modelToView.Add(uniqueID.ValueRO.ID, viewEntity = viewEntityMatching);
    }
    // Main-thread, per-entity write against the other world's EntityManager: slow
    _renderingManager.AddComponentData(viewEntity, componentA.ValueRO);
}
I’m also unsure whether Unity’s job system properly decouples dependencies between worlds, i.e. whether rendering component A would wait on a job touching component A in the model world (that would be a deal breaker). I’ll have to assume it’s decoupled, but I don’t yet know how to debug/verify dependencies, nor how long an IJobEntity takes.
I can picture using intermediate containers that you can pass between worlds and use in jobs.
The source side would populate lists of components and entities through an EntityQuery. Take those lists and the combined JobHandle into a system in the destination world, and schedule a job that uses them: specifically an IJobParallelForDefer, to account for a variable final entity count from enableable components, using an ID-to-entity map along with a ComponentLookup&lt;T&gt; (with [NativeDisableParallelForRestriction] to allow parallel writing) to write the components in parallel. This pattern should then do most of the work off the main thread and assign component data in parallel.
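A sketch of that destination-side job, reusing ComponentA and UniqueID from the earlier snippet (the id-to-entity map is assumed to be maintained elsewhere, and ComponentA is a stand-in definition):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public struct ComponentA : IComponentData { public float Value; }

[BurstCompile]
public struct ApplyModelDataJob : IJobParallelForDefer
{
    [ReadOnly] public NativeList<int> Ids;                       // unique IDs from the model world
    [ReadOnly] public NativeList<ComponentA> Payloads;           // one payload per ID
    [ReadOnly] public NativeParallelHashMap<int, Entity> IdToViewEntity;

    // Each index targets a distinct entity, so parallel writes are safe here.
    [NativeDisableParallelForRestriction]
    public ComponentLookup<ComponentA> Lookup;

    public void Execute(int index)
    {
        if (IdToViewEntity.TryGetValue(Ids[index], out var viewEntity))
            Lookup[viewEntity] = Payloads[index];
    }
}

// In the view-world system, combining the source world's handle:
// var handle = job.Schedule(ids, 64, sourceWorldHandle);
// state.Dependency = JobHandle.CombineDependencies(state.Dependency, handle);
```

The deferred Schedule overload takes the NativeList itself, so the job's length is resolved only when it actually runs, after the source side has finished filling the list.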
The framework creates dependency data per world, so jobs should normally be independent. We don’t really get information on dependencies beyond seeing which component types are used by a system in the system inspector.
The gist of dependencies for systems is that using methods on SystemState or ComponentSystemBase to get queries, component/buffer lookups, etc. adds to the system’s tracked set of read/write dependencies. The input job handle is then initialized as the previous Dependency combined with the current job handles for the system’s accessed component types, while the output handle is used to set up the read/write dependencies for those component types as appropriate.
The main thing to be concerned with is getting queries and such through the system instead of through EntityManager, along with making sure job handles inside the system are chained properly. That mostly amounts to using the resulting job handle whenever you pass a job handle into a Schedule method (var jobHandle = myJob.Schedule(anotherJobHandle);), and combining job handles if you have jobs that run independently of each other but need one output (e.g. assigning back to the system’s Dependency).
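Those two chaining rules, as a minimal sketch inside a system’s OnUpdate (FirstJob/SecondJob are placeholder jobs):

```csharp
using Unity.Entities;
using Unity.Jobs;

struct FirstJob : IJob { public void Execute() { /* work */ } }
struct SecondJob : IJob { public void Execute() { /* work */ } }

public partial struct ChainingExampleSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        // Rule 1: always capture and reuse the handle Schedule returns.
        JobHandle first = new FirstJob().Schedule(state.Dependency);

        // An independent job can start from the same input dependency...
        JobHandle second = new SecondJob().Schedule(state.Dependency);

        // Rule 2: ...but the system must get one combined output handle back.
        state.Dependency = JobHandle.CombineDependencies(first, second);
    }
}
```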
IJobEntity execution time depends on complexity and volume of entities (and whether you choose to use ScheduleParallel if possible for that task), but generally you expect these to run within one update cycle of the system. The entity iteration wouldn’t be as efficient as iterating through arrays, but given that entities are stored in chunks and IJobChunk (which IJobEntity codegens into) iterates linearly over each chunk’s entities, it’s pretty much as nice as it will get.
Systems also automatically complete their previous dependencies: if a job scheduled in the previous frame’s OnUpdate (and properly assigned back to the system’s Dependency) is still incomplete, it will be force-completed just before OnUpdate in the current frame.
I’m still working on a solution and trying to figure out the bookkeeping of modifications to pass over the bridge. I’ll report back when I have some progress.
In the meantime I was surprised by
generally you expect these to run within one update cycle of the system.
From my testing it seems it’s not just “generally”: jobs are actually enforced to complete within a frame. Unless I can somehow schedule a job letting it know it isn’t frame-bound? Maybe that’s where I need to handle dependencies manually?
I got it working with decent performance and with the ability to jobify the synchronization.
I created my own component writers that keep track of requested modifications per type on the model.
I carefully set up my groups/systems so that there is a single main-thread point where I apply the accumulated changes to the model and copy the data over to a component reader, which the view can consume on the same tick to do the same job. I indeed used NativeArray/NativeList, and can push that into jobs, so I’m fairly happy.
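One way that per-type “component writer” bridge might be sketched (all names hypothetical; this is not the poster’s actual implementation): modifications accumulate in lists, are applied to the model at the single main-thread sync point, and the same lists are then handed to the view world for that tick.

```csharp
using Unity.Collections;
using Unity.Entities;

public struct ComponentWriter<T> where T : unmanaged, IComponentData
{
    public NativeList<int> Ids;   // unique ID of each target entity
    public NativeList<T> Values;  // pending value per ID

    // Called from simulation code (or jobs, with a parallel writer variant).
    public void Record(int id, in T value)
    {
        Ids.Add(id);
        Values.Add(value);
    }

    // Main-thread sync point: apply the accumulated changes to the model.
    // The view-world system can then consume the same Ids/Values lists,
    // e.g. through a deferred parallel job, before they are cleared.
    public void ApplyTo(EntityManager model, NativeParallelHashMap<int, Entity> idToEntity)
    {
        for (int i = 0; i < Ids.Length; i++)
            if (idToEntity.TryGetValue(Ids[i], out var e))
                model.SetComponentData(e, Values[i]);
    }
}
```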