Hello. I’ve decided to try separating simulation from rendering via multiple worlds. I have an idea on how it might work but wanted to get some feedback.
I imagine that each sim entity I create would, of course, have no render components; it would have everything it needs for the sim, plus one component called, say, PresentationLink. PresentationLink would contain an int PrefabID: a reference to a render entity prefab that contains the render components, plus some lerping components used to achieve the smooth 60 fps look.
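To make the idea concrete, here is a minimal sketch of what those components might look like. Everything here is my own naming (PresentationLink, SimulationTarget); only IComponentData is real Unity.Entities API:

```csharp
using Unity.Entities;

// Lives on every simulation entity; identifies which render prefab
// should represent it. Plain data, so the sim world stays render-free.
public struct PresentationLink : IComponentData
{
    public int PrefabID; // index into some render-prefab lookup table
}

// Added to the render-world doppelganger so sync systems can find
// the simulation entity it mirrors.
public struct SimulationTarget : IComponentData
{
    public Entity Value; // entity in the simulation world
}
```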
As the camera moves, a simulation-world system would be responsible for tagging 'visible' entities. These would get render-world doppelgangers, created with Entity references to their simulation-world counterparts, with a system or two keeping them in sync.
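A rough sketch of one of those sync systems, assuming the older ComponentSystem API and a SimulationTarget component like the one described above (the cross-world EntityManager handle is an assumption about how you'd wire the two worlds together):

```csharp
using Unity.Entities;
using Unity.Transforms;

// Runs in the render world. Each frame it pulls the latest position
// from the simulation world; lerping systems downstream can then
// smooth between samples for the 60 fps look.
public class SyncFromSimulationSystem : ComponentSystem
{
    // Assumed to be assigned when the render world is created,
    // pointing at the simulation world's EntityManager.
    public EntityManager SimEntityManager;

    protected override void OnUpdate()
    {
        Entities.ForEach((ref SimulationTarget target, ref Translation translation) =>
        {
            // Read-only access into the sim world, so the presentation
            // cannot disturb simulation state (or determinism).
            translation.Value = SimEntityManager
                .GetComponentData<Translation>(target.Value).Value;
        });
    }
}
```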
The authoring workflow would typically involve sim world GameObjects and render world GameObjects.
What do you think? Is this a good path? Have any advice or see any pitfalls?
I didn’t realise Unity was doing this already. I just remember seeing a Unite talk about this possibility, looked at the Entity Debugger, and figured all systems tick down in a row. I didn’t see it mentioned in the manual either… perhaps I missed it.
Which performance? It doesn’t give you any performance gain compared with the current groups approach, especially once they return the sim group to a fixed timestep.
The premise of the original post was the separation of simulation and rendering with different tick rates on the assumption that the default world had one single tick rate, as the documentation states:
SimulationSystemGroup (updated at the end of the Update phase of the player loop)
PresentationSystemGroup (updated at the end of the PreLateUpdate phase of the player loop)
Given this assumption, wanting to decouple rendering and simulation means fewer simulation ticks can occur, while more render ticks can occur with smoothing. This allows many more entities to be processed in the simulation while still maintaining 60 FPS rendering.
tertle has stated that what I am desiring is (or will be) handled by Unity by default.
I too see benefits in being able to run different systems at different tick rates. For example, for VR games you might run gameplay logic at 60 Hz while rendering runs at 120 Hz.
This can be somewhat achieved by running all systems at the highest rate and simply skipping processing on some frames (early exit) in lower-tick-rate systems. I’ve actually done that in the past to save perf, e.g. only ticking some particle systems at half rate.
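The early-exit trick is just a frame counter at the top of OnUpdate. A sketch, assuming the ComponentSystem API (system name and counter are illustrative):

```csharp
using Unity.Entities;

// Ticks its real work every other frame: effective rate is half
// of whatever rate the containing group updates at.
public class HalfRateParticleSystem : ComponentSystem
{
    int frameCounter;

    protected override void OnUpdate()
    {
        frameCounter++;
        if ((frameCounter & 1) != 0)
            return; // early exit on odd frames

        // ... particle update work here, at half rate ...
    }
}
```

Note this saves update cost but not scheduling cost, and the skipped systems still see the full-rate delta time unless you accumulate it yourself.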
It’s obvious; my point was:
You don’t get performance gains from another World (as the OP described) compared with using different timesteps for different groups in one World, or with manual timestep manipulation. On the contrary, it’s an unnecessary complication to have to synchronize the data needed for rendering between worlds.
For example, you can create entities in another world with ExclusiveEntityTransaction without blocking the main thread. That’s how sub scene streaming works in Megacity.
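For anyone unfamiliar with the pattern, a sketch of how that looks, assuming Unity.Entities and Unity.Jobs (the job, entity count, and method names are illustrative; the transaction API itself is real):

```csharp
using Unity.Entities;
using Unity.Jobs;

// Creates entities in a separate "loading" world from a worker thread.
// The main world's EntityManager is never blocked.
struct CreateEntitiesJob : IJob
{
    public ExclusiveEntityTransaction Transaction;
    public EntityArchetype Archetype;

    public void Execute()
    {
        for (int i = 0; i < 1000; i++)
            Transaction.CreateEntity(Archetype);
    }
}

public static class StreamingExample
{
    public static void LoadInBackground(World loadingWorld, World mainWorld,
                                        EntityArchetype archetype)
    {
        var transaction = loadingWorld.EntityManager.BeginExclusiveEntityTransaction();
        var handle = new CreateEntitiesJob
        {
            Transaction = transaction,
            Archetype = archetype
        }.Schedule();

        // Later, once the job has finished:
        handle.Complete();
        loadingWorld.EntityManager.EndExclusiveEntityTransaction();

        // Move the finished entities into the main world in one cheap operation.
        mainWorld.EntityManager.MoveEntitiesFrom(loadingWorld.EntityManager);
    }
}
```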
I could see other use cases of multiple worlds. In my case, I need the simulation to be deterministic, serializable and light-weight.
If I could split my presentation in a separate world, it would make my life much simpler.
I don’t risk the presentation (not deterministic) breaking the simulation determinism because it only reads from it.
All the non-serializable stuff like textures, meshes, audio clips don’t get mixed in with simulation entities. That makes serialization much simpler.
The visual-related entities like particles are out of the simulation, keeping it light-weight. (I need my sim to be lightweight for serialization and fast copying.)
Are any of you aware whether splitting sim/view worlds is a viable option? Or should I keep one world and try to work around my problems some other way?
Different systems with different tick rates is my one and only dream with DOTS. @cort_of_unity Any update on when the sim groups are moved back to fixed update?