How stable is determinism, actually, for busy systems?

I am trying to build up mechanics, with determinism, for a large number of active entities.
Then I want to be able to fast forward the simulation, by reducing the time step.

My question is, how stable is determinism at the current DOTS development stage? I am using the Entities package 0.1.1 (preview).

If you know the answer, you can skip further reading. But my current experimentation gives me unstable results, which I attempt to explain below.

In my project, I use 0.1 sec as the base time unit for most calculations across systems, other than physics, rendering and I/O.
On top of that, I use a period of 10 steps ( base time unit * 10 intervals = period duration; 0.1 * 10 = 1 sec ), in which each 1/10 interval has some designated systems to update.
Some systems execute on every step of the period. Other systems may execute only on the 2nd, 4th, 7th etc. interval of the period.

Basically it looks like this:

int periodStep0to9 = (int) ( baseTimeStep % 10f ) ;

switch ( periodStep0to9 )
{
                case 0:
                    systemA.Update () ;
                    break ;
                case 9:
                    systemZ.Update () ;
                    break ;
}
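The same scheduler can be driven from an integer tick counter instead of a float, which avoids any accumulation or rounding drift when fast-forwarding. A minimal sketch; tick, Step and the system references are illustrative names:

```csharp
// Sketch: derive the 10-step period from an integer tick counter,
// incremented once per fixed simulation step, so the period index
// can never drift no matter how fast the simulation is stepped.
int tick = 0 ;

void Step ()
{
    int periodStep0to9 = tick % 10 ;

    switch ( periodStep0to9 )
    {
        case 0:
            systemA.Update () ;
            break ;
        case 9:
            systemZ.Update () ;
            break ;
    }

    tick ++ ;
}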

Now I have recorded inputs and designated time steps, and I am replaying them back. I need to make sure I can get the same expected behaviour back.
However, I really have trouble making the whole deterministic mechanics stable. I have tried multiple approaches, but I think I am either missing something, or determinism is still a work in progress.
In my case, the application appears to work when systems run on the base time step of 0.1 s, which is fine. But when fast forwarding 5 to 10x, determinism seems to start falling apart. Systems start lagging behind and desyncing; at least that is my impression. Meaning there is a difference between running at 0.1 s and at 0.01 s.
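One alternative to shrinking the time step is to keep dt constant and run more fixed steps per rendered frame, so the simulation sees identical step sizes at any playback speed. A sketch of that accumulator pattern; speedMultiplier and Step () are illustrative names, not anything from the project above:

```csharp
// Sketch: fast-forward by running more fixed steps per frame,
// keeping the step size itself constant at 0.1 s.
const float fixedStep = 0.1f ;
float accumulator = 0f ;
float speedMultiplier = 10f ; // e.g. x10 fast forward

void Update () // once per rendered frame
{
    accumulator += UnityEngine.Time.deltaTime * speedMultiplier ;

    while ( accumulator >= fixedStep )
    {
        Step () ; // one deterministic simulation step, always at fixedStep
        accumulator -= fixedStep ;
    }
}
```

Because every step uses the same dt, floating-point results cannot differ between x1 and x10; only the number of steps per frame changes.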

I expect systems to execute in strict order, even if the frame rate drops or a job's duration in a system exceeds 1 frame, which can be expected when accelerating the simulation. But I observe that some systems skip their group and execute much later, rather than in the next expected frame.

Mind that I use EntityCommandBuffer to create entities, and some of the involved jobs are multi-threaded. The RenderMesh is also changed, as I think this may be important.

I also tried using
job.Complete () ;
but I cannot see any significant change, since I expect another related job to run in the next frame anyway.
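If the goal is that no job survives across a step boundary, completing everything the EntityManager is tracking at the end of each step is more direct than completing a single handle. A sketch, assuming the Entities 0.1-era API:

```csharp
// Sketch: at each deterministic step boundary, force every in-flight
// job in the world to finish, so no job's work can spill into the
// next step when the simulation is accelerated.
Unity.Entities.World.Active.EntityManager.CompleteAllJobs () ;
```

This costs parallelism (the main thread stalls until workers finish), but during a deterministic fast-forward that trade-off may be acceptable.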

I have read multiple topics, which briefly led me to using a fixed time step and manual creation and execution of systems within a group.

And of course applying [DisableAutoCreation] to the systems.
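For reference, manual creation and ordering of systems within a group looks roughly like this in Entities 0.1; MySystemA / MySystemB are illustrative names, assumed to carry [DisableAutoCreation]:

```csharp
using Unity.Entities ;

// Sketch: build a group by hand, with explicit membership,
// then sort it so [UpdateBefore]/[UpdateAfter] attributes apply.
var world = World.Active ;
var group = world.GetOrCreateSystem <SimulationSystemGroup> () ;

group.AddSystemToUpdateList ( world.GetOrCreateSystem <MySystemA> () ) ;
group.AddSystemToUpdateList ( world.GetOrCreateSystem <MySystemB> () ) ;
group.SortSystemUpdateList () ;

// Later, e.g. from a MonoBehaviour's FixedUpdate:
group.Update () ;
```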

In some trials I have also experienced systems being executed twice within a frame. This is briefly mentioned in the reading below.

Other reading.
https://gametorrahod.com/world-system-groups-update-order-and-the-player-loop/
https://docs.unity3d.com/Packages/com.unity.entities@0.0/manual/system_update_order.html

Another discussion from March 2019, mentioning changes regarding updates.

So, repeating my initial question: how stable is the current determinism? Can I rely on it?
Or maybe I keep making a mistake somewhere?

Floats are not deterministic atm

Job update order != system update order, common mistake.

While job update order does take into account the order handles are returned (and therefore system update order), it is also based on your read/write dependencies in the entity queries, and optimises based on those.

I see no reason this tree isn't deterministic, but because it's a tree, on some frames certain paths might update before or after other paths, while still always updating in dependency order.

Systems will always update in order if you let the default loop work.
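To illustrate the distinction above: system update order only fixes when job handles are scheduled; the job scheduler chains actual execution through component read/write dependencies. A minimal sketch using the 0.1-era JobComponentSystem / IJobForEach API:

```csharp
using Unity.Collections ;
using Unity.Entities ;
using Unity.Jobs ;
using Unity.Transforms ;

// WriteSystem writes Translation; ReadSystem only reads it.
// Even though both jobs run on worker threads, the scheduler chains
// ReadSystem's job after WriteSystem's through the Translation
// dependency, so reads always see that frame's writes.
public class WriteSystem : JobComponentSystem
{
    struct MoveJob : IJobForEach <Translation>
    {
        public void Execute ( ref Translation t ) { t.Value.y += 0.1f ; }
    }

    protected override JobHandle OnUpdate ( JobHandle inputDeps )
        => new MoveJob ().Schedule ( this, inputDeps ) ;
}

[UpdateAfter ( typeof ( WriteSystem ) )]
public class ReadSystem : JobComponentSystem
{
    struct SampleJob : IJobForEach <Translation>
    {
        public void Execute ( [ReadOnly] ref Translation t ) { /* read only */ }
    }

    protected override JobHandle OnUpdate ( JobHandle inputDeps )
        => new SampleJob ().Schedule ( this, inputDeps ) ;
}
```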


What determines the default order of system (not job) updates?

I don’t know the answer. Maybe it is defined at compile time.
But I don’t think the default system execution order is reliable enough to trust.
I believe it can be a bit like GameObject ordering, which can fluctuate, hence it is not something to rely on.
Hence manual system ordering is proposed.


Regarding the main thread, I have come up with the following solution.
Not sure if it is right; it surely is not the most elegant, and I believe it can potentially cause lots of other issues when it comes to ordering systems. But I could not find a better alternative so far, and it works for me for accelerated fixed time of up to x10.

Before going into it, however, I will just put a reference to another thread where I asked a relevant question:
** Can system execute same job multiple times, before this job is finished? **
You can skip the quotation.

I asked

Response back

Proposed solution, now under test

So my result of testing, as @tertle concluded, is that jobs which run longer than the target frame rate can potentially create duplicated data.

In pursuit of a solution, I came up with the following:

  • The involved systems rely on EntityCommandBuffer, creating entities and relevant component tags, which activate GetEntityQuery groups in the next systems. Details not discussed here.

  • For system creation, I use

mySystem = World.Active.GetOrCreateSystem <MySystem> () ;
  • In this particular scenario, I have decided not to use
systemGroup.AddSystemToUpdateList ( mySystem ) ;
  • To define a strict order, I call mySystem.Update () ; in a private void FixedUpdate () loop.

  • Each system has the [DisableAutoCreation] attribute. No other attributes.

  • In my case, I use a switch to control the step of the period in which the relevant systems execute sequentially. All works nicely when running at x1 speed.
    In the case of x5 and more, jobs start losing sync, hence the next step is required.

  • First I check if a system is ready to update:
    bool shouldRun = mySystem.ShouldRunSystem () ;

  • Then, to make sure no other job executes before the critical job is running, I use an enum:

enum SysAlter
{
  Ready,
  Request,
  Set,
}
// First, prioritize the update of the first system ( readySystem ).
if ( readySystem.ShouldRunSystem () && sysAlter == SysAlter.Ready )
{
  readySystem.Update () ;
  sysAlter = SysAlter.Request ; // Next system
}
...
// Next, requestSystem updates once readySystem is complete.
// Completion is signalled by setting an entity tag, which is read by
// requestSystem's GetEntityQuery group, enabling the system to run.
if ( requestSystem.ShouldRunSystem () && sysAlter == SysAlter.Request )
{
  requestSystem.Update () ;
  sysAlter = SysAlter.Set ; // Next system
}
...
// Then the same for the last system, setSystem, as for requestSystem.
// The only difference is that the enum is set back to the first system.
if ( setSystem.ShouldRunSystem () && sysAlter == SysAlter.Set  )
{
  setSystem.Update () ;
  sysAlter = SysAlter.Ready; // Back to first system
}
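Putting the bullets above together, the driver might look like the following as a MonoBehaviour. This is a sketch with illustrative system class names (ReadySystem, RequestSystem, SetSystem), not a drop-in implementation:

```csharp
using Unity.Entities ;
using UnityEngine ;

public class SimulationDriver : MonoBehaviour
{
    enum SysAlter { Ready, Request, Set }

    ReadySystem readySystem ;
    RequestSystem requestSystem ;
    SetSystem setSystem ;
    SysAlter sysAlter = SysAlter.Ready ;

    void Start ()
    {
        // All three systems carry [DisableAutoCreation], so they only
        // exist and run because we create and update them here.
        var world = World.Active ;
        readySystem = world.GetOrCreateSystem <ReadySystem> () ;
        requestSystem = world.GetOrCreateSystem <RequestSystem> () ;
        setSystem = world.GetOrCreateSystem <SetSystem> () ;
    }

    void FixedUpdate ()
    {
        // Each system runs only when the enum says it is its turn AND
        // its query matches (the predecessor has produced the tag it
        // requires), so the chain can never overlap itself.
        if ( sysAlter == SysAlter.Ready && readySystem.ShouldRunSystem () )
        {
            readySystem.Update () ;
            sysAlter = SysAlter.Request ;
        }
        else if ( sysAlter == SysAlter.Request && requestSystem.ShouldRunSystem () )
        {
            requestSystem.Update () ;
            sysAlter = SysAlter.Set ;
        }
        else if ( sysAlter == SysAlter.Set && setSystem.ShouldRunSystem () )
        {
            setSystem.Update () ;
            sysAlter = SysAlter.Ready ;
        }
    }
}
```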

I understand this is not the best solution, and I may yet realize it does not work for case A or B, but this approach has worked for me so far, allowing me to accelerate the simulation while ensuring jobs do not go out of sync.
When the simulation is fast forwarding, it does not matter to me if it isn’t smooth, as long as it is deterministic.

I also need to highlight that I haven’t finished this implementation, but so far it appears to have removed the other issues with jobs duplicating data, which I was fighting just this past week.

Thx @tertle for your input.
