GetComponentDataFromEntity(false) is effectively a sync point?

It seems that way although maybe I’m missing something.

It’s consistent in the profiler and entity debugger. The 3-4 systems we have that use GetComponentDataFromEntity(false) all show JobHandle.Complete in the profiler, and the entity debugger shows high CPU usage that can only come from dependencies being waited on.

Pretty sure I have an accurate handle on where all the explicit Complete calls are, including DOTS feature-level stuff.

The scenario here might help. I removed an explicit Complete involving the entity queries in question. Evidently it was there for this very reason; it had just been there so long I didn’t remember why. That Complete was higher up in the system ordering than the jobs using GCDFE, and higher than most other systems.

So what looks to be happening is that by the time the GCDFE systems run, other dependencies have built up, and the GCDFE access forces all of those to complete. That happens cumulatively over the 3-4 involved systems, forcing a whole bunch of work to complete.

If that is what is happening, I need to stop using GCDFEs that write.

I guess what might be happening is things overlapping. Most of the time we don’t have a sync point of our own, and we have enough separation that a lot of our stuff isn’t tied to DOTS features that have their own sync points. So maybe the dependencies basically wrapped around the frame and got entangled with new ones. That means the explicit Complete I had early on in Update was the correct thing to do. It doesn’t end up causing any actual wait time, as the jobs involved are finished, just not in a completed state.

Are you calling GCDFE from within OnUpdate? If so, this behavior is expected only for the first frame the system runs. On following frames it should register the component type as one of its dependencies.
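For anyone following along, here is a minimal sketch of the pattern being discussed: fetching a writable lookup in OnUpdate and using it inside a scheduled job. The component names are hypothetical, and exact API shape varies by Entities package version.

```csharp
using Unity.Entities;

// Hypothetical components for illustration.
public struct Health : IComponentData { public float Value; }
public struct Target : IComponentData { public Entity Entity; }

public class DamageSystem : SystemBase
{
    protected override void OnUpdate()
    {
        // false => read-write access. After the first frame, the system
        // registers Health as one of its dependencies, so fetching the
        // lookup here should not by itself force a Complete.
        var healthLookup = GetComponentDataFromEntity<Health>(false);

        // Single-threaded Schedule, so random writes through the lookup
        // are safe without extra safety attributes.
        Entities.ForEach((in Target target) =>
        {
            var h = healthLookup[target.Entity];
            h.Value -= 1f;
            healthLookup[target.Entity] = h;
        }).Schedule();
    }
}
```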


Ya, it’s within OnUpdate. Best guess is overlapping, as explained above; that does sort of fit the pattern I’m seeing.

Just to be clear: when I put the forced Complete back in, the systems using GCDFE go back to basically no CPU time, and total main-thread time drops by that amount. It’s not just shifting somewhere else.

It is hard to say without seeing profiler data. A system’s Update (prior to OnUpdate) will complete the JobHandle from the last time it ran. Maybe that’s what you are seeing?

Except completing said dependency explicitly early in the cycle prevents it from forcing other work to immediately complete later in the same cycle… Somehow dependencies are overlapping. If GCDFE only resolves in OnUpdate, that would definitely explain things. We do have conditionals on several systems, including the ones in question, that early out. And they always early out for the first few frames at a minimum.

It’s only the first time OnUpdate hits the GCDFE of that particular component type that it will complete immediately. I use GCDFE a lot (writable) and have it working inside of large job chains (half my main-thread time is waiting on jobs at the very end of the frame, where I force completion), so I know it is possible to use GCDFE this way. I’m really struggling to understand what you are doing and what behavior you are describing.

What it looks like is that a dependency from last frame is not completed until it’s combined with work started in this frame.

Also, Update does not always complete. Or what it’s completing is something different than what’s in Dependency. Because in my case, calling Dependency.Complete as the first line of OnUpdate changes the behavior drastically. It appears as though, without that Complete call, the dependency in question lives on and then gets completed later in the same update cycle, combined with other newly created dependencies, forcing their jobs to complete as well.

So there are a few things that happen in a SystemBase Update call (what a ComponentSystemGroup calls to tick a system). First, it completes the JobHandle Dependency from the last time it ran. Then it sets up a new Dependency variable with JobHandles for components it expects you to use based on past behavior (if you use something else, it will complete that dependency immediately). After that, it calls OnUpdate. Finally, it records the new Dependency value in its appropriate locations.
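The sequence above can be sketched roughly like this. This is simplified pseudocode of my understanding, not Unity’s actual implementation; the helper names are made up.

```csharp
// Simplified pseudocode of SystemBase.Update, not the real source.
public void Update()
{
    // 1. Complete the Dependency recorded the last time this system ran.
    lastRecordedDependency.Complete();

    // 2. Build a fresh Dependency from the JobHandles registered for the
    //    component types this system is expected to read/write. Touching
    //    an unexpected type mid-OnUpdate completes that handle immediately.
    Dependency = GatherInputDependencies(expectedComponentTypes);

    // 3. Run user code; jobs scheduled here chain onto Dependency.
    OnUpdate();

    // 4. Record the final Dependency against each touched component type,
    //    so later systems can depend on it.
    RecordOutputDependencies(Dependency, expectedComponentTypes);
}
```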

CombineDependencies creates a new node in the job dependency graph. Completing one of the input dependencies will not force the completion of the other input. JobHandles are essentially readonly pointers to these nodes, so assigning a new JobHandle to one of these variables is essentially just changing what node that variable is referring to.
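To illustrate that point, a small sketch (a made-up no-op job, Unity.Jobs assumed):

```csharp
using Unity.Jobs;

struct NoOpJob : IJob { public void Execute() { } }

void Demo()
{
    JobHandle a = new NoOpJob().Schedule();
    JobHandle b = new NoOpJob().Schedule();

    // 'combined' is a new node in the dependency graph with a and b as inputs.
    JobHandle combined = JobHandle.CombineDependencies(a, b);

    // Waits only for a's job; b may still be running, and 'combined'
    // is not done until b finishes too.
    a.Complete();

    // Reassigning the variable just points it at a different node;
    // nothing in the graph itself changes.
    a = combined;
}
```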

The question is: based on the behavior, what could be causing it?

The system where I’m calling Dependency.Complete has a single ForEach job. That call has a significant, reproducible impact on systems running after it. So it is completing something, maybe just not what I was thinking. But what?

The increased CPU usage from the other systems is significant, but it’s hard to tell where it comes from. The profiler is no help; it doesn’t show anything other than the same main-thread time the entity debugger shows. No wait-for-job-group marker anywhere. It appears they have actually removed job-related data from the profiler recently, maybe part of the dumbing-down efforts I’m so fond of.

Are you using Timeline view in 2020.1? It is absolutely killer in figuring out these kinds of issues.

Calling Dependency.Complete in OnUpdate before the Entities.ForEach completes the JobHandle that would otherwise be the input dependencies for the Entities.ForEach job. Calling Dependency.Complete in OnUpdate after the Entities.ForEach completes the Entities.ForEach job.
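In other words, placement matters. A sketch of both placements inside a hypothetical system (Position/Velocity are made-up components):

```csharp
protected override void OnUpdate()
{
    // (a) Completing here waits only on the input dependencies that
    // Update gathered for this system before OnUpdate ran.
    Dependency.Complete();

    // The scheduled job is appended onto Dependency.
    Entities.ForEach((ref Position p, in Velocity v) =>
    {
        p.Value += v.Value * 0.016f;
    }).Schedule();

    // (b) Completing here instead would also wait on the ForEach job
    // just scheduled, since Dependency now includes it:
    // Dependency.Complete();
}
```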

Like I said, the profiler is not much help here. And general explanations of how JobHandles work aren’t that helpful; I’m just not that new to all this.

Just to visually show what is happening, even though it’s minimal info otherwise, here is an image. CombatEntitySystem is where I’m calling Complete. CombatTargetSearchSystem is the last system in the group. This image is from when I’m not calling Complete. Notice the 3 systems over 0.8 ms. If I put the Complete call in CombatEntitySystem, those all go down to close to 0.1 ms. CombatEntitySystem stays where it’s at; the Complete causes no additional CPU time in that system.

And it’s three systems that jump like this, all using GCDFE. That pattern alone rules out quite a bit: it’s not something I missed in any single system, not some Complete call or structural change I overlooked.

The other systems, the ones in the 0.3 ms range, are also higher when not calling Complete. It’s a small difference, but it’s consistent. That, plus the profiler showing nothing about waiting on jobs, leaves me wondering whether there’s a small chance waiting is not the actual issue. Maybe something else internal is going on.

6165966--674496--upload_2020-8-3_18-6-32.png

The hierarchy view of the profiler window is not very helpful, nor is the systems view. If you can create some profiler captures with and without the Complete call and upload those files, I can look deep into the timeline view and see if I can figure something out.

Another thing I’ve observed is that change filters and doing EntityQuery.CalculateEntityCount can force completion of jobs early.
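A sketch of the change-filter case, with a hypothetical Health component; whether this actually syncs depends on the Entities package version:

```csharp
// Hypothetical query inside a system's OnUpdate.
var query = GetEntityQuery(typeof(Health));
query.SetChangedVersionFilter(typeof(Health));

// With a change filter set, evaluating the count has to inspect chunk
// change versions, which can force jobs writing Health to complete
// before the count is returned.
int count = query.CalculateEntityCount();
```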

Also, does the overall frametime decrease with the Complete call? That would be super weird and would suggest some sort of bug. Are you on 2020.1 or 2019.4?

I just don’t have enough info to help you as is. All I can do is guess at problems or misconceptions you might have based on the words you have typed, which may or may not reflect your thoughts accurately enough.

2020.1. Yes, the Complete call is decreasing overall CPU time/frame time. Honestly, I don’t think I missed anything obvious in the profiler. Unless they removed the WaitForJobGroupID marker from the profiler, there isn’t any actual waiting going on, and this is some obscure bug that I can probably only track down by putting Entities into dev mode and debugging it the hard way.

They haven’t gotten rid of it. So unless your project is non-deterministic, this sounds like a Unity bug. I would probably add a bunch of profiling samples inside the systems that fluctuate the most and try to narrow down the piece that is fluctuating.

I ended up narrowing this down somewhat. The jobs debugger amplifies the pattern a lot. Dependency-handling cost can explode, it seems, and it may well be that there are extra iterations in debug mode. But the cost is always there; it just gets smaller with the jobs debugger disabled, and smaller still in a build. The important part is that where it explodes is not really reflective of the system in question per se. It’s more about the entire dependency state at that point in the frame. My Complete call was basically short-circuiting whatever cumulative logic is going on with dependencies.

What triggered the discovery of the pattern was actually a bad dependency I had that caused ECS to throw an exception, a minor one that didn’t result in a bunch of work not being done, but it did result in my FPS shooting way up. It was short-circuiting things in a similar fashion.
