I'm making a rendering system for sprites. Right now everything is in one big system: it gathers sprite components, does a culling pass, sorts them, batches draw calls, and does the actual rendering. For performance reasons, would it be better to keep all of this in one system, or to split each task into its own system?
My instinct says to split it into multiple systems, but my main concern is the extra cost of passing large amounts of data between systems. Right now all the data is stored in lists that grow in capacity as needed and never shrink, so new memory is very seldom allocated even though some lists have 100,000+ elements. I'm not sure I could maintain that while passing that much data between individual systems. Thoughts?
It is one thing to deliver a house. It is another to deliver a key to a house. NativeContainers are like keys to houses: the actual C# structs are just small handles that hold pointers to the real unmanaged memory, so passing one between systems copies the key, not the house.
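A minimal sketch of what that means in practice (assuming Unity.Collections); copying the struct copies the handle, not the buffer:

```csharp
using Unity.Collections;

public static class NativeListAliasDemo
{
    public static void Demo()
    {
        var list = new NativeList<int>(Allocator.Persistent);
        list.Add(1);

        var alias = list;  // copies the small handle struct (the "key"), not the buffer (the "house")
        alias.Add(2);

        // Both structs point at the same unmanaged memory:
        // list.Length is 2 here, and list[1] == 2.

        list.Dispose();    // one Dispose frees the memory behind both handles
    }
}
```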
As far as I’m aware though, only Dynamic Buffers can pass data between entities, correct? Or is there a way I can pass the pointer to a NativeList from one SystemBase to another? I suppose if I could, that would allow one system to use a NativeList from another system.
How about making a public getter for the NativeList in the system that holds the data, and letting other systems access it via World.GetOrCreateSystem&lt;YourDataSourceSystem&gt;()? Remember to define the system update order correctly.
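A rough sketch of that (the system, field, and payload names are made up, and this assumes the Entities API generation where GetOrCreateSystem&lt;T&gt;() returns the system instance):

```csharp
using Unity.Collections;
using Unity.Entities;

// Hypothetical system that owns the sprite data.
public partial class SpriteDataSourceSystem : SystemBase
{
    private NativeList<float> _spriteData;  // hypothetical payload

    public NativeList<float> SpriteData => _spriteData;

    protected override void OnCreate()
    {
        _spriteData = new NativeList<float>(Allocator.Persistent);
    }

    protected override void OnDestroy()
    {
        _spriteData.Dispose();
    }

    protected override void OnUpdate()
    {
        // Populate _spriteData here ...
    }
}

// A consumer ordered to run after the source system.
[UpdateAfter(typeof(SpriteDataSourceSystem))]
public partial class SpriteCullingSystem : SystemBase
{
    private SpriteDataSourceSystem _source;

    protected override void OnCreate()
    {
        _source = World.GetOrCreateSystem<SpriteDataSourceSystem>();
    }

    protected override void OnUpdate()
    {
        var data = _source.SpriteData;  // copies the handle, not the 100,000+ elements
        // Use data here; if both systems touch it from jobs, you also need
        // to sync JobHandles (see the dependency discussion below).
    }
}
```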
World.GetExistingSystem lets you get a reference to another system similar to a MonoBehaviour’s GetComponent. Whether or not you like passing data around that way is a different question. I personally don’t, so I built a custom mechanism to associate native containers with entities.
One way to do this is to create an abstract subclass of SystemBase and decorate OnUpdate so you can capture the final Dependency and write it to a container-tracking mechanism. Then, whenever the collection is obtained, you merge that tracked JobHandle into the Dependency of the actively running system.
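A minimal sketch of that decoration pattern; the tracker type and method names are hypothetical stand-ins for whatever tracking mechanism you associate with your containers:

```csharp
using Unity.Entities;
using Unity.Jobs;

// Hypothetical tracker: remembers the JobHandle of whatever last
// scheduled work against a given native container.
public class ContainerTracker
{
    private JobHandle _handle;
    public void Publish(JobHandle handle) => _handle = handle;
    public JobHandle Handle => _handle;
}

// Abstract base that decorates OnUpdate: subclasses implement
// OnUpdateInternal, and the base captures the final Dependency
// after all of the subclass's jobs have been scheduled.
public abstract partial class TrackedSystem : SystemBase
{
    protected ContainerTracker Tracker;  // hypothetical; wire up in OnCreate

    // Call this when obtaining the shared collection: it merges the
    // container's tracked handle into the running system's Dependency.
    protected void AcquireCollectionDependency()
    {
        Dependency = JobHandle.CombineDependencies(Dependency, Tracker.Handle);
    }

    protected sealed override void OnUpdate()
    {
        OnUpdateInternal();           // subclass schedules its jobs here
        Tracker.Publish(Dependency);  // capture the final Dependency for the next user
    }

    protected abstract void OnUpdateInternal();
}
```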
So if you have different systems using the same native container in jobs, the safety system will complain when a job in one system is writing while a job in another is reading. Systems have no clue about each other's dependencies.
So you have to set up the dependencies yourself. The common idiom is a shared dependency: just a shared JobHandle. Every system that uses the container combines the shared handle into its own Dependency before scheduling its jobs, then writes the result back after scheduling. You can abstract that out and automate it fairly simply once you understand the flow.
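A sketch of that idiom (the static holder, system, and job here are all hypothetical; a real setup would keep one handle per shared container):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

// Hypothetical shared handle guarding one shared container.
public static class SharedSpriteDependency
{
    public static JobHandle Handle;
}

public partial class SpriteSortSystem : SystemBase
{
    private NativeList<int> _sharedList;  // hypothetical: obtained from the owning system

    protected override void OnUpdate()
    {
        // 1. Combine the shared handle before scheduling anything.
        Dependency = JobHandle.CombineDependencies(Dependency, SharedSpriteDependency.Handle);

        // 2. Schedule jobs against the shared container as usual.
        Dependency = new SortJob { List = _sharedList }.Schedule(Dependency);

        // 3. Write the final handle back so the next system chains off it.
        SharedSpriteDependency.Handle = Dependency;
    }

    private struct SortJob : IJob
    {
        public NativeList<int> List;

        public void Execute()
        {
            // In-place insertion sort, just to have real work touching the list.
            for (int i = 1; i < List.Length; i++)
            {
                int key = List[i];
                int j = i - 1;
                while (j >= 0 && List[j] > key)
                {
                    List[j + 1] = List[j];
                    j--;
                }
                List[j + 1] = key;
            }
        }
    }
}
```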
But it should be used carefully: if one job in the chain gets force-completed for some reason, that force-completes the whole chain of jobs sharing the same dependency. So while it's a useful tool and sometimes necessary, having a lot of entangled dependencies across multiple systems can be bad if misused.