So, all objects must have a transform component so that they can be rendered in the correct orientation and relative to each other… Simple enough.
But here’s the question… How do the graphics system and the physics system access the transform? Do those systems hold pointers (references) into all the game objects, or are the transforms actually instantiated inside the systems, with the game objects holding pointers (references) back into the systems?
I am trying to create my ‘game’ systems using an ECS style of component management, but I am hung up on how Unity manages components itself. If Unity is following pointers all across memory just to access transforms, that must cause a lot of cache misses…
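For what it’s worth, the usual ECS answer to that question is the second option: the system owns the component data in a contiguous array, and game objects hold lightweight handles (indices) back into it. Here’s a minimal sketch of that idea in Python — the names (`TransformSystem`, `GameObject`) are made up for illustration, not anything from Unity:

```python
# Sketch of system-owned component storage (not Unity's actual internals):
# the system packs components together; objects keep only a handle.

class TransformSystem:
    def __init__(self):
        self.positions = []          # all transform data lives here, packed

    def create(self, x, y, z):
        self.positions.append([x, y, z])
        return len(self.positions) - 1   # the handle handed back to the object

    def translate_all(self, dx, dy, dz):
        # Systems iterate their own arrays linearly -- cache friendly,
        # no pointer chasing across the heap.
        for p in self.positions:
            p[0] += dx; p[1] += dy; p[2] += dz

class GameObject:
    def __init__(self, system, x=0.0, y=0.0, z=0.0):
        self.system = system
        self.transform = system.create(x, y, z)  # a handle, not a pointer

    @property
    def position(self):
        return self.system.positions[self.transform]
```

The point is that the hot loop (`translate_all`) never touches a game object at all; it just walks one array.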
This is kind of the wrong forum to ask this in. This is very low-level, engine-specific stuff that’s not necessarily exposed to users. The people who do know (i.e. people with access to the source code) are likely under NDAs requiring that they not talk about or share the internal code.
The transform component that we have access to in C# is basically just a wrapper that calls into and pulls out of native code.
Unity staff have talked a little about the position internally being a hierarchy of matrices, with the actual position / rotation / scale values calculated on request. In the past it recalculated the whole chain on every request, so doing something like accessing the x, y, and z position on three separate lines would actually traverse the whole hierarchy every time. Now it caches the value unless it’s dirty, at least for C# access.
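That “cache unless dirty” scheme is a standard dirty-flag pattern. Here’s a rough sketch of the general technique — an assumption about how such a system might look, not Unity’s actual code, and using a 1D “position” so the math stays obvious:

```python
# Illustrative dirty-flag caching of a transform hierarchy:
# the world value is only recomputed when something upstream changed.

class Transform:
    def __init__(self, local_pos=0.0, parent=None):
        self.local_pos = local_pos
        self.parent = parent
        self.children = []
        self._world = None              # cached world position; None = dirty
        if parent:
            parent.children.append(self)

    def set_local(self, value):
        self.local_pos = value
        self._mark_dirty()

    def _mark_dirty(self):
        self._world = None
        for c in self.children:
            c._mark_dirty()             # a parent change dirties the subtree

    @property
    def world_pos(self):
        if self._world is None:         # only walk the chain when dirty
            base = self.parent.world_pos if self.parent else 0.0
            self._world = base + self.local_pos
        return self._world
```

Reading `world_pos` three times in a row only traverses the hierarchy once; the old behavior described above is what you’d get if the cache were removed.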
Whatever their means of accessing those matrices actually was, it was fast enough that they didn’t see a need to fix the recalculate-every-time behavior until quite recently.
The graphics system and physics both sit mostly behind that wall of native code, but assume that the external representation available from the C# side in no way mirrors the internal one.
I will also say the seriously hardcore Unity devs tend to use as little of Unity’s built-in stuff as possible. Some go as far as managing all of their component positions and physics in custom C# and using the absolute bare minimum of Unity’s built-in functionality: something like a single C# object that hooks the various Update / LateUpdate / etc. functions and handles the scheduling and calling of their components through that object. Rendering is done manually via Graphics or GL calls rather than through renderer components. That way they have the most control possible, but it means losing many of the strengths of using the engine to begin with.
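The “one object drives everything” pattern boils down to a central scheduler that fans one engine callback out to your own components. A minimal sketch — names like `Scheduler` and `Mover` are invented here, and in Unity the `update(dt)` call would live inside a single MonoBehaviour’s `Update`:

```python
# Sketch of a single manager that schedules and updates all custom components,
# instead of giving each one its own engine-managed callback.

class Scheduler:
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)

    def update(self, dt):
        # One entry point fans out to every component, in an order we control.
        for c in self.components:
            c.update(dt)

class Mover:
    def __init__(self, speed):
        self.speed = speed
        self.x = 0.0

    def update(self, dt):
        self.x += self.speed * dt
```

The appeal is deterministic update order and one callback boundary instead of thousands; the cost, as noted above, is re-implementing things the engine would otherwise do for you.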