I’m working on a game which will spawn buildings at runtime. I want these to be part of occlusion culling.
It seems to me that Unity’s occlusion culling bake just sorts the world into a BSP tree or an octree, and then at runtime casts a whole bunch of rays every frame to figure out which tree cells are visible, and therefore which objects it should draw.
If that’s all it is, then why can’t we insert new geometry into the BSP tree / octree at runtime? There’s no obvious reason we shouldn’t be allowed to. And if a developer can even afford the time it would take to asynchronously (re)build the entire occlusion tree at runtime, why not let them?
“Make your own occlusion system, or buy one from the glorious Asset Store!” — Sure, but doing it in C# and toggling objects with gameObject.SetActive() is not as fast as it should be. I’d also bet that casting your own rays is much slower than Unity’s internal ray casts. I can’t seem to cast enough rays over, say, 20 frames to cover the world precisely enough without slowing the game to a crawl.
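For context, the homemade approach I’m describing looks roughly like this. This is only a sketch, not a working culling system: the per-frame ray budget, the `targets` list, and the single ray to the bounds center are all simplifying assumptions of mine.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the homemade approach: spread a small ray budget
// over frames and toggle objects with SetActive().
// raysPerFrame and the targets list are illustrative assumptions.
public class NaiveOcclusionCuller : MonoBehaviour
{
    public int raysPerFrame = 64;      // per-frame ray budget
    public List<Renderer> targets;     // buildings to cull
    int cursor;                        // round-robin position in targets

    void Update()
    {
        Vector3 origin = Camera.main.transform.position;
        for (int i = 0; i < raysPerFrame && targets.Count > 0; i++)
        {
            cursor = (cursor + 1) % targets.Count;
            Renderer r = targets[cursor];
            Vector3 toCenter = r.bounds.center - origin;

            // Occluded if something *other than the object itself* blocks
            // the line from the camera to its bounds center.
            bool occluded = Physics.Raycast(origin, toCenter.normalized,
                                out RaycastHit hit, toCenter.magnitude)
                            && !hit.transform.IsChildOf(r.transform);
            r.gameObject.SetActive(!occluded);
        }
    }
}
```

Even if you sample several points on each object’s bounds instead of a single center ray, covering a large scene this way chews through the frame budget very quickly — which is exactly the problem above.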
Are there any other conceptual reasons why Unity isn’t giving us access to the occlusion culling system at runtime?
Am I missing something in the logic behind making such a system?