I decided to optimize my project with dynamic occlusion culling. I found some ready-made solutions on the Asset Store; they worked fine in Windows builds, but on Android they pegged the CPU at 100%, even on a top-end Snapdragon. So I decided to write my own dynamic occlusion. The principle is simple: there is a main class on the scene responsible for dynamic occlusion, and on every dynamic object I put a component that adds a dynamic-object component to all child objects with a MeshRenderer, so only objects visible in the frame are rendered, not those hidden behind something else. For example, if you attach this component to a car, the trunk contents will not be rendered until the player looks into the trunk. The object component finds the dynamic occlusion class on the scene and asks it whether the object is visible: if the object is in the camera's field of view and there is nothing in the way, it is rendered; if not, it isn't. And that works fine:
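A minimal sketch of what such a manager/occludee pair might look like. The class names (`OcclusionManager`, `DynamicOccludee`) and the details are my assumptions, not the author's actual code:

```csharp
using UnityEngine;

// Hypothetical scene-level manager: frustum test plus a line-of-sight check.
public class OcclusionManager : MonoBehaviour
{
    public static OcclusionManager Instance { get; private set; }
    Camera cam;
    Plane[] frustum;

    void Awake() { Instance = this; cam = Camera.main; }
    void Update() { frustum = GeometryUtility.CalculateFrustumPlanes(cam); }

    // Visible if the bounds intersect the frustum and a line from the camera
    // to the bounds centre is not blocked. In practice you'd pass a layer mask
    // here so the object's own colliders don't occlude it.
    public bool IsVisible(Renderer r)
    {
        if (!GeometryUtility.TestPlanesAABB(frustum, r.bounds)) return false;
        return !Physics.Linecast(cam.transform.position, r.bounds.center);
    }
}

// Hypothetical per-object component; the parent would add one of these to
// every child carrying a MeshRenderer.
public class DynamicOccludee : MonoBehaviour
{
    MeshRenderer rend;
    void Awake() { rend = GetComponent<MeshRenderer>(); }
    void LateUpdate() { rend.enabled = OcclusionManager.Instance.IsVisible(rend); }
}
```

A single `Linecast` to the bounds centre is the crudest possible visibility test; the thread below discusses the corner-of-bounding-box variant and its costs.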
But there is a problem: to optimize the physics, my dynamic objects use separate convex MeshColliders with a smaller polygon count, and the current Raycast-based implementation only sees objects with colliders (some objects without colliders happen to work almost correctly, but those are rare exceptions). What are my options for detecting obstacles that have no colliders?
That may be a bad example. The way to make “occlusion culling” work in this case is to have the “inside trunk” renderer disabled until the player opens the trunk. And when the player leaves the car, have the trunk close again by itself.
Not sure what asset you worked with but I kinda question whether your approach will be that much faster. Have you profiled it thus far? Have you profiled the original asset to see why it’s taking this much time? Perhaps it may be a simple fix.
Your approach of shooting multiple raycasts per renderer, one at each corner of the bounding box, doesn't look on the surface like it would be a competitive solution to begin with. Just a gut feeling.
Another obvious issue with your implementation is the (garbage) allocation of the Vector3[] array every time you make that call. If you call this per renderer as the parameter indicates, that could be a lot of garbage.
You are also repeatedly using Vector3.Distance where comparing squared distances (sqrMagnitude) would do, which avoids taking the square root (comparatively slow).
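Both of those points can be sketched concretely. The class and member names below are illustrative; the idea is simply "reuse one buffer" and "compare squared distances":

```csharp
using UnityEngine;

public class VisibilityTester : MonoBehaviour
{
    // Reuse one buffer instead of allocating a new Vector3[8] per renderer
    // per frame, which would generate garbage and trigger GC spikes.
    readonly Vector3[] corners = new Vector3[8];

    public float maxDistance = 100f;

    public bool WithinRange(Vector3 camPos, Bounds b)
    {
        // Compare squared distances: no sqrt, same ordering.
        return (b.center - camPos).sqrMagnitude <= maxDistance * maxDistance;
    }

    public void FillCorners(Bounds b)
    {
        Vector3 min = b.min, max = b.max;
        corners[0] = new Vector3(min.x, min.y, min.z);
        corners[1] = new Vector3(max.x, min.y, min.z);
        corners[2] = new Vector3(min.x, max.y, min.z);
        corners[3] = new Vector3(max.x, max.y, min.z);
        corners[4] = new Vector3(min.x, min.y, max.z);
        corners[5] = new Vector3(max.x, min.y, max.z);
        corners[6] = new Vector3(min.x, max.y, max.z);
        corners[7] = new Vector3(max.x, max.y, max.z);
    }
}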
If your goal is to create a raycast system that doesn't use colliders then you're going to have to roll your own solution. But why? You'd basically just be re-implementing the exact same thing. You'd need: collider shapes, a spatial partitioning system, the math libraries for detecting raycast collisions, and then your own way to efficiently perform those raycasts via jobs. Not to mention a custom editor so that you can configure all of this information.
Or you could just put a set of simple colliders on their own layer and use RaycastCommand to schedule a batch of raycasts via the job system? I still think it would be overkill and likely not scale well at all. But it's a start and would give you something to work with in the profiler very quickly.
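A sketch of that batched approach, assuming the simple colliders live on a dedicated layer. Note the `RaycastCommand` constructor signature varies between Unity versions; this is the classic form:

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class BatchedVisibility : MonoBehaviour
{
    public LayerMask cullingProxies; // simple colliders on their own layer
    public Renderer[] targets;

    void Update()
    {
        Vector3 origin = Camera.main.transform.position;
        var commands = new NativeArray<RaycastCommand>(targets.Length, Allocator.TempJob);
        var results  = new NativeArray<RaycastHit>(targets.Length, Allocator.TempJob);

        for (int i = 0; i < targets.Length; i++)
        {
            Vector3 dir = targets[i].bounds.center - origin;
            commands[i] = new RaycastCommand(origin, dir.normalized, dir.magnitude, cullingProxies);
        }

        // The batch runs on worker threads via the job system.
        JobHandle handle = RaycastCommand.ScheduleBatch(commands, results, 32);
        handle.Complete(); // in production, complete this later in the frame

        for (int i = 0; i < targets.Length; i++)
            targets[i].enabled = results[i].collider == null; // nothing in between -> visible

        commands.Dispose();
        results.Dispose();
    }
}
```

This still only proves something is *between* camera and target, not that the target is fully covered, which is part of why it may not scale.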
I don't think CodeSmile was suggesting a bespoke bit of code for every object in the world here. Rather, just a simple scripted flag that can be attached to anything and turned on or off depending on an open/close state. Hence the reason he suggested that particular example being a poor choice. Occlusion is usually more suited for open spaces where line of sight can sometimes obscure another object merely due to the point of view. In the case of a container that can easily be represented as visible or not and nothing in between, there's no need to come up with a whole framework of calculations just to determine a simple state flag. That would be the opposite of optimization. At the end of the day the most general solution isn't always the best. And optimization isn't about making the most elegant solution.
Did you run the Profiler to see where your time is spent?
Maybe you are not considering some easier, alternative optimization techniques such as LOD, draw distance, and so forth. Without a clear understanding why the game is running slowly any effort towards optimizing is essentially stabbing in the dark, and a waste of time if you build systems around what may just be an assumption.
If I didn't know what the problem was, I wouldn't have wasted my time inventing dynamic occlusion. In my case LOD is used, but only for its direct purpose: it swaps in a lower-detail model at a distance. But, as I understand it, you're proposing to cull objects just 5 meters from the player.
But in my case that won't help; it would only break the players' experience of the game. When a small car is behind a big one and is not visible in the frame, it still continues to render, and on scenes with heavy car traffic or many other dynamic objects there are performance problems, because not every device can push 5 million polygons in one frame.
That's exactly what I'm trying to get around: the meshes I need must not have colliders on them, otherwise a lot of the game's mechanics will just break and I'll have to redo it all.
Based on this example, this will work for maybe 2% of the cars and likely involves a bus or truck in front. That doesn’t sound like it’s going to have a net positive effect on framerate.
If on the other hand we’re talking GTA like cityscapes, then the most effective occluders are the buildings. Assuming they are rectangular shapes (no see-through parts) you could use those to create another culling plane much like the camera frustum, and every object that falls wholly inside does not need to get drawn.
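The "wholly inside the occluder's shadow" test can be done with a conservative AABB-vs-plane check. This is my sketch, not code from the thread; building `shadowPlanes` from the camera position and the building's silhouette edges is the part left out, and the planes are assumed to have their normals pointing out of the occluded volume:

```csharp
using UnityEngine;

public static class OccluderCulling
{
    // True if the bounds lie entirely behind every plane of the shadow
    // frustum cast by a rectangular occluder, i.e. fully hidden.
    public static bool WhollyBehind(Plane[] shadowPlanes, Bounds b)
    {
        Vector3 c = b.center, e = b.extents;
        foreach (var p in shadowPlanes)
        {
            // Projected "radius" of the AABB onto the plane normal: the
            // distance from the centre to the corner sticking out the most.
            float r = e.x * Mathf.Abs(p.normal.x)
                    + e.y * Mathf.Abs(p.normal.y)
                    + e.z * Mathf.Abs(p.normal.z);
            if (p.GetDistanceToPoint(c) > -r)
                return false; // some corner pokes out past this plane
        }
        return true; // every corner is behind every plane -> cull it
    }
}
```

Unlike a raycast, this test proves full containment, so it never hides a partially visible object.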
If you do have a city, and assuming the layout is on a grid like Manhattan, you can even use a PVS that is predetermined, i.e. from no point on 42nd Street can you see anything of or on 43rd Street and beyond. Something like this would simply let you mark some streets as inactive, so all objects moving along those roads also have their renderers disabled (assuming their logic should continue working).
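A precomputed street PVS can be as simple as a lookup from the player's cell to the set of cells visible from it. Everything here (`StreetPVS`, cell ids) is a hypothetical illustration of the idea:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical precomputed PVS: for each street cell, the cells visible from it.
public class StreetPVS : MonoBehaviour
{
    // Authored or baked offline, e.g. cell 42 sees cells {41, 42} only.
    public Dictionary<int, HashSet<int>> visibleFrom = new Dictionary<int, HashSet<int>>();

    public void ApplyVisibility(int playerCell, IEnumerable<(int cell, Renderer rend)> objects)
    {
        HashSet<int> visible = visibleFrom[playerCell];
        foreach (var (cell, rend) in objects)
            rend.enabled = visible.Contains(cell); // logic keeps running; only rendering stops
    }
}
```

The whole runtime cost is a set lookup per object, with no raycasts at all, which is why a PVS is attractive when the level layout allows it.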
99% of the game time, the action takes place in the city
No, 100%. The game is first-person, so even identical cars overlap each other; disabling elements that are not in the frame should increase performance.
And again I repeat, static occlusion doesn’t work on dynamic (not static) objects…
This is quite confusing to me. If you want to raycast you need a shape to test against. Period. Either you design your game to work that way upfront or you don’t have that option at all.
The only choice here is if you want to use the system already built into the engine you’ve chosen to go with or do you want to reinvent the wheel. In some cases there might be a reason to do the latter but I’m having a hard time believing that is the case here with my limited understanding of your project.
I found a workaround: I created groups of objects and attached a convex MeshCollider to each group; when a Raycast from the camera hits that collider, I enable all the MeshRenderers bound to it. It doesn't work as well as I'd hoped, but it's better than standard Unity features and the ready-made solutions from the Asset Store. On weak devices I managed to gain up to 15% performance…
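My attempt at reconstructing that workaround as code; the names (`OcclusionGroup`, `CameraProbe`) and the single centre-screen ray are assumptions:

```csharp
using UnityEngine;

// One low-poly convex proxy collider per group; a camera ray hitting the
// proxy re-enables every renderer registered to the group.
public class OcclusionGroup : MonoBehaviour
{
    public MeshRenderer[] members;   // renderers driven by this proxy
    bool visibleThisFrame;

    public void MarkVisible() => visibleThisFrame = true;

    void LateUpdate()
    {
        foreach (var r in members) r.enabled = visibleThisFrame;
        visibleThisFrame = false; // must be re-marked by a camera ray next frame
    }
}

public class CameraProbe : MonoBehaviour
{
    public LayerMask proxyLayer; // layer holding only the proxy colliders,
                                 // so gameplay colliders stay untouched

    void Update()
    {
        // One ray through the screen centre; a real system would fan out several.
        if (Physics.Raycast(transform.position, transform.forward, out var hit, 200f, proxyLayer))
            hit.collider.GetComponent<OcclusionGroup>()?.MarkVisible();
    }
}
```

Keeping the proxies on their own layer is what lets the gameplay meshes stay collider-free, which was the constraint from earlier in the thread.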
A slightly more robust method would be to do a raycast to each vertex of the mesh collider. But overall the only place I see this kind of culling being useful and reliable is in an indoor setting. Perhaps a procedurally generated maze game with high poly monsters running around.