Frustum Culling by Distance for Big 2D Strategy Game?

I have a big 2D sci-fi strategy game where the gameboard is an enormous grid of Tiles. The Tiles may have GameObject units (spaceships, planets, UFOs, etc.) within them, or they may be empty. Now, when the units are rendered, I imagine that they would be rendered with a unit icon, a nametag, and maybe some simple, color-coded HUD display to the side, like this:

But when the player zooms out the main camera past some distance, the game would no longer render the nametags, just to keep the game visually easy on the eye:

And if the player zooms out even further, the HUD display should vanish next:

So depending on the distance between the Main Camera and the gameboard, the game should render (or not render) the various components of the game’s units.

I think I could figure out a solution on my own. For instance, I have some C# code where a Tile object can compute its distance to the camera:

public class TileSquare : MonoBehaviour
{
    public int computeDistToMainCamera()
    {
        // WorldToScreenPoint returns the screen (x, y) position plus z = the
        // distance from the camera along its forward axis, in world units.
        // Here, "transform" is the Transform component of the GameObject.
        Vector3 screenPos = Camera.main.WorldToScreenPoint(transform.position);
        return Mathf.FloorToInt(screenPos.z);
    }
}

And in the TileSquare Update() method, I could use…

Renderer renderer = GetComponent<Renderer>();
renderer.enabled = false; // hide this component
renderer.enabled = true;  // show it again

…to switch the unit’s nametag / HUD / icon on and off.
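Tying those two pieces together, the whole idea might look something like this — a minimal sketch, assuming hypothetical child objects named "Nametag" and "HUD" on each unit, and made-up distance thresholds you'd tune for your camera:

```csharp
using UnityEngine;

public class TileSquare : MonoBehaviour
{
    // Hypothetical thresholds in world units -- tune these for your camera setup.
    const float NametagMaxDist = 40f;
    const float HudMaxDist = 80f;

    Renderer nametagRenderer;
    Renderer hudRenderer;

    void Start()
    {
        // Assumes the unit has child objects named "Nametag" and "HUD".
        nametagRenderer = transform.Find("Nametag").GetComponent<Renderer>();
        hudRenderer = transform.Find("HUD").GetComponent<Renderer>();
    }

    void Update()
    {
        // z of WorldToScreenPoint = distance from the camera in world units.
        float dist = Camera.main.WorldToScreenPoint(transform.position).z;
        nametagRenderer.enabled = dist < NametagMaxDist;
        hudRenderer.enabled = dist < HudMaxDist;
    }
}
```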

But my question is, should I take this approach? I’ve been reading up on Frustum Culling with distance and Occlusion Culling with distance (like here), and I don’t think those concepts really address what I want to do.

I’m also concerned about performance issues, as my gameboard may have hundreds of Tiles and GameObjects, and if the CPU is forever measuring the distance between every object and the Main Camera, couldn’t that bog down processing?

Any advice is appreciated! Thanks.

Well, you can set up multiple cameras, where each camera’s view blends seamlessly (think turn-based strategy games that have an overlay map), or give these objects LOD groups, which lets you easily configure this as a fire-and-forget system.
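For the LOD-group route, the setup could be sketched like this — assuming the icon, nametag, and HUD renderers are assigned in the Inspector, and with made-up transition heights (fractions of screen height) you'd tune by eye:

```csharp
using UnityEngine;

public class UnitLodSetup : MonoBehaviour
{
    public Renderer iconRenderer;
    public Renderer nametagRenderer;
    public Renderer hudRenderer;

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // LOD 0 (close): everything visible. LOD 1: nametag dropped.
        // LOD 2 (far): icon only. Below 0.05 screen height, nothing renders.
        var lods = new LOD[]
        {
            new LOD(0.30f, new[] { iconRenderer, nametagRenderer, hudRenderer }),
            new LOD(0.15f, new[] { iconRenderer, hudRenderer }),
            new LOD(0.05f, new[] { iconRenderer }),
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```

Once configured, the LODGroup does the distance checks for you — no per-frame code of your own.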

You can also make shaders react to viewing distance automatically, and maybe animate the items’ alpha (and scale) during the transition for extra polish.
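If you drive that fade from script rather than inside the shader itself, a MaterialPropertyBlock keeps it cheap (no material instances) — a sketch assuming the material has a `_Color` property and with a made-up fade band:

```csharp
using UnityEngine;

public class DistanceFade : MonoBehaviour
{
    // Made-up fade band in world units: opaque below 30, invisible past 60.
    public float fadeStart = 30f;
    public float fadeEnd = 60f;

    Renderer rend;
    MaterialPropertyBlock block;

    void Start()
    {
        rend = GetComponent<Renderer>();
        block = new MaterialPropertyBlock();
    }

    void Update()
    {
        float dist = Vector3.Distance(Camera.main.transform.position,
                                      transform.position);
        // InverseLerp maps [fadeStart, fadeEnd] to [0, 1]; invert for alpha.
        float alpha = 1f - Mathf.InverseLerp(fadeStart, fadeEnd, dist);

        block.SetColor("_Color", new Color(1f, 1f, 1f, alpha));
        rend.SetPropertyBlock(block);
    }
}
```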

If you really want maximum control (for example, how Stellaris’ icons stack on top of each other when they’re too clumped together), you really want a dedicated system for this, one that combines various approaches but likely also uses some spatial partitioning (like a hash grid, a quadtree, etc.) to quickly find the neighboring icons and apply some rulesets. (Edit: this also addresses your question about performance when you have hundreds of icons.)
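A spatial hash grid for icons can be as small as a dictionary of cells — a sketch with a made-up cell size, storing icon Transforms:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class IconHashGrid
{
    const float CellSize = 16f; // made-up; roughly one icon's footprint

    readonly Dictionary<Vector2Int, List<Transform>> cells =
        new Dictionary<Vector2Int, List<Transform>>();

    Vector2Int CellOf(Vector3 pos) =>
        new Vector2Int(Mathf.FloorToInt(pos.x / CellSize),
                       Mathf.FloorToInt(pos.y / CellSize));

    public void Add(Transform icon)
    {
        var key = CellOf(icon.position);
        if (!cells.TryGetValue(key, out var list))
            cells[key] = list = new List<Transform>();
        list.Add(icon);
    }

    // Anything in the same cell or the 8 surrounding cells is a "neighbor",
    // so a clumping ruleset only ever inspects a handful of icons.
    public IEnumerable<Transform> Neighbors(Transform icon)
    {
        var center = CellOf(icon.position);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                if (cells.TryGetValue(center + new Vector2Int(dx, dy), out var list))
                    foreach (var other in list)
                        if (other != icon)
                            yield return other;
    }
}
```

This is the performance answer too: instead of comparing every icon against every other icon (O(n²)), each one only checks its own neighborhood.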

With all that said, if your game is truly simple in what it tries to convey, you can pick a simpler route: you can, of course, make each icon react individually through a MonoBehaviour. If it works, it works. But the above are some of the better options out there.

I would personally make a lightweight system that’s rough around the edges, but built with future expansion in mind. If you do this, you need to decide early which features you’d like to cover, but then you’ve saved yourself an inevitable headache, because everything already passes through this one bottleneck and is ready to be processed in some fashion.
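Concretely, that bottleneck could be a single manager every icon registers with, so one loop reads the camera once per frame and fans the distance out — a sketch with hypothetical names (`IconSystem`, `ApplyDistance`):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class IconSystem : MonoBehaviour
{
    public static IconSystem Instance { get; private set; }

    readonly List<TileSquare> icons = new List<TileSquare>();

    void Awake() => Instance = this;

    public void Register(TileSquare icon) => icons.Add(icon);
    public void Unregister(TileSquare icon) => icons.Remove(icon);

    void LateUpdate()
    {
        // One camera read per frame instead of one per icon's Update().
        Vector3 camPos = Camera.main.transform.position;
        foreach (var icon in icons)
        {
            float dist = Vector3.Distance(camPos, icon.transform.position);
            // Future features (fading, clustering, priorities) all hook in here.
            icon.ApplyDistance(dist); // hypothetical method on your icon script
        }
    }
}
```

Each icon calls `IconSystem.Instance.Register(this)` in OnEnable and `Unregister(this)` in OnDisable, and everything else lives in one place.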

Before you commit yourself to any particular solution, however, try to build a proof of concept, see what you like and what the constraints would be.


@orionsyndrome THANK YOU!!! This advice is pure gold for a newbie like me, thanks for taking so much time to pour in such useful detail. I’m researching everything you mention (LOD groups, shaders, hash grids, quadtrees). LOD groups look the most promising, I think. You rock, I owe you a big favor. Good karma on you…!