Well, you’ve linked our very old implementation; in the pinned thread you can find a much more performant and visually better one: Unity DOTS case study in production page-2#post-4939121
But it’s also very outdated. Our current one looks even better and smoother, though it’s still based on a similar approach, just with a renderer feature and a somewhat improved collection step. It currently gathers the fog affectors, sets a graphics buffer, then fills the dynamic and static fog maps in a compute shader with spatial checks and early-outs (every X frames, depending on quality settings), and finally reads both maps back to the CPU through async GPU readback.
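Very roughly, that loop could look something like the sketch below. This is not the actual project code: the class, the `FillFogMap` kernel name, the `FogAffector` layout, and the 8x8 thread-group assumption are all made up for illustration; only the Unity APIs (`GraphicsBuffer`, `ComputeShader`, `AsyncGPUReadback`) are real.

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical affector record: a world-space point with a "visible" radius.
struct FogAffector
{
    public Vector2 position;
    public float radius;
}

public class FogOfWarUpdater : MonoBehaviour
{
    [SerializeField] ComputeShader fowCompute;      // assumed compute shader asset
    [SerializeField] int mapResolution = 512;
    [SerializeField] int framesBetweenUpdates = 4;  // the "every X frames" knob

    RenderTexture fogMap;
    GraphicsBuffer affectorBuffer;
    int kernel;
    int frameCounter;

    void Start()
    {
        fogMap = new RenderTexture(mapResolution, mapResolution, 0, RenderTextureFormat.R8)
        {
            enableRandomWrite = true // required so the compute shader can write to it
        };
        fogMap.Create();
        kernel = fowCompute.FindKernel("FillFogMap"); // assumed kernel name
    }

    void Update()
    {
        if (++frameCounter % framesBetweenUpdates != 0) return;

        // 1. Gather affectors (project-specific, stubbed here) and upload them.
        FogAffector[] affectors = GatherAffectors();
        if (affectorBuffer == null || affectorBuffer.count < affectors.Length)
        {
            affectorBuffer?.Release();
            affectorBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured,
                Mathf.Max(1, affectors.Length), 12); // stride: 2 floats pos + 1 float radius
        }
        affectorBuffer.SetData(affectors);

        // 2. Fill the fog map on the GPU (assumes [numthreads(8,8,1)] in the kernel).
        fowCompute.SetBuffer(kernel, "_Affectors", affectorBuffer);
        fowCompute.SetInt("_AffectorCount", affectors.Length);
        fowCompute.SetTexture(kernel, "_FogMap", fogMap);
        fowCompute.Dispatch(kernel, mapResolution / 8, mapResolution / 8, 1);

        // 3. Read the result back asynchronously for game-logic queries.
        AsyncGPUReadback.Request(fogMap, 0, request =>
        {
            if (request.hasError) return;
            NativeArray<byte> data = request.GetData<byte>();
            // Hand 'data' to the visibility system here (copy it if kept past this frame).
        });
    }

    FogAffector[] GatherAffectors() { /* project-specific */ return new FogAffector[0]; }

    void OnDestroy()
    {
        affectorBuffer?.Release();
        if (fogMap != null) fogMap.Release();
    }
}
```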
1. Create a FOW texture with the required resolution (the resolution can be altered later if the need arises).
2. Prepare the FOW affectors. Each affector is basically a point with a “visible” radius around it.
3. With a compute shader (or an ordinary renderer) draw visibility circles onto the FOW texture.
4. With async GPU readback, read the computed FOW texture back to the CPU and use it for visibility determination in game logic (see the query sketch after this list).
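For step 4, the game-logic side boils down to mapping a world position onto a texel of the readback data. A minimal sketch, assuming an R8 (one byte per texel) fog map covering a known world rectangle; `FowVisibility` and its fields are hypothetical names, not from the original posts:

```csharp
using Unity.Collections;
using UnityEngine;

// Hypothetical query helper over the CPU copy of the FOW texture.
public class FowVisibility
{
    NativeArray<byte> fogData;  // persistent copy of the latest readback (R8 texture)
    readonly int resolution;
    readonly Rect worldBounds;  // world-space area the FOW texture covers

    public FowVisibility(int resolution, Rect worldBounds)
    {
        this.resolution = resolution;
        this.worldBounds = worldBounds;
    }

    public void OnReadbackCompleted(NativeArray<byte> data)
    {
        // Readback data is only valid during the callback, so keep our own copy.
        if (!fogData.IsCreated)
            fogData = new NativeArray<byte>(data.Length, Allocator.Persistent);
        fogData.CopyFrom(data);
    }

    public bool IsVisible(Vector2 worldPos)
    {
        if (!fogData.IsCreated) return false;

        // World position -> normalized UV -> texel index.
        int x = Mathf.Clamp((int)((worldPos.x - worldBounds.xMin) / worldBounds.width * resolution), 0, resolution - 1);
        int y = Mathf.Clamp((int)((worldPos.y - worldBounds.yMin) / worldBounds.height * resolution), 0, resolution - 1);

        // Any nonzero value means at least one affector's circle covered this texel.
        return fogData[y * resolution + x] > 0;
    }
}
```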
But this is a very basic algorithm. More complex “Field of View” and “Line of Sight” systems also use information about obstacles for visibility determination:
In this game, every map point computes its visibility from all FOW affectors using raytracing. A ray can be blocked by buildings, forests, and/or other obstacles. The raytracing algorithm is implemented in a compute shader, and its execution is split across several frames (because the algorithm is heavy and complex).
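To make the idea concrete, here is a CPU-side reference of that per-texel ray test. It's just a sketch of the technique, not the game's shader: the obstacle grid, the fixed step size, and the names are assumptions, and a proper DDA grid traversal would be the more precise choice.

```csharp
using UnityEngine;

// Hypothetical CPU reference of raytraced visibility between two texels.
// obstacles[x, y] is true where a building/forest blocks line of sight.
public static class LineOfSight
{
    public static bool IsPointVisible(Vector2Int point, Vector2Int affector,
                                      float radius, bool[,] obstacles)
    {
        // Outside the affector's radius: never visible from it.
        if ((point - affector).sqrMagnitude > radius * radius) return false;

        Vector2 dir = (Vector2)(point - affector);
        float dist = dir.magnitude;
        if (dist < 1e-5f) return true; // the affector sees its own cell
        dir /= dist;

        // March along the ray from the affector toward the target texel,
        // failing as soon as we cross a blocking cell.
        for (float t = 0f; t < dist; t += 0.5f)
        {
            Vector2 p = (Vector2)affector + dir * t;
            if (obstacles[(int)p.x, (int)p.y]) return false; // ray blocked
        }
        return true;
    }
}
```

In the compute-shader version, each dispatch would only cover a slice of the map (a range of texels or a subset of affectors), which is how the cost gets amortized over several frames.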
not an answer but something related: iirc Tertle used raycasts with a modified navmesh to determine visibility of objects, which I think is kinda cool too
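For reference, Unity's built-in `NavMesh.Raycast` can act as exactly that kind of visibility test when the navmesh is baked (or modified) so that sight blockers cut holes in it. A rough sketch of the idea, not Tertle's actual code:

```csharp
using UnityEngine;
using UnityEngine.AI;

public static class NavMeshVisibility
{
    // Returns true if the straight line from observer to target stays on the
    // navmesh, i.e. no sight-blocking hole/edge lies between the two points.
    public static bool CanSee(Vector3 observer, Vector3 target)
    {
        // NavMesh.Raycast returns true when the ray is BLOCKED before the target,
        // so visibility is its negation.
        return !NavMesh.Raycast(observer, target, out _, NavMesh.AllAreas);
    }
}
```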