Age of Empires type Fog of War from Point Light

Hi guys,

I’m trying to create a Fog of War effect like Age of Empires: the world is shown normally where it is lit and in line of sight, black where never visited, and as a greyer version of itself where previously visited but not currently lit.

My world is top-down, orthographic camera, the player has a single point light attached. Walls and closed doors cast shadows onto the ground plane.

I’m not sure how to take the light information used to render the ground plane and use it to create a fog of war effect. I figured something like

  • a render texture that is solid black
  • updated each frame to be the same color value as the ground plane if there was light reaching the pixel.
  • if there was no light reaching it, left as it was if black, otherwise reduced to a greyer version of itself.
  • the render texture then applied to the main camera drawing the scene?

But I am very new to shaders, post-processing, etc., so I’m unsure about:

  • Should this be a custom surface shader?
  • How do I access the light level of a pixel (accounting for shadows) in the shader?
  • Having a shader that branches based on light level is bad for performance (I assume); are there ways around this?
  • Does this approach sound generally viable?

I have managed to create the effect I was after. To answer some of my own questions:

Should this be a custom surface shader?
No, a custom vertex/fragment shader.

How do I access the light level of a pixel (accounting for shadows) in the shader?

Three parts must exist in the shader. In the vertex function output struct,

struct v2f
{
	half4 pos : SV_POSITION;
	LIGHTING_COORDS(0,1)   // must include this line
};

In the vertex function, transform the lighting coords for use in the fragment shader

v2f vert(appdata_base v)
{
	v2f o;
	o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
	TRANSFER_VERTEX_TO_FRAGMENT(o);  // must include this line
	return o;
}

In the fragment shader, access the light level this way:

half4 frag(v2f i) : COLOR
{
	// Do not use an if statement here, for performance.
	half attenuation = LIGHT_ATTENUATION(i);  // access the light level here
	clip((attenuation * 10) - _Cutoff);       // discard pixels below the threshold
	half value = attenuation;
	return half4(value, value, value, 1.0);
}
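Putting the three parts together, here is a minimal sketch of the whole shader. The shader name and the _Cutoff property are placeholders; the ForwardBase tag, the multi_compile_fwdbase pragma, and the VertexLit fallback are what (as far as I understand) make LIGHT_ATTENUATION pick up shadow information:

	Shader "Custom/FogOfWarLightMask"
	{
		Properties
		{
			_Cutoff ("Attenuation Cutoff", Range(0, 1)) = 0.1
		}
		SubShader
		{
			Pass
			{
				Tags { "LightMode" = "ForwardBase" }
				CGPROGRAM
				#pragma vertex vert
				#pragma fragment frag
				#pragma multi_compile_fwdbase
				#include "UnityCG.cginc"
				#include "AutoLight.cginc"   // provides the lighting macros

				half _Cutoff;

				struct v2f
				{
					half4 pos : SV_POSITION;
					LIGHTING_COORDS(0,1)
				};

				v2f vert(appdata_base v)
				{
					v2f o;
					o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
					TRANSFER_VERTEX_TO_FRAGMENT(o);
					return o;
				}

				half4 frag(v2f i) : COLOR
				{
					half attenuation = LIGHT_ATTENUATION(i);
					clip(attenuation - _Cutoff);   // discard unlit/shadowed pixels
					return half4(attenuation, attenuation, attenuation, 1.0);
				}
				ENDCG
			}
		}
		Fallback "VertexLit"   // supplies the shadow caster/receiver passes
	}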

Having a shader that branches based on light level is bad for performance (I assume); are there ways around this?

If statements slow down shaders by limiting the GPU’s ability to parallelise the computation (I still can’t quantify by how much). Rather than do

	if (attenuation < _Cutoff)
		clip(-1);   // -1 means always clip

I did:

clip(attenuation - _Cutoff);

I also replaced all references to float with half.

Does this approach sound generally viable?

The end result gave me ~40-50 fps on an iPad Mini 2 (at Retina resolution).

Details - Main Character & Level Setup

  • One point light (child of the player). It casts hard shadows at high resolution. It has a large radius, to match how far the player can see (not how far their lantern light extends).
  • The world has many oblongs (dynamically placed in my case) that cast shadows but are invisible (i.e. mesh renderer set to “Shadows Only”).
  • The player has a quad (“LightQuad”), centered on and a child of the player, which extends as far as the player’s lantern range. This quad has a shader that receives shadows, and it is not drawn by the main camera. The shader turns any pixel transparent that doesn’t receive light over a certain threshold (either through being too far from the player, or through receiving a shadow).
  • The quad is drawn by two cameras:
  • One: “ever seen”, which is sized and positioned to see the whole level at once, with “Clear Flags” set to “Don’t Clear”. It renders to a render texture, EverSeenRT.
  • Two: “seen now”, which is a child of the main camera and sized to see the same part of the level as the main camera. It renders to a render texture, SeenNowRT.
  • The main camera has a script which triggers on OnRenderImage and blends the final color of the level based on the pixel values in EverSeenRT and SeenNowRT.
  • The reason to split EverSeen from SeenNow is that it allows the level to be brightly lit if lit now, and moderately ‘lit’ if ever seen. To correctly blend textures representing different parts and amounts of the level, a half4 vector was set each frame describing the scale and position of the main camera’s view relative to the ever-seen (whole level) view.
  • As the render textures were large (either to cover the entire level, or to provide high resolution for the seen-now information), they used the R8 format, which allowed a 2048x2048 RT to require only 4MB.
  • To allow some parts of the world to have their own lighting, and have this visible even outside the player’s lantern range, rooms would have a quad (similar to the player’s light quad) with a similar shader, but the required light value (attenuation) for pixels not to be culled was very, very low.
  • For pixels not to be clipped in the player’s light quad, the light reaching them had to be (say) 100, and this value was chosen to give the lantern a range that felt right. But the threshold for prelit areas was 0.001, i.e. the player can see them from effectively any distance, which is why the point light was set up with a large range.
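For the OnRenderImage blend, a shader driven by Graphics.Blit could do the combining. This is only a sketch of the idea, not my exact shader: the texture names (_SeenNowRT, _EverSeenRT), the _ScaleOffset vector (the half4 described above), and the 0.4 dim factor for ever-seen areas are all illustrative placeholders:

	Shader "Custom/FogOfWarBlend"
	{
		Properties
		{
			_MainTex ("Scene", 2D) = "white" {}
			_SeenNowRT ("Seen Now Mask", 2D) = "black" {}
			_EverSeenRT ("Ever Seen Mask", 2D) = "black" {}
		}
		SubShader
		{
			Pass
			{
				CGPROGRAM
				#pragma vertex vert_img     // standard image-effect vertex shader
				#pragma fragment frag
				#include "UnityCG.cginc"

				sampler2D _MainTex;
				sampler2D _SeenNowRT;
				sampler2D _EverSeenRT;
				// xy = scale, zw = offset of the main camera view
				// within the whole-level (ever seen) view; set each frame from C#
				half4 _ScaleOffset;

				half4 frag(v2f_img i) : COLOR
				{
					half4 scene = tex2D(_MainTex, i.uv);
					half seenNow = tex2D(_SeenNowRT, i.uv).r;

					// remap screen UV into the whole-level EverSeen texture
					half2 everUV = i.uv * _ScaleOffset.xy + _ScaleOffset.zw;
					half everSeen = tex2D(_EverSeenRT, everUV).r;

					// full brightness if lit now, dimmed if only ever seen, black otherwise
					half brightness = max(seenNow, everSeen * 0.4);
					return scene * brightness;
				}
				ENDCG
			}
		}
	}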

Attached is a low-quality video, sufficient to show the basic end result.[67013-example.zip|67013]