This is an interesting problem. I don't know of a way to get data out of a shader into a script directly - that would be neat if it could be done - but based on how the pipeline works, I don't think it's possible.
What SetTexture Does
It would be great if there were a way to output a texture. The documentation for SetTexture ("Assigns a texture") doesn't quite convey this, but since a shader is a compiled program executed on the GPU, it doesn't output textures back to the calling application. I'm not sure exactly what magic ShaderLab is doing, but in the end it is still a shader that gets compiled and sent to the GPU along with a vertex list, input data, and scene information (lighting, etc.), only to output pixels in an image. More shaders, vertices, and data are of course sent together in a single render call for the scene so that lighting and other effects can be calculated.
To demonstrate, take a look at the built-in shaders (this is from the Normal-Glossy built-in shader of Unity 2.6): `SetTexture [_MainTex] {constantColor [_Color] Combine texture * primary DOUBLE, texture * constant}`. At each point this is roughly equivalent to saying MainTex *= (Lighting or VertexColor) * 2 * Color. If this changed the _MainTex stored in the material, then _MainTex would change constantly as it rendered, since the output texture depends on the input texture. You will often see `SetTexture [_MainTex] {combine texture}`, which combines the texture with the preceding stage and triggers the specular to be added when SeparateSpecular is on, but you never see the textures in the materials changing.
Using RenderTexture (Pro Only) (Recommended)
This is the easiest and most correct approach. Because lighting is calculated on the GPU by the shader programs, the net result can't really be known until the render finishes anyway.
I can think of a few reasonable ways to go about this. Using RenderTextures read back into Texture2Ds, you can use GetPixel to get the information you want. If you crop any secondary renders to the area around your character, you can at least cut their cost, and by using shader replacement (everything renders opaque black except the character, who is white and does receive lighting) you can cut that cost even further, along with the cost of checking the render's results (grayscale images are easier to check than colour).
- Get the RenderTexture of your main camera and check the colour values where your character is. Performant, since it doesn't require additional rendering unless you use shader replacement, and even then it only requires one additional render.
- Far more expensive: you could render from the point of view of each light, and if it sees your character, the character is lit by that light. How much they are lit is relative to the light's settings and the amount of the character visible. Can be optimized to render only from lights that could see the character. Not the most performant, as it renders once per light.
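To sketch the first option (assuming Unity Pro for RenderTextures; the class and field names here are hypothetical, not Unity API), you can read the probe camera's RenderTexture back into a Texture2D and average the grayscale values over the character's screen region:

```csharp
using UnityEngine;

// Sketch: sample the average brightness of the character's region in a
// camera's RenderTexture. Works best when the camera renders with a
// replacement shader (character white, everything else opaque black).
public class LitnessProbe : MonoBehaviour
{
    public Camera probeCamera;  // camera rendering into a RenderTexture (Pro only)
    Texture2D readback;

    public float SampleBrightness(Rect screenRegion)
    {
        // ReadPixels copies from the currently active RenderTexture.
        RenderTexture.active = probeCamera.targetTexture;

        if (readback == null)
            readback = new Texture2D((int)screenRegion.width,
                                     (int)screenRegion.height,
                                     TextureFormat.RGB24, false);

        // Pull only the region of interest back from the GPU.
        readback.ReadPixels(screenRegion, 0, 0);
        readback.Apply();
        RenderTexture.active = null;

        // Average the grayscale values; with the black/white replacement
        // shader this is a direct measure of how lit the character is.
        Color[] pixels = readback.GetPixels();
        float sum = 0f;
        foreach (Color c in pixels)
            sum += c.grayscale;
        return sum / pixels.Length;
    }
}
```

With the replacement shader described above, the returned value approaches 1 when the character is fully lit and 0 when fully in shadow. Note that ReadPixels is a GPU-to-CPU copy, so keep the sampled region small and don't call it every frame if you can avoid it.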
Using Raycasts (Limited)
You would need colliders on every shadow-casting object. Cast a ray to your character from each light facing them and perform the lighting calculation yourself. Each raycast only checks a single point on the character, so pick a point that is exposed to check against. Performant enough with some optimizations and a small number of sufficiently spaced lights.
- First, check whether the light is facing the character (this can be done with a dot product); if not, continue to the next light. You also have fewer lights to check if you know they all have a range below some value.
- Then check the light's range against the distance to your character.
- If your character is faced by the light and within range, raycast from the light to the character; if the ray reaches them, they are lit.
- You could then perform your own lighting calculation based on the shader - the math is actually pretty simple for standard Blinn, and you don't really need to worry about specularity.
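The steps above can be sketched roughly as follows (a sketch, not a definitive implementation; `lights`, `samplePoint`, and the falloff formula are assumptions you would tune yourself):

```csharp
using UnityEngine;

// Sketch of the raycast approach: for each light, test facing, range,
// then line of sight, and accumulate a simple diffuse-style term.
public class RaycastLitCheck : MonoBehaviour
{
    public Light[] lights;         // the scene's shadow-relevant lights
    public Transform samplePoint;  // an exposed point on the character

    public float ComputeLitAmount()
    {
        float litAmount = 0f;
        foreach (Light light in lights)
        {
            Vector3 toCharacter = samplePoint.position - light.transform.position;
            float distance = toCharacter.magnitude;

            // Range check: skip lights that can't reach the character.
            if (distance > light.range)
                continue;

            // Facing check via dot product (spotlights only; point
            // lights shine in all directions).
            if (light.type == LightType.Spot &&
                Vector3.Dot(light.transform.forward, toCharacter / distance) <
                Mathf.Cos(light.spotAngle * 0.5f * Mathf.Deg2Rad))
                continue;

            // Line-of-sight check: requires colliders on every shadow caster.
            RaycastHit hit;
            if (Physics.Raycast(light.transform.position, toCharacter / distance,
                                out hit, distance)
                && hit.transform.root != samplePoint.root)
                continue;  // something sits between the light and the character

            // Crude linear falloff; a real shader would also use the
            // surface normal (the Lambert/Blinn N-dot-L term).
            litAmount += light.intensity * (1f - distance / light.range);
        }
        return litAmount;
    }
}
```

The dot-product test compares the angle between the spotlight's forward vector and the direction to the character against half the spot angle, which is the "is the light facing the character" check from the first bullet.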
Using Shadow Volumes (probably slow and expensive)
You could generate your own shadow volumes, though each mesh would have to be assumed convex for simplicity. By calculating the vector from each light to each vertex within its lighting volume (cones, spheres, etc.), you could define a series of hulls. Not very performant for scenes of any complexity, as it has to iterate through every vertex in the scene.
- Every vertex whose normal faces the light defines a face exposed to the light, with the outermost of the vertices connected to these defining the perimeter of the hull's face.
- The face of the hull is extended back into infinity along the vector to the light to define a shadow volume.
- Any vertex within a shadow volume is in shadow. Any vertex not within a shadow volume must be checked to see whether it defines one.
- Every vertex outside of a light's volume is in shadow with respect to that light, unless another light marks it as exposed.
- The intensity of the lighting would have to be specified on a per vertex level based on the light.
You could potentially speed this up by checking against representative objects (colliders or bounding volumes of some sort) to see whether an object is in shadow, since checking 8 vertices of a bounding box is a lot cheaper than checking an entire mesh.
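That bounding-box pre-check can be sketched like this (assuming shadow casters have colliders; the class and method names are hypothetical). Rather than building full shadow volumes, it approximates the same question with raycasts against the eight corners:

```csharp
using UnityEngine;

// Sketch of the cheap pre-check: raycast from the light to the eight
// corners of an object's bounds instead of testing every mesh vertex.
public static class ShadowPreCheck
{
    // Returns true if every corner of the bounds is occluded from the
    // light, i.e. the whole object can be treated as shadowed.
    public static bool BoundsFullyShadowed(Bounds bounds, Vector3 lightPosition)
    {
        Vector3 c = bounds.center, e = bounds.extents;
        for (int x = -1; x <= 1; x += 2)
        for (int y = -1; y <= 1; y += 2)
        for (int z = -1; z <= 1; z += 2)
        {
            Vector3 corner = c + Vector3.Scale(e, new Vector3(x, y, z));
            Vector3 toCorner = corner - lightPosition;
            // If any corner has a clear line to the light, the object is
            // at least partially lit and needs the fine-grained test.
            if (!Physics.Raycast(lightPosition, toCorner.normalized, toCorner.magnitude))
                return false;
        }
        return true;
    }
}
```

Only objects that fail this coarse test (some corner is visible to the light) would need the expensive per-vertex shadow-volume work described above.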