Use Vertex Lighting for detecting Light Levels on character in stealth game?

I’m thinking about making a stealth game, similar to the old classic Thief games from Looking Glass Studios, where the light falling on the character determines how visible you are. I’ve looked around on the internet and the forums, and most people suggest that it is not worth the computation time and that using Colliders and “Shadow Volumes” instead is a better way of doing it. But I want to have dynamic lights, like guards walking around with torches and stuff like that.

So I was thinking: isn’t there a way to store the light information from Unity (direct and indirect illumination) in the vertices of the character mesh as vertex colours, and then calculate the average light level of the character? I mean, the lighting information is already there for the rendering, and a shader dealing with vertex colours should be pretty fast (especially on PC/consoles), right? To improve performance, perhaps a low-poly proxy mesh could be used instead of the character mesh, and perhaps multiple meshes for different body parts to determine whether only certain body parts are in light or shadow.

What do you think? I’m a beginner programmer with some intermediate knowledge of C# and basic knowledge of shaders, so please keep the discussion on a not too technical level. I’m not currently looking for a specific solution, but rather tips, ideas and inspiration for me to play around with this concept in an effort to learn more about game development in Unity - I like a challenge! :slight_smile:
If you feel inspired and want to test this out yourself, feel free to do so, and please share your results and experience with it.

So for one thing, a more concrete question, how do I get the lighting information from the renderer and store it as vertex colours in my character mesh?

Feel free to add your own thoughts and ideas on the subject matter, let’s have a creative discussion! :slight_smile:
Thanks!

You don’t, because that’s not how it works.

Generally speaking, things are designed so the CPU sends data to the GPU very quickly, but not the other way around. Getting data back from the GPU can take several milliseconds, perhaps several tens of milliseconds. That might not sound like a lot, but consider that a whole frame at 30 fps is only 33 ms. So while the cost of the GPU calculating the lighting might be close to “free”, the cost of getting access to that data may not be.

The way this is handled in games and engines that do GPU side computation is one of a few ways.

The most direct method is an async readback: you do the work on the GPU, issue an async readback, and only use the data when it comes back some number of update frames later. There are some community projects that add support for this to Unity, as it’s not available in the non-beta releases, though it has been added in Unity 2018.1.
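To make that concrete, here’s a minimal sketch of what an async readback looks like with the `AsyncGPUReadback` API added in Unity 2018.1. The `lightLevelRT` render texture is a hypothetical 1×1 target that some shader elsewhere has already written the character’s light level into; the point is just that the callback fires a few frames later instead of stalling the CPU.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class LightLevelReadback : MonoBehaviour
{
    // Hypothetical 1x1 float render texture written by a shader elsewhere.
    public RenderTexture lightLevelRT;

    void Update()
    {
        // Issue the request now; the callback runs some frames later,
        // once the GPU has finished and the data has crossed back over.
        AsyncGPUReadback.Request(lightLevelRT, 0, request =>
        {
            if (request.hasError)
                return;

            // Interpret the single pixel as the character's light level.
            Color level = request.GetData<Color>()[0];
            Debug.Log("Light level: " + level.grayscale);
        });
    }
}
```

The gameplay code then reacts to a light level that is a few frames stale, which for a stealth meter is usually fine.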

A second method is to actually move all of the code that needs the GPU data onto the GPU, so the CPU never even touches it. A common case would be particle effects or non-gameplay-relevant physics effects. In your case that would mean moving gameplay code to the GPU, which while possible isn’t really ideal.

The worst case, and honestly a surprisingly common one, is to duplicate the work on the CPU. If you really need that data now, then using the CPU is the best option. Water simulations are a good example: the water surface and effects may all be calculated on the GPU, but the boat bobbing on the water uses a simultaneous CPU-side sim of the same water.

Ignoring all that, a vertex shader can’t write back to the mesh data. It’s not what they’re designed for. In fact, if you write to the incoming mesh’s “vertex data” in a vertex shader, you’re really just modifying a local copy of the data that’s thrown out when that vertex shader invocation finishes. Vertex shaders are designed to output temporary data that gets interpolated and passed on to the fragment shader, which in turn outputs a color value to a pixel in a frame buffer or render texture. The output of the vertex shader is inherently temporary and gets thrown away immediately after the fragment shader invocations for that mesh are finished.

There are ways around this, like writing to structured buffers within the vertex shader and then reading back that value, or potentially running a compute shader to calculate that final single “in shadow” value so the amount of data coming back to the CPU is as small as possible. It might be a fun experiment, but it’s unlikely to be faster than a pre-baked shadow-amount volume augmented with range checks & raycasts against the few real-time light positions. You can likely get away with testing just a handful of points on your characters.
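As a rough illustration of the “reduce on the GPU, read back one value” idea: a compute shader (not shown here, and entirely hypothetical) could sum the lighting into a single float in a one-element buffer, which the CPU then fetches. Note that `GetData` is a synchronous readback, so it still stalls, but only one float crosses the bus.

```csharp
using UnityEngine;

public class ShadowAmountReadback : MonoBehaviour
{
    // Hypothetical compute shader asset with a "Reduce" kernel that
    // writes the final in-shadow amount to slot 0 of _Result.
    public ComputeShader reduceShader;

    ComputeBuffer resultBuffer;
    readonly float[] result = new float[1];

    void Start()
    {
        resultBuffer = new ComputeBuffer(1, sizeof(float));
    }

    void Update()
    {
        int kernel = reduceShader.FindKernel("Reduce");
        reduceShader.SetBuffer(kernel, "_Result", resultBuffer);
        reduceShader.Dispatch(kernel, 1, 1, 1);

        // Synchronous readback of a single float.
        resultBuffer.GetData(result);
        Debug.Log("In-shadow amount: " + result[0]);
    }

    void OnDestroy()
    {
        resultBuffer.Release();
    }
}
```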

Thanks bgolus for all that information on the underlying rendering technology, there was a lot in there I had no idea about. I’ll have to rethink my method :slight_smile:
Perhaps the percentage of light cover could be calculated on the GPU side, and then there would just be a single float value to return, rather than all the vertex colors.

Does anyone know how the old Thief: The Dark Project did their light level stealth system back in 1998(!)? :slight_smile:

How accurate does this have to be?
Could you have a few “probes” swirling around the character, with each one doing a physics line cast to all the light sources in a set range?

Like collect all lights there are, then check if one of the outside points (the probes) is able to see any of those lights.
Then for each light a probe can see, you’d add some value (depending on how far away you are from the light).
This only works for direct illumination of course.
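The probe idea above can be sketched entirely with CPU-side physics queries. This is a hypothetical minimal version: for each light in range, linecast from each probe point, and accumulate a value with a simple linear falloff (a real game might prefer inverse-square, or Unity’s own attenuation curve).

```csharp
using UnityEngine;

public static class ProbeLightLevel
{
    // probes: a few world-space points swirling around the character.
    // lights: the scene's lights (cache these rather than finding them each frame).
    public static float Estimate(Vector3[] probes, Light[] lights)
    {
        float level = 0f;

        foreach (var light in lights)
        {
            Vector3 lightPos = light.transform.position;

            foreach (var probe in probes)
            {
                float distance = Vector3.Distance(probe, lightPos);
                if (distance > light.range)
                    continue;

                // If nothing blocks the line, the probe can "see" the light.
                if (!Physics.Linecast(probe, lightPos))
                {
                    // Simple linear falloff with distance; direct light only.
                    level += light.intensity * (1f - distance / light.range);
                }
            }
        }

        return level / probes.Length;
    }
}
```

With only a handful of probes and lights this is a few dozen linecasts per frame, which is cheap, and the whole result is available on the CPU the same frame.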

Another (very expensive) way would be to render a frame at the position of every light, point the camera at the player and render it using some special / very cheap shader (maybe draw the player in white and the rest in black, or use depth information as well to figure out how far away the player is). Then use a compute shader to analyze the image / collect the pixels that are white.
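The setup half of that idea might look something like this sketch, assuming a disabled camera parented to the light and a replacement shader (not shown) that draws the player white and everything else black. Rendering into a small render texture keeps the later pixel-counting pass cheap.

```csharp
using UnityEngine;

public class LightViewRenderer : MonoBehaviour
{
    public Camera lightCamera;       // disabled camera parented to the light
    public Shader visibilityShader;  // hypothetical replacement shader, not shown
    public Transform player;
    public RenderTexture target;     // small, e.g. 64x64, to limit pixel count

    public void RenderFromLight()
    {
        lightCamera.transform.LookAt(player);
        lightCamera.targetTexture = target;

        // Render the whole scene with the replacement shader instead of
        // each object's own material.
        lightCamera.RenderWithShader(visibilityShader, "RenderType");

        // 'target' can now be handed to a compute shader (or an async
        // readback) that counts the white pixels.
    }
}
```

Doing this for every light every frame is the expensive part, which is presumably why it’s offered only as a curiosity.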

Sure. Thief was primarily a software-rendered game, and a very early hardware-accelerated one. They didn’t have to deal with any of this complication, since they had to compute the lighting on the CPU for everything to begin with. For static lighting they would likely have sampled from the lightmap data on the ground you’re standing on, then done raycasts to the dynamic light sources.
https://nothings.org/gamedev/thief_rendering.html#lighting

I’m pretty sure you can do a raycast and, with the raycast hit, test that pixel in the lightmap: How to get pixel color using raycast on object with Lightmap created by "Beast"? - Questions & Answers - Unity Discussions

So possibly you can test the brightness of the spot under the player, and use that as your brightness level. No shaders needed.
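A hypothetical sketch of that approach: raycast down from the player and sample the baked lightmap at the hit point. This assumes the ground has a MeshCollider on a lightmapped renderer, and that the lightmap texture is readable (which may require copying it, since baked lightmaps usually aren’t CPU-readable by default).

```csharp
using UnityEngine;

public static class GroundLightSampler
{
    public static float SampleUnderPlayer(Transform player)
    {
        if (!Physics.Raycast(player.position, Vector3.down, out RaycastHit hit))
            return 0f;

        var rend = hit.collider.GetComponent<Renderer>();
        if (rend == null || rend.lightmapIndex < 0)
            return 0f; // not lightmapped

        // hit.lightmapCoord is already in the renderer's lightmap UV space.
        Texture2D lightmap =
            LightmapSettings.lightmaps[rend.lightmapIndex].lightmapColor;
        Color sample =
            lightmap.GetPixelBilinear(hit.lightmapCoord.x, hit.lightmapCoord.y);

        return sample.grayscale; // 0 = dark, 1 = fully lit
    }
}
```

Combined with a few raycasts to dynamic lights for torches, this is close in spirit to what Thief itself seems to have done.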