Well, I’m looking right now at a Light Propagation Volumes (LPV) paper. As far as I can tell, I could attach a camera to the light’s GameObject to gather color information from the surfaces it illuminates, but the limitations are easy to see: anything outside the camera’s frustum wouldn’t contribute, it would increase draw calls, point lights would need multiple cameras, and so on. What would be the correct way of doing it (accessing surface colors for bounces and color bleeding)? Thanks!
You are correct in your thinking. The issue is that at runtime there is no way to know which surfaces will end up contributing and which won’t, so you have to include everything. This means that for every directional light you need to render a color texture (a reflective shadow map), and for every point light you either have to render a cubemap or use one of the available warping techniques (e.g. dual-paraboloid mapping).
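To make the gathering step concrete, here is a minimal sketch of what happens with the data from those per-light renders: each texel of the reflective shadow map is treated as a virtual point light and injected into the LPV grid as first-order spherical harmonics. All names (`inject_rsm`, the grid layout, the parameters) are my own illustration, not code from the paper:

```python
import numpy as np

# First-order SH basis constants (band 0 and band 1).
SH_C0 = 0.282095  # 1 / (2*sqrt(pi))
SH_C1 = 0.488603  # sqrt(3) / (2*sqrt(pi))

def sh_dir(d):
    """Evaluate the four first-order SH basis functions for a unit direction."""
    return np.array([SH_C0, -SH_C1 * d[1], SH_C1 * d[2], -SH_C1 * d[0]])

def inject_rsm(grid, positions, normals, fluxes, grid_min, cell_size):
    """Inject RSM samples (virtual point lights) into an LPV grid.

    grid:      (N, N, N, 4, 3) SH coefficients per cell, per RGB channel.
    positions: (S, 3) world-space positions of the RSM texels.
    normals:   (S, 3) unit surface normals of the RSM texels.
    fluxes:    (S, 3) RGB flux carried by each texel.
    """
    n = grid.shape[0]
    for p, nrm, flux in zip(positions, normals, fluxes):
        cell = np.floor((p - grid_min) / cell_size).astype(int)
        if np.any(cell < 0) or np.any(cell >= n):
            continue  # sample falls outside the volume, so it is skipped
        # A VPL radiates into the hemisphere around its normal; approximate
        # that lobe with the SH coefficients of the normal direction.
        sh = sh_dir(nrm / np.linalg.norm(nrm))
        grid[tuple(cell)] += np.outer(sh, flux)
    return grid
```

After injection, the real technique runs an iterative propagation pass that spreads these SH coefficients to neighboring cells; the sketch above only covers the injection step that answers the "how do I access surface colors" part of the question.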
It is usually sped up by using low-resolution buffers and spreading the calculations over multiple frames. Crytek cascade their light propagation volumes, and I am fairly certain the cascades don’t all update every frame.
LPV as a technique is a lot more suitable for outdoor environments than indoor ones: outdoors the lack of detail isn’t as noticeable, and you can often get away with computing GI from only a single light source (the sun).