Is there any technology similar to KinectFusion that, instead of scanning real-world scenes, could be used to scan virtual 3D scenes?
I'm looking for this kind of tech because there are 3D scenes made for high-quality architectural rendering that are not optimized for real-time use. Optimizing these scenes manually takes a lot of time to get everything right, especially when you want to bake the lighting into a lightmap: there are many problems that can generate light leaks if the modelling is not perfect.
I imagine that if it were possible to scan a 3D scene by defining a navigable area, we could generate only the polygons needed for the detail that is actually visible from that area:
If a "limited" technology like Kinect can already give good results scanning real-world places, imagine what you could get with a "virtual Kinect".
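To make the idea concrete, here is a minimal sketch of what one pass of such a "virtual scan" could look like: sample viewpoints inside the navigable area, cast rays against the scene's triangles, and keep only the triangles that are ever the nearest hit. Everything here (the NumPy setup, the toy triangle list, the ray counts, names like `visible_triangles`) is my own illustration under those assumptions, not an existing tool's API.

```python
import numpy as np

def first_hit_triangles(origin, dirs, tris, eps=1e-8):
    """For each ray (origin, dirs[i]), return the index of the nearest
    triangle it hits, or -1 if it hits nothing (Moeller-Trumbore test)."""
    best_t = np.full(len(dirs), np.inf)
    best_tri = np.full(len(dirs), -1, dtype=int)
    for i, (v0, v1, v2) in enumerate(tris):
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(dirs, e2)                 # (N, 3)
        a = h @ e1                             # (N,)
        valid = np.abs(a) > eps                # ray not parallel to triangle
        f = np.zeros_like(a)
        f[valid] = 1.0 / a[valid]
        s = origin - v0
        u = f * (h @ s)
        q = np.cross(s, e1)
        v = f * (dirs @ q)
        t = f * (e2 @ q)
        hit = valid & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps) & (t < best_t)
        best_t[hit] = t[hit]
        best_tri[hit] = i
    return best_tri

def visible_triangles(tris, nav_points, rays_per_point=512, seed=0):
    """Mark every triangle that is the nearest hit of at least one ray cast
    from a navigable sample point -- a crude 'virtual depth scan'."""
    rng = np.random.default_rng(seed)
    visible = np.zeros(len(tris), dtype=bool)
    for p in nav_points:
        dirs = rng.normal(size=(rays_per_point, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform random directions
        hits = first_hit_triangles(p, dirs, tris)
        visible[hits[hits >= 0]] = True
    return visible

# Toy scene: a triangle in front of the navigable point and a second one
# hidden directly behind it; only the first should survive the "scan".
tris = np.array([
    [[-1.0, -1.0, 2.0], [1.0, -1.0, 2.0], [0.0, 1.0, 2.0]],   # visible
    [[-1.0, -1.0, 3.0], [1.0, -1.0, 3.0], [0.0, 1.0, 3.0]],   # occluded
])
nav_points = np.array([[0.0, 0.0, 0.0]])
print(visible_triangles(tris, nav_points))   # expected: [ True False ]
```

A real tool would presumably render depth maps from those viewpoints and fuse them into a signed distance field before remeshing, the way KinectFusion does, rather than brute-force ray casting, but the "only what is visible from the navigable area" principle is the same.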
For the lightmap alone I'm already using a similar solution with reasonable success: I define the navigable area with a grid of points and calculate the GI at each of these points one by one, so in a problematic area like the one described in the picture above, the light only leaks from inside to outside, where it isn't visible. But this still leaves wasted space in the lightmap that gets loaded into memory, and in some cases, when a thin wall divides two rooms and a single continuous polygon defines the ceiling, you can still get light leaking.
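For that thin-wall case, here is a small numeric illustration (my own toy numbers and texel sizes, not taken from any engine) of one common way the leak happens: when one ceiling polygon spans both rooms, a row of lightmap texels crosses the wall, and bilinear filtering blends the bright room's texels into the dark room.

```python
import numpy as np

texel_size = 0.5                      # metres per lightmap texel (assumed)
wall_x = 2.0                          # thin wall at x = 2 m
# Baked ceiling texels along x: bright room (x < 2) lit, dark room (x > 2) unlit.
texel_centers = np.arange(0.25, 4.0, texel_size)
baked = np.where(texel_centers < wall_x, 1.0, 0.0)   # 1 = fully lit, 0 = dark

def sample_bilinear_1d(x):
    """GPU-style linear filtering between the two nearest texel centres."""
    t = (x - texel_centers[0]) / texel_size
    i = int(np.clip(np.floor(t), 0, len(baked) - 2))
    f = t - i
    return (1 - f) * baked[i] + f * baked[i + 1]

# 10 cm inside the dark room, the ceiling still reads partially lit:
print(sample_bilinear_1d(wall_x + 0.1))   # ~0.3 instead of 0.0 -> visible leak
```

The same blending happens no matter how the GI samples are placed, as long as neighbouring texels on the shared ceiling polygon belong to different rooms, which is why splitting the geometry (or the lightmap charts) at the wall is usually needed to fully remove the leak.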