We are attempting to make a game that will consist of an “endless world” of sections that are loaded at runtime.
Each section will be contained in a scene with its own lightmaps. Currently we are using the LoadLevelAdditiveAsync method to test loading the scenes into an empty 'loading' scene. First we load the scenes, then we load the baked lightmaps from each scene into the LightmapSettings.lightmaps array (an array of LightmapData), then we point each renderer in the scene at its corresponding lightmapIndex, and finally we set each renderer's lightmapScaleOffset back to its original value.
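For reference, here is a minimal sketch of that re-assignment step. It assumes the lightmap textures have already been collected into a list and that the original indices and scale/offset values were cached before loading; `savedIndices` and `savedScaleOffsets` are hypothetical names for those caches, not Unity API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: re-applies baked lightmap data after an additive load.
public static class LightmapReassigner
{
    public static void Apply(List<LightmapData> loadedLightmaps,
                             Dictionary<Renderer, int> savedIndices,
                             Dictionary<Renderer, Vector4> savedScaleOffsets)
    {
        // Replace the global lightmap array with the textures we loaded.
        LightmapSettings.lightmaps = loadedLightmaps.ToArray();

        foreach (var pair in savedIndices)
        {
            Renderer r = pair.Key;
            r.lightmapIndex = pair.Value;                  // point at the right atlas
            r.lightmapScaleOffset = savedScaleOffsets[r];  // restore UV tiling/offset
        }
    }
}
```

This is only a sketch of the procedure described above, not a verified fix for the scale/placement problem shown in the videos.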
Here is a video of the one time I actually managed to get it working, but I haven't been able to reproduce it since.
Here is a video using another scene with my own meshes (to rule out the UV mapping of Unity primitives as the problem). The first part shows the scenes with their lightmaps perfectly applied, with no lights in the scene. The next part shows the loading scene with the scenes loaded side by side and the lightmaps applied. As you can see, the lightmaps come in at the wrong scale and placement.
There are a couple of things I have noticed that I'd like someone to explain or give me some insight on:
- No matter what value we set lightmapScaleOffset to, it doesn't change how the lightmap is applied to the renderers. So my question is: what does lightmapScaleOffset actually do?
- When I place a Lightmap Snapshot from one of the lightmapped scenes into the Lightmaps section of the 'loading' scene, the scene that the snapshot came from loads correctly.
- I opened the LightmapSnapshot.asset file as text, and I noticed some promising-looking attributes under the Enlighten System Information: rendererIndex, rendererSize, atlasIndex, atlasOffsetX, atlasOffsetY. Would it be possible to use these attributes to set up our loaded lightmaps correctly?
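If those snapshot attributes do correspond to the runtime API, one hedged guess (an assumption about the snapshot format, not something I've verified) is that rendererSize maps to the scale half and atlasOffsetX/Y to the offset half of lightmapScaleOffset, whose documented layout is (scaleX, scaleY, offsetX, offsetY), with atlasIndex feeding lightmapIndex:

```csharp
using UnityEngine;

public static class SnapshotScaleOffset
{
    // Assumption: rendererSize is the renderer's UV scale within the atlas,
    // atlasOffsetX/Y its UV offset, and atlasIndex its lightmap array index.
    public static void ApplyGuess(Renderer r, int atlasIndex,
                                  Vector2 rendererSize,
                                  float atlasOffsetX, float atlasOffsetY)
    {
        r.lightmapIndex = atlasIndex;
        // lightmapScaleOffset layout is (scaleX, scaleY, offsetX, offsetY).
        r.lightmapScaleOffset = new Vector4(rendererSize.x, rendererSize.y,
                                            atlasOffsetX, atlasOffsetY);
    }
}
```

One caveat worth checking against the first bullet: when renderers are combined by static batching, Unity bakes the lightmap UV transform into the combined mesh, so setting lightmapScaleOffset at runtime has no visible effect on those renderers.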
Any insight or solutions to this problem would be much appreciated.