Hi All,
I’ve seen various questions on forums about CombineMeshes not retaining lightmap details (on top of the negative feedback on the CombineMeshes documentation page). I actually managed to get this working and wanted to summarize what I did here. I hope people find this useful.
Background:
- This is all done in URP and Unity 2019.3.13f1
- I baked my lightmaps and saved the lightmap data to individual prefabs using this script: GitHub - Ayfel/PrefabLightmapping (script for saving lightmapping data to prefabs). You place your prefabs in the scene with the script at the root, set up your lighting, and in the editor go to Assets -> Bake Prefab Lightmaps. Once it has processed, you can spawn the prefabs in different scenes and they will use the lightmapping from the original scene. This means that when I instantiate prefabs at runtime, the scene instances retain the lightmap data.
- My prefabs are instantiated at runtime, but to improve performance I also wanted to combine the meshes of these prefabs at runtime to reduce my Batches count. For this I am using the Unity Mesh.CombineMeshes function (Unity - Scripting API: Mesh.CombineMeshes). This is where it started to “not work” for a lot of people: the meshes combined successfully, but the lightmap data from the individual prefabs was not carried forward to the combined mesh.
Below is a summary of what I did to get the mesh combining (the last point above) working while keeping the lightmaps:
I did not use Unwrapping.GenerateSecondaryUVSet(mesh) on the combined mesh. In my experience, this seemed to generate a new set of lightmap UVs, when what I really wanted were the lightmap UVs that Unity auto-generated on import of the individual meshes of the individual prefabs (in the model import settings). Using GenerateSecondaryUVSet was recommended in other threads I found, but it did not work for me.
I had to manually set the lightmapScaleOffset value on each entry of the combine array before passing it to the CombineMeshes function. Something like this:
```
combine[i].lightmapScaleOffset = filter.gameObject.GetComponent<MeshRenderer>().lightmapScaleOffset;
mesh.CombineMeshes(combine, true, true, true);
```
I got a clue for this from the documentation, which states: *Set hasLightmapData to true to transform the input Mesh lightmap UV data by the lightmap scale offset data in [CombineInstance](https://docs.unity3d.com/ScriptReference/CombineInstance.html) structs. The Meshes must share the same lightmap texture.* Doc link: https://docs.unity3d.com/ScriptReference/Mesh.CombineMeshes.html
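For reference, here is a trimmed-down sketch of the kind of combine routine this boils down to. The class and method names are mine for illustration (not my exact project code), and it assumes the root object sits at the world origin with an identity transform and that all children share one material and one lightmap:

```
using UnityEngine;
using UnityEngine.Rendering;

public static class LightmappedMeshCombiner
{
    // Combines the lightmapped child meshes under 'root' into one mesh on 'root'.
    // Assumes the child renderers already carry valid lightmap data (e.g. restored
    // by the PrefabLightmapping script) and all share the same lightmap texture.
    public static void Combine(GameObject root)
    {
        MeshFilter[] filters = root.GetComponentsInChildren<MeshFilter>();
        CombineInstance[] combine = new CombineInstance[filters.Length];
        Material sharedMaterial = null;

        for (int i = 0; i < filters.Length; i++)
        {
            MeshRenderer sourceRenderer = filters[i].GetComponent<MeshRenderer>();
            if (sharedMaterial == null)
                sharedMaterial = sourceRenderer.sharedMaterial;

            combine[i].mesh = filters[i].sharedMesh;
            // Assumes 'root' is at the origin with an identity transform.
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            // The key step: copy each renderer's lightmap scale/offset so that
            // CombineMeshes can transform the lightmap UVs for the combined mesh.
            combine[i].lightmapScaleOffset = sourceRenderer.lightmapScaleOffset;

            filters[i].gameObject.SetActive(false); // hide the original pieces
        }

        Mesh combinedMesh = new Mesh();
        // Allow more than 65k vertices in the combined mesh.
        combinedMesh.indexFormat = IndexFormat.UInt32;
        // mergeSubMeshes = true, useMatrices = true, hasLightmapData = true
        combinedMesh.CombineMeshes(combine, true, true, true);

        root.AddComponent<MeshFilter>().sharedMesh = combinedMesh;
        MeshRenderer combinedRenderer = root.AddComponent<MeshRenderer>();
        combinedRenderer.sharedMaterial = sharedMaterial;
        // See the notes below: lightmapIndex still has to be set manually.
        combinedRenderer.lightmapIndex = 0;
    }
}
```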
**The renderer on the object holding the combined mesh needs to have its lightmapIndex value filled in manually.** It was only when I did this that the lightmap image showed up in the inspector on the Mesh Renderer component of the GameObject holding the combined mesh. If I had skipped this manual lightmapIndex assignment, the renderer component would not show a lightmap image in the inspector. This also assumes all the original meshes that were combined were using the same lightmap, with an index of 0.
- `renderer.lightmapIndex = 0;`
See the screenshot below (taken in Play mode): the baked lightmap is populated in the renderer because I set lightmapIndex manually.
(screenshot: Mesh Renderer of the combined object in Play mode, showing the baked lightmap)
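If your prefabs do not all end up on lightmap index 0, a hedged variant of the line above is to copy the index from one of the source renderers before deactivating them (sourceRenderer and combinedRenderer are the illustrative names from the sketch earlier):

```
// Copy the lightmap index from one of the original renderers instead of
// hard-coding 0. This still assumes every source renderer used the same
// lightmap, as CombineMeshes requires.
combinedRenderer.lightmapIndex = sourceRenderer.lightmapIndex;
// Because hasLightmapData was true, the per-object scale/offset is already
// baked into the combined UVs, so the combined renderer keeps the identity
// scale/offset.
combinedRenderer.lightmapScaleOffset = new Vector4(1f, 1f, 0f, 0f);
```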
**The GameObject holding the combined mesh needs to be marked as static.** Without this, the "Lightmapping" dropdown section would not even appear in the inspector. See the screenshot below. Note this is the same GameObject as in the previous screenshot, except that this one is not in Play mode.
(screenshot: the same GameObject's Mesh Renderer outside Play mode, with the object marked Static)
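If the combined object is created from script at runtime, one way to set the flag there is the one-liner below (combinedObject is just an illustrative name; static flags mainly drive editor tooling, so ticking Static on the object in the inspector does the same job):

```
// Mark the combined object as static so the Lightmapping section shows up
// on its Mesh Renderer in the inspector. GameObject.isStatic is an
// editor-facing flag (static batching from the flag is decided at build
// time), so this mainly matters while inspecting the object in Play mode.
combinedObject.isStatic = true;
```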
I hope some of you find this post useful.
Kevin