Completely Broken Lightmap Results

Hi,

I’m having a hard time getting lightmaps to look even remotely decent. The lighting is completely scattered but seems to repeat where the mesh repeats.

[Image: the baked lightmap]

I assume it has something to do with the generated lightmap UVs. I can’t use the normal “Generate Lightmap UVs” checkbox in the import settings, because the meshes are generated and saved directly to file, so I use Unwrapping.GenerateSecondaryUVSet instead.

I can’t tell from the UV Charts view whether they look wrong. It would help if I knew what a correct UV chart is supposed to look like.


I’m also using LOD Groups, and not sure if that will cause problems as well.

Anyone have an idea what is wrong?

I pared the scene all the way down to one mesh, one light, and no LOD Group, and it still looks broken.

[Attached image: Tower.png]

To generate the lightmap UVs, I select the mesh file in the Project window and run this editor script.

using UnityEngine;
using UnityEditor;

public static class LightmapUVTools  // any editor class works; name and menu path are incidental
{
  [MenuItem("Tools/Create Lightmap UVs")]
  static void CreateLightmapUVs()
  {
    foreach (Object asset in Selection.objects)
    {
      // Only process selected mesh assets
      Mesh mesh = asset as Mesh;
      if (mesh == null) continue;

      // Generate the uv2 channel used for lightmapping
      Unwrapping.GenerateSecondaryUVSet(mesh);

      // Mark the asset modified so the new UVs can be saved
      EditorUtility.SetDirty(mesh);
    }
  }
}

I tried clearing uv2, hoping they would be auto-generated at bake time, and got similar results.

I’m really out of ideas here. All I want is direct lighting without the cost of 200 real-time lights. I don’t care at all about bounces and AO and whatever.

Is it really impossible to use lightmaps with a plain Unity Mesh asset? Do I have to use some external model format to get lightmaps?

Did you use a simple Debug.Log to see whether the lightmap is actually being created? How do you create the meshes?

I think you have to save the asset after you assign the UVs. The best thing to do is write another script that checks the first 20 uv2 values before and after baking. I have a feeling that because you are not saving the asset, it’s not keeping them.
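Something like this would do for the check. A minimal sketch, assuming it lives in an Editor folder; the class name and menu path are just examples:

using UnityEngine;
using UnityEditor;

public static class Uv2Check
{
  // Select the mesh asset in the Project window, then run this.
  [MenuItem("Tools/Log First 20 UV2s")]
  static void LogUv2()
  {
    foreach (Object asset in Selection.objects)
    {
      Mesh mesh = asset as Mesh;
      if (mesh == null) continue;

      Vector2[] uv2 = mesh.uv2;
      Debug.Log(mesh.name + " uv2 length: " + uv2.Length);
      for (int i = 0; i < Mathf.Min(20, uv2.Length); i++)
        Debug.Log("[" + i + "] = " + uv2[i].x + "," + uv2[i].y);
    }
  }
}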

Thanks for the replies.

I’m not sure how you mean to check that the lightmap exists through logs. The snapshot looks like this, though. It seems like the mesh is taking up too little space in the atlas, but maybe I’m wrong (baked at 5 texels per unit, and the tower is about 3x6 units on one side, so that side alone should cover roughly 15x30 texels).

The meshes are created with Mesh.CombineMeshes().

// Get the sub filters to combine
MeshFilter[] meshFilters = GetComponentsInChildren<MeshFilter>(false);
CombineInstance[] combine = new CombineInstance[meshFilters.Length];
Material sharedMat = null;
int i = 0;
while (i < meshFilters.Length)
{
  combine[i].mesh = meshFilters[i].sharedMesh;
  // Local TRS; this assumes the filters are direct children of this object
  combine[i].transform = Matrix4x4.TRS(meshFilters[i].transform.localPosition,
             meshFilters[i].transform.localRotation,
             meshFilters[i].transform.localScale);
  //combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
  //meshFilters[i].GetComponent<Renderer>().enabled = false;

  // Remember the first material found
  if (sharedMat == null && meshFilters[i].GetComponent<Renderer>() != null)
    sharedMat = meshFilters[i].GetComponent<Renderer>().sharedMaterial;

  i++;
}

// Combine everything into a single mesh with merged submeshes
Mesh mesh = new Mesh();
mesh.CombineMeshes(combine, true, true);

// Write the mesh to file
string szPath = "Assets/GeneratedMeshes/" + name + ".asset";
AssetDatabase.CreateAsset(mesh, szPath);
AssetDatabase.SaveAssets();

I tried generating the UVs both with and without AssetDatabase.SaveAssets(), and it doesn’t make a difference. The preview window of the mesh shows the “uv2” text either way. I’ve also removed the uv2 entirely, with similar results.
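The “with save” variant was roughly this, in the same editor script as above:

// Generate, mark dirty, then force a save to disk.
Unwrapping.GenerateSecondaryUVSet(mesh);
EditorUtility.SetDirty(mesh);
AssetDatabase.SaveAssets();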

The log results for uv2 on the mesh are:
21568 length
[0] = 0.6,0.4
[1] = 0.7,0.4
[2] = 0.6,0.4
[3] = 0.7,0.4
[4] = 0.7,0.4
[5] = 0.7,0.4
[6] = 0.7,0.4
[7] = 0.7,0.4
[8] = 0.5,0.2
[9] = 0.5,0.2
[10] = 0.5,0.2
[11] = 0.5,0.2
[12] = 0.5,0.8
[13] = 0.5,0.8
[14] = 0.5,0.8
[15] = 0.5,0.8
[16] = 0.7,0.4
[17] = 0.7,0.4
[18] = 0.7,0.4
[19] = 0.7,0.4

This is what it looks like at 40 texels per unit. That weird XOX (I love you, too) is crisper, but otherwise identical.

Top left: Baked light map. No shading.
Bottom left: Shaded, with real-time lighting. No lightmap. No shadows.

When you combine the meshes, what prevents you from computing the lightmap UVs directly in there, before creating the asset?

That was more of an oversight. Once all of the LODs were generated and performance still wasn’t good enough, I needed a quick way to add the lightmap UVs.

I just did a quick test with generate immediately after combine… and it works, with the same lightmap texture even.
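For reference, the working order is roughly this (a sketch stitching together the two snippets above):

// Combine, then unwrap while the mesh is still brand new, then save.
Mesh mesh = new Mesh();
mesh.CombineMeshes(combine, true, true);

Unwrapping.GenerateSecondaryUVSet(mesh);  // uv2 generated before the asset exists on disk

string szPath = "Assets/GeneratedMeshes/" + name + ".asset";
AssetDatabase.CreateAsset(mesh, szPath);
AssetDatabase.SaveAssets();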

Ehh… OK, so is there something wrong with the CreateLightmapUVs function?

EDIT:
Instead of saving to the existing mesh asset, I created a new one, but it still breaks. It’s like GenerateSecondaryUVSet only works when the mesh is brand new.
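By “created a new one” I mean roughly this (a sketch; the output path is just an example):

// Copy the existing mesh into a fresh asset and unwrap the copy.
Mesh source = Selection.activeObject as Mesh;
Mesh copy = Object.Instantiate(source);
Unwrapping.GenerateSecondaryUVSet(copy);
AssetDatabase.CreateAsset(copy, "Assets/GeneratedMeshes/TowerCopy.asset");  // example path
AssetDatabase.SaveAssets();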

This still presents a problem: the LODs are generated by a closed-source application, so for those I have no choice but to add the UVs after the mesh file has been created.

Not in my experience. But we don’t know exactly what you are doing, or in which order. I haven’t had issues with Unity’s lightmap computation so far, but I may be using it differently.

I didn’t mean Unity’s UV Generator. I meant the function I posted above.

Basically, all I’m doing is the second posted code (combine), followed by the first posted code (generate UVs), and then baking the lightmap with default parameters.

It seems super finicky. If I bake the lightmap with the LOD1 mesh, it displays incorrectly, but if I bake it with the newly minted LOD0 mesh and then swap in LOD1, it looks correct. If I then re-bake with LOD1 in place, it gets messed up again.

I think my only solution now is to recombine everything with the UVs generated right away, regenerate the LODs, and bake on the first level, hoping for the best.

Not ideal, but thanks for your help!

Yeah, basically, LODs share the lightmap, so your ideal situation is to create your own UV2s and LODs and ensure they fall on the same coordinates. Given the tool you are using, can you not create the UV2s for LOD0, export that as an OBJ, and then LOD that?
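If it helps, a minimal OBJ exporter is only a few lines. A sketch (note OBJ has a single texture-coordinate channel, so the generated uv2 gets written out as the plain vt coordinates; the class name is an example, and winding and culture-invariant float formatting are glossed over):

using System.IO;
using System.Text;
using UnityEngine;

public static class ObjExporter
{
  // Writes a mesh to OBJ, emitting uv2 as the single OBJ "vt" channel
  // so an external LOD tool carries the lightmap UVs through.
  public static void Write(Mesh mesh, string path)
  {
    StringBuilder sb = new StringBuilder();

    foreach (Vector3 v in mesh.vertices)
      sb.AppendLine("v " + v.x + " " + v.y + " " + v.z);
    foreach (Vector3 n in mesh.normals)
      sb.AppendLine("vn " + n.x + " " + n.y + " " + n.z);
    foreach (Vector2 t in mesh.uv2)  // uv2, not uv
      sb.AppendLine("vt " + t.x + " " + t.y);

    int[] tris = mesh.triangles;
    for (int i = 0; i < tris.Length; i += 3)
    {
      // OBJ indices are 1-based; position/uv/normal share one index per vertex here
      int a = tris[i] + 1, b = tris[i + 1] + 1, c = tris[i + 2] + 1;
      sb.AppendLine("f " + a + "/" + a + "/" + a + " "
                         + b + "/" + b + "/" + b + " "
                         + c + "/" + c + "/" + c);
    }

    File.WriteAllText(path, sb.ToString());
  }
}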