I’m having trouble getting my lightmaps to render properly when I choose DXT1 compression. They are generated externally (i.e. I am not using Beast) and have no alpha information, so I would expect DXT1 to work fine. However, it looks like DXT1 is effectively converting them to a 1-bit image; all I see are black or white pixels. DXT5 works fine, but I would like to minimize the size of the lightmaps because I am using a large number of them.
You can’t use DXT1 for Unity lightmaps, since DXT1 encodes only 3 color channels.
We encode HDR lighting information in lightmaps using the RGBM format, which needs 4 channels, so those lightmaps need to be compressed with DXT5 (all 4 channels contain meaningful data).
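To make the four-channel requirement concrete: RGBM packs an HDR color into four LDR channels, with RGB storing the color and A storing a shared brightness multiplier. A decode along these lines is a sketch only; the exact range multiplier (8.0 here) is an assumption and can differ by platform and Unity version:

```
// Hedged sketch of RGBM decoding as used for DXT5 lightmaps.
// The 8.0 range multiplier is an assumption; Unity's actual
// DecodeLightmap() in UnityCG.cginc may use a different scale
// depending on platform and version.
inline fixed3 DecodeRGBM(fixed4 rgbm)
{
    return (8.0 * rgbm.a) * rgbm.rgb;
}
```

Because the alpha channel carries that multiplier, dropping it (as DXT1 does) destroys the brightness information, which is why the result degenerates the way described in the first post.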
Without RGBM encoding your lightmaps would have very low range or very low precision. If that doesn’t bother you, you can write your own shaders that interpret your lightmaps however you like, but I’d recommend sticking with DXT5.
Thanks, Robert – very helpful. However, the lightmaps I’m using come from an external source, and before any extra processing they have only 8 bits of information per pixel (i.e., they are a single grayscale channel). They are very high resolution, but low precision. I might like to experiment with replacing the built-in shaders, but I’m not sure where to start digging into Unity’s shaders to short-circuit how it interprets the lightmaps. Does the Terrain Engine use one of the Legacy Lightmapped shaders to interpret lightmap data? I’m very new to shader programming and am trying to wrap my brain around all the shaders I’m looking at. I can get a mediocre result by modifying the Lightmap-FirstPass terrain shader, but I’m not sure yet how to access the lightmap data there.
BTW, just downloaded the PuzzleBloom package. Really neat effects there; thanks for making them available!
These are surface shaders which use the Lambertian lighting model. Surface shaders hide all the nifty details (like lightmap decoding) from you, so you normally don’t need to worry about them. But you can also access all that magic relatively easily.
Just add #pragma debug under #pragma surface surf Lambert in one of the shaders. Then switch to Unity and click “Open compiled shader”.
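In context, that pragma block would look something like the minimal surface shader below (the shader name and _MainTex property are just placeholders, not taken from the built-in shaders):

```
Shader "Custom/DebugSurface" {
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // #pragma debug makes Unity include the generated
        // per-pass source in the compiled shader output.
        #pragma surface surf Lambert
        #pragma debug

        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}
```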
Surface shaders basically generate a couple of passes for you; in the compiled file, the generated source for each pass appears after a comment reading “shader source for this pass:”. The interesting passes are the ones with LightMode tags. Those have the following values:
ForwardBase
ForwardAdd
PrePassBase
PrePassFinal
You can just copy the whole compiled shader, remove all the compiled code, and uncomment the CGPROGRAM blocks; you should then be set to start modifying it. As soon as it compiles, you should get the same result as from the original shader.
Both the ForwardBase and PrePassFinal passes will have calls to DecodeLightmap(). The function itself is defined in UnityCG.cginc. Just rename the call to, e.g., DecodeLightmapSimple(), define that function before all the passes between CGINCLUDE and ENDCG tags, and do your decoding there.
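Putting that together, the shared block placed before the passes might look like this. DecodeLightmapSimple is the hypothetical replacement name from the step above, and reading the red channel as grayscale is just one possible interpretation for a single-channel DXT1 map:

```
CGINCLUDE
#include "UnityCG.cginc"

// Custom stand-in for DecodeLightmap() from UnityCG.cginc.
// Treats the lightmap as plain low-dynamic-range grayscale
// data rather than RGBM-encoded HDR -- adjust to taste.
inline fixed3 DecodeLightmapSimple(fixed4 color)
{
    // Use the red channel as the grayscale intensity.
    return color.rrr;
}
ENDCG
```

Then, inside each uncommented pass, replace the calls to DecodeLightmap(...) with DecodeLightmapSimple(...).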
Et voilà!
Ah, excellent! This is exactly what I was looking for.
Meanwhile, since my last post I’ve modified the Lightmap-FirstPass surface shader to interpret my lightmaps as the emissive component of the SurfaceOutput structure. This has allowed me to classify my lightmaps as “Texture” rather than “Lightmap”, and they are then rendered correctly when I use DXT1 compression. I have issues with terrain tiles beyond the basemap distance – I’m not certain which shader is being used at that point. Still, it’s greatly improved my application’s performance (both memory consumption and terrain tile loading times are way down).
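For anyone following along, the emission-based workaround described above can be sketched roughly as follows. This is not the actual Lightmap-FirstPass source; the shader and property names are placeholders, and the tiling/UV handling is simplified:

```
Shader "Custom/GrayscaleLightmapEmission" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _CustomLightmap ("Grayscale Lightmap (DXT1)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;
        sampler2D _CustomLightmap;

        struct Input {
            float2 uv_MainTex;
            float2 uv_CustomLightmap;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            fixed3 base = tex2D(_MainTex, IN.uv_MainTex).rgb;
            // Treat the grayscale map as baked light and route it
            // through Emission, bypassing Unity's lightmap path
            // (so the texture can be imported as "Texture", not
            // "Lightmap", and compressed with DXT1).
            fixed light = tex2D(_CustomLightmap, IN.uv_CustomLightmap).r;
            o.Albedo = 0;
            o.Emission = base * light;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```

Zeroing Albedo keeps dynamic lights from double-brightening the already-lit surface, at the cost of the surface no longer responding to them at all.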
However, I suspect I might get better integration with Unity’s engine by defining a DecodeLightmap* replacement as you suggest, so I’ll explore that avenue. In the meantime, can you tell me which shader is used beyond the basemap distance? My understanding is that it’s one of the Diffuse shaders – but which one, exactly? I think others have had this question.