So I noticed that in most shaders I’ve seen, the alpha channel of the diffuse map is used for specularity, while the alpha channel of the normal map is used for any actual transparency.
That seems sort of backwards to me. I would find it much easier to do it the other way 'round. I haven’t learned much about shaders yet, but I wonder if there’s a technical reason for this practice, and if I would indeed be able to do it the way I’d prefer?
My guess is that, if one of the two had to stay uncompressed, it would more likely be the normal map, or it would get more bits if it were compressed. A ratty edge on specularity is a lot less noticeable than a ratty edge on transparency.
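For context on "using more bits", here's a rough sketch of the bit budget in a DXT5 (BC3) compressed block, the format both of these textures would typically end up in. Note how the alpha channel gets its own 64 bits per 4×4 block, with 8-bit endpoints and 8 interpolation levels, while all three colour channels share the other 64 bits with only 5/6-bit endpoints and 4 levels, which is why alpha data tends to survive compression better:

```python
def dxt5_block_bits():
    """Bit budget of one 4x4 texel block in DXT5/BC3."""
    # Colour half (same layout as DXT1): two RGB565 endpoints
    # plus a 2-bit interpolation index per texel.
    color_endpoint_bits = 2 * 16      # two 5:6:5 endpoints
    color_index_bits = 16 * 2         # 16 texels, 4 levels each
    # Alpha half: two 8-bit endpoints plus a 3-bit index per texel.
    alpha_endpoint_bits = 2 * 8
    alpha_index_bits = 16 * 3         # 16 texels, 8 levels each
    return {
        "color": color_endpoint_bits + color_index_bits,  # 64 bits
        "alpha": alpha_endpoint_bits + alpha_index_bits,  # 64 bits
    }

bits = dxt5_block_bits()
print(bits)                  # {'color': 64, 'alpha': 64}
print(bits["alpha"] / 16)    # 4.0 bits per texel for alpha alone
```

So the whole block is 128 bits (1 byte per texel), split evenly between alpha and colour.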
My only guess is that whoever wrote it knew they were more likely to use just diffuse and spec than normal maps or alpha transparency, so the common case loads the minimum number of textures.
Also, when you mark a texture as a normal map in Unity, the channel swizzling consumes the alpha channel, so you lose it. You can either suffer worse compression but keep the alpha channel, or get nicer compression but no alpha channel.
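To make the swizzle concrete, here's a sketch of the DXT5nm-style packing Unity uses: the normal's X component is moved into the alpha channel (which, per the block layout, DXT5 stores most accurately), Y stays in green, and Z is rebuilt in the shader from X and Y. The function names here are illustrative, not Unity API:

```python
import math

def swizzle_dxt5nm(x, y, z):
    """Pack a tangent-space normal (components in [-1, 1]) DXT5nm-style.
    X goes into alpha, Y into green; R and B are effectively discarded,
    which is why the texture's original alpha channel is lost."""
    to_unit = lambda v: (v + 1.0) * 0.5   # remap [-1, 1] -> [0, 1]
    return {"r": 1.0, "g": to_unit(y), "b": 1.0, "a": to_unit(x)}

def unpack_normal(texel):
    """Mimic the shader-side unpack: read X from alpha, Y from green,
    then derive Z so the normal has unit length."""
    x = texel["a"] * 2.0 - 1.0
    y = texel["g"] * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

n = unpack_normal(swizzle_dxt5nm(0.6, 0.0, 0.8))
print(n)  # approximately (0.6, 0.0, 0.8) after the round trip
```

Since X and Y each sit alone in a channel, the compressor can't smear errors between components the way it would with all three packed into RGB, which is the whole point of the swizzle.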