The Lit shader in Unity uses texture packing where the green channel is used for occlusion, the red channel for metallic, and the alpha channel for smoothness. However, it does not utilize the blue channel, resulting in 8 wasted bits (since each channel in an RGBA texture uses 8 bits). Moreover, we cannot simply remove the blue channel on its own, as that is not supported; we can set it to 0, but it still exists and still wastes 8 bits. This could be resolved by storing smoothness in the blue channel instead of the alpha channel: unlike the red, green, and blue channels, the alpha channel can actually be removed, saving 8 bits, which is significant in my case.
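(For reference, the packing itself is easy to do offline; below is a minimal Python sketch using Pillow, with assumed file names, that merges metallic, occlusion and smoothness into the R, G and B channels of a single RGB image so no alpha channel is ever authored. Reading smoothness from B instead of A would still require a custom shader.)

```python
# Minimal offline channel-packing sketch (assumed file names/layout, not Unity tooling).
# Packs three greyscale maps into R, G and B of one RGB texture so no alpha is needed.
from PIL import Image

def pack_mask(metallic_path, occlusion_path, smoothness_path, out_path):
    # Load each map as a single-channel greyscale image; all maps must share the same resolution.
    metallic = Image.open(metallic_path).convert("L")
    occlusion = Image.open(occlusion_path).convert("L")
    smoothness = Image.open(smoothness_path).convert("L")

    # R = metallic, G = occlusion, B = smoothness -- no alpha channel is created.
    packed = Image.merge("RGB", (metallic, occlusion, smoothness))
    packed.save(out_path)

if __name__ == "__main__":
    pack_mask("metallic.png", "occlusion.png", "smoothness.png", "mask_rgb.png")
```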
How many bits you are ‘wasting’, and whether you are wasting them at all, depends on which texture format you are compressing to. BC1 (DXT1), for example, only has 5:6:5 RGB bit depth, and its 1-bit alpha is premultiplied, making it essentially ‘free’. And again, that precision applies within a block, so it isn’t exactly per pixel.
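To make the ‘within a block’ point concrete, here is a tiny sketch of the BC1 block budget (my own arithmetic, not anything Unity-specific): each 4×4 block stores two 5:6:5 endpoint colours plus a 2-bit index per texel, which is where the 4 bits per pixel comes from.

```python
# BC1 stores each 4x4 block as two 5:6:5 endpoint colours plus a 2-bit index per texel.
endpoint_bits = 2 * (5 + 6 + 5)   # two RGB565 endpoints   = 32 bits
index_bits    = 4 * 4 * 2         # 16 texels x 2-bit index = 32 bits
block_bits    = endpoint_bits + index_bits
print(f"{block_bits} bits per 4x4 block = {block_bits / 16} bits per pixel")  # 64 -> 4.0
```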
In BC7 the color precision is shared more fluidly between the channels, so it can’t be expressed as simply as ‘you lose 8 bits of precision’: whatever an unused channel gives up is simply gained by the other channels. This has the unfortunate side effect that channels can ‘ghost’ into one another if they are used to describe wildly different images.
Now, that is just two texture formats; if you are shipping on mobile you might be using ASTC formats, which have their own limitations and strengths that you should read up on.
Anyway, if you want to heavily optimize your runtime texture memory, a generalized shader probably won’t do. You have to understand the optimal way to lay out the data based on the texture compression formats available on the target platforms you wish to ship on, and for that you may want to author your own shader.
Also, FYI: the Lit shader in HDRP uses the blue channel for masking the detail map.
According to the Unity documentation, it uses BC1 compression for RGB and BC7 for RGBA.
BC1 uses 4 bits per pixel and BC7 uses 8 bits per pixel. Neither compression cares whether one of the channels is 0; they both use their respective fixed bits per pixel regardless.
If you have a 1024×1024 (1K) texture with alpha, it comes out to
1024 × 1024 × 8 bits = 8,388,608 bits = 1,048,576 bytes = 1.048576 MB
but that same texture without alpha comes out to
1024 × 1024 × 4 bits = 4,194,304 bits = 524,288 bytes = 0.524288 MB
That’s a 50% difference, but with the Unity URP Lit shader it’s not possible, because the shader depends on the alpha channel.
My apologies, Unity uses BC3 for RGBA compression, but the math is still correct because BC3 also uses 8 bits per pixel.
So there is still a 50% loss.
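As a quick sanity check of those numbers, here is a small Python sketch that just multiplies pixel count by the fixed bits-per-pixel quoted above (base mip level only, mipmaps ignored):

```python
# Sanity check: texture size = pixels * fixed bits-per-pixel of the block format (no mipmaps).
BITS_PER_PIXEL = {"BC1": 4, "BC3": 8, "BC7": 8}

def texture_size_bytes(width, height, fmt):
    return width * height * BITS_PER_PIXEL[fmt] // 8

for fmt in ("BC1", "BC3", "BC7"):
    size = texture_size_bytes(1024, 1024, fmt)
    print(f"1024x1024 {fmt}: {size:,} bytes ({size / 1_000_000:.3f} MB)")
# BC1 -> 524,288 bytes (0.524 MB); BC3/BC7 -> 1,048,576 bytes (1.049 MB): the 50% difference.
```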
Yes, for PC it will default to BC1 for RGB and BC3 for RGBA, but you are free to choose whatever format you want yourself. The shader does not dictate the format; the texture importer merely has defaults that you can follow or override. It does not know which shader you intend to use the texture with, nor what will yield the optimal quality for your use case.
The reason BC7 is not the default for RGBA, even though it is superior in pretty much every way except compression time, is that it is only supported on DX11 and above.
Also, the entire BC1–BC7 line of formats is only available on consoles and PC; for Android and iOS you should look into the ETC and PVRTC formats.
It’s also worth noting that storing greyscale masks in R and G, even while wasting the B, is always preferable, as there is no single-channel greyscale texture format that uses less memory than BC1 (0.5 bytes/px).
BC4, which can only store a single greyscale map, also uses 0.5 bytes/px, albeit at higher quality.
Thus a BC7 map, which uses 1 byte/px, can store 4 greyscale maps in half the memory it would take to store them as separate BC4 textures. An added benefit of bundling mask maps is that you end up with fewer texture samples in the shader, giving you a performance benefit on top of the memory savings.
The 50% loss you speak of is therefore fictitious, as it assumes there is a hardware-supported texture compression format that would realize the gain; there is not.
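Putting rough numbers on that comparison (a sketch of the arithmetic only, 1K textures, base mip, no mipmaps): four separate BC4 greyscale maps versus the same four maps packed into one BC7 texture.

```python
# Four separate 1K BC4 greyscale maps vs. one packed 1K BC7 map (base mip only).
BC4_BITS_PER_PIXEL = 4   # single channel, 0.5 bytes/px
BC7_BITS_PER_PIXEL = 8   # four channels, 1 byte/px

def size_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

four_bc4 = 4 * size_bytes(1024, 1024, BC4_BITS_PER_PIXEL)   # 2,097,152 bytes
one_bc7  = size_bytes(1024, 1024, BC7_BITS_PER_PIXEL)       # 1,048,576 bytes
print(f"4x BC4: {four_bc4:,} bytes vs 1x packed BC7: {one_bc7:,} bytes")
print(f"packed BC7 uses {one_bc7 / four_bc4:.0%} of the memory")  # 50%
```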