I have a complex shader with multiple textures and complex maths, and thought I might be able to combine some of the data into just one channel of a texture, so that, say, the alpha channel could hold a 1-bit value, a 3-bit value and a 4-bit value. All fine in theory, but I suspect that in practice filtering and mipmaps are going to break the idea. Has anyone tried something like this and got it working, before I start writing all the code to combine the textures and change the shaders?
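Something like this is what I have in mind (a minimal sketch in C; the field layout is just an example, not anything final):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout for an 8-bit alpha channel:
   bit 7     -> 1-bit flag
   bits 4..6 -> 3-bit value (0..7)
   bits 0..3 -> 4-bit value (0..15) */

static uint8_t pack_alpha(uint8_t flag1, uint8_t val3, uint8_t val4)
{
    return (uint8_t)(((flag1 & 0x1u) << 7) |
                     ((val3  & 0x7u) << 4) |
                      (val4  & 0xFu));
}

static void unpack_alpha(uint8_t a, uint8_t *flag1, uint8_t *val3, uint8_t *val4)
{
    *flag1 = (a >> 7) & 0x1u;
    *val3  = (a >> 4) & 0x7u;
    *val4  =  a       & 0xFu;
}

int main(void)
{
    uint8_t a = pack_alpha(1, 5, 12);   /* pack three small values into one byte */
    uint8_t f, v3, v4;
    unpack_alpha(a, &f, &v3, &v4);      /* recover them, as the shader would */
    printf("packed=0x%02X flag=%u val3=%u val4=%u\n", a, f, v3, v4);
    return 0;
}
```

The shader side would do the same shifts and masks after scaling the sampled alpha back to 0..255.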
In practice that will only work with uncompressed textures (and, as you say, you also need to consider filtering). Better to go with two compressed textures instead; that will still give you better results and lower bandwidth.
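A rough illustration of what filtering does to packed bits, reusing the example layout from the question and just averaging raw byte values the way a 50/50 bilinear blend would:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two neighbouring texels with different 3-bit fields. */
    uint8_t a = (1u << 7) | (7u << 4) | 3u;   /* flag=1, val3=7, val4=3 -> 0xF3 */
    uint8_t b = (1u << 7) | (0u << 4) | 3u;   /* flag=1, val3=0, val4=3 -> 0x83 */

    /* The sampler blends the raw numeric values, not the individual fields. */
    uint8_t blended = (uint8_t)((a + b) / 2); /* 0xBB */

    printf("val3 after blend = %u (expected 7 or 0, got neither)\n",
           (blended >> 4) & 0x7u);            /* prints 3 */
    printf("val4 after blend = %u (was 3 in both texels)\n",
           blended & 0xFu);                   /* prints 11 */
    return 0;
}
```

Mipmap generation does the same kind of averaging across whole blocks of texels, so every level below the top one ends up with meaningless bit fields unless you sample with point filtering from level 0 only.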