I’m optimizing for mobile, so I’d like all of my compute shader floats to use half (16-bit) precision explicitly. As things stand, that doesn’t seem possible to enforce: if I fill a compute buffer with half floats on the CPU side, I’m not guaranteed that the buffer will be read at 16-bit precision on the GPU side; depending on the hardware, it may be read as 32-bit instead.
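Here’s a rough sketch of the kind of kernel I mean (kernel and buffer names are just placeholders); as far as I can tell, the `half` declarations here are only a hint and everything may silently run at 32-bit:

```hlsl
// Sketch of the current situation: everything is declared as half, but
// the compiler/driver may still promote the loads and the math to 32-bit.
#pragma kernel CSMain

StructuredBuffer<half> Input;      // filled with 16-bit halfs from C#
RWStructuredBuffer<half> Output;

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    half v = Input[id.x];          // may actually be a 32-bit load
    Output[id.x] = v * half(2.0);  // arithmetic may run at 32-bit
}
```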
To avoid this, the usual recommendation is to pass HLSL compiler arguments such as “-enable-16bit-types”, but I can’t find anywhere in Unity to set this. I do see a “Shader Precision Model” option under the Player settings, but that seems to do the opposite of what I want: it appears to control whether half gets promoted to full 32-bit float, not to force operations down to 16-bit.
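For reference, this is roughly what I’d like to end up writing — my understanding is that DXC only accepts true 16-bit types like `float16_t` once `-enable-16bit-types` is passed (again just a sketch, and I may be wrong about the details):

```hlsl
// What I'd like to be able to write once real 16-bit types are enabled:
// with -enable-16bit-types, float16_t is a true 16-bit type (and "half"
// stops being a 32-bit alias), so loads, math, and stores stay 16-bit.
#pragma kernel CSMain

StructuredBuffer<float16_t> Input;     // 2-byte elements, matching the CPU data
RWStructuredBuffer<float16_t> Output;

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float16_t v = Input[id.x];
    Output[id.x] = v * float16_t(2.0);
}
```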
What’s the best way to ensure that all floating-point operations in my compute shaders always run at 16-bit precision?