When I add 0.1 to a float, it doesn’t always produce 1.1; sometimes I get 1.0999999999 or similar.
Why is this? Is it because bytes are powers of two?
This is just how floats work: many numbers cannot be represented exactly in binary floating point. In most cases you should not need to worry. There is the Decimal type, which you can use if it is really a problem, but it has its own issues to contend with. For more precision there is also the double type, but even then not every decimal number fits exactly, and as far as I know Unity ends up working with floats internally anyway.
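For example, here is a rough sketch (plain Unity C#; the class name and the digits in the comments are my own, and they may come out slightly different on your machine) of how the three types handle adding 0.1 ten times:

using UnityEngine;

public static class PrecisionComparison
{
    public static void Run()
    {
        float f = 0f;
        double d = 0d;
        decimal m = 0m;
        for (int i = 0; i < 10; i++) {
            f += 0.1f;  // 0.1 is not exact in binary; the error accumulates fastest here
            d += 0.1;   // double has the same problem, just with more digits of headroom
            m += 0.1m;  // decimal stores base-10 digits, so 0.1 is exact
        }
        Debug.Log(f.ToString("G9"));   // something like 1.00000012
        Debug.Log(d.ToString("G17"));  // something like 0.99999999999999989
        Debug.Log(m);                  // exactly 1.0
    }
}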
Basic theory from computer science:
http://en.wikipedia.org/wiki/Computer_number_format
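If you want to see this on your own machine, here is a small sketch (the class name is mine; it assumes an IEEE 754 platform, which is what Unity targets) that dumps the bits actually stored for 0.1f and the value those bits really encode:

using System;
using UnityEngine;

public static class ShowFloatBits
{
    public static void Run()
    {
        float f = 0.1f;
        // Reinterpret the four bytes of the float as an int to get the raw bit pattern.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        Debug.Log(Convert.ToString(bits, 2).PadLeft(32, '0')); // 00111101110011001100110011001101
        // Widen to double to reveal the value the stored bits actually represent.
        Debug.Log(((double)f).ToString("G17"));                // 0.10000000149011612
    }
}

The repeating 1100 pattern is 0.1 written as an infinite binary fraction, cut off after 23 mantissa bits; that cut-off is the error you are seeing.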
Numerically unstable algorithms are amazing. Try running this:
[MenuItem("Game/Test")]
public static void Test()
{
var x = 0.1f;
for (var i = 0 ; i < 9 ; i ++) {
x = x * 100f - 9.9f;
Debug.Log(i+":"+x);
}
Debug.Log(x);
}
If you do the math by hand, x never changes. If you run it on a computer, the final x comes out as the amazing value 5.343345E+09.
In the general case this is not a problem, but you need to be able to recognize it when it does happen.
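The usual practical takeaway: never compare floats with == when the values came out of arithmetic; compare against a tolerance instead. A quick sketch (the class name is mine; Mathf.Approximately is Unity's built-in helper for this):

using UnityEngine;

public static class FloatCompare
{
    public static void Run()
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++) sum += 0.1f;

        Debug.Log(sum == 1f);                    // false - the sum is roughly 1.0000001
        Debug.Log(Mathf.Approximately(sum, 1f)); // true  - compares within a tiny relative tolerance
        Debug.Log(Mathf.Abs(sum - 1f) < 1e-5f);  // true  - or choose an explicit tolerance yourself
    }
}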