Very strange: I am dividing an int by an int and getting an int. When I divide the int by a float, however, I get a float, as I would expect…
var nTicks = 100;                  // no type annotation: inferred as int
var motorFactor : float = 1.0;

motorFactor = 0.0;
for (var i = 0; i < nTicks; i++)
{
    motorFactor = i / nTicks;      // int / int: integer division, so this is always 0
    yield WaitForFixedUpdate();
}
motorFactor = 1.0;
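The same behaviour in isolation (a minimal repro; the Start function and the literals are just for illustration):

function Start()
{
    Debug.Log(1 / 100);     // prints 0: both operands are int, so integer division
    Debug.Log(1 / 100.0);   // prints 0.01: one operand is a float, so float division
}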
This is true in almost all programming languages: in an expression involving operands of several types, the operands are promoted to the type with the highest precision/range among them.
So:
float OP int = float result
int OP int = int result (since the highest precision/range among the operands is still int)
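In practice, the usual fix is to make at least one operand a float before dividing. A quick sketch against the code above (untested, two equivalent options):

// Option 1: declare the divisor as a float in the first place
var nTicks : float = 100;

// Option 2: promote one operand inside the loop
motorFactor = 1.0 * i / nTicks;   // 1.0 * i is a float, so the division is done in float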
That is interesting; it took me by surprise, as I was relying on JS duck typing to do all the work, in this case determining that the operation should be done in the precision of the variable being assigned to.
I suppose most languages do that for optimization?
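Partly, but it also keeps the type system simple: the right-hand side of an assignment is typed on its own, bottom-up, without looking at the destination variable. A quick illustration (hypothetical snippet, not from the thread):

var f : float = 3 / 4;    // 3 / 4 is evaluated as int division first (0), then converted: f == 0.0
var g : float = 3 / 4.0;  // one float operand, so float division: g == 0.75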