Floating point inconsistencies

I know floating point math is not an exact science, but why do I get different results from a float variable than a float const?

Debug.Log((360.1 - 360.0));
// Result: 0.1

var q : float = 360.1;
Debug.Log((q - 360.0));
// Result: 0.1000061

360.1 - 360 is double, not float, because you threw in bare numeric literals without declaring a type (it would work the same way in C#: 360.1 without the f suffix is a double there too, and in UnityScript you can use the f suffix as well if you want to be sure you have a float). The second calculation involves a float variable, which degrades the whole calculation to float; the smallest common numeric format between the operands determines the type of the operation in the end.

No, in UnityScript float is the default, not double.

Debug.Log((360.1f - 360.0f));
// still prints 0.1

Anyway, good question. I have no idea why it works that way. If you do this:

Debug.Log((360.1d - 360.0d));

You get 0.100000000000023. So it’s not somehow interpreting (360.1 - 360.0) as doubles either.

–Eric

If Unity is using a floating-point library optimized for speed rather than precision, it's normal to see that kind of variation. The reason it's accurate when you use constants is that constant expressions aren't evaluated at run time; they're handled by the compiler. So if you write 360.1 - 360.0, the compiler just evaluates it to 0.1.

As for the doubles, it is odd that they don't act the same way.

At any rate, though, this shouldn't be much of an issue. We're talking about 6.1 millionths with the float, and much, much less with the double. With floating point, if you're checking for equality directly, it should only be against zero or other variable values (not constants). For reference, float.Epsilon is the smallest positive value a float can represent, and it's far smaller than the 6.1 millionths we're talking about, so == only succeeds when two floats are exactly identical.

With that said, though, the compiler is somewhat smart. For example, take the following lines:

        float q = 360.1f;
        float d = q - 360f;
        Debug.Log(360.1f - 360f);                         // 0.1
        Debug.Log(q - 360f);                              // 0.1000061
        Debug.Log(d);                                     // 0.1000061
        Debug.Log(d == (360.1f - 360.0f));                // true
        Debug.Log((q - 360f) == (360.1f - 360.0f));       // true
        Debug.Log((360.1f - 360.0f) == (360.1f - 360.0f));// true
        Debug.Log(d == 0.1f);                             // false
        Debug.Log((q - 360f) == 0.1f);                    // false

So if you compare any of those against the constant expression 360.1f - 360.0f, you get true. But if you write 0.1 explicitly, that's when it fails.

With all that said, it's best not to use the == operator unless you're checking for zero, and even then you may want to use Mathf.Approximately.

Thanks everyone for the explanations!

BBDev, that makes sense; good insight. The double exception is very strange indeed. I've just never seen this kind of imprecision when calculating such simple numbers in the other languages I've used.

With C++ you get the same trouble if you set your Intel or GCC compiler flags to use fast math :wink:
(No experience with other compilers)