I ran into a weird issue where the result of a computation differs when I change it from one line of code to two lines of code, even though the execution order is practically the same. I thought both would end up with the same value, but they don't. This doesn't happen if I replace the Mathf.Min(3f, 3f) with just 3f.
Tried it with doubles and now I see the rounding error.
My guess is that the compiler works with doubles internally. Since I'm explicitly storing the float, it actually performs a float conversion in the first result, but optimizes that conversion away in the second result, causing the rounding error.
I recommend either reposting this in Scripting or asking the moderators to move the thread. This is a question that the people in that section would be better qualified to answer.
Ahh nah, it's fine. I think I was just using the forums as a rubber duck to figure out what was causing this. But now that I know why it occurs, the post can be closed altogether. xD
By the way, this has absolutely nothing to do with the min method!
Just using 3f directly has the same result:
Indeed, the culprit is rounding, in a way. The actual computation components in the CPU (ALUs) tend to work with higher precision than float (because they are designed to support at least doubles) and only throw away the extra precision when actually storing to a float. Usually this gives you some extra precision at no computational cost, but in edge cases like this one it can suddenly make a huge difference.
Note that this is not hardware independent! It's a common reason why the same program can behave differently on other CPU architectures.
Even worse, there are other kinds of inaccuracy as well, which are the reason why most game physics engines (including Unity's) are not deterministic (meaning, for example, that throwing around 100 physics balls and letting them bounce against each other will result in different chaos after a minute every time, even if you start them all with the same positions and velocities): https://stackoverflow.com/questions/328622/how-deterministic-is-floating-point-inaccuracy
Haha yeah, I can't believe a different .NET compiler changes the result here.
I actually ran into this problem because I'm used to always using an int cast when I want to round down to an int.
I never used FloorToInt because I thought it was the exact same thing, but FloorToInt actually returns the correct value as well. Guess I'll start using FloorToInt from now on.
What do you assume the correct value here to be? 9 or 10?
Mathematically it would be 10, but honestly it is very risky to rely on a statement that only holds on a computer if you assume "absolute" precision.
The correct approach here unfortunately is not very pretty and is actually a real challenge!
Here's a small article I found where they more or less wave away the issue by just subtracting a fixed epsilon: https://mortoray.com/2016/01/05/the-trouble-with-floor-and-ceil/
However a constant epsilon can fail quickly: https://stackoverflow.com/questions/9916808/unexpected-behavior-of-math-floordouble-and-math-ceilingdouble
It's a challenge that has to be tackled in industries like automotive or aviation, where such an edge case could cost lives. There, for example, I'm not allowed to use even the C++ standard library, because it is prone to this kind of error. Instead we have special libraries that force us to define the expected value range of our doubles, so that a fitting value for epsilon can be selected automatically during computation. However, I digress… xP
In games you are likely better off avoiding such edge cases.
The compiler likely noticed that it's a constant and precalculated the value of the expression for you, meaning it doesn't actually divide 3 by 0.3 there.
If you want reproducibility with floating-point values, you'd better be prepared to work for it. There's a reason I'm evaluating streflop for my engine's determinism needs.
I doubt that's the issue, since that would mean the result of the precalculation differs from the runtime result given the exact same inputs. I do think it's compiler-optimization related, just not precalculation based.
The reason I think any Mathf rounding function also fixes it is that then, just like in the first result in my screenshot, the compiler cannot directly convert the internal double to an int; instead it has to convert the double to a float first and then to an int. Seeing that the latest .NET version doesn't have this problem at all means it's just an issue I'll have to live with in Unity. And using the Mathf rounding functions is a good safety net to prevent this issue from appearing again.