How to replicate: create a blank Unity project (I used Unity 2021.3.16). Create a new script with this code, attach it to the camera, and click the Play button.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    void Start()
    {
        float A = Mathf.Round(0.001f * 1000.0f) / 1000.0f;
        float B = Mathf.Round(0.001f * 1000.0f) / 1000.0f;
        Debug.Log(A == B);
        Debug.Log(A);
        Debug.Log(B);
        //Debug.Log(A + " is equal " + B);
    }
}
Console output is: False 0.001 0.001
Unity says the two floats are not equal due to small imprecision in float calculations, which is a documented Unity feature. So far so good. Now uncomment the commented Debug line and run again. That’s where some magic happens.
Console output is: True 0.001 0.001 0.001 is equal 0.001
How is that even possible? Not only is that last Debug.Log command (which is not supposed to modify anything) able to change the result of comparing two floats, it does so after the command comparing them has already finished executing and sent its output to the console (?!)
It’s almost like after I remind Unity that A and B should be equal, Unity says, okay, now I admit it, happy now?
Does anybody have any insight into what is going on here? Because I have no clue.
I just confirmed your observations on an arbitrary object I had lying around.
I won’t dig into the actual reason, but my guess is peephole optimization in the code-generation step. By actually USING the variables A and B in an expression that has side effects inside this scope (string concatenation), the compiler can’t just optimize away these otherwise inferrable values.
Compilers do this kind of thing all the time, producing less literal code so it will run faster. For example, your expression 0.001f * 1000.0f probably produces NO final code that multiplies constants; the constants are already pre-multiplied at compile time. Mathf.Round() is also not actually called as a function; it is “aggressively inlined.” That’s fancy talk for “the compiler looks at the contents of the function and expands it right where you call it, every time you call it, producing bulkier code that runs faster because nothing goes on the stack.”
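To make that concrete, here is a rough sketch of what such folding and inlining could reduce the expression to. This is purely illustrative, not actual compiler or JIT output:

using UnityEngine;

public class FoldingSketch : MonoBehaviour
{
    void Start()
    {
        // As written in the script above:
        float original = Mathf.Round(0.001f * 1000.0f) / 1000.0f;

        // Conceptually, 0.001f * 1000.0f can be pre-multiplied to 1.0f at
        // compile time, and Mathf.Round(1.0f) expanded away, leaving just
        // a constant divide:
        float folded = 1.0f / 1000.0f;

        Debug.Log(original + " vs " + folded);
    }
}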
In general, though, we’ll beat on this drum every time it comes up: do not compare floats.
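For reference, here is a minimal sketch of a tolerance-based comparison in Unity instead of ==. Mathf.Approximately is the built-in helper; the 1e-6f tolerance below is just illustrative:

using UnityEngine;

public class CompareSketch : MonoBehaviour
{
    void Start()
    {
        float a = Mathf.Round(0.001f * 1000.0f) / 1000.0f;
        float b = Mathf.Round(0.001f * 1000.0f) / 1000.0f;

        // Compare within a tolerance instead of relying on exact bit equality.
        Debug.Log(Mathf.Approximately(a, b));   // built-in, uses a tiny epsilon internally
        Debug.Log(Mathf.Abs(a - b) < 1e-6f);    // hand-rolled absolute tolerance
    }
}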
If the compiler shortened “A = Mathf.Round(0.001f * 1000.0f) / 1000.0f” to just “A = 0.001f”, or something like that, while generating the code to execute at runtime (as per your explanation, if I understood it correctly), wouldn’t it also optimize B to exactly the same value, instead of a slightly different one?
Also, I posted a simplified example, but in the actual code (which led me to this discovery), instead of 0.001f there were variables set by Physics.Raycast (the distance to a surface from a moving object), which become known only at runtime (and change constantly). Surely the compiler can’t know and optimize those values in advance? Yet the script’s behavior was exactly the same: two rounded raycast distances to the same surface are never equal, until their values are logged to the console. Then they magically become equal.
There’s no magic in computers and programming. Though there are a lot of edge cases, most of which you never have to worry about if you follow best practices.
Such as not comparing floating point numbers for equality.
Floating-point inaccuracies can happen for several reasons. One of them is leftover values in CPU registers, because CPUs sometimes use those to provide a little more precision than technically promised. The consequence is that which result you get is highly situational.
By the way, it’s a “feature” shared with the C# language as a whole, and also with compilers for C++ and most other languages, because they all delegate the actual float calculations to the CPU and its internal optimizations.
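If that is what’s happening here, the usual C#-level workaround is an explicit cast, which forces a value back down to its declared precision. A minimal sketch, with Random.value just standing in for a value that is only known at runtime (like the raycast distances mentioned above):

using UnityEngine;

public class NarrowingSketch : MonoBehaviour
{
    void Start()
    {
        // Placeholder runtime values (e.g. raycast distances in the original case).
        float d1 = Random.value;
        float d2 = d1;

        // The explicit (float) casts ask the compiler/JIT to truncate the results
        // to true 32-bit precision, discarding any extra register precision.
        float a = (float)(Mathf.Round(d1 * 1000.0f) / 1000.0f);
        float b = (float)(Mathf.Round(d2 * 1000.0f) / 1000.0f);

        Debug.Log(a == b);   // should now compare equal for identical inputs
    }
}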
There are some compilers where you can provide a flag that trades speed against determinism. Somewhere I read that the Unity Burst compiler should get such a flag at some point, because they want to provide a deterministic physics system.
When you Debug.Log a float, it has to convert the float to a string. Aside from floating-point inaccuracies, the conversion from float to string is also inexact, so what your console shows is not precise.
This isn’t specific to Unity. This is how float comparisons work in all C# applications and, to my knowledge, all programming languages. Never compare two floats for equality. Use an epsilon value to create your own “ApproximatelyEqual()” extension method, or multiply by a large number, convert to int, and compare that way. You should also not compare decimals for equality.
Here are some extension methods I created for this at one point:
using System;

public static class FloatExtensions {
    public static bool ApproximatelyEqual(this float current, float compareTo) {
        return ApproximatelyEqual((double)current, (double)compareTo); //Ignore IDE. Cast is not redundant. Without cast, either ambiguous call error or infinite loop.
    }

    public static bool ApproximatelyEqual(this double current, double compareTo) {
        double epsilon = Math.Max(Math.Abs(current), Math.Abs(compareTo)) * 1E-15;
        return Math.Abs(current - compareTo) <= epsilon;
    }
}
Note: unless you are storing and tracking incredibly tiny fractions in your variables (down to around the 15th decimal place), the above logic will always work as expected when comparing your floats.
The reason the string output looks identical is that the conversion rounds to the intended significant digits, while floats and decimals carry these tiny variations due to their nature. If you were to print the values explicitly to the 15th or so decimal place, it would show everything, including the actual difference in value.
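For example, here is a quick sketch of printing more digits than the default; the “G9” round-trip format is standard .NET, and the exact digit count is just illustrative:

using UnityEngine;

public class PrecisePrintSketch : MonoBehaviour
{
    void Start()
    {
        float a = Mathf.Round(0.001f * 1000.0f) / 1000.0f;
        float b = Mathf.Round(0.001f * 1000.0f) / 1000.0f;

        // The default ToString rounds to a handful of significant digits and hides
        // tiny differences; "G9" requests enough digits to round-trip a 32-bit float.
        Debug.Log(a.ToString("G9") + " vs " + b.ToString("G9"));
        Debug.Log((a - b).ToString("G9"));   // the actual difference, if any
    }
}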
I think people have lost sight of the original observation. Line 11’s answer changes based on the presence of Line 14. This is definitely a low-level code generation quirk, not just the vagaries of IEEE754 float resolution.
The interpreter/bytecode compiler is allowed to reorder statements for performance reasons as long as doing so does not change the guaranteed output. Therefore an additional statement, even one that comes afterwards, can change a thing or two.
I agree with what you’re saying in the first half; I’m well familiar with compiler tech. But I don’t think your second statement follows from the first. Everything in the original test case above, taken individually, is computed by deterministic means, so even if it’s using IEEE754 it should come out with the same answer. What has changed is an assumption it makes about which flavor of equality should be used.
Hell, I almost want to bring AnimalMan[UK] back so he could wave some crystals around the room about it.
It was a figure of speech used for emphasis, apparently.
As halley has noted above already, some people have misunderstood the point of the thread. I’m not asking for help to “fix” my code, or for an explanation of why direct comparison of floats is a bad idea and what should be used instead.
I wanted to understand how a simple debug command was able to change the value of floats, when it wasn’t supposed to affect them in any way (at least not according to any documented Unity feature). Comparing floats here is used just as an observation method to confirm that their values change. We could drop the “comparing floats” part completely (replace it with some other observation method), but we’d still be left with the fact that the floats change.
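For instance, one observation method that avoids == entirely is dumping the raw bits of each float (BitConverter is plain .NET, nothing Unity-specific). The caveat is that storing and reading the values like this is itself a “use” of them, so it may perturb the very effect being observed:

using System;
using UnityEngine;

public class BitDumpSketch : MonoBehaviour
{
    void Start()
    {
        float a = Mathf.Round(0.001f * 1000.0f) / 1000.0f;
        float b = Mathf.Round(0.001f * 1000.0f) / 1000.0f;

        // Reinterpret each float's 32 bits as an int and log them in hex.
        // Identical bit patterns mean the stored values really are identical,
        // regardless of what == or the default ToString happen to report.
        int aBits = BitConverter.ToInt32(BitConverter.GetBytes(a), 0);
        int bBits = BitConverter.ToInt32(BitConverter.GetBytes(b), 0);
        Debug.Log(aBits.ToString("X8") + " vs " + bBits.ToString("X8"));
    }
}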
Yes, IEEE754 for both 32-bit and 64-bit operates on the same philosophy: any result (outside clearly defined NaN or underflow situations) will be “an answer that is identical to a calculation with infinite precision, rounded to the nearest least-significant digit”. So if you’re doing all the math in 64 bits, that’s deterministic. And if you’re truncating some of the intermediate results to 32 bits, that’s also deterministic.
Now, as I have said multiple times in this thread: yes, it’s a code generation issue, and I agree it’s in the C# layer. The compiler can decide on different sequences of statements. But that’s not going to come out to different values in the floats; it’s going to decide whether it knows enough about equality to resolve that step (e.g., which flavor of .Equals to rely on), and it’s getting that wrong.