// I have two public floats on my component, Min and Max, where Min = 0 and Max = 10, and _range = Max - Min ... so it's set to 10.
// Value is a public float with a getter/setter reading/writing to a private float ... it's set to 0.2f.
float rangeValue = _range * Value; // _range = 10, Value = 0.2f, so rangeValue = 2
float stepValue = rangeValue / _step; // _step = 1, so 2 / 1 = 2
// Debug.Log(stepValue) prints out 2
// Debug.Log(stepValue == 2) prints false
// Debug.Log(Mathf.FloorToInt(stepValue)) prints 1
// Debug.Log(Mathf.Floor(stepValue)) prints 1
var svs = stepValue.ToString(); // just converting the float to a string
var realStepValue = float.Parse(svs); // and parsing the string back into a float
_currentStep = Mathf.FloorToInt(realStepValue); // Now Mathf.FloorToInt() will return 2, which was the expected value
I don’t really know what else to say, except that’s a really scary Unity bug. All I’m doing is multiplying 10 * 0.2f, which is obviously 2 … and Debug.Log() prints the right number … but everything else treats it like it’s not a real whole number, but a float a tiny bit below 2. I logged every single number I possibly could, and they all show up properly. Using ToString() and then putting that back into float.Parse() gives the proper value … so clearly there’s a discrepancy between the string value of the float and however it’s internally being stored.
This only bugs out if Value = 0.2f. If it’s 0, 0.1f, or 0.3f → 1.0f … it works fine. It’s only 0.2f.
This is part of a larger project obviously, and I’m just giving the snippet that matters. I’ve been using Unity for years … I’ve never seen this. This should be very simple, and should never do this.
That is because it is not an integer. It is not a Unity bug, but the nature of floats. They are good at storing approximations, but not exact values. Read up on how floats work for more details.
If you need the resulting math to be exactly some integer, you can’t use any floating point type for that. That’s just not what they are for.
Yeah, I understand that it’s a float, and thus not exact. But … really? Mathf.FloorToInt(10 * 0.2f) failing is expected? I’m not doing crazy things here.
So … should I never expect Mathf.FloorToInt to be reliable? What am I missing?
The thing about this being a fundamental aspect of floating point math is that the CPU doesn’t care how crazy or non-crazy your math is. Just because it’s simple and straightforward to a human doesn’t mean that the CPU is going to be like “Oh, well in that case, I’ll just start thinking like a human now.” This is just how floating point works, and you have to account for it in your code.
I know it’s not an integer. My bigger issue is that this is a very simple use case of Mathf.FloorToInt(). Am I supposed to expect that Mathf.FloorToInt() won’t actually work properly?
“In binary (or base 2), the only prime factor is 2. So you can only express fractions cleanly which only contain 2 as a prime factor. In binary, 1/2, 1/4, 1/8 would all be expressed cleanly as decimals. While, 1/5 or 1/10 would be repeating decimals.”
Okay … fair enough. The reason I can’t tolerate 2 being below 2 is that Mathf.FloorToInt will return 1 if it’s below 2. It’s not very crazy.
I accept that it’s my own mistake, though, for not realizing that could happen and making sure it doesn’t. But it’s definitely not a crazy use case … I just clearly didn’t realize that could happen!
Well, if it’s just under two, FloorToInt is going to spit out a one, as it should. You can use CeilToInt to raise it to the next integer, or Round to get the closest (although I don’t think that one is in System).
Technically your post title says “but rounds to 1”, but in fact it “floors to 1,” as per your code.
You may wish to consider using the Mathf.Round() function in place of the Mathf.Floor() function, but I obviously don’t fully understand your use case.
This isn’t me making a mistake in how to use basic Unity functions (although thank you) … I just didn’t realize Unity was going to log a different value than the float really is. As I said in the original post: “so clearly there’s a discrepancy between the string value of the float and however it’s internally being stored.” Which everyone is basically saying is as expected.
I figured it was a floating-point issue. What I didn’t expect was that Unity would print out one value and then treat it as a different value. But … if that’s normal and to be expected, my mistake. And yes, I can understand why it would do that … it just caught me unprepared :).
This ToString behavior is ironically a lot like this other discussion:
In the future… if you see numbers acting weird, first consult the magical oracle of floating point and assume that’s the problem.
Yep, it’s weird… but as you have said, now you know.
Because at the end of the day … if there were something truly buggy about floats/doubles, the internet by and large would be up in arms. Consider the fact that Unity didn’t create C#/Mono/.NET. Microsoft created C# and .NET, Mono was created by Ximian/Xamarin/the open-source community, and they have a massive user base around the world. They’d know if float acted outside of the IEEE 754 standard:
Especially considering that most floating-point operations are really just passed through to the CPU, and if the CPU doesn’t support them (what is this, 1995?) the OS has an implementation for them, so as to avoid the environment-to-environment differences/bugs you thought you may have found.
What really confused me was that ToString() said one value, and the float was a different value. But … as surprising as this is to me, I just never encountered this in the past 15 years of working as a programmer (clearly not programming anything that usually runs into this type of problem ;)), including the past five years of working in Unity.
So … I felt duty bound to mention it and draw attention in case there was something wrong. However … that didn’t work out very well for me, except I learned a bit more about what it means to be a floating point “problem” :).
Omg. I’ll necro this thread as it has no answer, just a bunch of people showing their superiority to a person asking, making that person apologize for doing so and for their lack of knowledge. This is why asking such (or any) questions is a mistake on Unity Forums, as it has no “best answer” upvoting system. Try elsewhere in the future.
I just use this non-elegant, but simple solution:
int result_int = Mathf.FloorToInt(source_float + 0.00001f);
works fine for me in 99% of cases. And it is not “normal”: most programming languages I’ve used turn floats and doubles into ints the way most people would expect, even if that is “wrong” from a mathematical standpoint.
It has an answer. It has multiple people giving different links explaining the same basic answer.
Floats, the IEEE-754 standard for singles/doubles, cannot accurately represent many decimal/base-10 values, because the binary/base-2 version of the number is a repeating value. Since a single/double has only finite space to store that repeating value, the stored value ends up ever so slightly off from the expected value.
In this case 0.2 can’t be represented exactly as a float. Depending on how the value was produced, the stored number can land just below 0.2 (e.g. 0.19999999), and multiplying it by 10 then gives you 1.9999999 instead of 2.
The ToString returning “2” is down to a completely different thing: ToString doesn’t actually display the true value in memory, but rather a close approximation, depending on the format (be it default or custom) that it stringifies with. Because, mind you, a string and a float are two different things.
Also…
What? No they don’t.
Most languages keep the data type what it is.
Some languages may not… but most do.
I don’t know how many languages you’ve written in in your life, but I’ve written dozens, and most of the ones I’ve written behave like this. Because:
the IEEE-754 standard is language agnostic
a language altering the data type of my data willy-nilly makes for a language I don’t want to write in.