# float to int cast unexpected/inconsistent behaviour

hi everyone!

for a project I had to remap a float from 0-1 to 0-100.
I simply multiplied it by 100 and cast it to an int to remove everything behind the decimal point.
However, I have been getting some unexpected results from this:

```csharp
float number = 0.7f;
Debug.Log(number * 100);                   // 70
float test = number * 100;
Debug.Log((int)test);                      // 70
Debug.Log((int)(number * 100.0f));         // 69
Debug.Log((int)(float)(number * 100.0f));  // 70
```

I understand that floats can have this tendency to turn the number into 0.69999…
I don't understand why it gives different results in the samples given above.

I tried the same thing in a C# .NET console application, but there everything works as expected:

```csharp
float number = 0.7f;
Console.WriteLine(number * 100);                   // 70
float test = number * 100;
Console.WriteLine((int)test);                      // 70
Console.WriteLine((int)(number * 100.0f));         // 70
Console.WriteLine((int)(float)(number * 100.0f));  // 70
```

*For the project I'm using Mathf.RoundToInt now, just in case.
I'm just curious why `Debug.Log((int)(number * 100.0f));` has a different result from the rest.

Trying to store .7 as a binary number is kind of like trying to store 2/3 as a decimal number.

2/3 in decimal looks like 0.666666666666 infinitely repeating.

Similarly, 7/10 in binary looks like 0.1011 0011 0011 0011 0011 0011… and so on, infinitely repeating. Obviously it just gets truncated at some point, and that's how you end up with 0.69999-something.

If you pick a number that is representable in binary, say 7/8, or 0.875, it will work perfectly.
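You can see this concretely by printing the exact decimal value a float literal is actually stored as. A small sketch in Java (whose `float` is the same IEEE-754 single-precision type as C#'s; `new BigDecimal(double)` exposes the exact binary value):

```java
import java.math.BigDecimal;

public class StoredFloats {
    public static void main(String[] args) {
        // BigDecimal(double) prints the exact value of the binary representation.
        // Converting a float to a double is exact, so this shows what 0.7f really holds.
        System.out.println(new BigDecimal((double) 0.7f));
        // 0.699999988079071044921875 -> 0.7 is not representable in binary

        System.out.println(new BigDecimal((double) 0.875f));
        // 0.875 -> 7/8 = 2^-1 + 2^-2 + 2^-3 is exactly representable
    }
}
```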

The difference you are seeing is likely just some kind of rounding difference between Console.WriteLine and Debug.Log.

Interestingly enough, if you take this line

```csharp
Debug.Log((int)(number * 100.0f));
```

and remove the cast to (int) and Debug.Log it, it prints out 70. I just ran it to see if I was getting the same results.

I understand that there are rounding errors in floats, and using 0.875 doesn't give this behaviour,
but I would still expect these to return the same number:

```csharp
Debug.Log(number * 100);                   // 70
Debug.Log((int)(number * 100.0f));         // 69
Debug.Log((int)(float)(number * 100.0f));  // 70
```

The fact that casting to float before casting to int works makes me think that the float multiplication returns a double (or gets turned into one), causing a rounding error: the result is 69.999… rather than exactly 70.
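That hypothesis is easy to reproduce directly. A sketch in Java (which has strict IEEE-754 single-precision semantics for `float` arithmetic; the `wide`/`narrow` names are just labels for the two paths): truncating the unrounded, wider product gives 69, while rounding it back to single precision first lands on exactly 70.0f, which then truncates to 70.

```java
public class TruncateDemo {
    public static void main(String[] args) {
        float number = 0.7f; // actually stored as 0.699999988079071...

        // The product kept at wider precision, never rounded back to a float:
        double wide = (double) number * 100.0;   // 69.99999880790710...

        // The product rounded back to single precision (what an explicit float
        // cast / conv.r4 forces); the nearest float to 69.9999988... is exactly 70.0f:
        float narrow = number * 100.0f;

        System.out.println((int) wide);    // 69 - truncating the wide result
        System.out.println((int) narrow);  // 70 - truncating the rounded float
    }
}
```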

I did an extra test by casting to double instead, and it shows the same rounding error:

```csharp
Debug.Log((int)(double)(number * 100.0f));  // 69
```

Note that Visual Studio also flags the double cast as redundant.

Probably due to Debug.Log using InvariantCulture by default: UnityCsReference/Runtime/Export/Logging/Logger.cs at 61f92bd79ae862c4465d35270f9d1d57befd1761 · Unity-Technologies/UnityCsReference · GitHub

Iâd guess Console.WriteLine uses default culture of your device. Those may have different number formatting rules/behaviors.

I really donât think this is an actual difference, just a formatting/printing one.


You should get familiar with decimal rounding methods like ceiling and floor.

Also, if all you want to do is omit the values after the decimal point, you can do it more simply by formatting the string, like `floatVariableName.ToString("N0")` (note, though, that "N0" rounds to the nearest integer rather than truncating). There are several formatting parameters; that's something to get familiar with as well.
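For reference, the common rounding modes disagree, especially for negative numbers: a cast truncates toward zero, floor always rounds down, and round/format strings round to nearest. A sketch in Java for illustration (the C# counterparts would be `(int)`, `Math.Floor`, `Math.Ceiling`, `Math.Round` / `Mathf.RoundToInt`, and `ToString("N0")`):

```java
public class RoundingModes {
    public static void main(String[] args) {
        double pos = 2.7, neg = -2.7;

        System.out.println((int) pos);        // 2    - a cast truncates toward zero
        System.out.println((int) neg);        // -2   - so negatives truncate "up"
        System.out.println(Math.floor(neg));  // -3.0 - floor rounds toward -infinity
        System.out.println(Math.ceil(pos));   // 3.0  - ceil rounds toward +infinity
        System.out.println(Math.round(pos));  // 3    - round to nearest
        System.out.println(String.format("%.0f", pos)); // "3" - format strings round too
    }
}
```

Picking the wrong one of these is how an off-by-one like 69-vs-70 sneaks in when a value sits just below an integer.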

To note, Debug.Log is not to blame for the 69 value.

Adding a breakpoint to check the values, we can see that multiplying the float by 100.0f produces 70. Then casting that value to an int still produces 70. However, when combining the multiplication and the cast in one line, we get 69.

Actually, another interesting set of numbers

The multiplication still gives 70, casting to an int still gives 70. But then num is 69.

Also, a C# console program produces these numbers

Notice in this case that the multiplying and then casting to int produces 69, but num is 70.

Note: in this post I use the word "float" to refer to both single- and double-precision floats. I don't use "float" to mean the shorthand C# type name `float`, but the IEEE floating-point standard.

So I just ran this as well and got the OP's results. In Visual Studio outside of Unity (VS2019 to be exact, targeting .NET 4.7.2) I don't get them.

So I rewrote the code like so:

```csharp
float number = 0.7f;
float test = number * 100;

float fa = test;
float fb = (number * 100.0f);
float fc = (float)(number * 100.0f);

int a = (int)test;
int b = (int)(number * 100.0f);
int c = (int)(float)(number * 100.0f);

Debug.Log(fa.ToString("0.00000000")); // 70.00000000
Debug.Log(fb.ToString("0.00000000")); // 70.00000000
Debug.Log(fc.ToString("0.00000000")); // 70.00000000

Debug.Log(a); // 70
Debug.Log(b); // 69
Debug.Log(c); // 70
```

And like this in visual studio:

```csharp
float number = 0.7f;
float test = number * 100;

int a = (int)test;
int b = (int)(number * 100.0f);
int c = (int)(float)(number * 100.0f);

Console.WriteLine(a); // 70
Console.WriteLine(b); // 70
Console.WriteLine(c); // 70
```

And so I checked the IL, and we can see that both have identical IL:

(unity - only the setting of a,b,c lines)

```
// [22 9 - 22 27]
IL_0023: ldloc.1      // test
IL_0024: conv.i4
IL_0025: stloc.s      a

// [23 9 - 23 40]
IL_0027: ldloc.0      // number
IL_0028: ldc.r4       100
IL_002d: mul
IL_002e: conv.i4
IL_002f: stloc.s      b

// [24 9 - 24 47]
IL_0031: ldloc.0      // number
IL_0032: ldc.r4       100
IL_0037: mul
IL_0038: conv.r4
IL_0039: conv.i4
IL_003a: stloc.s      c
```

(vs in distinct project from unity - only the setting of a,b,c lines)

```
// [129 13 - 129 31]
IL_000f: ldloc.1      // test
IL_0010: conv.i4
IL_0011: stloc.2      // a

// [130 13 - 130 44]
IL_0012: ldloc.0      // number
IL_0013: ldc.r4       100
IL_0018: mul
IL_0019: conv.i4
IL_001a: stloc.3      // b

// [131 13 - 131 51]
IL_001b: ldloc.0      // number
IL_001c: ldc.r4       100
IL_0021: mul
IL_0022: conv.r4
IL_0023: conv.i4
IL_0024: stloc.s      c
```

Aside from the line-number comments, they're identical (the line-number comments will of course differ; they're different code files).

What we can tell, though, is what's actually happening IL-wise…

```
IL_0012: ldloc.0      // number
IL_0013: ldc.r4       100
IL_0018: mul
IL_0019: conv.i4
IL_001a: stloc.3      // b
```

- `ldloc.0` loads the variable (0.7) onto the eval stack
- `mul` multiplies, placing the result on top of the eval stack
- `conv.i4` acts on that result (**this is the crux of the problem**): it converts whatever is on top of the eval stack into an int and puts that back on top
- `stloc.3` moves the result from the top of the eval stack into variable 3 (`b`)

We can compare this to the version that casts to float first before int. It just has one interim step of `conv.r4` before the `conv.i4`. This converts the result to a float, rather than to an int, and places it on top of the eval stack.

The whole issue comes down to: what is on top of the eval stack when `conv.i4` is called?

In the case of `b`, it's the raw result of the multiplication. In the case of `c`, it's definitely a single-precision float (since `conv.r4` was just called).

Thing isâŚ whatâs at the top of the eval stack relies on how mul was jitted into machine code by the runtime. And Unity uses a different runtime than Visual Studio 2019 on its own.

The thing isâŚ âmulâ is just telling the runtime we need to do a float multiplication (since the inputs are floats). How that actually gets performed is up to what the runtime decides.

This is the biggest part about âfloat errorâ in generalâŚ floating point standards really only define how a float is stored. Itâs loose on how operations actually occur. Different CPUs will behave differently. Some CPUs donât even have a floating point operator and instead rely on software implementations offered often by the OS or some other source.

Itâs not even required to be operated in the same word size as the float (so yes you could operate a single float with a double float hardware operator)!

Usually hardware implementations perform FPU operations at a word size larger than the actual data. For example (and don't hold me to this… I'm working on limited knowledge of the CPU architecture), AMD Ryzen seems to have a 256-bit data path for its FPU operations (the architecture I'm on). It's usually bigger so that the results contain all of the overflow (since float arithmetic can span significand ranges).

And also keep in mind that, usually speaking, a CPU will use the same FPU path for all floats regardless of size! Why have two FPUs when a large one can cover the same operations on both singles and doubles?

Float error doesn't just have to do with rounding and significand ranges. It also has to do with the target platform's implementation of the operations. And "platform" can refer not just to hardware, but software (including the version of that software).

So the "result" sitting at the top of the eval stack isn't necessarily a single-precision float. It could very well be a double, or even larger. It depends on what the runtime decided to do for `mul`.

So this leaves the only question…

"Why does Unity's runtime appear to do something different than VS 2019 targeting 4.7.2?"

AndâŚ :shrug:

We donât know what Unity does in their version of the runtime.

It could very well be that theyâre using a modified version of the runtime from a long while ago (this gets into the muddy history of Unity with mono/.net/xamarin/etc). The runtime may have used to do this and Unity still has it because they still use some modified version of that.

Or their version is more heavily mono based (likely) than the one used by Windows/VS2019 (note that unity only uses VS20XX for editor/debug purposesâŚ the compiler and runtime is distinct from this).

OrâŚ who knows. Maybe the Windows/Microsoft/VS2019 runtime implicitly casts the result of the operation to a single float in the end through software as part of its implemenation of âmulâ to create a more consistent result at the expense of speed. While Unity just takes the raw result and doesnât force a cast until told to (or forced to by moving it off the eval stack and into a typed field/variable) to increase performance (or just because its version of the runtime it inherited historically did so for performance).

âŚ

The why? It's hard to tell why.

All we know for sure is that it's all to do with how the runtime decides to perform `mul`. And those choices are different in the Unity runtime than in the Windows/MS/VS2019 runtime.

Heck Iâm willing to bet if you IL2CPPâd, or you targeted switch/PS/xbox you may very well get varying results there as well.

âŚ

In the endâŚ donât trust floats (single or double)!

Itâs part of the definition of how floats work. Theyâre fundamentally prone to error.
