Below are two examples of casting a float to an int with a cast, and one using a conversion function.

float value = 0.94f;

Debug.Log($"Value : {value}");

int a = (int) (100 * 0.94f);

Debug.Log($"a : {a}");

int b = (int) (100 * value);

Debug.Log($"b : {b}");

int c = System.Convert.ToInt32(100 * value);

Debug.Log($"c : {c}");

Output is as follows:

Value : 0.94
a : 94
b : 93
c : 94

The only difference between (a) and (b) is that (a) uses a literal float, whereas (b) uses a float variable.

My question is: why is there a difference in the output? Is it possible that using a variable introduces some precision error?

Initial feedback from StackOverflow reported that the same issue could not be reproduced using the Roslyn compiler with regular Console.WriteLine.

It was suggested that System.Single (the type given for a literal float) is an alias for the float type. Not sure if this gives any insight into the way it is handled internally.

Example (c) was added just to clarify that the correct result can be achieved using the same variable.
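For what it's worth, the runtime results for (b) and (c) come from IEEE-754 single precision rather than anything Unity-specific, so they can be reproduced outside C#. Here's a quick sketch in Python, using struct to round 0.94 to the nearest 32-bit float; round() stands in for Convert.ToInt32, which also rounds to nearest:

```python
import struct

# Round the decimal literal 0.94 to the nearest IEEE-754 single
# precision value -- this is what a C# float variable actually stores.
value = struct.unpack('f', struct.pack('f', 0.94))[0]
print(value)             # 0.9399999976158142 -- slightly below 0.94

# Case (b): a cast truncates toward zero, so 93.999... becomes 93.
b = int(100 * value)
print(b)                 # 93

# Case (c): Convert.ToInt32 rounds to nearest; round() does the same here.
c = round(100 * value)
print(c)                 # 94
```

This is only a model of the C# behaviour, but it shows that (b) vs (c) is purely truncation vs rounding applied to the same slightly-too-small product.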

I'm not a compiler expert, but because this is plain C# and not Unity-specific, you can use tools to see what the resulting C#/IL/assembly is yourself, which will help you reason about the what and how. If you go to SharpLab.io and type in the following:

using System;

public class C {
    public void M() {
        float value = 0.94f;
        int a = (int) (100.0f * 0.94f);
        int b = (int) (100.0f * value);
    }
}

If you select Results as C# on the right, you'll see:

public class C
{
    public void M()
    {
        float num = 0.94f;
        int num2 = 94;
        int num3 = (int)(100f * num);
    }
}

If you select IL instead, you'll see:

.maxstack 2
.locals init (
    [0] float32 'value',
    [1] int32 a,
    [2] int32 b
)

IL_0000: nop
IL_0001: ldc.r4 0.94
IL_0006: stloc.0
IL_0007: ldc.i4.s 94
IL_0009: stloc.1
IL_000a: ldc.r4 100
IL_000f: ldloc.0
IL_0010: mul
IL_0011: conv.i4
IL_0012: stloc.2
IL_0013: ret

So the compiler itself has pre-baked the constant values. If you make "value" above a constant (add the const keyword), then you get:

public class C
{
    public void M()
    {
        int num = 94;
        int num2 = 94;
    }
}

You can switch to IL too to see the differences.

I'm not trying to explain the why here (I'm not the expert), just to give you an objective view of what it's doing. It's the difference between compile-time constant folding and runtime execution. Maybe there's even some compiler option affecting this. Again, not the expert.

Personally, I never use float->int casts like this. I'd always opt for something explicit like Mathf.CeilToInt or an equivalent (add/subtract 0.5f before casting to int, etc.). Using that here gives consistent results.
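The add-0.5 trick mentioned above can be sketched like this (Python here, but int() truncates toward zero just like a C# cast does):

```python
import struct

# Nearest 32-bit float to 0.94, as a C# float variable would store it.
value = struct.unpack('f', struct.pack('f', 0.94))[0]

# Plain truncating cast: 93.999... drops to 93.
print(int(100 * value))        # 93

# Add 0.5 before truncating to get round-to-nearest
# (this form only works for non-negative inputs).
print(int(100 * value + 0.5))  # 94
```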

Hopefully someone more knowledgeable than me on this subject can give an answer to the why/what/how etc.

To explain the "why": when you put expressions into your code that consist purely of literal values, the compiler will pre-calculate them into a single value (as Melv has shown with the decompiled code). At compile time the compiler may use any precision to carry out that pre-calculation.

As you may know, a float (System.Single) is essentially just scientific notation in base 2. As a result, not all finite decimal numbers can be represented as a binary number. Our normal decimal system has similar issues: for example, 1f/3f cannot be written as a decimal number with finitely many digits, as its decimal expansion is 0.33333333... In base 2 we have the same problem, though with different numbers. The most prominent values are 0.1 and 0.2, neither of which can be represented exactly in binary.
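The classic demonstration of this (shown in Python with double precision here, but the same holds for any IEEE-754 type, including float): 0.1 and 0.2 are both stored as nearby binary fractions, so their sum is not exactly 0.3:

```python
# Printed with enough digits, the stored binary approximations become visible.
print(f"{0.1:.20f}")        # 0.10000000000000000555
print(f"{0.2:.20f}")        # 0.20000000000000001110
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441
```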

In the decimal system each digit has a value that is a power of 10. So 1000, 100, 10, 1, 0.1, 0.01, 0.001,… In binary we have powers of 2: 16, 8, 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, …
With a finite number of binary digits you can only get an approximation of your desired decimal number. 0.94 is such a number. You can use this website, enter 0.94 in the top field and press enter. You will see that the number is represented as 0.939999997615814208984375, as this is the closest approximation. You can use the +1 / -1 buttons to go up or down one representable step. The error when going up one step is larger than for this representation:

// error        | actual value
//--------------+---------------------------
// 0.0000000024 | 0.939999997615814208984375
// 0.0000000572 | 0.940000057220458984375
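You can compute those two neighbouring representations (and their errors) yourself by treating the float's bit pattern as an integer and stepping it by one. A sketch, assuming IEEE-754 single precision:

```python
import struct

def f32_bits(x: float) -> int:
    """Bit pattern of x rounded to IEEE-754 single precision."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(b: int) -> float:
    """Float value of a 32-bit IEEE-754 bit pattern."""
    return struct.unpack('<f', struct.pack('<I', b))[0]

lo = bits_f32(f32_bits(0.94))      # nearest float32 (rounds down for 0.94)
hi = bits_f32(f32_bits(0.94) + 1)  # one representable step up

print(f"{lo:.24f}")         # 0.939999997615814208984375
print(f"{hi:.21f}")         # 0.940000057220458984375
print(f"{0.94 - lo:.10f}")  # 0.0000000024
print(f"{hi - 0.94:.10f}")  # 0.0000000572
```

Note that the two errors add up to one ulp (about 0.0000000596 near 0.94), which is why rounding had to pick the lower neighbour.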

So having the number 0.94 in a variable at runtime means it is represented as explained above. Now, the issue with casting to int is that it always truncates the decimal places, i.e. it always rounds towards 0. So 93.999 becomes 93 and -93.999 becomes -93.
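That truncate-toward-zero behaviour, sketched in Python (int() and math.trunc truncate like a C# cast; floor is shown for contrast):

```python
import math

print(int(93.999))           # 93
print(int(-93.999))          # -93  -- toward zero, not downward
print(math.trunc(-93.999))   # -93  -- same as int()
print(math.floor(-93.999))   # -94  -- floor rounds toward negative infinity instead
```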

I guess that should cover everything. If you want to learn more about floating point numbers, I can recommend this Computerphile video and playing around with the website I linked above.

This is really unfortunate if you think about it: 1/5th (and 1/10th by extension), a beautifully simple rational number and ubiquitous in our everyday decimal practice, is cursed with a non-terminating, repeating expansion in binary.

This is why it helps to think in powers of two:
0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 0.0078125, 0.00390625
shows exactly which fractions are "friendly", and there is a huge gap around 0.1 and 0.2. I also like how the digits 125 and 625 at the end of the numbers keep alternating forever after the second one.
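One way to check which decimal fractions are "friendly" is to compare the exact decimal value against the exact binary value actually stored. A sketch in Python: Fraction(str) gives the decimal as an exact rational, while Fraction(float) gives the exact rational the binary double really holds; they match only when the denominator is a power of two:

```python
from fractions import Fraction

def exactly_representable(s: str) -> bool:
    # True when the decimal literal equals the binary value it is stored as.
    return Fraction(s) == Fraction(float(s))

for s in ["0.5", "0.25", "0.375", "0.0625", "0.1", "0.2", "0.94"]:
    print(s, exactly_representable(s))
# 0.5, 0.25, 0.375, 0.0625 -> True; 0.1, 0.2, 0.94 -> False
```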