float to double

I’m having a weird bug. In a script I’m making, I have a little line of code that increases a float by 0.1, but for some reason when the float gets to 2.7 and increases again it goes to 2.799999, and every increase after that keeps the error (2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.799999, 2.899999, 2.999999, 3.099999, etc.). What causes this bug and how do I fix it?

That’s simply how floating point numbers work. The number “0.1” can’t be represented exactly, because floating point numbers are stored in binary.

Have a look at this website. It allows you to type in a decimal number and it gives you the binary representation of the number.

`0.1` as a 32-bit float is actually `0.10000000149011612`, but since a float only has about 7 significant decimal digits, it gets rounded back to `0.1` when displayed.
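You can check this yourself. A small sketch in Python (an assumption on my part, since the thread doesn’t name a language; Python’s built-in `float` is a 64-bit double, so we round-trip through a 32-bit float with `struct`):

```python
import struct

def to_float32(x):
    # Round-trip a 64-bit Python float through a 32-bit float
    return struct.unpack('f', struct.pack('f', x))[0]

# The nearest 32-bit float to 0.1, shown at full double precision:
print(to_float32(0.1))  # 0.10000000149011612
```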

The binary representation of 0.1 (decimal) is actually `0.000110011001100110011...`, with the pattern “0011” recurring endlessly, so it has to be rounded or cut off at some point. Just like the decimal system can’t represent `1 / 3`, since it’s `0.33333...`
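You can generate that recurring pattern yourself by repeated doubling. A sketch in Python (again an assumption about language), using the exact `Fraction` type so no rounding interferes:

```python
from fractions import Fraction

def binary_fraction_digits(frac, n):
    # First n binary digits after the point of a value in [0, 1)
    digits = []
    for _ in range(n):
        frac *= 2
        bit = int(frac)   # integer part is the next binary digit
        digits.append(bit)
        frac -= bit
    return digits

# 0.1 (decimal) = 0.0001100110011001... (binary), "0011" repeating
print(binary_fraction_digits(Fraction(1, 10), 16))
# [0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```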

The decimal system can easily divide numbers by “10”. The binary system has the factor “2”. So the only exactly representable fractions are

```
0.5  0.25  0.125  0.0625 ...
```

and any combinations of those. All fractions that can’t be represented exactly have to be approximated, like 0.1, which is actually stored as 1.600000023841858 * 2^-4
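Both halves of that statement are easy to verify. A sketch in Python (language choice is my assumption), using `struct` to get the 32-bit value:

```python
import struct

def to_float32(x):
    # Round-trip a 64-bit Python float through a 32-bit float
    return struct.unpack('f', struct.pack('f', x))[0]

# Combinations of powers of two are exact:
assert 0.5 + 0.25 + 0.0625 == 0.8125

# 0.1 is not: the stored significand is roughly 1.600000023841858
print(to_float32(0.1) * 2**4)
```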

1.600000023841858 is the sum of

```
1 + 0.5       +     0.0625 + 0.03125         +          0.0039062 + 0.0019531 + ...
//   1     0     0     1        1     0          0          1           1       ...
//       0.25  0.125               0.015625  0.0078125                          ...
//  1/2   1/4   1/8   1/16     1/32  1/64       1/128     1/256        1/512    ...
```

In decimal, each “digit” behind the decimal point can have 10 different values (0 … 9), and each “place” behind the decimal point has a different “scale”: the first digit has a multiplier of 0.1, the second 0.01, the third 0.001, and so on.

In binary, each digit can only have 2 different values (0 or 1), and each “place” likewise has its own scale: the first digit is worth 0.5, the second 0.25, the third 0.125, and so on. It simply uses base “2” while the decimal system uses base “10”.
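The two place-value systems can be written out side by side. A sketch (in Python, my assumption), using exact fractions so rounding doesn’t get in the way:

```python
from fractions import Fraction

# Decimal: digit k after the point has weight 10**-(k+1)
dec_digits = [1, 2, 5]  # 0.125 in decimal
dec_value = sum(d * Fraction(1, 10**(k + 1)) for k, d in enumerate(dec_digits))

# Binary: digit k after the point has weight 2**-(k+1)
bin_digits = [0, 0, 1]  # 0.001 in binary
bin_value = sum(b * Fraction(1, 2**(k + 1)) for k, b in enumerate(bin_digits))

# Both spellings name the same exact value, 1/8
assert dec_value == bin_value == Fraction(1, 8)
```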

I faced the same issue too.
Using double solved it for me, especially for calculations like addition, subtraction, etc. Keep in mind that a double has the exact same limitation, just with more precision (roughly 15–16 significant decimal digits instead of about 7), so the error gets far smaller but never fully disappears.
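If you need exact tenths, a more robust fix is to count in integers and divide only when you need the value. A sketch in Python (its `float` is already a 64-bit double, so this also shows that double merely shrinks the drift):

```python
# Repeated addition of 0.1 accumulates rounding error even with doubles
total = 0.0
for _ in range(28):
    total += 0.1
print(total)        # close to, but typically not exactly, 2.8

# Robust fix: keep an integer step counter, derive the value for display
steps = 28
value = steps / 10  # a single correctly rounded division
print(value)        # 2.8
```

This way the counter itself is exact, and each displayed value carries only one rounding step instead of dozens of accumulated ones.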