This will explain it all.
In an IEEE single-precision floating-point number you can represent certain numbers exactly and unambiguously.
For instance, 0.5f is represented exactly by the bit pattern 0x3F000000.
However, 0.1f CANNOT be represented exactly. The nearest single-precision value has the bit pattern 0x3DCCCCCD.
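If you want to verify those bit patterns yourself, here is a minimal sketch (class name is just for illustration; in Unity you'd print with Debug.Log instead of Console.WriteLine) that reinterprets each float's raw bits as a 32-bit int and prints them in hex:

```csharp
using System;

class FloatBits
{
    static void Main()
    {
        // Reinterpret the float's raw bytes as a 32-bit int and print as hex.
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(0.5f), 0).ToString("X8")); // 3F000000
        Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(0.1f), 0).ToString("X8")); // 3DCCCCCD
    }
}
```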
The analogy is that you cannot write 1/3 exactly as a decimal:
0.33 is correct to 2 decimal places
0.3333 is correct to 4 places
etc.
But you can never write 1/3 exactly as a decimal numeral. You just get closer.
And three times 0.33 (one possible approximation of 1/3) is 0.99, and last I checked, 0.99 does not equal 1.0, yet "three times one third" should be exactly 1.0.
Since computers store everything internally as binary (that's how digital computers work!), you can NEVER represent 0.1 exactly, no matter how many bits you use. In binary, 0.1 is the infinitely repeating fraction 0.000110011001100…, exactly like 1/3 = 0.333… in decimal: only fractions whose denominator is a power of two terminate in binary, and 1/10 is not one of them.
And in your example, 0.2f is simply 0.1f times 2. Multiplying by two only shifts the binary exponent, so 0.2f has the same repeating fraction and is not representable exactly either.
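You can watch the error pile up by adding 0.1f to itself ten times. A quick sketch (class name illustrative; the printed digits assume .NET's standard IEEE round-to-nearest behavior):

```csharp
using System;

class FloatDrift
{
    static void Main()
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++)
            sum += 0.1f;                   // ten additions of an inexact value

        // "G9" prints enough significant digits to expose the stored value.
        Console.WriteLine(sum.ToString("G9")); // 1.00000012
        Console.WriteLine(sum == 1.0f);        // False
    }
}
```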
But you CAN use other representations, such as C#'s decimal type, which stores base-10 digits and therefore holds 0.1 exactly. It is nowhere near as efficient computationally and is rarely used in game code. (decimal is a standard C# type, so Unity's .NET does support it.)
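For comparison, the same ten-addition loop with decimal comes out exact, because decimal stores base-10 digits:

```csharp
using System;

class DecimalExact
{
    static void Main()
    {
        decimal sum = 0m;
        for (int i = 0; i < 10; i++)
            sum += 0.1m;                 // 0.1m is stored exactly in base 10

        Console.WriteLine(sum);          // 1.0
        Console.WriteLine(sum == 1.0m);  // True
    }
}
```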