Well, look up the range and significant digits that a float can represent… A float has 24 significant binary digits (23 stored plus an implicit leading 1), which roughly translates to 6 to 7 significant digits in base 10. Your number has 7 digits before the decimal point, so there’s almost no room left for any fractional part. You may want to have a look at my floating point precision table over here.
In fact, the smallest representable change at that scale is 0.125. If the amount you add is less than half of that step, the addition gets rounded away entirely. You can play around with 32 bit floats over here or here.
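For instance, here is a quick way to see that step size in plain C# (the 1048576 value is just an example at that magnitude; exact console output can vary by runtime):

```csharp
using System;

class FloatStepDemo
{
    static void Main()
    {
        float big = 1048576f;  // 2^20: adjacent floats here are 0.125 apart

        // Comparing through double avoids float-to-string rounding,
        // so we see what the float actually stored.
        Console.WriteLine((double)(big + 0.125f) - 1048576.0); // 0.125 (one full step survives)
        Console.WriteLine((double)(big + 0.05f)  - 1048576.0); // 0 (less than half a step, rounded away)
    }
}
```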
Also, you’re mixing ints and floats. In C# a float won’t be implicitly cast down to an int (that needs an explicit cast), but an all-integer sub-expression is evaluated with integer arithmetic before it is ever converted to float, which can silently throw away the fractional part. I don’t know the ins and outs of your exact expression, so I would avoid mixing them if possible.
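To illustrate the kind of surprise mixed int/float math can cause (generic C#, not your exact code):

```csharp
using System;

class IntFloatMixDemo
{
    static void Main()
    {
        int units = 5;
        int parts = 3;

        // The division runs as integer arithmetic first,
        // then the already-truncated result is converted to float.
        float truncated = units / parts;         // 1

        // Promote one operand first to keep the fractional part.
        float precise = (float)units / parts;    // ~1.6666666

        Console.WriteLine(truncated);
        Console.WriteLine(precise);
    }
}
```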
Until now I haven’t had to fiddle much with floating point arithmetic. I was already using the site “32 bit floats over here” to mess around with floats (thanks for the advice anyway). But seeing the representation of values there just made me more confused.
For example, with Exponent = 137 (biased, i.e. 2^(137-127) = 2^10), the smallest value I can add in IEEE 754 32 bits is obtained by setting the lowest bit of the mantissa, the 0th from right to left. That gives me 2^10 * (1 + 2^(-23)), i.e. 1024 + 0.00012207, which seems to me to be perfectly representable in IEEE 754.
But doing Debug.Log(1024 + 0.00012207f) in Unity prints 1024, which makes sense according to the floating point precision table you pointed me to, but contradicts the IEEE 754 reasoning above. What am I not getting?
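For anyone who wants to poke at that exact bit pattern, here is a small sketch in plain C# (the class name is mine; the printed digits depend on the runtime’s default formatting):

```csharp
using System;

class BitPatternDemo
{
    static void Main()
    {
        // Sign 0, biased exponent 137, mantissa with only the lowest bit set.
        int bits = (137 << 23) | 1;
        float f = BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);

        Console.WriteLine(f);          // default formatting may round this to "1024"
        Console.WriteLine((double)f);  // ~1024.0001220703125, the value the float really holds
    }
}
```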
decimal is great if you want to represent values that behave like base-10 values. This is because it stores its scale as a power of 10 rather than a power of 2. The result is numeric representations that behave the way you might expect decimals to (1/3 is repeating and 1/10 is not, whereas in binary 1/10 is repeating).
It is also 128 bits in size, giving tons of significant-value range (a 96-bit significand, which is roughly 28-29 significant decimal digits, with a scale of up to 28 decimal places).
This comes at the expense of no hardware acceleration (or none that I know of), a much larger memory footprint than the 32/64 bits of single/double respectively, and a smaller top range than double (decimal has a fixed scale range, whereas single/double have a floating radix point which allows for very large values).
You should really only use decimal where that massive fixed significant-value range is needed, or where behaving like base 10 is needed (a big place I’ve used it throughout my career is banking/accounting software… this is also why the equivalent type in classic VisualBasic is called ‘Currency’).
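A tiny sketch of the base-10 vs base-2 behaviour described above:

```csharp
using System;

class DecimalVsBinaryDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so the error is visible
        // even at double precision.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False (0.30000000000000004)

        // decimal scales by powers of ten, so 0.1 is stored exactly.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```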
Really though… try using ‘double’ instead. It expands your significant-value range to 15-17 decimal digits, and it’s still fast because it follows the IEEE-754 specification, meaning it can be hardware accelerated.
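Applied to a number at the scale from this thread (a sketch; the exact printed digits depend on the runtime):

```csharp
using System;

class FloatVsDoubleDemo
{
    static void Main()
    {
        // float: the step between adjacent values near 10^6 is 0.125,
        // so a fraction of 0.00012207 is rounded away completely.
        float  f = 1048576f  + 0.00012207f;

        // double: roughly 15-17 significant digits, plenty of room left.
        double d = 1048576.0 + 0.00012207;

        Console.WriteLine((double)f - 1048576.0); // 0
        Console.WriteLine(d - 1048576.0);         // ~0.00012207
    }
}
```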
First off I’m going to just emphasize that this is NOT a Unity thing. This has to do with the language specification. In this case C#.
Next… unless you’re on some weird piece of hardware that doesn’t have an FPU built in (it’s 2023, so unless you’re on embedded hardware, I doubt it), your floats/doubles are being computed in hardware. This means any discrepancy from what you expect is likely going to be hardware specific. IEEE-754 defines how the basic operations must round, but it does not guarantee bit-identical results across implementations once intermediate precision, library functions, and compiler optimizations enter the picture. Basically, one FPU/CPU may calculate something slightly differently than another FPU/CPU.
With that said… here is why this is a language specification problem…
I doubt you’re actually seeing anything calculated wrong.
Rather, you’re seeing the result of how C# stringifies a float/double. I’ve seen it many times in the past: when stringifying a float, C#’s default formatting won’t print the small noise at the end of the significant-value range, preferring instead to treat it as general float error and display only the main portion of the value the float ought to be representing in decimal.
Case in point… I bet if you convert your float to a double it’ll magically get the fractional portion back.
Here I wrote a simple test to see (written outside of Unity to demonstrate that Unity has nothing to do with it):
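A minimal sketch of such a test (plain C# console program; the exact output depends on the runtime’s formatting defaults):

```csharp
using System;

class StringifyTest
{
    static void Main()
    {
        float f = 1024f + 0.00012207f;

        // Default float formatting sticks to ~7 significant digits,
        // so the tiny fraction can disappear from the printed string.
        Console.WriteLine(f);

        // "R" (round-trip) prints enough digits to reconstruct the exact float.
        Console.WriteLine(f.ToString("R"));

        // Widening to double reveals what the float actually stores.
        Console.WriteLine((double)f);   // ~1024.0001220703125
    }
}
```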
Exactly, different format specifiers give you different string representations. Try “R” or “N” and you will often get slightly different rounding results when the value is converted into a decimal string. Those formatters use specific rounding rules to minimize strange rounding artifacts in the least significant place. So the number 1048576.125 may get rounded to 1048576 in one format and to 1048576.1 in another. Your number’s significant digit count in decimal is at the max limit of 6-7 digits, so whatever you do, you can’t expect exact behaviour at this scale. Those 6-7 digits always apply to 32 bit floats, no matter whether you have a number like 1.048576125 (which gets rounded to 1.048576) or 1048576.125 (which gets rounded to 1048576).
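For example (a sketch using the standard .NET format specifiers; the exact strings differ between runtimes and cultures):

```csharp
using System;

class FormatSpecifierDemo
{
    static void Main()
    {
        float f = 1048576.125f;  // 2^20 + 1/8, exactly representable as a float

        // Default: limited to roughly 7 significant digits.
        Console.WriteLine(f.ToString());

        // "R" (round-trip): enough digits to recover the exact float value.
        Console.WriteLine(f.ToString("R"));

        // "N1" (number, 1 decimal place): the formatter rounds the last place
        // and inserts group separators, e.g. "1,048,576.1".
        Console.WriteLine(f.ToString("N1"));
    }
}
```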
As was already said, if you need greater precision / more significant digits, use a double instead. A double has roughly 15 significant decimal digits, so about “double” the amount.