Right, so I have a float value of 0.141, yeah?
Now, basically, I want this value converted to 141, essentially multiplying by 100.
Problem is that this multiplication gives me 140, which is missing that final 1.
The result is actually going to be converted and displayed as a string, so whatever gets me there, I would be most grateful. I've been looking around and tried a bunch of things, but so far nothing has worked, sadly.
Just to clarify, multiplying 0.141 by 100 should only give you 14.1… I’ll assume you meant multiplying it by 1000.
Floats are stored internally in binary, i.e. as sums of powers of 2. Not every decimal fraction can be represented exactly that way, so a value like 0.141 is stored as the nearest representable number, which may sit slightly above or below the true value. This is broadly called "floating-point error."
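You can see this in action with a value that's known to land on the wrong side of the integer (exactly which values misbehave depends on the literal and the operations involved, so 4.35 here is just an illustrative stand-in for your 0.141):

double product = 4.35 * 100;       // stored as 434.99999999999994, just below 435
Console.WriteLine(product);        // prints 434.99999999999994 on modern .NET
Console.WriteLine((int)product);   // prints 434 -- the cast truncates, it never rounds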
To correct this type of error, you need to settle on a rounding rule. Most likely what will get you there is to add an extra 0.5f after you multiply by your 1000 scale value: the cast to int always truncates toward zero, so adding 0.5f turns that truncation into round-to-nearest, like so:
float myOriginalFloat = 0.141f;
int myThousandLargerInteger = (int)(myOriginalFloat * 1000 + 0.5f); // truncation after +0.5f rounds to nearest: 141
That should get you where you want to be, and it gives you explicit control of the rounding threshold, which here I chose to be the usual halfway mark of 0.5.
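One caveat: the +0.5f trick rounds the wrong way for negative inputs (you'd subtract 0.5f there). If you'd rather not hand-roll it, the framework's rounding helpers cover both signs. A sketch, assuming a runtime where MathF is available (.NET Core / recent Unity); note that MathF.Round defaults to banker's rounding, so pass MidpointRounding.AwayFromZero if you want halves to always round up:

int myRoundedInteger = (int)MathF.Round(myOriginalFloat * 1000f, MidpointRounding.AwayFromZero); // 141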
Floats will accumulate small rounding errors as calculations are performed on them; they're designed to be fast and compact, not exact. This is why monetary calculations should never be performed with floats. If you need greater accuracy, you may want to use decimals instead: a decimal is 128 bits versus a float's 32, so you pay roughly 4x the memory plus a real performance cost (decimal math isn't done in hardware), in exchange for exact base-10 digits over a smaller possible numerical range.
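For completeness, a sketch of the decimal route (variable names are just illustrative): decimal stores base-10 digits, so 0.141 is represented exactly and no rounding fudge is needed for this particular conversion.

decimal myOriginalDecimal = 0.141m;
int myExactInteger = (int)(myOriginalDecimal * 1000m); // exactly 141, no +0.5 needed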