Hello, I've got a little problem. I'm creating a clicker game, and when the money amount reaches over 100 000, the 2nd decimal digit stops moving; then when it reaches 1 000 000, the 1st decimal digit stops moving as well.
My code looks like this:
float currentMoney = 0.00f;

private void Update()
{
    currentMoney += 11111.11f;
}
For example:
- 1st frame - currentMoney = 99 999.55
- 2nd frame - currentMoney = 100 000.50
- 3rd frame - currentMoney = 100 111.60
- X frame - currentMoney = 999 999.30
- X+1 frame - currentMoney = 1 000 011.00
That’s how Floating Point numbers work.
That said, the main subject of this question, Floating Point accuracy, comes up fairly frequently; only the context in which it's brought up varies. So, I'll keep it brief.
Unity uses 32-bit ([float][2]) and 64-bit ([double][3]) Floating Point values, with an emphasis on 32-bit.
Unlike integers, however, the bits in a Floating Point number are divided up into separate fields.
A float comprises a 23-bit mantissa (the main digits), an 8-bit exponent, and 1 bit for the positive/negative sign. Essentially, you can think of it as a 23-bit integer where you can place the decimal point wherever you want around it:
// 1234567.0
// 123456.7
// 12345.67
// 1234.567
// 0.000001234567
// 12345670000000.0
// etc.
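If you want to see those fields for yourself, here's a minimal sketch (my own illustration, not part of the original post) that reinterprets a float's raw bits in C# and masks out the sign, exponent, and mantissa:

using System;

class FloatBits
{
    static void Main()
    {
        float value = 100000.5f;

        // Reinterpret the float's 4 bytes as a raw 32-bit integer.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);

        int sign     = (bits >> 31) & 0x1;   //  1 bit
        int exponent = (bits >> 23) & 0xFF;  //  8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;      // 23 bits, with an implicit leading 1

        Console.WriteLine($"sign={sign} exponent={exponent - 127} mantissa=0x{mantissa:X6}");
    }
}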
Because it only uses 23 bits for the mantissa, however, it doesn't have a huge amount of accuracy available to it: roughly 7 significant decimal digits. This is the problem you're running into. Basically, as the magnitude of your value grows, you lose a digit of accuracy off the other end; by the time the integer part is 7 digits long, there's nothing left over for the cents.
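To see that digit loss in action, here's a quick sketch (mine; the printed values are approximate and depend on your runtime's float formatting) comparing the same 0.11 increment at two magnitudes:

using System;

class PrecisionLoss
{
    static void Main()
    {
        // Near 100, a float easily resolves hundredths.
        Console.WriteLine(100f + 0.11f);     // ~100.11

        // Near 1 000 000, adjacent floats are 0.0625 apart, so the 0.11
        // is rounded to the nearest representable step (0.125 here).
        Console.WriteLine(1000000f + 0.11f); // ~1000000.1
    }
}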
Having said all this, I'll also direct you to a [pair][4] of [posts][5] I previously made related to the subject of games using huge number libraries (or similar). Hopefully, those might help point you in your preferred direction.
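If you don't actually need arbitrarily huge numbers, one common fix (my suggestion, not something from those posts) is to store the money as whole cents in a 64-bit integer, so addition stays exact no matter how large the total grows, and only convert to a decimal when displaying:

using UnityEngine;

public class Money : MonoBehaviour
{
    // 64-bit integer cents stay exact up to about 9.2 * 10^18 cents.
    long currentMoneyCents = 0;

    private void Update()
    {
        currentMoneyCents += 1111111; // the question's 11111.11, in cents

        // Convert to decimal only for display, so arithmetic never drops cents.
        string display = (currentMoneyCents / 100m).ToString("N2");
    }
}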
Edit: Formatting