It’s become apparent to me that floats and doubles aren’t always accurate. It’s only natural, of course, that a computer has to round after a certain number of digits due to memory/performance limitations. However, I have read here that floating point numbers have an accuracy of somewhere between 7.2 and 7.5 digits. My understanding is that, since you can’t have “part” of a digit, the computer will actually start to make weird mistakes remembering the true value of a number once too many digits come into play. But at what point do these mistakes start happening? How many digits can I use before 1.250000 turns into something weird like 1.250001413?
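To make the question concrete, here’s a quick C++ sketch I put together (not from my actual project, and the specific values are arbitrary, just chosen to show the digit limit I’m asking about):

```cpp
#include <cstdio>

int main() {
    // A float stores roughly 7 significant decimal digits (24-bit significand);
    // a double stores roughly 15-16 (53-bit significand).
    float  f = 1.23456789f;   // more digits than a float can hold
    double d = 1.23456789;

    // Printed with extra digits, the float's tail will usually differ from
    // what was typed, while the double still matches to 10 places.
    printf("float : %.10f\n", f);
    printf("double: %.10f\n", d);

    // The same limit shows up with whole numbers: 2^24 + 1 = 16777217
    // is the first integer a float cannot represent exactly.
    float big = 16777217.0f;
    printf("16777217 as float: %.1f\n", big);  // prints 16777216.0
    return 0;
}
```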
The reason I ask is that I plan on having a health system where the player can have a fraction of a hit point (something like 2.5 health left), and I’m worried about small errors adding up and unexpectedly (and unfairly) killing the player when he should have barely survived. I’m fairly certain that using just two decimal places isn’t going to cause any problems, but I’m curious as to what causes floats to lose their accuracy.
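Here’s the kind of thing I’m worried about, as another quick C++ sketch (the health and damage numbers are made up, and the tolerance check at the end is just one workaround I’ve seen suggested, not something I’m committed to):

```cpp
#include <cstdio>

int main() {
    // Hypothetical numbers: a player with 2.5 health takes 0.1 damage
    // twenty-five times, which should leave exactly 0 health.
    float health = 2.5f;
    for (int i = 0; i < 25; ++i)
        health -= 0.1f;

    // 0.1 has no exact binary representation, so each subtraction rounds a
    // little; the result is typically a tiny value near 0, not exactly 0.
    printf("health = %.9f\n", health);

    // An exact "health == 0.0f" check is therefore fragile; comparing
    // against a small tolerance (the epsilon here is arbitrary) is one
    // common fix.
    if (health <= 1e-5f)
        printf("player is dead (within tolerance)\n");
    return 0;
}
```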