On the accuracy of floats and doubles: Where/when do rounding errors usually occur, and how can I avoid them?

It’s become apparent to me that floats and doubles aren’t always accurate. That’s only natural, of course: a computer has to round after a certain number of digits because of memory/performance limitations. However, I have read here that floating-point numbers have an accuracy of somewhere between 7.2 and 7.5 digits. My understanding is that, since you can’t have “part” of a digit, the computer will actually start making strange mistakes about the true value of a number once too many digits come into play. But at what point do these mistakes start happening? How many digits can I use before 1.250000 turns into something weird like 1.250001413?
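To make the question concrete, here is a small sketch of the kind of thing I mean (the values 1.25 and 0.1 are just examples I picked; I’m assuming the printed digits are roughly what a typical compiler would produce):

```cpp
#include <cstdio>

int main() {
    float exact   = 1.25f;  // 1.25 = 1 + 1/4, an exact sum of powers of two
    float inexact = 0.1f;   // 0.1 has no finite binary representation

    printf("%.10f\n", exact);    // 1.2500000000 -- stored exactly, no error at all
    printf("%.10f\n", inexact);  // roughly 0.1000000015 -- off around the 8th significant digit

    return 0;
}
```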

The reason I ask is that I plan on having a health system where the player can have a fraction of a hit point (something like 2.5 health left), and I’m worried about small errors adding up and unexpectedly (and unfairly) killing the player when he should have barely survived. I’m fairly certain that using just two decimal places isn’t going to cause any problems, but I’m curious as to what causes floats to lose their accuracy.
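For illustration, here is a rough sketch of the accumulation I’m worried about (the `health` variable and the 0.1 damage value are hypothetical, just to show the drift and one possible way around it):

```cpp
#include <cstdio>

int main() {
    // Apply 0.1 damage one hundred times to 10.0 health.
    float health = 10.0f;
    for (int i = 0; i < 100; ++i)
        health -= 0.1f;

    // Mathematically this is exactly 0, but because 0.1 cannot be stored
    // exactly, the result is a tiny value near zero rather than 0 itself.
    printf("float health:   %.7f\n", health);

    // One common way to sidestep the drift: store health as an integer
    // count of tenths of a point, so every subtraction is exact.
    int healthTenths = 100;              // 10.0 health
    for (int i = 0; i < 100; ++i)
        healthTenths -= 1;               // 0.1 damage per hit
    printf("integer health: %d tenths\n", healthTenths);  // exactly 0

    return 0;
}
```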

You will find inaccuracies all the time, even at seemingly unlikely values; unless you feed them into multiplications, powers, and so on, the errors are so small that they make no difference in any real sense. If you are performing complex math on them, you should use the highest precision available and consider the methods outlined in Floating-point arithmetic - Wikipedia, though it probably won’t have much of an effect on the kind of system you are proposing.
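To show what “highest precision” buys you, here is a small sketch (my own numbers, not tied to your health system) that runs the same accumulation in a float and a double:

```cpp
#include <cstdio>

int main() {
    float  f = 0.0f;
    double d = 0.0;

    // Add one-thousandth a million times; the exact answer is 1000.
    for (int i = 0; i < 1000000; ++i) {
        f += 0.001f;   // 0.001 is not exactly representable in either type...
        d += 0.001;    // ...but a double carries roughly twice as many significant digits
    }

    printf("float : %.6f\n", f);  // visibly off from 1000
    printf("double: %.6f\n", d);  // essentially indistinguishable from 1000 at this precision
    return 0;
}
```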