int a = 100;
int b = (int) ((float)a * 0.01f);
Debug.Log(b);
After running this, b is 0.
I cannot understand that result.
I expected 1, but the result was 0.
Why is the result 0?
Well, first off: double-check the numbers. 100 x 0.01 is exactly 1 in plain math, so expecting 1 is reasonable on paper. The problem is how the result is stored.
And secondly: variable b is an int, which can only store whole-number values (no decimal places). So if the float multiplication comes out as anything just below 1, say 0.99999, storing that result in an int truncates it to 0.
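For example (a minimal sketch; the literal values are just for illustration), a cast to int always truncates toward zero rather than rounding:
float justBelowOne = 0.99f;
float justBelowTwo = 1.99f;
Debug.Log((int) justBelowOne); // prints 0 - the fractional part is simply dropped
Debug.Log((int) justBelowTwo); // prints 1, not 2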
Sounds like you might be running into a floating point rounding error: 0.01f cannot be represented exactly as a float, so a * 0.01f can come out internally as something like 0.999999 recurring, which is still 0 as far as the int cast is concerned. Using double instead might fix it, or you can round instead of truncating:
int b = Mathf.RoundToInt(a * 0.01f);
But be advised that rounding changes the halfway cases too: per the Unity docs, Mathf.Round rounds values ending in .5 to the nearest even number, so 0.5f comes out as 0 and 1.5f comes out as 2.
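Here is a quick sketch of the rounded version, assuming the same a = 100 from your question (needs using UnityEngine; for Mathf and Debug):
int a = 100;
// RoundToInt rounds to the nearest whole number instead of truncating,
// so a product that comes out as 0.9999999 still ends up as 1.
int b = Mathf.RoundToInt(a * 0.01f);
Debug.Log(b); // prints 1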