When I run the code below, I get "Answer is 0" when it should be 1. Can anybody tell me how to fix this? Thanks in advance.
Code: Debug.Log("Answer is " + 320 * (1 / 320));

Even when I use this code, the answer is still 0:
Code: float y = 320 * (1 / 320); Debug.Log("Answer is " + y);

(float)(1 / 320) converts the result of the integer division (which is 0) to a floating-point value, 0.0, while (1 / (float)320) performs a floating-point division, 1 / 320.0, and gives a floating-point result.
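A quick sketch of that difference (plain C#, with Console.WriteLine standing in for Unity's Debug.Log so it runs outside the engine):

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        // Cast applied AFTER the division: 1 / 320 is integer division,
        // already 0, and casting 0 to float just gives 0.0.
        Console.WriteLine((float)(1 / 320));   // prints 0

        // Cast applied to an OPERAND: the division itself is done in float.
        Console.WriteLine(1 / (float)320);     // prints 0.003125
    }
}
```

The placement of the cast decides whether the division happens in integer or floating-point arithmetic.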

This occurs because 1 / 320 is integer division: both operands are ints, so the result is truncated to 0 before the multiplication ever happens. It is not a floating-point precision error, and switching to doubles will not help on its own. You could rewrite the statement as (320 * 1) / 320, which stays in integer arithmetic but performs the multiplication first, or make one of the division's operands a float.

To actually make it work, the "1" must be a float; it doesn't matter whether the "320" doing the multiplying is a float or not. So the correct form is:

Debug.Log("Answer is " + 320 * (1f/320));

If "1/320" is not a float division, you get an integer result (truncated toward zero), which is what you were getting there:

1/320 = 0.003125, which truncated to an integer = 0, which multiplied by anything gives you 0.
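Putting the thread together, a minimal sketch of all three variants (plain C#, Console.WriteLine in place of Debug.Log):

```csharp
using System;

class IntDivisionDemo
{
    static void Main()
    {
        // 1 / 320 is integer division -> 0, so the product is 0.
        Console.WriteLine(320 * (1 / 320));    // prints 0

        // 1f forces float division -> 0.003125f, and 320 * 0.003125f
        // rounds back to exactly 1 in float arithmetic.
        Console.WriteLine(320 * (1f / 320));   // prints 1

        // Staying in integer arithmetic but multiplying first also works.
        Console.WriteLine((320 * 1) / 320);    // prints 1
    }
}
```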