What is the correct way of defining a float or int number?

Hello,

This may seem like a basic question, but I don't have the right answer for it, so I decided to ask it here and get some answers from you guys. I am writing my scripts in Unity's JavaScript (UnityScript). Sometimes I see this kind of notation on the internet:

3.0f or 3f. What is the difference between these two? And how should I define an int or float number at the beginning of a script? Should I do it like:

var number : float = 0;
or
var number : float = 0.0;
or
var number : float = 0.0f;

I am also confused about division. Sometimes dividing one number by another doesn't seem to give the right result, for example:

5/4 does not give the result I expect, but
5.0/4.0 does.

Can anyone explain how I should write these kinds of things in Unity3D using JavaScript?
Thank you very much in advance.
Sincerely

…don’t use javascript.

More seriously, the reason you sometimes see 3.0f instead of 3.0 is that, at least in C# (JavaScript works a bit differently, and in my opinion not in a good way), the compiler interprets 3.0 as a double, while the f at the end of 3.0f tells it to interpret the literal as a float.
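
For example, here is a minimal C# sketch of those literal rules (UnityScript is more forgiving about the conversions, as noted):

float a = 3.0f;    // OK: the f suffix marks the literal as a float
float b = 3f;      // OK: same as 3.0f, the decimal point is optional with the suffix
double d = 3.0;    // a literal with a decimal point and no suffix is a double
// float c = 3.0;  // compile error in C#: cannot implicitly convert double to float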

5/4 is interpreted as a division of two integers. The mathematically exact result would be 1.25, which is not an integer, so the result gets truncated to 1. In general, applying operators to only integers gives you an integer result, so division in particular can give unexpected results.
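
A quick C# sketch of what that means in practice (just a few illustrative lines, not taken from your script):

int whole = 5 / 4;   // both operands are ints, so this is integer division: 1
float f = 5 / 4;     // the division still happens on ints first, so f is 1, not 1.25
float g = 5f / 4;    // making one operand a float promotes the other, so g is 1.25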

As for how you should define it in Unity, I’d recommend just writing

public var number : float;

and then set the initial value in the editor instead of in your script.
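
If you ever move to C#, the equivalent would be something like a public field on your MonoBehaviour, which also shows up in the Inspector (the class name here is just an example):

using UnityEngine;

public class Example : MonoBehaviour
{
    public float number;   // set the initial value in the Inspector
}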

First, there is no difference between 3.0f and 3f to the compiler.

For the other cases you can think of it as follows: a whole number such as 5 is compiled as an int, a number with a decimal point is compiled as a double, and any number that ends in f is compiled as a float. Binary operators on these numbers, such as division, automatically "upgrade" the operands to the wider of the two types.

What this means is that the compiler treats the following

var number : float = 5/4;

is the same (computationally) as

int a = 5;
int b = 4;
int c = a / b;      // = 1 (integer division)
float number = c;   // = 1f

But the following

var number : float = 5/4.0;

is treated as

int a = 5;
double b = 4;
double c = a;              // = 5.0 (the int is promoted to double)
double d = c / b;          // = 1.25
float number = (float)d;   // = 1.25f
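
If you want to check this yourself, here is a small C# sketch you could drop into a scene (the class name and log call are just for illustration):

using UnityEngine;

public class DivisionTest : MonoBehaviour
{
    void Start()
    {
        float a = 5 / 4;             // integer division happens first, so a is 1
        float b = 5 / 4.0f;          // 5 is promoted to float, so b is 1.25
        float c = (float)(5 / 4.0);  // 5 / 4.0 is a double (1.25); the cast stores it in a float
        Debug.Log(a + " " + b + " " + c);   // prints: 1 1.25 1.25
    }
}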

Hello again. Thank you very much for your answers. I understand the difference now. I just want to ask one more thing to clear up any remaining questions in my mind. Which of the options below is the best way of defining variables?

var number : float = 1;
var number : float = 1.0;
var number : float = 1f;
var number : float = 1.0f;

Or for decimals, which is the best way:

var number : float = 0.5;
var number : float = 0.50;
var number : float = 0.5f;
var number : float = 0.50f;

Thank you!