Why does the decimal point matter in my for loop?

I had two nearly identical blocks of code (only the variable names differed); one worked, the other didn’t quite. They’re straightforward for loops iterating through an array, and the ‘z’ variable is used in calculations within the loop.
This loop worked perfectly:

for(var z = 0.0; z < zDim; z++)

While this loop iterated through the array but set the same value on every pass:

for(var z = 0; z < zDim; z++)

The only difference is the decimal point. All my other loops seem to work fine with simple integers. Inside the loops, calculations are made with a mixture of ints and floats - does mixing int and float screw things up somewhere? I know my code works now, but it would be awesome if somebody could help me understand why.

“Does mixing int and float screw things up somewhere?” - almost certainly yes, and this is why Javascript’s implicit typing is so dangerous. Initialising with 0 makes the compiler infer z as an int, so any division by another int inside the loop is integer division and gets truncated (which is why every iteration produced the same value), whereas initialising with 0.0 infers a float and keeps the arithmetic in floating point. It’s pretty unusual for loop iterators to be anything other than ints, though, so I’d suggest you explicitly declare them as such, i.e.:

for(var z : int = 0; z < zDim; z++)...
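
To see why the untyped version set the same value every time, assume the loop body divides z by zDim somewhere (I’m guessing at your actual calculation). With z inferred as an int, that division truncates to 0 on every pass; with a float iterator it doesn’t:

// Hypothetical sketch - assumes the body does something like z / zDim
var zDim : int = 10;

for (var a = 0; a < zDim; a++)
    Debug.Log(a / zDim);      // a is inferred as int: integer division, logs 0 every time

for (var b = 0.0; b < zDim; b++)
    Debug.Log(b / zDim);      // b is inferred as float: logs 0, 0.1, 0.2, ...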

If you want to do some floating point calculations based on the value of the iterator, convert it to a float inside the loop block - multiplying by 1.0 (or assigning it to a float variable first) forces the division into floating point:

var zCoord : float = zOrigin + (z * 1.0) / zDim * density;
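
Putting it together, here’s a sketch of the whole loop - zOrigin, zDim and density are just stand-ins for whatever your script actually uses:

var zOrigin : float = 0.0;
var density : float = 2.0;
var zDim : int = 16;

for (var z : int = 0; z < zDim; z++) {
    // z * 1.0 promotes the division to floating point, so zCoord
    // gets a different value each iteration instead of a truncated 0
    var zCoord : float = zOrigin + (z * 1.0) / zDim * density;
    // ... use zCoord in your calculation here ...
}

Keeping the iterator itself as an int also means the z < zDim comparison stays exact, which is another reason it’s generally better not to drive loops with floats.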