Could someone please convert this into JavaScript? I don’t understand what all these letters mean, such as “0xaacff006” or “(n<<13) | (n>>19)”; I don’t think JavaScript has bitwise operators.
float Noise2d(int x, int y) {
    unsigned n = x;
    n ^= 0xaacff006;
    n *= 0xdc3deee5;
    n = (n<<13) | (n>>19);
    n += y;
    n ^= 0x0a87317a;
    n *= 0x38656b5e;
    n = (n<<13) | (n>>19);
    n += 0xa6c5636a;
    n *= 0x7d4677b3;
    n = (n<<13) | (n>>19);
    n ^= 0x57a90182;
    n *= 0xe09358ab;
    const float inv_2_31 = 1.0f / 2147483648.0f;
    float res = 1.0f - n * inv_2_31;
    return res;
}
Thanks!
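First, about those “letters”: 0xaacff006 is just an ordinary integer constant written in hexadecimal (base 16), and UnityScript understands both the notation and the bitwise operators. The expression (n<<13) | (n>>19) is a 32-bit left rotation by 13: the 13 bits pushed out on the left re-enter on the right, since 13 + 19 = 32. Isolated as a C helper (rotl13 is just an illustrative name), it looks like this:

#include <stdint.h>

/* 32-bit left-rotate by 13: the two shifts split the word so the
   bits shifted out on the left re-enter on the right (13 + 19 = 32). */
static uint32_t rotl13(uint32_t n) {
    return (n<<13) | (n>>19);
}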
This function, translated to UnityScript, becomes:
function Noise2d(x: int, y: int): float {
    var n: uint = x;
    n ^= 0xaacff006;
    n *= 0xdc3deee5;
    n = (n<<13) | (n>>19);
    n += y;
    n ^= 0x0a87317a;
    n *= 0x38656b5e;
    n = (n<<13) | (n>>19);
    n += 0xa6c5636a;
    n *= 0x7d4677b3;
    n = (n<<13) | (n>>19);
    n ^= 0x57a90182;
    n *= 0xe09358ab;
    var inv_2_31 = 1.0f / 2147483648.0f;
    var res = 1.0f - n * inv_2_31;
    return res;
}
But there’s a big problem: it compiles, but doesn’t work! This function throws overflow errors on several lines, because the runtime checks for overflow in most integer operations.
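To see the magnitude involved, here is a small C sketch (C because that is the language of the original function; the values are just for illustration). The intermediate products simply don’t fit in 32 bits: C silently wraps them modulo 2^32, which is exactly what the hash relies on, but a runtime that checks overflow throws instead.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t n = 123u ^ 0xaacff006u;             /* first step, for x = 123 */
    uint64_t exact = (uint64_t)n * 0xdc3deee5u;  /* true product needs 64 bits */
    uint32_t wrapped = n * 0xdc3deee5u;          /* C keeps it modulo 2^32 */
    printf("exact   = %llu\n", (unsigned long long)exact);
    printf("wrapped = %u\n", (unsigned)wrapped);
    return 0;
}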
If the values of x and y are relatively small (-32000…32000), the hex constants may be reduced to 16 bits, and with some care they will never overflow (see the quick check after the script). The resultant script is:
function Noise2d(x: int, y: int): float {
    var n: uint = x;
    n ^= 0xf006;
    n *= 0xeee5;
    n = ((n<<13) | (n>>19)) & 0xffff;
    n += y;
    n ^= 0x317a;
    n *= 0x6b5e;
    n = ((n<<13) | (n>>19)) & 0xffff;
    n += 0x636a;
    n *= 0x77b3;
    n = ((n<<13) | (n>>19)) & 0xffff;
    n ^= 0x0182;
    n = (n * 0x58ab) & 0xffff;
    return 1.0 - n / 32768.0;
}
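A quick check that the reduced version stays in range: after each & 0xffff mask, n is at most 0xffff (65535), and the largest 16-bit multiplier is 0xeee5 (61157). The worst-case product is 65535 * 61157 = 4007923995, which is still below the uint maximum of 4294967295, so once n is down to 16 bits the checked multiplications can never trip.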
This function works fine and returns pseudo-random values between -1 and +1.
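If you need values in the 0…1 range instead (to drive a texture or a heightmap, say), a simple remap such as Noise2d(x, y) * 0.5 + 0.5 does the job.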