(-0f).GetHashCode() does not equal 0f.GetHashCode(); how can I fix this quickly?

The situation is as follows (the code is reduced to make it clearer):

public class Hex {
    public float x;
    public float y;
    public float z;

    public Hex(float newX = 0f, float newY = 0f) {
        x = newX;
        y = newY;
    }

    public void Flatten() {
        z = -x - y;
    }

    public override int GetHashCode() {
        // Calculate hash based on Vector3's hash calculation
        return x.GetHashCode() ^ y.GetHashCode() << 2 ^ z.GetHashCode() >> 2;
    }
}

Hex hex = new Hex();
hex.Flatten(); // all fields now compare equal to 0f

So naturally:

Debug.Log(hex.x == hex.y); //true
Debug.Log(hex.x == hex.z); //true
Debug.Log(hex.y == hex.z); //true


Debug.Log(hex.x.GetHashCode()); //0
Debug.Log(hex.y.GetHashCode()); //0
Debug.Log(hex.z.GetHashCode()); //-2147483648

Further investigation revealed:

Debug.Log((-0f).GetHashCode()); //-2147483648

So this is a problem:

Hex hex2 = new Hex(-0f, -0f);
Debug.Log(hex.GetHashCode() == hex2.GetHashCode()); //false

Because I want to use the Hex class as a dictionary key, the hashes of hex and hex2 should be identical. Performance is very important, because the key is hashed a lot.
So what is the quickest function/calculation to fix this?

Technically, -0f and 0f are two distinct values. The IEEE 754 floating-point format has the concept of a "signed zero". Logically the two zeros are treated as equal in most cases, but not all: an equality check between them returns true, yet certain calculations yield different results. For example, 1f / 0f yields positive infinity while 1f / (-0f) yields negative infinity.
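The difference is easy to observe directly. A minimal sketch (using Console.WriteLine in place of Unity's Debug.Log so it runs outside the engine):

```csharp
using System;

class SignedZeroDemo {
    static void Main() {
        float posZero = 0f;
        float negZero = -0f;

        // Equality treats the two zeros as the same value...
        Console.WriteLine(posZero == negZero);                     // True

        // ...but division exposes the sign of the zero.
        Console.WriteLine(1f / posZero == float.PositiveInfinity); // True
        Console.WriteLine(1f / negZero == float.NegativeInfinity); // True
    }
}
```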

The only way to solve this issue is to “normalize” the zero with a seemingly pointless check:

if (val == 0f)
    val = 0f;

This ensures that if "val" is zero, it is always a positive zero. So just do this in your "Flatten" method, or inside your constructor after you have called Flatten. Note that you probably want to apply the check to all three values.
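Applied to the class from the question, that could look like the following sketch (calling Flatten from the constructor is an assumption here, just to keep the example self-contained):

```csharp
using System;

class Hex {
    public float x, y, z;

    public Hex(float newX = 0f, float newY = 0f) {
        x = newX;
        y = newY;
        Flatten();
    }

    public void Flatten() {
        z = -x - y;
        // Normalize any negative zero to positive zero so the
        // hash is identical for 0f and -0f.
        if (x == 0f) x = 0f;
        if (y == 0f) y = 0f;
        if (z == 0f) z = 0f;
    }

    public override int GetHashCode() {
        return x.GetHashCode() ^ y.GetHashCode() << 2 ^ z.GetHashCode() >> 2;
    }
}

class Program {
    static void Main() {
        Hex a = new Hex(0f, 0f);
        Hex b = new Hex(-0f, -0f);
        Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True
    }
}
```

The checks are branch-predictor friendly (almost always false for real data), so the cost per call is tiny.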

Note that GetHashCode does not calculate any special hash for float values; it simply returns the float's 32-bit pattern reinterpreted as an int. So a negative zero looks like 0x80000000 in hex, which is -2147483648 as a signed integer. Have a look at an IEEE 754 converter to see this for yourself.