Is there a reason Transforms in a Unity scene are saved with rounded precision?

Our project requires custom serialization of some object values, including the position and rotation of objects we dynamically spawn into the scene.

When saving these values from localPosition I notice that the position is stored at full precision (e.g. -12.365378379821778), but if this object is saved to a Unity scene file, I can see that it is stored as m_LocalPosition: {x: -12.365378...}.

Is there a reason for this rounding? I would have thought higher precision is desired, especially because we deal with small-scale objects that are often stacked and aligned with each other, and we hope for minimal physics jitter during spawning/unspawning. I can round our serialized values; it appears that Unity's scene serialization keeps roughly 6-7 decimal places (that's another question). Likewise, I'm uncertain whether it would be reliable/valuable to round the x/y/z/w of my serialized localRotation.

Unity serializes scene files through its native serialization code on the C++ side. A Vector3 consists of 3 float values, and they are stored as floats. JsonUtility, on the other hand, always stores numbers as doubles, as the JSON standard suggests. However, this doesn't really give you any more precision, since the actual value in memory is still only a float (a single-precision value). A float has only about 7 significant decimal digits (a 24-bit mantissa), while a double has about 15-16 significant decimal digits (a 53-bit mantissa).
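
To see this for yourself, here is a small sketch (plain Unity C#; it is not what Unity's serializer actually runs, just an illustration) that takes the kind of value from the question and shows that widening it to a double adds nothing real:

```csharp
using UnityEngine;

public class PrecisionDemo : MonoBehaviour
{
    void Start()
    {
        // The long decimal from the question is most likely just the full
        // decimal expansion of a 32-bit float in the first place.
        float f = -12.365378379821778f;

        // Shortest string that round-trips the float: roughly 7 digits.
        Debug.Log(f.ToString("G9"));

        // Widening to a double (as a JSON writer that stores doubles would do)
        // prints many more digits, but they all describe the same 32-bit value.
        double d = f;
        Debug.Log(d.ToString("G17"));

        // Casting back to float yields exactly the same value again.
        Debug.Log((float)d == f); // True
    }
}
```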

If you want to explore how the single/double formats work under the hood, you can use my little editor window, which shows the binary representations of float, double and integer values side by side. When you enter the value “-12.365378” in the upper float section and cast it to a double, you get exactly the same value, just stored as a double. Casting it back to a float gives you the same value again.
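
If you just want a quick look without installing anything, a rough stand-in for that window (a hypothetical sketch, not the editor tool itself) can dump the raw bit patterns like this:

```csharp
using System;
using UnityEngine;

public class BitView : MonoBehaviour
{
    void Start()
    {
        float f = -12.365378f;

        // Reinterpret the 32 bits of the float as an int so we can print them.
        int fb = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        Debug.Log(Convert.ToString(fb, 2).PadLeft(32, '0'));

        // Widening to a double re-biases the exponent and pads the mantissa
        // with zero bits; no new information appears.
        double d = f;
        long db = BitConverter.DoubleToInt64Bits(d);
        Debug.Log(Convert.ToString(db, 2).PadLeft(64, '0'));
    }
}
```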

You may also want to have a look at this table I've posted some time ago, which shows how precision degrades as values get larger. Precision is relative, since it's a floating-point format.
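
As a rough stand-in for that table, this sketch (the magnitudes chosen are just examples) prints the gap between neighbouring floats at a few scales, which is exactly the degradation in question:

```csharp
using System;
using UnityEngine;

public class FloatSpacing : MonoBehaviour
{
    void Start()
    {
        // The gap to the next representable float (one "ulp") grows with the
        // value, so absolute precision shrinks the further you get from zero.
        foreach (float v in new[] { 1f, 100f, 10000f, 1000000f })
        {
            int bits = BitConverter.ToInt32(BitConverter.GetBytes(v), 0);
            float next = BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
            Debug.Log($"{v}: next float is {(next - v):G9} away");
        }
    }
}
```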

You might not believe me at first, but…

Students of programming in most any language come up against this with disbelief and horror, angling their head like a dog hearing a strange sound. Mathematicians, engineers and scientists hate this, but it is a product of electrical engineering.

Unity uses float types to store values in Vector3 and Quaternion. The code that writes the data out may declare a double instead of a float, which means each float is converted to a double before being written to the file. This appears to increase precision, but that's not what is happening.

The float type is a format similar to scientific notation, using a mantissa for the “digits” and an exponent to float the point, but it isn't a base 10 format. It is a base 2 format. When the mantissa, an integer component, is scaled by the exponent (a power of two), the result will not always line up with the decimal value used to create the float. Worse, when a float is converted to a double, it seems to invent new digits, but it isn't really inventing anything. When reading a decimal expression back, it may seem to lose something: 0.2 may come back to you as 0.2, but 0.24 may come back as something like 0.23999999…
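
You can reproduce that read-back effect directly (a small sketch; the exact digits printed may vary slightly by runtime):

```csharp
using UnityEngine;

public class DecimalReadback : MonoBehaviour
{
    void Start()
    {
        // With default (shortest) formatting, 0.2 looks untouched...
        Debug.Log(0.2f.ToString());                 // 0.2

        // ...but the float underneath is not exactly 0.2, and neither is 0.24.
        // Printing the full expansion via a double exposes digits never typed.
        Debug.Log(((double)0.2f).ToString("G17"));  // 0.20000000298023224 or similar
        Debug.Log(((double)0.24f).ToString("G17")); // 0.23999999463558197 or similar
    }
}
```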

It is best to stop thinking in terms of decimal digits, especially when evaluating the precision of these types. The float offers about 6 decimal digits of precision, regardless of where the decimal point actually falls, in a way analogous to scientific notation. In fact, after those 6 digits there are a couple of bits left that offer part of a 7th digit, but since that doesn't align exactly to a base 10 “place”, it becomes confusing to think in terms of decimal digits of precision.

The IEEE 754 format for the float type has a sign bit, 8 bits for the exponent, and 23 bits for the mantissa (the digits of your number). When the mantissa is scaled by the exponent, the result doesn't exactly line up with the decimal representation. The double has a sign bit, 11 bits for the exponent, and 52 bits for the mantissa. Converting from one to the other preserves exactly the same value, but the decimal interpretation of that value seems to invent digits, giving rise to the impression of greater precision. If the source is a float, the double it converts to may look different when printed as a base 10 value, but the two are identical in value.
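
If it helps to see those fields, this hypothetical sketch pulls the sign, exponent and mantissa out of a float by hand:

```csharp
using System;
using UnityEngine;

public class Ieee754Fields : MonoBehaviour
{
    void Start()
    {
        float f = -12.365378f;
        uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(f), 0);

        uint sign     = bits >> 31;                        // 1 sign bit
        int  exponent = (int)((bits >> 23) & 0xFF) - 127;  // 8 bits, bias of 127
        uint mantissa = bits & 0x7FFFFF;                   // 23 bits (implicit leading 1)

        Debug.Log($"sign={sign} exponent=2^{exponent} mantissa=0x{mantissa:X6}");
    }
}
```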

This is just how floats and doubles are.

Another point you may want to know: these values aren't rounded (5/4 rounding), they are truncated.

The only way to get exact storage of float or double data is to save binary versions, not base 10 interpretations of them. Unfortunately the CPUs on different machines may have different binary representations (related to the little-endian or big-endian representation of integers), so it is often discouraged. If you know all target CPUs are going to use the same binary format, you could choose to read and write data in binary form (or encoded as hex text), but it won't be portable to incompatible CPU designs without conversion on load. The C# language is based on a runtime (and IL code) that assumes a theoretical CPU, so in theory the representation should be compatible on different machines, but I don't know if that is guaranteed (my primary language is C++, not C#).
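
As an illustration of the hex-text idea (a sketch, assuming you control both the writer and the reader of the data), the raw 32 bits of a float can be round-tripped exactly like this:

```csharp
using System;
using UnityEngine;

public class ExactFloatStorage : MonoBehaviour
{
    void Start()
    {
        float original = -12.365378f;

        // Write the raw 32 bits as hex text instead of a decimal string; this
        // round-trips exactly, independent of any decimal formatting choices.
        uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(original), 0);
        string stored = bits.ToString("X8");
        Debug.Log(stored);

        // Read it back: parse the hex and reinterpret the bits as a float.
        uint parsed = Convert.ToUInt32(stored, 16);
        float restored = BitConverter.ToSingle(BitConverter.GetBytes(parsed), 0);
        Debug.Log(restored == original); // True, bit-for-bit identical
    }
}
```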

Consider, too, what scale is involved. Even a float will give you about 6 significant digits of a value in meters, and since the decimal point floats, the larger the number, the fewer digits remain after the point. 3 decimal places is 1 mm accuracy, so if you have 4 digits past the decimal point you can rely upon, your accuracy is about 0.1 mm.

If your application requires accuracy beyond that, or represents magnitudes beyond this (say you have several thousand-meter-sized things among several sub-millimeter-sized things), then a game engine may not be suitable for your purposes. This is not a precise environment; it is an artistic expression of what may have real counterparts, but it is, in a sense, a puppet show of reality, not a simulation of reality.

The best you can do is rely upon local coordinate systems. While you can't expect a position on a 100 meter perimeter to be as precise as 0.01 mm, you can place an empty GameObject at that large scale that owns objects placed within 10 mm of its center, then place objects in that local space to an accuracy of about 0.01 mm or better.
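
A minimal sketch of that setup (the names and coordinates here are made up for illustration):

```csharp
using UnityEngine;

public class LocalAnchorExample : MonoBehaviour
{
    void Start()
    {
        // An empty "anchor" parent sitting far from the origin...
        var anchor = new GameObject("DistantAnchor");
        anchor.transform.position = new Vector3(5000f, 0f, 5000f);

        // ...with small parts positioned in its local space. The children keep
        // their full float precision relative to the anchor, even though the
        // anchor itself lives at a large world coordinate.
        var part = GameObject.CreatePrimitive(PrimitiveType.Cube);
        part.transform.SetParent(anchor.transform, false);
        part.transform.localPosition = new Vector3(0.0005f, 0f, 0f); // 0.5 mm offset
        part.transform.localScale = Vector3.one * 0.01f;             // 1 cm cube
    }
}
```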

This is the effect of the decimal point floating. That is to say, even though 6 digits of precision are reliably provided, that doesn't mean 6 digits beyond the decimal point, but 6 significant digits reading left to right, in a base 2 version of scientific notation. You get 6 digits for a value on the scale of 10^-5, and 6 digits for a value on the scale of 10^6, but not 6 digits to the left and 6 more to the right of the decimal point in a single float.