Floating Point Errors and Large Scale Worlds

Hello,

I am currently working with a team on a project involving a large scale world.

I currently believe Unity supports anything up to 99,999.99 meters away from the origin point (1 meter = 1 Unity unit). That range gives precision down to about 1 cm, if I am thinking of this correctly, and staying within 9,999.999 meters of the origin gives you roughly 1 millimeter precision. I am looking for confirmation that this is in fact how it works, and to be told if I am missing something in the calculations.
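For anyone who wants to sanity-check those numbers, here is a minimal sketch (plain C#, not tied to any Unity API; the names are mine) that prints the gap between adjacent 32-bit float values at a few distances from the origin, which is the best-case positional precision available there:

```csharp
// Minimal sketch: best-case positional precision of a 32-bit float at a given
// distance from the origin, measured as the gap to the next representable value.
using System;

class FloatSpacing
{
    // Next representable float above x, obtained via its raw bit pattern.
    static float NextUp(float x)
    {
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(x), 0);
        return BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
    }

    static void Main()
    {
        foreach (float d in new[] { 1000f, 5000f, 10000f, 100000f })
        {
            float spacingMeters = NextUp(d) - d;
            Console.WriteLine($"{d,8:N0} m -> ~{spacingMeters * 1000f:F3} mm between representable positions");
        }
    }
}
```

Roughly speaking, that comes out to about 1 mm of granularity near 10,000 m and about 8 mm near 100,000 m, which lines up with the roughly 7 significant decimal digits a float carries.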

Thank you in advance,

1 Like

I remember the following post, where an animated model was placed at different positions and started to jitter/wobble quite noticeably at around 5000 units away from the world origin.

The forum contains tons of large-world related posts, and floating-point accuracy as well as origin shifting comes up every now and then as well.

I believe the ultimate conclusion is:
As of April 2018, there are other game engines that would be a better match for creating large worlds.

7 Likes

Agreed. Basically, operating just beyond the 5 km mark will produce issues, depending on what you do. Knowing this limitation, you put in place some world-origin shifting once the action moves beyond a certain threshold.
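For reference, a minimal origin-shifting sketch along those lines could look like the following; the component name, the threshold, and the idea of moving every root object are assumptions for illustration, not an official Unity feature:

```csharp
using UnityEngine;

// Hedged sketch of a floating-origin component: once the tracked target drifts
// too far from the world origin, shift all root objects back so coordinates
// stay small while the world looks unchanged.
public class FloatingOrigin : MonoBehaviour
{
    public Transform target;          // usually the player or the camera
    public float threshold = 1000f;   // distance at which we re-center

    void LateUpdate()
    {
        Vector3 offset = target.position;
        if (offset.magnitude < threshold) return;

        // Move every root object by the opposite offset.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;

        // Anything else that stores absolute world positions (particle systems,
        // trails, AI waypoints, networked state, ...) has to be shifted as well.
    }
}
```

In practice most of the complexity is in that last comment: the shift itself is trivial; keeping every absolute-position consumer consistent is not.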

I wonder if Unity has plans for a 64-bit engine.

1 Like

That does not mean what you think it means.

64-bit precision transforms are what you mean.

The engine itself already supports 64-bit architectures and has done so for a long time.

2 Likes

Sure, that’s what I meant.

There’s a video of Star Citizen that illustrates what you can get with 64-bit math. I’ll post it here if I find it.

5 Likes

5000 units is too small IMHO :( Just look at this: http://davenewson.com/posts/2013/unity-coordinates-and-scales.html

Yeah, it depends on the operation as well. For instance, make a sphere of scale 5000, move the camera near the surface, and do a raycast at the mouse position. The hit position won’t be very accurate, and if you do a Transform.TransformPoint the result will suffer from, or amplify, the precision issue.
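To make that concrete, here is a rough probe (the numbers and names are just for illustration) that parks an object far from the origin and measures the error of a TransformPoint round trip:

```csharp
using UnityEngine;

// Rough illustration: the further an object sits from the origin, the larger
// the error of transforming a local point to world space and back becomes,
// because the intermediate world-space value has fewer bits left for fractions.
public class PrecisionProbe : MonoBehaviour
{
    void Start()
    {
        transform.position = new Vector3(50000f, 0f, 0f);

        Vector3 local = new Vector3(0.123f, 0.456f, 0.789f);
        Vector3 world = transform.TransformPoint(local);
        Vector3 roundTrip = transform.InverseTransformPoint(world);

        Debug.Log("Round-trip error: " + (roundTrip - local).magnitude);
    }
}
```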

The 32-bit limitation makes you look for creative options, which can work but make the system more complex to develop and maintain, and more prone to side effects in a large-scale environment.

I wonder what prevents Unity from adding support for 64-bit fields in the Mesh, Transform and Vector structures. Is it an underlying platform-compatibility issue?

1 Like

Probably platform compatibility, as you say. Since Unity supports so many platforms, I think it’s hard for them to make such a huge change. I honestly think it will happen, but we just need to wait. Just look at how Unity evolved from 5.6 to 2018.1: that’s a huge step, and they added some good features. And there’s always a solution like world-origin shifting; if I’m not mistaken, UE4 uses the same workaround, but there it’s built-in, I guess.

1 Like

I replicated the test: I whipped up an animation and a character mesh with a decently high vertex count. The mesh is around 1.5 meters in height. I tested the mesh at 100, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000, then 50000 and then 100000 units (meters in this case).

For close-up shots on animated meshes there is visible vertex jitter at around 2000-3000 units. However, the jitter would be unnoticeable in medium- and long-range shots all the way out to 10,000 units. Close-range shots hold up to around 3000 units, and extreme close-range shots to around 1000 units.

Although Star Citizen does have some very impressive tech, a similar multiplayer implementation has been around since the early 2000s, when Guild Wars was in production.

1 Like

Off the top of my head…

64-bit floating-point operations may not be hardware-supported on all the platforms they target, so they would get software-emulated, and that is slow.

The "double" type consumes twice as much memory as "float", so every Vector3 would grow from 3 × 4 = 12 bytes to 3 × 8 = 24 bytes. That would render every existing Unity data format that currently contains these types, e.g. asset bundles, saved games, etc., incompatible.

Larger data types cause more bandwidth usage as well. With all the cache-line optimization talk going on here lately, using larger data types is probably also counterproductive in this regard.
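As a concrete illustration of the memory point, with hypothetical structs rather than Unity's own types:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical vector structs, only here to show the size difference.
struct Vector3f { public float x, y, z; }    // 3 * 4 bytes
struct Vector3d { public double x, y, z; }   // 3 * 8 bytes

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Vector3f))); // 12
        Console.WriteLine(Marshal.SizeOf(typeof(Vector3d))); // 24
    }
}
```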

1 Like

Yep, I can understand the performance trade-off, though for some applications we’re already abusing double data types. And it’s no fun to ping-pong to float every time you need to translate the data into graphics entities.
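One common pattern behind that ping-ponging, sketched here with a hypothetical Vector3d struct rather than any Unity type, is to keep simulation state in doubles and only drop to float relative to a nearby reference point, so the large-magnitude subtraction happens in double precision:

```csharp
using UnityEngine;

// Hypothetical double-precision position type for simulation state.
public struct Vector3d
{
    public double x, y, z;

    public Vector3d(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    // Convert to a float vector relative to a reference origin (e.g. the camera),
    // subtracting in double precision before the narrowing cast to float.
    public Vector3 ToRenderSpace(Vector3d origin)
    {
        return new Vector3((float)(x - origin.x),
                           (float)(y - origin.y),
                           (float)(z - origin.z));
    }
}
```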

My question was more about the cost of adding native double precision as an option in the engine. I guess any change to such low-level components would quickly trigger a lot of dependency issues.

A built-in origin-shifting solution can help in some scenarios, but it’s a kind of hack that works for some cases; it isn’t ideal when you really need that precision in your data.

1 Like

I really think a well-implemented origin shift can work perfectly, since many open-world games use UE4, which uses that hack.

Agreed, better than no support at all.

Hello Guys,

I am an environment modeler, and I had a discussion with my level designer. He said that I need to make stuff with clean numbers. For example, if I am making a cube, it has to be 1 m in height and width so he can use grid snapping. So the snap value will be 1 m inside Unity, because in Unity 1 unit = 1 m.

Which I understand, and I am fine with it. But he refuses to use vertex snapping. I asked him why, and he said: “Vertex snapping gives a value like 1.9899 (just an example, not an exact number), which is bad for the game engine in terms of programming issues, and it is also bad level design to have unclean numbers.”

Unity created vertex snapping for a reason: to snap. It is just another snapping option to me. I can see that it would be an issue in a huge open world, but what we are doing here is a closed indoor environment.

Does this affect level design in any way?

Is using vertex snapping a bad practice, as he says?

If assets don’t align using grid snapping, is it really bad to use vertex snapping?

So even if a real-world asset is 1.78 meters, should I have to round it to 1.5 m or 2 m just to satisfy the need for grid snapping?

Please shed some light here. I am really tired of having this argument with the people around me. If there is a best-practices guide, that would be helpful.

I also understand the importance of grid snapping when I make modular asset pieces or tiling assets; I build them in increments of a 1 m grid. But other than that, I really need clarification on this.

Thank you.
SK

Your designer should have been able to provide a better explanation. There are a lot of possible reasons why uniform sizing would be good. But absent any of those, it doesn’t really matter.

It’s only bad for programming if you actually have a case where you have to deal with it in code. If that doesn’t exist, then the argument is a red herring.

Vote for double precision world positioning here:

https://feedback.unity3d.com/suggestions/double-precision-for-worldspace-and-objects-and-vector-slash-math-64bit

It is currently the second highest voted feedback in the graphics category. Just a little bit more of a push to get to #1.

1 Like

According to this PhysX thread from 2015, double precision is not supported by PhysX. Thus I guess UT will not bother implementing it either.
https://devtalk.nvidia.com/default/topic/627742/physx-and-physics-modeling/can-i-change-pxreal-to-use-double-precision-in-the-free-version-of-physx-/

To my knowledge, double precision is supported by every major platform (at the very least all three desktop platforms as well as the consoles). ARM has had hardware support since ARMv6, which means mobile devices as far back as the original iPhone.

Performance is a completely different story. You’d have to translate from double precision to single precision before sending the data to the graphics card, as even the most recent graphics cards are 32 times slower with doubles than they are with singles. Processors are generally at least half as fast with doubles.