You could certainly make a simple sphere and give it a radius of 6371000 meters. But that kind of model is not going to work well with Unity cameras, or with pretty much any other computer modeling system. The problem is the accuracy of the numbers involved, or more specifically their *precision*.

**An optional lesson on floating point precision.**

The default float type in Unity is a 32-bit IEEE 754 variable. It can represent numbers from 0.0 all the way up to about 3.402823466*10^38 (roughly 340 undecillion, a 34 followed by 37 zeros), or a negative value in the same range. It can even represent tiny values, like 1.175494351*10^-38, the smallest "normal" float.

HOWEVER, this magical 32-bit float variable cannot represent every possible number in that range. Just like a tape measure or ruler on your desk, there are only so many “marks”, or possible places you can measure, with empty space between the marks. Unlike a tape measure, the marks change their spacing as you move relative to **0.0**: when you’re really close to 0.0, the marks are very close together, and the farther you get from the origin, the farther apart the marks spread. Also unlike the tape measure on your desk, you can’t actually store any value other than the marks themselves; every number you hand the variable gets snapped to the nearest mark.
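You can measure the spacing of those marks directly. Here is a small Python sketch (Python's own floats are 64-bit, so it round-trips values through the 32-bit format with `struct` to imitate Unity's float) showing the gap to the next representable value growing with magnitude:

```python
import struct

def next_float32(x: float) -> float:
    """Return the next representable 32-bit float above a positive x."""
    # Reinterpret the float32 bit pattern as an integer and add 1:
    # adjacent bit patterns are adjacent representable values.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0]

for x in [1.0, 1_000.0, 1_000_000.0, 6_371_000.0]:
    gap = next_float32(x) - x
    print(f"near {x:>12,.1f} the marks are {gap} meters apart")
```

Near 1.0 the marks sit about 0.00000012 apart; at Earth-radius magnitudes they are exactly 0.5 apart.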

If you give a sphere the 6371000-meter radius of the Earth and put a 1-meter cube on its surface, the engine can only approximate the locations of that cube’s vertices according to the marks on the IEEE 754 “tape measure”, and the finest marks available at that range are 0.5 meters apart! Each corner of the cube snaps to the nearest mark, so edges can collapse to nothing or grow to nearly double their intended size. And when you move the cube, it must clunk around in 0.5-meter increments.
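You can watch that clunking happen. This sketch (again using `struct` to stand in for Unity's 32-bit float) tries to slide a point along the Earth-radius sphere in 0.1-meter steps; every stored position lands on either 6371000.0 or 6371000.5:

```python
import struct

def to_float32(x: float) -> float:
    """Round x to the nearest representable 32-bit float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Try to move a point on the Earth-radius sphere in 0.1 m steps:
for step in range(6):
    intended = 6_371_000.0 + step * 0.1
    stored = to_float32(intended)
    print(f"intended {intended:.1f} -> stored {stored}")
```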

Camera clipping planes suffer from a similar precision problem: the higher the ratio between the “far” clipping plane and the “near” clipping plane, the less depth resolution you get, and the worse your Z-fighting and Z-confusion become. To distinguish nearly-coplanar elements like windows, doors, and signs from the walls behind them at 4 kilometers away, you would need a near clipping plane a full meter away from your camera.
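A back-of-the-envelope estimate makes the near/far trade-off concrete. Assuming a conventional perspective projection with its hyperbolic depth mapping and a 24-bit depth buffer (a common default; reversed-Z and floating-point depth setups behave differently), one depth-buffer step at view distance z works out to roughly z²·(far−near)/(far·near·2²⁴) meters:

```python
def depth_resolution(near: float, far: float, z: float, bits: int = 24) -> float:
    """Rough estimate of the smallest depth difference (in meters) a standard
    perspective depth buffer can resolve at view distance z.
    Derived from the hyperbolic mapping d(z) = (f/(f-n)) * (1 - n/z):
    one buffer step of 2**-bits spans about z^2*(f-n)/(f*n*2**bits) meters."""
    return z * z * (far - near) / (far * near * 2 ** bits)

# Windows vs. walls at 4 km, with the near plane at 1 m vs. 0.1 m:
print(depth_resolution(near=1.0, far=5000.0, z=4000.0))  # about 0.95 m
print(depth_resolution(near=0.1, far=5000.0, z=4000.0))  # about 9.5 m
```

Pulling the near plane in from 1 m to 0.1 m multiplies the far/near ratio by ten, and the resolvable depth gap at 4 km grows from under a meter to nearly ten meters.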

**The practical upshot of all this.**

All of game design and game development (and computing in general) revolves around verisimilitude. You have to approximate, you have to pick your battles, you have to select what needs to be real and fake the rest. You need to load just the terrain you can see, while other users in the same online world may be loading an entirely different chunk of terrain. As you transition from area to area, you need to unload what you can no longer see, so you have room to load more important elements.
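That load-what-you-can-see idea can be sketched as a grid of square terrain chunks kept loaded around the player. The chunk size, view radius, and function names here are illustrative, not any particular engine's API:

```python
# Minimal chunk-streaming sketch: keep only terrain chunks near the player.
CHUNK_SIZE = 256.0   # meters per square terrain chunk (illustrative)
VIEW_CHUNKS = 2      # keep a (2*2+1)^2 grid of chunks around the player

def visible_chunks(player_x: float, player_z: float) -> set:
    """Grid coordinates of every chunk within view of the player."""
    cx, cz = int(player_x // CHUNK_SIZE), int(player_z // CHUNK_SIZE)
    return {(cx + dx, cz + dz)
            for dx in range(-VIEW_CHUNKS, VIEW_CHUNKS + 1)
            for dz in range(-VIEW_CHUNKS, VIEW_CHUNKS + 1)}

def update_streaming(loaded: set, player_x: float, player_z: float):
    """Diff the loaded set against what should be visible now."""
    wanted = visible_chunks(player_x, player_z)
    to_load = wanted - loaded       # new chunks entering view
    to_unload = loaded - wanted     # chunks left behind, freed for reuse
    return wanted, to_load, to_unload
```

Each time the player crosses a chunk boundary, one edge row of chunks streams in and the opposite row streams out, so memory use stays constant no matter how large the world is.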