Scene size limit?

Hiya!
So I’m kinda new at Unity, and trying to figure out some technical parts of it before I dive into Unity head-first :slight_smile:

My question is: say you’ve got a continent about 70 km across on the x-axis and 30 km across on the z-axis, containing many small villages, towns, and meshes in general; would it be possible to create it as one large scene, given that you use occlusion culling as much as possible and lower the level of detail on objects farther away? I mean, of course it is possible, but given average computer performance and current technological limits, would the scene be playable without any major lag?

I hope that came out right; do ask if I said something funny! :slight_smile:

Thanks in advance!

Hi, I’m somewhat new to the Unity forum, but experienced in game programming in general, so I’ll say what I know.

Actually, there is no defined limit on the size of the objects you include in the scene, but you should be aware that things very far from the world origin can get rendered badly. Since single-precision floating-point numbers are only precise to about seven significant digits, the farther you are from the origin, the less absolute precision you have. At 70 km (which I take to be 70,000 world units) there will already be problems rendering things at the centimeter/millimeter scale.
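To make the precision loss concrete, here’s a minimal sketch (plain Python, using struct to emulate the 32-bit floats Unity stores positions in; the f32 helper is just for illustration) showing that a millimeter offset simply vanishes 70,000 units from the origin:

```python
import struct

def f32(x):
    """Round a Python float to 32-bit (single) precision, as Unity stores positions."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, a 1 mm offset survives:
assert f32(f32(1.0) + f32(0.001)) != f32(1.0)

# 70,000 units out, the same 1 mm offset is rounded away entirely:
assert f32(f32(70000.0) + f32(0.001)) == f32(70000.0)

# The smallest representable step near 70,000 is 2**-7 = 0.0078125,
# i.e. nothing finer than ~8 mm exists at that distance from the origin:
print(f32(70000.0 + 0.0079) - f32(70000.0))  # 0.0078125
```

So at 70 km out, positions (and everything derived from them, like vertex placement and physics) snap to a grid of roughly 8 mm, which is where the visual jitter comes from.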

I’d advise dividing the world you are planning into smaller, world-centered pieces, and using the Application.LoadLevelAdditive function to make the loading seamless. It’ll require shifting coordinates by script, of course.
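As a sketch of the coordinate bookkeeping that shifting implies (language-agnostic; the 10 km piece size and the function name are made up for illustration), each absolute world position maps to a piece index plus a piece-local position that stays near the origin:

```python
CHUNK = 10000.0  # hypothetical piece size: 10 km, keeping local coords in the precise range

def chunk_and_local(x, z, chunk=CHUNK):
    """Map absolute world coords (x, z) to (piece indices, piece-local coords).

    The local coords are what you'd actually feed the engine after
    recentring the additively loaded piece on the world origin.
    """
    ix, iz = int(x // chunk), int(z // chunk)
    return (ix, iz), (x - ix * chunk, z - iz * chunk)

# A point 43.2 km east and 28 km north lands in piece (4, 2),
# at local coords that never exceed 10,000 units:
print(chunk_and_local(43210.5, 27999.0))  # ((4, 2), (3210.5, 7999.0))
```

The same mapping in reverse tells you which neighbouring pieces to load additively as the player approaches a piece boundary.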

As for the occlusion culling, you have to balance it. Used too heavily, the CPU overworks; used too little, the GPU overworks. Although I think today’s computers and GPUs should cope fine.

Among other things that can help performance is adjusting the camera’s far plane to a small value, so only nearby things get rendered. I’m pretty sure that objects outside the camera frustum add no (or very, very little) performance cost. You could even render two cameras in the same position at the same time: one could render faraway things at low detail (like mountains and so on), and the other nearby things at high detail. Cameras in Unity can render objects selectively by organizing them into layers.

Hope some of this helps! :wink:

I’d try to keep everything within 10K units from the origin, since anything beyond that suffers from increasing floating-point precision loss.

–Eric

So, if I’d divide the world into 7x3 pieces (or more, of course), use the aforementioned function for seamless loading, and balance the occlusion culling as much as I can, I should be in the clear?

We’re still talking a lot of models per piece, but if I plan forest regions etc. to occlude the rendered view so that the character never sees, let’s say, over a hundred meters (or units, or whatever), this should in theory work? :slight_smile:

Sorry for the delay, but yes, your reasoning seems accurate to me. In the end, though, you should test performance as you build the game and change the scene or code accordingly. If one thing runs with bad performance, it’ll run with bad performance everywhere.

As for the models, if one has to appear many times (like a tree), try to reduce its triangle count to the minimum possible, even if today’s GPUs can handle lots of triangles per second. You could even use LOD for the models. And you should be able to tweak things like occlusion, camera planes, fog and so on at runtime; so, for example, when you enter a forest you limit the view distance to 100 m, and set it back to 1 km when exiting.
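The distance-based LOD idea boils down to a threshold table; here’s a tiny sketch (the thresholds and names are made up for illustration, not a Unity API):

```python
# Hypothetical thresholds: (max distance in world units, detail level).
# Level 0 = full mesh, higher levels = coarser meshes; None = culled.
LOD_LEVELS = [(100.0, 0), (400.0, 1), (1000.0, 2)]

def lod_for_distance(d, levels=LOD_LEVELS):
    """Pick which detail level to show an object at camera distance d."""
    for max_dist, level in levels:
        if d <= max_dist:
            return level
    return None  # beyond the last threshold: don't render at all

print(lod_for_distance(50.0))    # 0  (full detail up close)
print(lod_for_distance(500.0))   # 2
print(lod_for_distance(5000.0))  # None
```

Entering the forest in the example above would just mean swapping in a table whose last threshold is 100 m.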

But as i said, it’s continuous test and tweak… :slight_smile:

Sorry for the delay on my behalf as well! :slight_smile:

I think the question has been answered now, thanks to everyone who answered it! This will help me a lot! :slight_smile: