I’m working on a 2D open-world game. The game begins with some ‘minor’ introductory scenes, and then the open-world part begins.
And well, that’s when I hit my doubt: should the open world be implemented as one enormous scene containing all of the GameObjects of the open-world part… or as multiple smaller scenes, with the downside of introducing loading times / transitions which would (could?) impact the player’s experience negatively?
Obviously, my preference goes towards one big scene, for a number of reasons. But I lack enough experience in Unity 2D to properly evaluate the impact of this design choice in the long run, especially from a performance point of view. So these are my two main concerns (apart from other standard ones, such as memory usage):
Unity’s culling implementation. This is an open point for me: how effective is it? Does Unity actually cut out the cost of rendering GameObjects currently not seen by the player (outside the camera view)?
Update() and FixedUpdate() routines. Most of the ones on the GameObjects I’m implementing actually do nothing as long as the player is not fighting them or close enough to the GameObject to trigger some ‘activate’ routine. But still: those cycles are running. What’s the actual impact of having a ton of GameObjects running through their Update and FixedUpdate routines, even if most of them do nothing?
As a strategy to mitigate the performance hit in this scenario, I was thinking about a scanner routine on the player. This would run every X seconds and check for such GameObjects around the player’s position, activating them only if their distance is below a certain threshold. That could mitigate performance issues by toggling the ‘active’ state of GameObjects, but… would that itself have an impact? Is this a shot in the dark?
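Roughly what I have in mind, as an untested sketch (the interval, radius and list of managed objects are just placeholders):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Rough idea of the "scanner" on the player (untested sketch).
public class ProximityScannerSketch : MonoBehaviour
{
    public float scanInterval = 1f;         // the "every X seconds"
    public float activationRadius = 15f;    // the distance threshold
    public List<GameObject> managedObjects; // all the world objects I'd want to toggle

    IEnumerator Start()
    {
        var wait = new WaitForSeconds(scanInterval);
        while (true)
        {
            Vector2 myPos = transform.position;
            foreach (var go in managedObjects)
            {
                bool shouldBeActive =
                    Vector2.Distance(myPos, go.transform.position) < activationRadius;
                if (go.activeSelf != shouldBeActive)
                    go.SetActive(shouldBeActive); // toggle only when the state actually changes
            }
            yield return wait;
        }
    }
}
```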
So, in conclusion: do you have any suggestions on one scene vs. multiple scenes? What’s the best setup to accommodate Unity’s engine? Am I ignoring other concerns or possible solutions / workarounds that I did not mention in this thread?
Every performance parameter is dependent on the hardware you’re running it on. How it runs on a $5,000 ultra crazy gaming rig is going to be different from how it runs on a $50 cheapo Android device.
Everything else will be somewhere in between.
Predicting the performance of unwritten software on unspecified hardware is a complete waste of everybody’s time.
Organizing the project as one scene vs many scenes is a matter of preference, workflow, possibly performance and many other concerns.
Additional reading:
DO NOT OPTIMIZE “JUST BECAUSE…” If you don’t have a problem, DO NOT OPTIMIZE!
If you DO have a problem, there is only ONE way to find out. Always start by using the profiler:
Window → Analysis → Profiler
Failure to use the profiler first means you’re just guessing, making a mess of your code for no good reason.
Not only that but performance on platform A will likely be completely different than platform B. Test on the platform(s) that you care about, and test to the extent that it is worth your effort, and no more.
Remember that optimized code is ALWAYS harder to work with and more brittle, making subsequent feature development difficult or impossible, or incurring massive technical debt on future development.
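As a side note (just a sketch, not part of the notes below), on recent Unity versions you can also add your own profiler markers so your scripts show up clearly in the Profiler window; the marker name here is arbitrary:

```csharp
using Unity.Profiling;
using UnityEngine;

// Sketch: a custom marker makes a specific piece of your code easy to spot in the Profiler.
public class ProfiledBehaviourSketch : MonoBehaviour
{
    static readonly ProfilerMarker scanMarker = new ProfilerMarker("MyGame.ProximityScan");

    void Update()
    {
        using (scanMarker.Auto()) // everything inside shows up under "MyGame.ProximityScan"
        {
            // ... the work you want to measure ...
        }
    }
}
```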
Notes on optimizing UnityEngine.UI setups:
At a minimum you want to clearly understand what performance issues you are having:
running too slowly?
loading too slowly?
using too much runtime memory?
final bundle too large?
too much network traffic?
something else?
If you are unable to engage the profiler, then your next solution is gross guessing changes, such as “reimport all textures as 32x32 tiny textures” or “replace some complex 3D objects with cubes/capsules” to try and figure out what is bogging you down.
Each experiment you do may give you intel about what is causing the performance issue that you identified. More importantly, it may let you eliminate candidates for optimization. For instance, if you swap out your biggest textures with 32x32 stamps and you STILL have a problem, you may be able to eliminate textures as an issue and move on to something else.
This sort of speculative optimization assumes you’re properly using source control, so it takes one click to revert to the way your project was before if there is no improvement, while you carefully make notes about what you have tried and, more importantly, what results it has had.
Additive scene loading is one possible solution:
A multi-scene loader thingy:
My typical Scene Loader:
Other notes on additive scene loading:
Timing of scene loading:
Also, if something exists only in one scene, DO NOT MAKE A PREFAB out of it. It’s a waste of time and needlessly splits your work between two files, the prefab and the scene, leading to many possible errors and edge cases.
Two similar examples of checking if everything is ready to go:
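Purely as a sketch of the additive pattern (this is not the code behind the links above; the scene names are placeholders, and the scenes must be in your Build Settings):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: additively load a neighbouring scene and unload it later.
public class AdditiveLoaderSketch : MonoBehaviour
{
    IEnumerator LoadChunk(string sceneName)
    {
        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
        while (!op.isDone)   // wait until Unity reports the scene as ready
            yield return null;
        Debug.Log(sceneName + " loaded additively.");
    }

    IEnumerator UnloadChunk(string sceneName)
    {
        AsyncOperation op = SceneManager.UnloadSceneAsync(sceneName);
        while (!op.isDone)
            yield return null;
        Debug.Log(sceneName + " unloaded.");
    }
}
```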
Unity does do culling based on the “bounds” of whatever is being rendered. You can observe this with the isVisible property of any renderer: https://docs.unity3d.com/ScriptReference/Renderer-isVisible.html
However, this check happens constantly in the background, so it does add some workload.
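If it helps, here is a tiny sketch (just an illustration) of using the visibility callbacks to skip per-frame work while off screen; it needs a Renderer on the same GameObject, and keep in mind the Scene view camera also counts in the editor:

```csharp
using UnityEngine;

// Sketch: skip per-frame logic while the renderer is culled.
public class PauseWhenCulled : MonoBehaviour
{
    bool visible;

    void OnBecameVisible()   { visible = true; }
    void OnBecameInvisible() { visible = false; }

    void Update()
    {
        if (!visible)
            return; // Update still runs, but does almost nothing while off screen

        // ... actual per-frame behaviour here ...
    }
}
```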
Are the Update() and FixedUpdate() calls truly needed? Couldn’t the player itself launch some coroutine on the objects when it is nearby instead (see the sketch below)?
So-called “busy waiting” is indeed a possible performance killer if you have more than 10,000 objects on a mid-range desktop (on mobile the threshold is way lower).
Disabling the objects entirely does help, although they still leave a memory footprint.
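One sketch of that “player activates nearby objects” idea (the component name and trigger setup are placeholders; it assumes a trigger CircleCollider2D plus a Rigidbody2D on the player):

```csharp
using UnityEngine;

// Placeholder for whatever per-object behaviour you want to wake up.
public class EnemyBehaviourSketch : MonoBehaviour
{
    void Update() { /* expensive AI would live here */ }
}

// Sketch: a trigger collider on the player enables/disables the behaviour on whatever it touches.
public class ActivationTrigger : MonoBehaviour
{
    void OnTriggerEnter2D(Collider2D other)
    {
        var behaviour = other.GetComponent<EnemyBehaviourSketch>();
        if (behaviour != null)
            behaviour.enabled = true;  // its Update() starts being called again
    }

    void OnTriggerExit2D(Collider2D other)
    {
        var behaviour = other.GetComponent<EnemyBehaviourSketch>();
        if (behaviour != null)
            behaviour.enabled = false; // goes back to sleep when the player walks away
    }
}
```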
As for the actual impact, it depends a lot on what your target hardware is. Gaming desktops or low-end smartphones?
Note that objects in the scene also have a memory cost, and so do textures (and meshes). Especially if your open world is so large that it has different “biomes” of some sort, doing everything in one scene directly would mean that all textures are constantly in memory, even if the player would need to wander ten minutes to reach the location where a particular texture is used.
Besides splitting into completely separate scenes, there are many techniques you can use. Additive loading/unloading of scenes is one option. A common technique is to store the data needed to instantiate all (or most) objects in some grid data structure. Then you dynamically load/unload cells of that grid and instantiate using an object pool (in that case you would additionally have to manage the texture memory yourself).
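A very rough sketch of that grid idea (cell size, the SpawnData format and the missing pooling are all placeholders; filling cellData from disk or a ScriptableObject is left out):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of grid-based streaming: spawn objects only in cells near the player.
public class GridStreamerSketch : MonoBehaviour
{
    [System.Serializable]
    public class SpawnData { public GameObject prefab; public Vector2 position; }

    public float cellSize = 20f;
    public int radiusInCells = 2;
    public Transform player;

    // All spawn data, keyed by grid cell. Fill this from your own data source.
    readonly Dictionary<Vector2Int, List<SpawnData>> cellData = new Dictionary<Vector2Int, List<SpawnData>>();
    // Currently instantiated objects per loaded cell.
    readonly Dictionary<Vector2Int, List<GameObject>> loaded = new Dictionary<Vector2Int, List<GameObject>>();

    Vector2Int CellOf(Vector2 p) =>
        new Vector2Int(Mathf.FloorToInt(p.x / cellSize), Mathf.FloorToInt(p.y / cellSize));

    void Update()
    {
        Vector2Int center = CellOf(player.position);

        // Load any cell within the radius that is not loaded yet.
        for (int x = -radiusInCells; x <= radiusInCells; x++)
            for (int y = -radiusInCells; y <= radiusInCells; y++)
            {
                Vector2Int cell = center + new Vector2Int(x, y);
                if (!loaded.ContainsKey(cell) && cellData.TryGetValue(cell, out var spawns))
                {
                    var objs = new List<GameObject>();
                    foreach (var s in spawns)
                        objs.Add(Instantiate(s.prefab, s.position, Quaternion.identity)); // a pool would fetch here
                    loaded[cell] = objs;
                }
            }

        // Unload cells that drifted out of range.
        var toUnload = new List<Vector2Int>();
        foreach (var kv in loaded)
            if (Mathf.Abs(kv.Key.x - center.x) > radiusInCells || Mathf.Abs(kv.Key.y - center.y) > radiusInCells)
                toUnload.Add(kv.Key);
        foreach (var cell in toUnload)
        {
            foreach (var go in loaded[cell]) Destroy(go); // a pool would release here instead
            loaded.Remove(cell);
        }
    }
}
```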
[quote=“kzkkzm, post:1, topic: 913968, username:kzkkzm”]
As a strategy to mitigate the performance hit in this scenario, I was thinking about a scanner routine on the player. This would run every X seconds and check for such GameObjects around the player’s position, activating them only if their distance is below a certain threshold. That could mitigate performance issues by toggling the ‘active’ state of GameObjects, but… would that itself have an impact? Is this a shot in the dark?
[/quote] This technique can work as well. However, try not to keep a huge list of Transforms and then access their positions all the time, because 1. that causes a call crossing the boundary over to Unity’s C++ side and 2. it means a huge number of distance checks. A custom “spatial data structure” tends to be far better for determining what’s nearby (you can find a lot of info on those online).
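To give an idea of what I mean by a spatial data structure, here is a minimal uniform-grid (spatial hash) sketch; the cell size and the use of GameObject as the stored type are arbitrary:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a uniform spatial hash: instead of distance-checking every object,
// only the handful of cells around the player are inspected.
public class SpatialHash2D
{
    readonly float cellSize;
    readonly Dictionary<Vector2Int, List<GameObject>> cells = new Dictionary<Vector2Int, List<GameObject>>();

    public SpatialHash2D(float cellSize) { this.cellSize = cellSize; }

    Vector2Int CellOf(Vector2 p) =>
        new Vector2Int(Mathf.FloorToInt(p.x / cellSize), Mathf.FloorToInt(p.y / cellSize));

    // Register a (mostly static) object once, instead of checking it on every scan.
    public void Add(GameObject go)
    {
        Vector2Int cell = CellOf(go.transform.position);
        if (!cells.TryGetValue(cell, out var list))
            cells[cell] = list = new List<GameObject>();
        list.Add(go);
    }

    // Gather everything in the 3x3 block of cells around a position.
    public List<GameObject> QueryAround(Vector2 position)
    {
        var result = new List<GameObject>();
        Vector2Int center = CellOf(position);
        for (int x = -1; x <= 1; x++)
            for (int y = -1; y <= 1; y++)
                if (cells.TryGetValue(center + new Vector2Int(x, y), out var list))
                    result.AddRange(list);
        return result;
    }
}
```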
It’s best if you experiment yourself to see where the limits are and to familiarize yourself with the profiling tools. For example, build some scripts that replicate a quick sample structure across a huge landscape and randomly swap textures as well.
Also note that an actual build of the game may be a fair bit faster than running in the editor.
I made an open-world 3D game for mobile using this asset. I was loading scenes in and out based on proximity. The world was built in sectors, and this method made managing memory realistic. This is the additive technique others are mentioning above. Profiling was key to knowing how many objects and how much activity a single sector group could handle, given that groups of sectors were always loaded in at the same time. For example, if I knew I had a busy town, I kept the next busy town at minimum 3 sectors away.
You should do one large scene, but you are not supposed to have all the GameObjects in the scene at once. You are supposed to do it like Minecraft, where you load objects and terrain from disk based on proximity to the player. You would do this in chunks: whenever the player approaches a chunk, that chunk is loaded and all the terrain and objects in that area are loaded into the game from disk. By the same token, when the player leaves an area, that area is unloaded and saved to disk. This ensures that you only ever have the objects and terrain you actually need, which keeps performance good.
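A rough sketch of the save/unload-to-disk part of that idea (the ChunkData format and file naming are placeholders, assuming JsonUtility is enough for your data):

```csharp
using System.IO;
using UnityEngine;

// Placeholder format: whatever each chunk needs to be rebuilt later.
[System.Serializable]
public class ChunkData
{
    public int cellX;
    public int cellY;
    public Vector2[] objectPositions; // positions of objects living in this chunk
}

// Sketch: persist a chunk's contents to disk when it unloads and read it back on load.
public static class ChunkDiskIO
{
    static string PathFor(int cellX, int cellY) =>
        Path.Combine(Application.persistentDataPath, $"chunk_{cellX}_{cellY}.json");

    public static void Save(ChunkData chunk)
    {
        File.WriteAllText(PathFor(chunk.cellX, chunk.cellY), JsonUtility.ToJson(chunk));
    }

    public static ChunkData Load(int cellX, int cellY)
    {
        string path = PathFor(cellX, cellY);
        if (!File.Exists(path))
            return null; // chunk has never been visited/saved yet
        return JsonUtility.FromJson<ChunkData>(File.ReadAllText(path));
    }
}
```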