Hello!
Not sure if it's just me, but I feel like the scripting part of the engine is quite underdeveloped. It has been several years without any serious architecture changes in the scripting API, and no improvements to scripting or game architecture appear in the roadmaps. I posted several ideas about a year ago and still have not gotten any response.
I just feel like the scripting part of the engine is left without any clear direction or improvements. And while the scripting is decent, there are many issues with Unity when a project scales (i.e. when creating a real game, not just a dude who can shoot and jump). The community develops its own various ways to deal with Unity's shortcomings, but this fragments the community a lot, and I feel like Unity lacks an "official way of doing things".
Let me give several examples.
Event system: a long tail of backward compatibility
Events are used in various ways, both in the engine and in user code. What the engine lacks, in my opinion, is a consistent event system used both for internal engine features and in user code.
We have UI components that mostly use UnityEvent, which can be used both programmatically and via the inspector. But Rigidbodies, for example, do not use them and instead rely on magic methods like OnCollisionEnter. Canvas items may implement IPointerXXXHandler to react to mouse input. Localization uses plain C# events (LocalizationSettings.SelectedLocaleChanged += xxx), and so on.
On the user side, we can use UnityEvent, one of a hundred event assets from the Asset Store, or native C# events.
Why can't we have one single way of dealing with events in Unity instead? It just feels like a bunch of random patterns thrown into a single bucket.
I'd like to use a single pattern like:
button.onClick.AddListener(…)
rigidBody.onCollisionEnter.AddListener(…)
camera.onPreCull.AddListener(…)
localization.onSelectedLanguageChanged.AddListener(…)
qualitySettings.onQualityLevelChanged.AddListener(…)
And so on.
Ideally, that event system would be auto-managed (like UnityEvent), generic, and able to be broadcast or listened to both locally (someInstance.someEvent.AddListener) and globally ("listen to all XXX events that happen").
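To illustrate the kind of unified pattern I mean, here is a rough sketch in plain C#. Everything here (GameEvent&lt;T&gt;, ListenGlobally) is hypothetical, just my illustration of the pattern, not an existing Unity API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: one event type usable both per-instance and globally.
public class GameEvent<T>
{
    // Global listeners receive every invocation of any GameEvent<T>.
    static readonly List<Action<T>> globalListeners = new List<Action<T>>();

    readonly List<Action<T>> listeners = new List<Action<T>>();

    public void AddListener(Action<T> listener) => listeners.Add(listener);
    public void RemoveListener(Action<T> listener) => listeners.Remove(listener);

    // "Listen to all XXX events that happen", regardless of the source instance.
    public static void ListenGlobally(Action<T> listener) => globalListeners.Add(listener);

    public void Invoke(T payload)
    {
        foreach (var local in listeners) local(payload);
        foreach (var global in globalListeners) global(payload);
    }
}
```

With something like this built into the engine (plus automatic listener cleanup on object destruction), button.onClick, rigidBody.onCollisionEnter and the rest could all share one type and one subscription mechanism.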
Lots of deprecated stuff all over the place
When will we finally remove deprecated/antipattern stuff like MonoBehaviour.camera or Camera.main? A lot of stuff has a long history of being kept around just to maintain backward compatibility.
Lack of a proper way to reference "global stuff"
Because Unity needs inspector references, it is always cumbersome to find a way to reference global stuff in your scripts. This is especially true if you are using a combination of MonoBehaviours and plain C# classes. When developing a game, you will end up with a lot of global prefabs, scriptable objects and other assets that need to be referenced somehow. Some people just give up and use Resources.Load(), others put a magic global game object that holds the references, others use Addressables. Some people drag & drop a single scriptable object into 100 different prefabs to share data, etc.
When creating a new project, I always have to write boilerplate: global prefabs that use RuntimeInitializeOnLoadMethod to inject themselves as DontDestroyOnLoad GOs where all my global stuff is referenced. And while it works and you can get used to it, it still feels like I need to hack my way through the engine. Some assets try to address this (like Weaver Pro from the Animancer creator), but hey... we should have a first-class solution here.
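For reference, the boilerplate I mean looks roughly like this. Names are illustrative, and it assumes a "GlobalRefs" prefab living in a Resources folder:

```csharp
using UnityEngine;

// Sketch of the typical self-injecting "global references" bootstrap.
public class GlobalRefs : MonoBehaviour
{
    // Inspector references to the "global stuff".
    public GameObject uiRoot;
    public ScriptableObject gameData;

    public static GlobalRefs Instance { get; private set; }

    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
    static void Bootstrap()
    {
        // Assumes a prefab named "GlobalRefs" in a Resources folder.
        var prefab = Resources.Load<GameObject>("GlobalRefs");
        var instance = Object.Instantiate(prefab);
        Instance = instance.GetComponent<GlobalRefs>();
        Object.DontDestroyOnLoad(instance);
    }
}
```

Every project ends up re-writing some variation of this by hand.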
Perhaps some way to quickly reference these assets would be nice. I do not know which idea would satisfy most of the community's needs, but maybe we could have another building block, like "Data Providers": static classes with static fields that can be set up in the inspector and accessed as simply as MyGameDatabase.enemies, MyGameDatabase.prefabs, or whatever you set up in the class.
The singleton nightmare
It looks like the engine was designed around how old-school FPS games worked: you have a "level"/"world" (a scene in Unity) that contains entities which manage themselves. But Unity is a general-purpose engine, and I believe very few games can be designed as a bunch of scripted entities that dance together. You WILL need global controllers, managers, command buses, UI handlers and so on.
Because there is no native Unity way to deal with this, most people use the singleton pattern, which again can work but feels like hacking around engine limitations. From my own work, blogs, Reddit, videos, etc., I know that scenes in Unity are used not only as "levels" but as building blocks of a game. A scene might be a level, a screen, some "sub-screen", UI, etc. For example, a lot of people use a single "Game" scene that handles the main gameplay loop and some "Menu" scenes that provide the pre-game interface. Many developers design their UI as separate scenes that are loaded at runtime to keep the various building blocks of the game separated.
When I select a scene in my project file list, nothing really appears in the inspector. Why couldn't we have another building block, let's say "scene components"? It could be a script inheriting from a SceneBehaviour class that can be attached to a scene (in a one-scene-many-components relation, like game objects).
This way, when developing your UI scene, you could attach your GameUIManager component directly to the scene instead of putting it on some random game object. When you design your "main gameplay scene", you could likewise attach one or more components to the scene.
These components could then be queried easily, for example via SceneManager.GetComponent&lt;GameSceneManager&gt;().Something();, instead of using a game object with a singleton. This would also scale, because you could have several scenes with a GameSceneManager component attached and set them up in the inspector (for example, changing the settings in testing-iteration scenes used with the Unity Test Framework).
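A usage sketch of the idea, purely hypothetical since neither SceneBehaviour nor a SceneManager.GetComponent exist in Unity today:

```csharp
// Hypothetical: a component attached directly to a scene,
// the same way MonoBehaviours attach to game objects.
public class GameUIManager : SceneBehaviour
{
    public void ShowHud() { /* ... */ }
}

// Hypothetical: querying a scene-level component instead of a singleton.
// SceneManager.GetComponent<GameUIManager>().ShowHud();
```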
Unreal has something like this in the form of "Level Blueprints", I think. Having an engine-supported, single way of defining and accessing global components that act as high-level orchestrators (all those "...Manager" classes) would be better than relying on random patterns like Zenject, singletons, global variables, RuntimeInitializeOnLoad, referencing a gameplay-controlling scriptable object in the inspector, etc.
Creating game objects
Unity controls the moment MonoBehaviour scripts are created, so we cannot initialize them in a constructor. It is understandable that the engine needs to control this. But as of now, there is no "official" way of creating (instantiating from a prefab) a game object WITHOUT adding it to the scene.
Even a simple pattern like:
Monster monster = GameObject.Instantiate(monsterPrefab);
monster.SetType(someMonsterType);
Let's say the monster instantiates its children based on the monster type passed in (behaviour, sprite/model, etc.) in its Awake/Start methods.
This comes with pitfalls, because Awake()/Start() will be called before SetType() is called, and there is no easy way to set up your GO before it is added to the scene and initialized. So you either null-check the "monster type" in Start/Awake to handle this very brief moment, or you scrap the engine's Start/Awake and provide a custom Initialize() method whose call timing you control. This is especially true if you mix static scene monsters that have their "type" set up in the inspector with dynamic ones that are spawned in game (for example, your wizard has a summoning spell, whatever).
What I do instead is create a disabled game object in the scene and instantiate the prefab as a child of that object, so Awake/Start are not called immediately. Then I set up my GO and finally reset transform.parent to null. Again, this works, but I feel like I am fighting the engine.
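The workaround described above can be wrapped in a small helper. This is a sketch of my approach, not an official API; it relies on the fact that children of an inactive object do not get Awake/Start until they leave that hierarchy:

```csharp
using UnityEngine;

// Sketch: spawn prefab instances in a "detached" state by parenting them
// under an inactive root, so Awake/Start are deferred.
public static class DetachedSpawner
{
    static Transform inactiveRoot;

    public static T Instantiate<T>(T prefab) where T : Component
    {
        if (inactiveRoot == null)
        {
            var root = new GameObject("DetachedSpawnerRoot");
            root.SetActive(false); // children won't be active in hierarchy
            inactiveRoot = root.transform;
        }
        return Object.Instantiate(prefab, inactiveRoot);
    }
}

// Usage:
// Monster monster = DetachedSpawner.Instantiate(monsterPrefab);
// monster.SetType(someMonsterType);   // safe: Awake/Start not yet called
// monster.transform.SetParent(null);  // now the object activates normally
```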
Why can't we have a new way of instantiating that creates objects in a detached/dangling state and lets us manually control when they are added to the scene?
ScriptableObjects
They are a great tool, but what they lack is the ability to create "inlined" scriptable objects inside GameObjects or other ScriptableObjects. The interface could work like the Scene Lighting Settings, where next to the scriptable object field there is a "New" button that creates a scriptable object inlined inside the current object (serialized "inside" the object instead of as a separate asset).
This way we could avoid creating unnecessary scriptable objects that exist on the filesystem just because they must. There are assets that deal with this in some way; for example, FlowCanvas allows creating graphs (scriptable objects) either as separate assets or as "bound" ones, but as far as I know, they are not using any first-class Unity solution and instead serialize the data to JSON and save it inside the game object.
This idea actually comes from Godot's Resources, which are very flexible in this regard. You can create them as inlined resources or create them on disk and only reference them. You can even change your mind along the way and drag & drop your inline Resource into the filesystem, turning it into a filesystem asset.
There are many use cases for this. Such an SO could be some sort of gameplay element, like an ability. You would want separate assets for player skills, but monsters could reuse the same ability SO, and you might prefer to save those inside the monster's ScriptableObject to avoid having 10+ files for every enemy and reduce clutter.
This can be done manually with a custom inspector that provides a [New] button and saves the asset as a child of the current asset (AddObjectToAsset + hideFlags), but this won't work for GameObjects in a scene, which always need the SO as a static file reference. Fighting-the-engine.
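For anyone curious, the manual editor-side approach could look roughly like this (a sketch; error handling and the custom inspector wiring omitted):

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// Sketch: create a ScriptableObject nested inside an existing asset
// instead of as a separate file on disk.
public static class InlineAssetUtil
{
    public static T CreateNested<T>(Object parentAsset) where T : ScriptableObject
    {
        var child = ScriptableObject.CreateInstance<T>();
        child.name = typeof(T).Name;
        // Optionally HideFlags.HideInHierarchy to keep the sub-asset invisible
        // in the Project window.
        child.hideFlags = HideFlags.None;
        AssetDatabase.AddObjectToAsset(child, parentAsset);
        AssetDatabase.SaveAssets();
        return child;
    }
}
#endif
```

But as noted, this only works for assets on disk, not for objects living in a scene.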
Serialization
It's 2024 and we still don't have a way to natively serialize collections other than List/Array. Dictionaries, HashSets, etc. should all be serializable by default; instead we need to rely on third-party assets.
Runtime serialization could use more love as well, for example by using the same serialization the editor does (with the ability to deserialize references to scriptable objects automatically, etc.). Doing this manually always results in a huge mess and a lot of boilerplate code to transform serialized references in both directions.
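The typical workaround for dictionaries, which the third-party assets essentially automate, is to back the dictionary with two serialized lists and rebuild it through ISerializationCallbackReceiver. A minimal sketch:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch: Unity serializes the two lists; the dictionary is rebuilt
// from them after deserialization.
[Serializable]
public class SerializableDict<TKey, TValue> : ISerializationCallbackReceiver
{
    [SerializeField] List<TKey> keys = new List<TKey>();
    [SerializeField] List<TValue> values = new List<TValue>();

    public Dictionary<TKey, TValue> Dict { get; } = new Dictionary<TKey, TValue>();

    public void OnBeforeSerialize()
    {
        keys.Clear();
        values.Clear();
        foreach (var kv in Dict) { keys.Add(kv.Key); values.Add(kv.Value); }
    }

    public void OnAfterDeserialize()
    {
        Dict.Clear();
        for (int i = 0; i < keys.Count; i++) Dict[keys[i]] = values[i];
    }
}
```

Boilerplate like this is exactly what should be built into the engine.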
Spawning Flags & Overrides
What Unity lacks is the ability to override some game object/scriptable object properties based on flags when the game is launched. Some flags could be static and engine-provided (like "Desktop", "Mobile", "Low Quality", "High Quality", etc.) and some could be user-defined (like Layers: there are several hardcoded ones, but there is space for custom ones).
These flags could override some properties of a GO/SO depending on the build. A simple example: you develop a cross-platform game for mobile + desktop and use realtime shadows on desktop but blob shadows on mobile (most of the time, realtime shadows are too heavy for typical mobile games).
What I would like is the ability to specify an "override" so that a certain game object is automatically disabled on certain quality levels or other flags. Instead, I must write a 5-line script that checks whether it's Android and enables the blob shadow object based on that.
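The 5-line script in question looks something like this (field names are illustrative):

```csharp
using UnityEngine;

// The kind of throwaway per-platform script described above:
// swap the shadow setup depending on whether we run on mobile.
public class MobileShadowSwitch : MonoBehaviour
{
    public GameObject realtimeShadowRig; // used on desktop
    public GameObject blobShadow;        // used on mobile

    void Awake()
    {
        bool mobile = Application.isMobilePlatform;
        realtimeShadowRig.SetActive(!mobile);
        blobShadow.SetActive(mobile);
    }
}
```

Every such platform-dependent tweak currently needs its own little script like this.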
Another example: on mobile builds I use a reduced set of post-processing effects (tone mapping, bloom, etc.) for performance/thermal reasons. But my lights are oversaturated without these effects, so I need to reduce their intensity on mobile platforms. Again, I must write a script that checks at runtime whether it's Android and halves the intensity. It works OK, but I lose the ability to preview this in the editor scene view and must run the game to check it.
Localization has a way of dealing with this kind of problem: you can select a "locale mode" and then override some properties just for that locale. I think this could be reimplemented/reused for this case.
Asynchronous programming
While some games can rely on a simple Update → do-something-multiplied-by-Time.deltaTime approach, there are types of games that are more easily implemented with an asynchronous programming pattern. As of now, there are three ways of doing this: Coroutines, Unity's Awaitables, or UniTask.
Coroutines are great and integrated with the engine (e.g. they stop when the game stops or when the containing game object is destroyed), but they do not support try/finally, their syntax is weird, and they allocate garbage. On the other hand, what is good about them is that you can grab a reference to a coroutine in order to stop it manually later on.
Awaitables and UniTask are more native, meaning they can be easily used with async/await. But they are not tied to the engine: your async tasks continue to run when you stop the game in the editor, and they don't stop automatically when the game object is destroyed, so you are required to pass cancellation tokens all over the place to use them reliably, which is REALLY cumbersome. On the other hand, you gain some nice features like try/catch or the ability to await several tasks at once (await UniTask.WhenAll(projectileTask, damageTask); etc.).
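For example, with Unity's Awaitable (2023.1+) and MonoBehaviour.destroyCancellationToken (2022.2+), the token still has to be threaded through every call by hand. A minimal sketch:

```csharp
using System.Threading;
using UnityEngine;

// Sketch: the cancellation-token plumbing required to make an
// Awaitable-based routine stop when its game object is destroyed.
public class Projectile : MonoBehaviour
{
    async Awaitable FlightRoutine(CancellationToken ct)
    {
        try
        {
            await Awaitable.WaitForSecondsAsync(2f, ct);
            Explode();
        }
        finally
        {
            // Cleanup runs even if the object is destroyed mid-flight.
        }
    }

    void Start()
    {
        // The token must be passed into every async call manually.
        _ = FlightRoutine(destroyCancellationToken);
    }

    void Explode() { /* ... */ }
}
```

With coroutines none of this plumbing is needed, but you lose try/finally and await; that is the trade-off I would like the engine to eliminate.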
Async programming is especially useful in turn-based games, where you need to precisely control what happens and when, and be able to await it. Complete spaghetti code can be turned into something as simple as:
foreach (Enemy enemy in enemies) {
    await enemy.TakeTurn();
}
with async/await. But even real-time games can sometimes make good use of coroutines.
What I would love to see in Unity is its own way of creating async tasks, integrated into the engine: combining the best of both worlds by allowing these tasks to be spawned, controlled (StopCoroutine etc.) and lifecycle-managed by Unity, while also using async/await syntax with all its benefits like try/catch, Forget() or WhenAll().
I understand that it's not an easy task, but it does not need to be implemented natively; it could rely on code generation/weaving under the hood (for example, to manage lifecycles, Unity could add its own CancellationToken to all calls when compiling). We already do this in DOTS/Jobs or multiplayer, where the source code != the thing that is actually compiled under the hood.
Final words
This is not a final list; these are just some examples that could steer Unity into being a little more opinionated in use, while also having a healthier learning curve. Some openness in an engine is nice, but in Unity everyone does things their own way in order to circumvent engine limitations. This is not a healthy practice, and while I understand that improvements take time, I haven't seen anything serious in this area for YEARS. The roadmap is empty; nothing is being developed.
And of course, you can argue that these pitfalls can be avoided if you know how. Okay, but first of all, that increases the learning curve and fragments the community, because everyone does things differently. And finally, these are not one-off problems you can avoid once and move on from.
Every time you create a project you need to create:
- a stack of managers like ui manager/whatever managers
- a pipeline that creates that stuff with runtime initialize on load
- a way of querying global stuff in your game
- a controller that handles your async/awaits
- your game object spawner
- etc. etc.
Also, I think this hurts the community, because most tutorials on YouTube or other media are just "here, I will show you how to make your dude shoot when you press spacebar, now please subscribe and give a thumbs up", and they do not scale. There is a serious lack of resources on how to create something that can actually scale to a full game, and because there are so many ways of dealing with BASIC things, it is a really serious maze. Everyone I know learned how to make a scalable Unity project by creating 50 projects that at some point collapsed into a totally unmaintainable state, each time thinking "next time I will do better".
Unity, please consider giving the scripting side more love and solving the architectural problems this engine has. We need polish, refinement and attention to typical use cases; I do not want to fight the engine over the simple basics everyone needs.