Separation of Logic and GameObjects

Hello, I’ve been playing around with Unity a bit and would like to ask a question about scaling games and the separation of logic and representation.

Basically, I was hoping there’d be an easy way to have all the game entities’ actions and data management happen independently of Unity, and then have Unity simply update its representation.

This would allow entities to be active without needing to add them to the scene graph (because they’re too far away, or are invisible objects). It would also make it possible to replace Unity with any other representation technology if need be, without changing any game logic.
It would also make it possible to run the logic on a remote computer and merely run the 3D representation and some inputs locally as a 3D game (or 2D, or as a website, or whatever).
These inputs would then call on the (neatly separated) data / logic to update something, which would then automatically be reflected in the 3D representation, as it keeps up to date with the data.

But it seems to me that the hierarchy is the other way around, i.e. GameObjects are the starting point, which then initiate or call on scripts, rather than running scripts spawning (or removing) GameObjects as representations of themselves.
It also seems to me like the game logic in Unity is typically tied directly to the frame updates? I was rather hoping to have the frame updates merely update the representation, while the actual game logic happens in its own thread (or even process).

Does my idea go directly against how Unity is supposed to be used, or is there a way I’m not seeing?

Thanks for any responses!
Best,
David

Unity is not just a “representation technology”. It’s a game engine. This includes rendering, of course, but also things like physics simulation, pathfinding, useful data structures for game development (like Vectors and all the methods available on them, and many more) and so on.
If you want to program a game independently of Unity, nothing stops you from doing so. You can just create a game the ‘old school way’, programming the OOP object hierarchy yourself. If you want it to be truly independent and only use Unity for rendering purposes though, you will need to avoid things like Vector3, as well as other tools like pathfinding. You will then also need a way to convert your custom vector types to Unity types for displaying them in Unity. You will also need to create an interface / manager object which reads your custom game data and converts it into GameObjects. So sure, it’s possible. But if you want to go to these lengths, why not simply add a renderer and create your own engine?
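To make the idea of that interface / manager layer concrete, here is a rough sketch of what it could look like. All names here (Vec3, Entity, EntityViewManager) are hypothetical, not a real API; only the UnityEngine types are real:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Engine-independent data: no UnityEngine types in here (hypothetical example).
public struct Vec3 { public float X, Y, Z; }
public class Entity { public int Id; public Vec3 Position; }

// The single Unity-side bridge: reads the custom game state each frame
// and mirrors it onto GameObjects.
public class EntityViewManager : MonoBehaviour
{
    public List<Entity> GameState = new List<Entity>();  // filled by your own logic layer
    private readonly Dictionary<int, GameObject> views = new Dictionary<int, GameObject>();

    void Update()
    {
        foreach (var e in GameState)
        {
            if (!views.TryGetValue(e.Id, out var go))
                views[e.Id] = go = GameObject.CreatePrimitive(PrimitiveType.Cube);

            // Convert the custom vector to Unity's Vector3 only at the boundary.
            go.transform.position = new Vector3(e.Position.X, e.Position.Y, e.Position.Z);
        }
    }
}
```

The point of the sketch is that the conversion to Unity types happens in exactly one place, so the rest of the game logic never touches UnityEngine.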

Are you familiar with normal object oriented programming? Whatever you do, you will have a Main() method somewhere. Everything that happens happens inside of this main method; the rest of the OOP code is just structure to help you. You can simulate this by simply using the Start() method of a single GameObject to replace a Main() method. You can additionally use Update() to call some other code that tells your own game structure that one frame passed.
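A minimal sketch of that bootstrap idea, assuming a hypothetical Unity-free `MyGame` class of your own:

```csharp
using UnityEngine;

// Your own, engine-independent game class (hypothetical stub).
public class MyGame
{
    public void Init() { /* set up the whole game, like a Main() would */ }
    public void Tick(float dt) { /* advance the game logic by dt seconds */ }
}

// One GameObject whose Start() plays the role of Main(),
// and whose Update() just tells the game that a frame passed.
public class GameBootstrap : MonoBehaviour
{
    private MyGame game;

    void Start()
    {
        game = new MyGame();
        game.Init();
    }

    void Update()
    {
        game.Tick(Time.deltaTime);
    }
}
```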

You wrote that you’d like to have some scripts running in the background and just visually update what’s happening. However, how would that work? You either calculate things as fast as possible, or pause for some time to reach a desired target of updates per second. The first would just waste all kinds of resources, which is bad design. The second is basically… what Unity does. A frame is nothing more than calculating each thing that needs to happen regularly once, then optionally waiting until enough time has passed to reach a lower target framerate, then repeating this cycle. You may want to take a look at this:

Each object needs to be initialized and then have some update cycle. What you are talking about (things happening ‘independent of the framerate, with visuals just updating what’s happening’) is basically what FixedUpdate does. The physics simulation happens in fixed timesteps and not necessarily every frame; this is an optimization. If you want to develop your own backend for a game and only use Unity for visualizing it, you would also have to think about all of this yourself and implement your own solutions. The effort required for this would be huge.
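The cycle described above (simulate in fixed steps, render once per frame) is commonly written as an accumulator loop. A minimal, engine-free sketch of the same idea FixedUpdate implements:

```csharp
using System.Diagnostics;

// Minimal fixed-timestep loop sketch (not Unity's actual implementation).
// Simulation advances in fixed steps; rendering happens once per outer iteration.
class FixedStepLoop
{
    const double Step = 0.02;            // fixed timestep in seconds (Unity's default)

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        double previous = 0, accumulator = 0;

        while (true)                     // the "game loop"
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previous;
            previous = now;

            while (accumulator >= Step)  // catch up on simulation in fixed steps
            {
                // SimulateOneStep(Step);   // physics / game logic would go here
                accumulator -= Step;
            }

            // Render();                    // draw the latest snapshot once per frame
        }
    }
}
```

If a frame takes longer than one timestep, the inner loop simply runs more simulation steps to catch up, which is exactly the FixedUpdate behaviour discussed below.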

I also believe you have some general misconceptions. You wrote “this would allow to have entities be active without needing to add them to the scenegraph”. Meshes only get rendered when they are seen by a camera, so invisible objects, objects behind the camera, or faraway objects will already not be rendered. However, if you plan on fully disabling them, that would be equal to disabling the whole GameObject. If you do not want to waste CPU on some entity, then you will also not update it. But then you cannot know if it walked into the camera’s view again, or (depending on how you do it) you wouldn’t even know if you moved the camera to look at it, so you’d have no idea when to render it again, or would potentially render an un-updated version of the object.

So to conclude my answer: you can write all the custom data structures you want and reuse them, perhaps even between different engines. At some point, however, you will need to interact with Unity. It’s also probably a good idea to think about how you would implement these things differently, because just having scripts run in the background without any limiters is hardly good design. These objects / scripts will need to have an update rate. This update rate ideally needs to be in sync with other update rates, and in a fixed order relative to them. This is what a framerate is.
I believe perhaps you have the wrong idea about how games work in general (which is a weird thing to say, I know). Games don’t exist between frames. Nothing happens between these snapshots we see, and if something happened there, we wouldn’t see it. Games are literally just snapshots of something calculated in the background, sampled at some rate, which is the framerate. The game world does not change between two frames; it changes only with each new frame.


@Yoreki
This is helping a ton, indeed some of those things I hadn’t considered yet.

I was indeed thinking about using a different library to handle the math and then converting it to Unity vectors and such, as well as having a manager for all entities (and yes, I was already expecting it to be a lot of effort, but hoping the gained independence from Unity might be worth it).
I am familiar with OOP. I thought one of the advantages of not having it all run within the Unity architecture would be the ability to have the game run on a server and then create interfaces with Unity for the 3D representation, and a web UI for lower-powered machines or phones, all neatly separated from the actual game.

Thanks for the link to the execution order! FixedUpdate sounds very interesting indeed.

I was unaware, but of course it makes sense that there are already sophisticated solutions in place to improve performance by removing what is not visible.

Naah, you’re probably spot on with this.

You gave me a lot to think about, thank you so much for this detailed answer!
Have a nice weekend.

This is a valid use-case; however, you are basically talking about a server strong enough to calculate all the data for your clients, then send it to them so they only display it. This should be possible, but to tag it with a word… we are now basically talking about cloud gaming. So unless you have literal shittons of money lying around to build some supercomputer serverfarm with thousands of GPUs, this is not feasible. And if you are not talking about games, but instead applications, I doubt you need to outsource any of the calculations.
In theory though, nothing stops you from calculating everything server-side and just sending the data to your Unity clients, which then interpret and distribute it to the correct objects. However, the server itself would most likely need to run some form of Unity (or a custom compatible structure), such that it can send the right information per frame. One step above this would, of course, be to only send the images and only receive inputs from the clients, which is again basically cloud gaming. Maybe I’m misunderstanding your intended use-case, or maybe you are slightly underestimating the architecture you are implicitly talking about :smile:

Just to make sure we are not talking past each other: FixedUpdate is an optimization for physics, since you mostly don’t need to execute physics every frame, but it also makes interpolation more accurate. Unity tries to guarantee that it runs with a fixed timestep (independent of framerate). However, this implies that you absolutely must not abuse it. Slowing down the execution of FixedUpdate can cause severe (!*) slowdown, so it should only ever be used for actual physics updates, which is its intended use. I only brought it up because it seemed like a fitting example of things that can easily be missed and thus add to the workload of writing an engine, which most people don’t really think about.
Maybe an unfitting example, since I believe it can be easily misunderstood. Sorry.

  • If you are curious why it can cause a serious slowdown, this is an implicit consequence of it being executed in a fixed timestep. If your code in Update() is slow, you decrease your FPS, which leads to a higher number of FixedUpdate calls per frame. This is fine and the intended use. However, if your code in FixedUpdate is slow (say it takes 0.01 s) while Unity tries to execute it as close to the fixed timestep as possible (which defaults to 0.02 s), half of all time would be spent on FixedUpdates, leaving little to no time for actual Updates. The longer your FixedUpdate takes, the worse this effect becomes. There was a nice user article on it somewhere, but I cannot find it anymore haha.

I guess that would be the case with a lot of users, but my idea was more in the realm of a handful. I feel I should specify here: I am not intending to build a physics-heavy simulation for a lot of people to access, but rather something akin to a strategy game with a lot of entities. With the game running on a server, I could do some inputs on my computer within the Unity representation, or later, on the move, with my phone.

I am (obviously) not an expert, but I understand that cloud gaming means the actual rendering happens remotely? But imagine, rather than saying

“render awesome spaceship flying through space doing space things from 15 angles because we have 15 clients viewing it”

it would be more like

“calculate the trajectory of an object tagged spaceship and store the information spaceship at [x,y,z], so that any client could render it if they want to”.
This would also mean that I would indeed not go frame by frame in the simulation, but merely interpolate how far an object would have moved given the time the last update cycle took. All the Unity app would do is display the current snapshot of the world.
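A minimal sketch of that “display the current snapshot” idea on the Unity side. The class and method names here are hypothetical, and the network transport is not shown; the client just extrapolates from the last server snapshot:

```csharp
using UnityEngine;

// Hypothetical client-side view: the server sends (position, velocity, time)
// for each entity, and Unity only extrapolates and displays it.
public class SpaceshipView : MonoBehaviour
{
    private Vector3 lastKnownPosition;
    private Vector3 lastKnownVelocity;
    private float snapshotTime;

    // Called whenever a new server snapshot arrives (transport not shown).
    public void OnSnapshot(Vector3 position, Vector3 velocity, float serverTime)
    {
        lastKnownPosition = position;
        lastKnownVelocity = velocity;
        snapshotTime = serverTime;
    }

    void Update()
    {
        // Extrapolate from the last snapshot: x = x0 + v * elapsed.
        float elapsed = Time.time - snapshotTime;
        transform.position = lastKnownPosition + lastKnownVelocity * elapsed;
    }
}
```

This assumes server and client clocks are roughly synchronized; in practice you would interpolate between two buffered snapshots rather than extrapolate from one, but the principle is the same.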

I hope that makes sense.

I got literally HUNDREDS of bucks to my name, so… :sunglasses:


I am thinking now that maybe Unity is too big a beast for what I’m planning to do.

Demn, gimme some man! :stuck_out_tongue:

Yeah, I get what you mean. This may be how it’s done in competitive RTS games, but I’m not really an expert on this. Either way, the idea would be similar to what I wrote above: the inputs would be made by the player, the server would do the calculations and report back on state changes. However, for competitive RTS games I would imagine that you’d need to combine both approaches, so the clients calculate the game themselves (to prevent delays) and the server keeps track of the game world as a sort of cheat prevention.

What you plan should definitely be possible (just sending the precalculated positions and such), but if your main concern is performance (which is a BIG factor in developing RTS games with decent unit counts, making it a very hard genre to tackle), then there may be other solutions.

So unless it has to be a server-side calculation, Unity being big and all may actually be what helps you out. If your main concern is making thousands of units run smoothly, you may want to look into DOTS. It’s still very new and the documentation is thus somewhat lacking, but for efficiency, it’s about as good as it gets. Unity combined a data-oriented Entity Component System (ECS) with the C# Job System and wrote their custom Burst compiler. All this results in incredibly efficient, data-oriented code that can run in a fraction of the time on the CPU, since the memory layout is way, way better organized. It can easily handle tens of thousands, or well over a hundred thousand units, depending on what you need them to do. There are some very impressive demonstrations on this topic, one of which involves an RTS-like game scenario:

https://www.youtube.com/watch?v=GEuT5-oCu_I

Just as a rule of thumb, enabling the Burst compiler alone often gives you a speedup of a factor (!) of 20, up to a factor of 100. This is on top of the obvious advantage of it all being multithreaded and data-oriented.
Because of how it works, you can also pretty much instantiate thousands of units without any effect on the framerate, or destroy thousands of objects without creating any garbage for the GC to collect.
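To give a feel for what that looks like in practice, here is a minimal Jobs + Burst example that moves many units in parallel. The job name and data layout are made up for illustration; the attributes and types (IJobParallelFor, NativeArray, BurstCompile) are the real DOTS APIs:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Burst-compiled job: integrates positions for many units in parallel.
[BurstCompile]
public struct MoveUnitsJob : IJobParallelFor
{
    public NativeArray<float3> Positions;
    [ReadOnly] public NativeArray<float3> Velocities;
    public float DeltaTime;

    public void Execute(int i)
    {
        Positions[i] += Velocities[i] * DeltaTime;
    }
}

// Scheduling, e.g. from a MonoBehaviour's Update:
//   var job = new MoveUnitsJob { Positions = pos, Velocities = vel, DeltaTime = Time.deltaTime };
//   JobHandle handle = job.Schedule(pos.Length, 64);  // 64 = batch size per worker thread
//   handle.Complete();
```

The tight `NativeArray<float3>` layout is what makes this so fast: the data is contiguous in memory, so Burst can vectorize the loop and the worker threads stream through it without cache misses.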

DOTS now also features netcode and other things that may make your life easier, but again 1) the documentation may be a bit lacking here, 2) I’m not exactly an expert on it (I’ve only used Jobs + Burst to implement a highly efficient procedural terrain generator) and 3) DOTS is data-oriented programming, so while ECS makes it feel a lot like object-oriented programming, it’s not exactly what you are used to.