Optimization is a science unto itself. Premature optimization is regarded as an evil in most development work, assuming an experienced developer who knows how to avoid the obvious pitfalls.
Yet optimization covers a wide range of issues. Typically, engineers treat optimization on an "as needed" basis: they don't bother until a problem presents itself. Developers who try to optimize too early tend to overthink it, spending time on code that consumes few resources and returns little or no benefit, because they're working from assumptions without investigating what is actually occupying time (or RAM).
That said, what you're asking about is a bit outside the genuine science of optimization. You're asking about code structure versus efficiency. That does sound like optimization, I agree, but it is one of the few areas where early optimization isn't quite the evil it is elsewhere.
This ties into one of the basic tenets of true optimization: it is usually algorithmic choice that changes performance the most. For example, some people reach for trees or hash tables where sorting an array actually performs better. If we write code tightly coupled to one choice, we make it difficult to switch and try another option. Learning to use design methods that allow us to switch without massive refactoring is key.
Your specific question is one of those areas. Update is unavoidable; it is the hook into the way the animation cycle operates. The various update callbacks synchronize with the engine in different ways, and you can easily switch between Update and FixedUpdate because the design makes that simple.
Now, what you're pointing out is that you have heavy work performed at each update, and that is to be avoided. You can implement a means of checking on things only during occasional updates. For example, if you intend to check something only once per second, you track elapsed time (which is quick) and call a function only when a second has passed. Not everything must be checked at every update (especially FixedUpdate), so don't. Quite often we want checks on a real-time basis instead of a frame-rate basis, so track time and perform the work on a timed basis where that applies.
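The once-per-second idea can be sketched in a few lines. This is a language-agnostic sketch in Python, not Unity API; the class and method names are illustrative. In Unity C# you would accumulate `Time.deltaTime` inside `Update` the same way.

```python
class Throttled:
    """Illustrative sketch: gate occasional heavy work behind a cheap
    elapsed-time accumulator inside a per-frame update."""

    CHECK_INTERVAL = 1.0  # seconds between heavy checks (assumed value)

    def __init__(self):
        self._elapsed = 0.0
        self.heavy_runs = 0  # counts how often the heavy work ran

    def update(self, dt):
        # dt is this frame's delta time; adding a float is cheap.
        self._elapsed += dt
        if self._elapsed >= self.CHECK_INTERVAL:
            self._elapsed -= self.CHECK_INTERVAL
            self._heavy_check()

    def _heavy_check(self):
        # Stand-in for the expensive work you only want once per second.
        self.heavy_runs += 1
```

Driving this at a steady frame rate, the heavy function runs once per simulated second no matter how many frames elapse, while every other frame pays only for one addition and one comparison.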
Generally you want to create fast, simple tests which exclude the heavy work until it is required. It is actually rare that this isn't possible; one example where it can't be avoided is collision testing (a design point of the engine itself). The engine has to check for collisions all the time; it can't put that off. You, on the other hand, may realize you can distill a question down to a bool, or perhaps a comparison of two floats or two ints, and then test just that bool or pair of ints to see whether something needs heavier processing. This is along the lines of checking for the passage of time.
This tends to create an update function that is a series of simple, fast tests which call workload functions as required.
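One common shape of such a cheap guard is a "dirty" flag: mutations flip a bool, and the per-frame code tests only that bool. Again a hedged sketch in Python rather than Unity C#; all names here are invented for illustration.

```python
class Inventory:
    """Illustrative sketch of a dirty-flag guard: the update function
    pays for heavy recomputation only when something actually changed."""

    def __init__(self):
        self._dirty = False
        self._items = []
        self.total_weight = 0.0
        self.recomputes = 0  # instrumentation, just for demonstration

    def add_item(self, weight):
        self._items.append(weight)
        self._dirty = True  # cheap: flip a bool, defer the real work

    def update(self):
        # Fast test each frame; heavy work runs only on demand.
        if self._dirty:
            self._dirty = False
            self._recompute()

    def _recompute(self):
        # Stand-in for expensive work (sorting, pathfinding, UI rebuild...).
        self.recomputes += 1
        self.total_weight = sum(self._items)
```

After one change, the first `update` recomputes and every subsequent `update` is just a failed bool test until the next mutation.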
Functions calling functions, which call functions, isn't really a measure of performance demand. Calls have overhead, but you should trust the compiler and its optimizations (inlining, for example) to deal with that, and use functions, or the member functions of classes, for organization, with little concern until you measure a performance problem. Focusing on function volume or density is an example of focusing on the wrong objective.
A better focus is when and how often work is actually required, and organizing fast, simple tests to control what is done during each update.
Another tangent of this concept is threading. Update is synchronized with the engine: the engine is handing a thread to you (a thread it uses for its own purposes), and you're occupying time on THAT thread. The same is true of more general GUI designs, where all of the message response functions run on the GUI's thread (the main thread of the application). This isn't always a good idea. Instead, we often consider spinning off threads to do the work, keeping the "main" thread handed to us by the engine (or the GUI in a 'traditional' application) for quick response, and returning it to the engine (or GUI) promptly.
This takes advantage of the fact that most modern devices have multiple CPU cores. That doesn't make this work "free," but it just might be. If, for example, your application runs on a 6- or 8-core machine but Unity is using only 4 cores, your threads may have two cores all to themselves. RAM is a shared resource, so when the various cores contend for it they can "interfere" with each other to some extent, but generally you can ignore that.
When you spin off heavier work into a thread, you generally end up with a finished "job" of some kind that the main thread needs to know is complete. There are many mechanisms for this: callbacks, message passing, and state flags (a bool indicating something is ready, for example).
To do this you have to understand synchronization (or, better, how to avoid needing it). Properly administered, this means much of the heavy work you're concerned about might be handed off to a thread, letting Update finish almost immediately while your thread does the required work. When that work is finished, the thread can leave a "message" indicating that the next update can use the result.
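The hand-off pattern can be sketched with a thread-safe queue, which sidesteps manual synchronization entirely: the worker leaves a "message" and the main loop polls it cheaply each update. This is a Python sketch of the pattern only; Python threads don't give true CPU parallelism (the GIL), and in Unity C# you would reach for System.Threading or the Job System instead. The function and message names are invented for illustration.

```python
import threading
import queue

def heavy_job(n):
    # Stand-in for expensive work done off the main thread.
    return sum(i * i for i in range(n))

results = queue.Queue()  # thread-safe mailbox back to the main thread

def worker(n):
    # When finished, leave a "message" rather than touching shared state.
    results.put(("job-done", heavy_job(n)))

t = threading.Thread(target=worker, args=(1000,))
t.start()

# The "main loop" polls cheaply; a real Update would just check once
# per frame and return immediately if nothing is ready yet.
finished = None
while finished is None:
    try:
        finished = results.get(timeout=0.01)
    except queue.Empty:
        pass  # nothing ready; next frame will try again

t.join()
tag, value = finished
```

Because the queue handles locking internally, neither side needs explicit mutexes, which is exactly the "avoid having to use synchronization" approach mentioned above.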