I am finishing up what I would consider the core gameplay elements of a game I’ve been working on, and am debating making another pass through the code to optimize before it becomes too daunting. On that note, I have some questions about Unity and optimization.
Reading public variables from other scripts: for example, my crosshair script reads a weaponAccuracy variable from my WeaponInfo script to control the spread of the crosshairs. This happens in Update(), so it’s running all the time. My question is how much of a performance difference there is between reading a public variable cross-script versus using a local variable. I had debated putting a conditional in Update() and only reading weaponAccuracy on weapon change, caching it in a local field for use on every Update() call (roughly the pattern sketched below), but this would only be worthwhile if there is a significant performance difference between local variable access and cross-script public variable access. I realize that at this small scale it will not make a difference; this is both for my own education / best practices and in consideration of a possible future for this project as a multiplayer third-person shooter, where there would be much more going on.
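A rough sketch of what I mean, with made-up names (my real scripts are more involved):

```csharp
using UnityEngine;

// Hypothetical versions of the two scripts described above; the class and
// field names are invented for illustration.
public class WeaponInfo : MonoBehaviour
{
    public float weaponAccuracy = 1f;
}

public class CrosshairController : MonoBehaviour
{
    public WeaponInfo weaponInfo;  // assigned in the Inspector

    private float cachedAccuracy;  // local copy, refreshed only on weapon change

    void Start()
    {
        cachedAccuracy = weaponInfo.weaponAccuracy;
    }

    // Called by whatever code swaps weapons, instead of re-reading every frame.
    public void OnWeaponChanged()
    {
        cachedAccuracy = weaponInfo.weaponAccuracy;
    }

    void Update()
    {
        // Option A: read cross-script every frame.
        // float spread = weaponInfo.weaponAccuracy;

        // Option B: use the cached local copy.
        float spread = cachedAccuracy;
        SetCrosshairSpread(spread);
    }

    void SetCrosshairSpread(float spread) { /* update crosshair UI here */ }
}
```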
I’m at work again right now, but after I get back and have a chance to look over some more code, I’m sure I’ll have a couple of other little questions about optimization.
All joking aside though, if your code is running smoothly and you’re not crashing or experiencing any issues, you shouldn’t spend time rewriting code. If you just want to optimize for the sake of it, then go ahead, but you may find your old code ran faster than the new code.
Local variables do run faster, but since you are passing 1 variable it’s not that big of a deal. Once you start passing huge classes, then you should consider changing it.
If you’re calling GetComponent every Update, that will have a far more significant impact than accessing a property of an object you already hold a reference to; cache that reference in Awake or Start, as in the sketch below.
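Something like this (illustrative names, reusing the hypothetical WeaponInfo from above):

```csharp
using UnityEngine;

// Minimal sketch of caching a component reference once instead of looking it
// up every frame.
public class CrosshairSpread : MonoBehaviour
{
    private WeaponInfo weaponInfo; // cached once, reused every frame

    void Awake()
    {
        // GetComponent walks the GameObject's component list, so do it once
        // here rather than inside Update().
        weaponInfo = GetComponent<WeaponInfo>();
    }

    void Update()
    {
        // Reading a field off an already-cached reference is cheap.
        float spread = weaponInfo.weaponAccuracy;
        // ... use spread ...
    }
}
```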
That said, if the value is only used once or twice, bringing it local might actually cost slightly more (technically speaking, and again the cost is tiny), because you’re accessing the property, making a local copy of it, and then using that copy, which adds an extra step.
What? A reference to a large class isn’t expensive. If the object already exists, passing it around is very cheap, since class instances are passed by reference.
Right, since it’s passed by reference, a “huge” class vs. a tiny class makes no difference; it’s just a reference in either case. The only time you’d consider size is with structs, which are copied whenever they’re passed or assigned; the general guideline there is to keep them at 16 bytes or fewer.
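A quick sketch of the distinction (types invented for illustration):

```csharp
// Passing a class instance copies one reference, no matter how big the object
// is; passing a struct copies all of its fields.
class BigClass
{
    public float[] data = new float[10000]; // never copied when the object is passed
}

struct SmallStruct
{
    public float x, y, z, w; // 4 floats = 16 bytes, the usual guideline ceiling
}

static class PassingDemo
{
    static void UseClass(BigClass c) { }     // copies a single reference
    static void UseStruct(SmallStruct s) { } // copies the whole 16-byte struct
}
```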
By writing this post you have already spent more time on this than it deserves. Step 1 in optimizing is to run a profiler. The worst way to optimize is by inspecting code that works and speculating on whether or not there is a faster way.
You never want to take code that works and risk breaking it without a very good reason. That reason should be documented proof (via profiling) that it is too slow and can be improved. Then, ideally, you want tests you can run after making the changes to prove that the new code works correctly. After fixing any new errors, run the same profiler to prove that you actually made things faster. You also want an easy way of rolling back your changes in case they make things worse, so run some kind of revision control like Git or SVN; even manual backups are better than nothing.
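For example, in Unity you can wrap a suspect section in custom profiler samples so it shows up by name in the Profiler window (the label and class name here are made up):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Sketch of tagging a suspect section for the Unity Profiler.
public class SpreadProfiling : MonoBehaviour
{
    void Update()
    {
        Profiler.BeginSample("Crosshair.SpreadUpdate");
        // ... the code you suspect is slow ...
        Profiler.EndSample();
    }
}
```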
Premature optimization is the root of all evil – Donald Knuth
Unless something is highly performance-critical and every other system will rely on it, don’t worry about it. If your coupling is low, you can come back and rewrite it later; definitely so if things are abstracted. Making things work so you can determine whether they’re even useful is more important, imo.
Just gonna chime in that things that happen once per update should really be considered quite infrequent. Unless the code is measurably affecting performance and you can massively improve it with some effort, time is much better spent elsewhere.
Definitely in violent agreement with all the folks who say profile first, then optimize based on the results.
Also, the biggest optimizations are almost always algorithmic changes, not simple code-structure tweaks. Going from O(n^3) to O(n) is a big improvement; shuffling instructions around, not so much (aside from extremely hot inner-loop stuff). Put another way, the best optimizations are the ones that get the same result with much less work, not the ones that do the same work slightly more efficiently.
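A toy illustration of the difference (invented scenario: finding which ids in one list also appear in another):

```csharp
using System.Collections.Generic;

static class MatchFinder
{
    // O(n * m): each List.Contains is a linear scan over `known`.
    public static List<int> FindMatchesSlow(List<int> queries, List<int> known)
    {
        var result = new List<int>();
        foreach (int q in queries)
            if (known.Contains(q))
                result.Add(q);
        return result;
    }

    // O(n + m): build a HashSet once, then each lookup is roughly constant time.
    // Same result, far less work for large lists.
    public static List<int> FindMatchesFast(List<int> queries, List<int> known)
    {
        var knownSet = new HashSet<int>(known);
        var result = new List<int>();
        foreach (int q in queries)
            if (knownSet.Contains(q))
                result.Add(q);
        return result;
    }
}
```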