I wondered recently whether the common wisdom saying that “GetComponent() is slow” is, in fact, wisdom. So, I did some benchmarking and documented it here: http://chaoscultgames.com/2014/03/unity3d-mythbusting-performance/
The results were quite interesting, and surprising to me at least, so I’m sharing them here.
When I’ve gotten to the optimization phase of games, I don’t think there’s been a single time where the Unity Profiler showed that GetComponent() was slowing me down. I’ve heard the same thing about caching transforms and such… but really, optimizing away a single draw call has waaaaaay more effect than worrying about GetComponent().
I still tend to GetComponent() in my Awake() and Start() functions because that is proper initialization and I don’t want to spam “GetComponent()” all over my classes.
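For anyone unfamiliar with the pattern, this is the usual shape of it — a minimal sketch, where the class name and the Rigidbody usage are just illustrative:

```csharp
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    // Looked up once at initialization instead of calling
    // GetComponent() every frame.
    private Rigidbody _rigidbody;

    void Awake()
    {
        _rigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Uses the cached reference; no per-frame lookup.
        _rigidbody.AddForce(Vector3.forward);
    }
}
```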
But seriously, people focus on this way more than it warrants. Sometimes I wonder if historically, in Unity 1 or Unity 2, it actually made a big difference.
EDIT: Great article btw! It’s important that people go through these tests so we aren’t all just echoing bad advice.
It becomes relevant really fast with many objects.
Oh yeah, I’m sure! I guess my point is that optimization should be based on data showing that GetComponent() is slowing the game down. I’ve seen too many people trying to optimize a dozen GetComponent() calls, which is a very different scenario from having thousands of calls a frame.
Especially when it comes to framework code! Users tend to abuse frameworks in ways the framework creators never imagined.
for (var ii = 0; ii < NUMBER; ii++) { }
I suggest you repeat the test with the prefix increment (i.e. ++ii). According to this (.NET) benchmark and my own profiling, the postfix is considerably slower.
Here is another attempt to profile this issue.
Yes, GetComponent was optimized sometime around Unity 2 and used to be slower.
I’ve never seen that make even the slightest difference one way or the other. Remember that Unity uses Mono, not .NET.
–Eric
I don’t see how it would affect performance in .NET either.
I just ran a 1,000,000-iteration loop with i++ and ++i… and even i += 1:
i++; 00:00:00.0070492
++i; 00:00:00.0071135
i += 1; 00:00:00.0069987
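A minimal sketch of the kind of Stopwatch micro-benchmark that produces numbers like those (loop count assumed from the post; no warm-up or interference control — forum-grade measurement only):

```csharp
using System;
using System.Diagnostics;

class IncrementBenchmark
{
    const int Iterations = 1000000;

    static void Main()
    {
        // Postfix increment.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) { }
        sw.Stop();
        Console.WriteLine("i++;    " + sw.Elapsed);

        // Prefix increment.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; ++i) { }
        sw.Stop();
        Console.WriteLine("++i;    " + sw.Elapsed);

        // Compound assignment.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i += 1) { }
        sw.Stop();
        Console.WriteLine("i += 1; " + sw.Elapsed);
    }
}
```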
And I ran the test a dozen times, and no way of incrementing a value is clearly faster by any margin.
That is really an “optimization” I would skip.
Say who? ಠ_ಠ
I read somewhere recently that it depends on how you’re using a pre/post-fix operator as to whether it generates different IL (possibly something about whether you use the returned value?). From memory, the article demonstrated that when used in a standard for loop the IL was identical either way, and thus there can’t be a difference.
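A small illustration of that point — the two operators can only diverge when the expression’s value is actually consumed:

```csharp
int i = 0;

// As standalone statements, both forms compile to identical IL
// (load, add 1, store) — there is nothing for a difference to come from:
i++;
++i;

// They only differ when the result is used:
int a = i++;  // a receives the old value of i
int b = ++i;  // b receives the new value of i
```

In a standard `for (...; ...; i++)` loop the result is discarded, which is why the IL comes out the same either way.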
I did measure the empty loop time, so even if there is any difference, it does not matter.