C# Performance Pitfalls for larger projects.

It might be nice to find some in-depth discussion on common performance mistakes, especially issues that might be stylistic and pervasive throughout an entire project. I’ve heard horror stories of code packed with reflection that saps CPU time. … Unfortunately I don’t even understand the significance of that cautionary tale.

Here’s an example of an issue that concerns me at the moment. I often find myself needing to pass a lot of variables to a method - often a method in another class. Would I be better off to:
1) Use global variables instead (never mind the organization issue for the sake of argument)
2) Pass all the variables of various types and not worry about it - is this the fastest way?
3) Keep the variables in a Dictionary instead and pass only the one Dictionary reference
4) In the case of a method from another class: simply pass this (similar to the global option, I assume)

Going into more detail about #4: if I were to then use its variables many times (e.g., that.x, that.y, that.z), is there a performance penalty that I’m not aware of with each of these accesses? Would I be better off putting them in a local variable and using that, or is that a wasted line?

This is the kind of stuff that keeps me up at night. If you are like me, maybe we could compile some interesting reading.

You’re not really talking about performance issues of C# per se with any of those four points, but about a poorly designed architecture. You can write crap code in any language you choose. Generally, all four points are bad because the question is bad. If you have the occasional function, and by occasional I really do mean perhaps two functions in a 20,000-line program, that takes many arguments, say, five or six, then that should be acceptable. Perhaps you need to pass around an object, and then point #4 is a valid supposition, but there is no generic case you can cite that would cover all eventualities.

With regard to the that.x, that.y point, there is an implicit this in all class member variable access. You can assign a member variable from another class, or even the current class, to a temporary local variable, but you would need a good reason to do so, and then you are starting to second-guess an optimizing compiler; my money, these days, would be on the compiler to do the right thing (eventually) rather than your programming skill. Concentrate on algorithm and architecture optimisations first, unless you have an exceptionally tight loop that you know about, have profiled, and have proved the compiler is doing a crappy job of optimising. The only reasons I would consider assigning a member variable from the current class or another class into a local variable in C# would be for clarity of code, or because I know the accessor is doing a bunch of work in the background, such as creating a new object every time I call it.
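A minimal sketch of that last point, with made-up names (PathFinder, Waypoints, Agent are purely illustrative): if an accessor does real work behind the scenes, hoisting its result into a local avoids repeating that work every iteration, whereas a plain field read would not need this.

```csharp
using UnityEngine;

public class PathFinder
{
    // Imagine this getter builds a fresh array on every call.
    public Vector3[] Waypoints
    {
        get { return BuildWaypointArray(); }
    }

    private Vector3[] BuildWaypointArray()
    {
        // ...expensive work elided for the sketch...
        return new Vector3[0];
    }
}

public class Agent
{
    private PathFinder finder = new PathFinder();

    public void FollowPath()
    {
        // Wasteful: calls the expensive getter on every loop iteration.
        // for (int i = 0; i < finder.Waypoints.Length; i++) Visit(finder.Waypoints[i]);

        // Better: cache the result once in a local and reuse it.
        Vector3[] waypoints = finder.Waypoints;
        for (int i = 0; i < waypoints.Length; i++)
        {
            Visit(waypoints[i]);
        }
    }

    private void Visit(Vector3 p) { /* ... */ }
}
```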

There are many amazing books on writing good code and being a better programmer, but if I had to pick just a few, these would be them:


Yes, it is a wasted line unless you declare the variable to be volatile and use different threads, but I guess this is not your intention ;).
The compiler assumes, unless you specify otherwise, that everything is only modified by the current thread, and thus that “this.x” will not change during local execution, so it will usually cache the value… But especially in a managed environment these things are always a little speculative… The rule of thumb is to assume that the runtime does a good job of optimizing your code. You have to pay attention with properties, though. If “this.x” reads a value from a file, you really are better off caching it yourself ;). The compiler will only cache when it “knows” that the value can’t be changed by an external source…
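A tiny sketch of the volatile point above; the Worker class and field names are invented for illustration. Marking the field volatile tells the JIT not to cache it in a register inside the loop, which matters when another thread flips it.

```csharp
using System.Threading;

public class Worker
{
    // Without volatile, the JIT may legally hoist this read out of the loop,
    // because it assumes only the current thread modifies the field.
    private volatile bool stopRequested;

    public void Run()
    {
        while (!stopRequested)   // re-read from memory on every iteration
        {
            DoWork();
        }
    }

    // Called from another thread to stop the loop above.
    public void RequestStop()
    {
        stopRequested = true;
    }

    private void DoWork() { Thread.Sleep(1); }
}
```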

You can reckon on a factor of 2-10 between C++ and C# performance, so in >most< cases it won’t matter. But sometimes these bits do matter; that’s, for example, why no one would attempt to write a serious engine purely in C#. But for the usual code, C# is more than blazing fast… Someone who screws his code up with reflection all over the place may have serious issues, or maybe misunderstood that reflection is an OPTION in C# and not a necessity ;).

@Justin:

I don’t quite understand what you want to say with that ^^. I mean, even if the function had 100 arguments it wouldn’t matter at all in most cases. If it is a hotspot and causes performance issues (check the profiler), then you can think about using such weird optimizations (even though the ones above might actually not solve anything); otherwise you don’t have to worry about it at all…

BTW, you have a good book reference ;). Definitely something every programmer should read…

No doubt, I thought it might be useful to only talk about one language. And yes, when I’m shuffling lots of variables it’s almost always related to initializing an instance, which, I guess, in the grand scheme of things, is not something to get excited about.

Any thoughts on the reflection? C# seems more picky about typing than JS, but I guess they are both going to the same place, so to speak. Is it just a cost of doing business in a scripting language?

What do you mean by “going to the same place”? Scripting languages are usually intended for non-programmers, or to do very specialized stuff where it would be too cumbersome to use a general-purpose language like C#. And for this, they usually want to make it “easier” for new people to write something in these languages. But I hate scripting languages; it is by no means easier, it is just brain-twisting…

Reflection doesn’t have much to do with all that. Reflection is also cached, but it is still not very fast unless you compile specific code out of the reflection data, which is done at many points within the framework, like serialization. Usually you don’t need reflection, but sometimes it can be quite helpful, and it is possible because all type information is stored along with your assembly.
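To make the “compile specific code out of reflection data” idea concrete, here is a hedged sketch (MathOps and Square are invented names): instead of calling MethodInfo.Invoke repeatedly, you can build a strongly typed delegate once from the MethodInfo and call it like a normal method afterwards.

```csharp
using System;
using System.Reflection;

public class MathOps
{
    public static int Square(int x) { return x * x; }
}

public static class ReflectionDemo
{
    public static void Run()
    {
        MethodInfo method = typeof(MathOps).GetMethod("Square");

        // Slow path: late-bound Invoke boxes the argument and the return value.
        object boxed = method.Invoke(null, new object[] { 21 });

        // Faster path: create a typed delegate once, then reuse it.
        Func<int, int> square =
            (Func<int, int>)Delegate.CreateDelegate(typeof(Func<int, int>), method);
        int result = square(21);
    }
}
```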

It is not a matter of performance but a matter of good architecture and design, and whilst we can debate back and forth on what is good, better, or best, if you were to write a function with more than half a dozen arguments, any team technical lead would certainly be looking at the rest of your code with greater scrutiny for other egregiously bad architectural decisions. A function with too many parameters is a sign of laziness in design, which means other people on the team have to carry more burden in their design. It makes the interface to the function far more tightly coupled. Just accept that a function with fewer arguments is almost always better than one with more arguments, so long as it doesn’t require jerry-rigging other techniques to get those parameters into the function in the first place.
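As a hedged illustration of the kind of refactor being suggested (EnemySpawner, EnemySpawnInfo and the fields are all invented names): values that travel together can be grouped into one small type, so the signature stays short and stable as requirements change.

```csharp
using UnityEngine;

public class EnemySpawner
{
    // Before: six loosely related parameters, easy to pass in the wrong order.
    public void SpawnEnemy(string name, float health, float speed,
                           float x, float y, float z) { /* ... */ }

    // After: group the related values into a small type.
    public struct EnemySpawnInfo
    {
        public string Name;
        public float Health;
        public float Speed;
        public Vector3 Position;
    }

    public void SpawnEnemy(EnemySpawnInfo info) { /* ... */ }
}
```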

Ok thanks, I get the point ;). I was only talking about performance… In this case I totally agree with you!

Optimisation in your initialisation routines that does not make the application load faster usually means you are working on a feature that your customer won’t care about. :)

Like all tools, it has its place. Use it for the intended purpose. Getting clever with fancy language features is usually a sign that someone, somewhere, is thinking too hard about a great solution to the wrong problem. Nothing wrong with reflection, but if you are considering using it in anything but a language extension, you might want to review your architecture or class hierarchy to see why you need to.

There are reasons for both duck and strict typing. Tools in the toolbox. Know when to use each one, but more importantly, know when to avoid it. I know that sounds Zen-like, but really, it comes with experience of usage, because again, for every generality there are many exceptions. Wait until you spend four days hunting a bug in a non-strictly scoped, duck-typed variable only to realise you had a spelling error.

I think JS moving away from duck typing is a good move, for the sole reason that, for the most part, the majority of people who use JS are not strong programmers. That said, JS permitting both duck and strict typing provides a bridge from lazy variable handling to something a little more rigorous, without the culture shock a naive programmer will experience moving instantly to something like C#.

Every tool has its place; learn as many as you can, it will give you a strong perspective on what works and what doesn’t, what will save you time and what doesn’t. More importantly, when presented with a shiny new silver-bullet tool, it will let you assimilate its usage quickly but also figure out whether someone is trying to get you to drink Kool-Aid. cough Ruby on Rails cough

With regard to C, C++, Python, Unity, JavaScript, C#, VisualBASIC, Lua, RoR, MVC.NET, frameworks, Java, APIs and every other tool we use regularly***, I always go by this adage: “It makes the easy stuff easier, the hard stuff is just as hard.”**

**Said originally by this guy: http://www.google.com/search?q=justin+lloyd+nice+guy
***Except for BrainFuck, Moo, LOLCat and a bunch of toy languages people create just to be funny.

Are we talking C# only? What level of optimization are we discussing? Some examples (one of which is sketched below):
Avoid division when you can multiply
Use layers over tags where possible
Use Distance() (which uses sqr magnitudes) over Vector3 magnitudes
Direct calls or delegates over SendMessages()
Co-routines over Update()s

?
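A hedged sketch of the SendMessage item from that list (the Projectile and Health components are invented for illustration): a direct call through a typed reference is compile-time checked and cheaper per call than the string-based SendMessage lookup.

```csharp
using UnityEngine;

public class Projectile : MonoBehaviour
{
    void OnHit(GameObject target)
    {
        // String-based, resolved at runtime, silently does nothing on a typo.
        target.SendMessage("TakeDamage", 10);

        // Direct call through a typed reference: compile-time checked and cheaper.
        Health health = target.GetComponent<Health>();
        if (health != null)
        {
            health.TakeDamage(10);
        }
    }
}

public class Health : MonoBehaviour
{
    public void TakeDamage(int amount) { /* ... */ }
}
```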

BTW, I use reflection with compilation in my current Unity project and can say that on a Q6600, a delegate obtained via reflection has an overhead of about 200 nanoseconds in .NET, and is 2x or 3x slower on Mono 2.6… Same goes for multiplying instead of dividing. Only on very rare occasions do you even need to think about such things. The best way is still to use the profiler and some sense of “experience” to pinpoint code that really needs optimization; otherwise you will most likely just waste time or make your code less maintainable/readable for the sake of nothing…

They all end up compiled to the same CIL code.

Unityscript is no more and no less a “scripting language” than C# is.

Distance() isn’t square magnitude, it’s exactly the same as (a-b).magnitude (just a bit nicer syntax).

–Eric

I’m interested in any. Never know what you might have missed. Of course there are others mentioned in the documentation, like using the type rather than a string for GetComponent, and avoiding GetComponent and Find in the main loop. Dictionary offers a performance advantage over Hashtable (and now that Dictionary is supported on iOS, I’m not certain there is any great reason to use Hashtable other than laziness).
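For the Dictionary vs Hashtable point, a small sketch: the generic Dictionary is strongly typed, so it avoids the casts and, with value types, the boxing that Hashtable incurs. The player-score names are just for illustration.

```csharp
using System.Collections;
using System.Collections.Generic;

public static class LookupDemo
{
    public static void Run()
    {
        // Hashtable stores object keys/values: ints get boxed, and reads need casts.
        Hashtable scoresOld = new Hashtable();
        scoresOld["player1"] = 100;                 // boxes the int
        int oldScore = (int)scoresOld["player1"];   // cast and unbox on the way out

        // Dictionary<TKey, TValue> is strongly typed: no boxing, no casts.
        Dictionary<string, int> scores = new Dictionary<string, int>();
        scores["player1"] = 100;
        int score = scores["player1"];
    }
}
```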

Yeah, on my i7 920 @ 3.6 it takes about 60 Unity procedural-locomotion entities to max out one thread (anyone looked into multithreading that, by the way?). I’m not even shooting for dual cores, to be honest, but waste not, want not.

Yes, I know you like UnityScript, Eric ;), so I am not going to try to convince you of something else. But C# and US don’t have much in common except that they can be used with Unity and compile to IL code…

I like this book http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882

And my favourite quote about optimization is:
“The First Rule of Program Optimization: Don’t do it. The Second Rule of Program Optimization (for experts only!): Don’t do it yet.” — Michael A. Jackson.

Most “experts” who gave me optimization advice were dubious at best; .NET is not open source, so they are likely just guessing. It’s better to learn to recognise “code smells”.

I think favouring Refactor rather than Optimize is the best pattern for me when you want good performance. However, I think Unity is a little bit different to work with when you come from using Visual Studio and your own projects, as you are a bit “tied into” the Unity library; finding the best way to work with it after reading the above book might be …interesting. I’ll admit that I am unsure of how to write “clean code” in some circumstances using Unity. Then again, perhaps I am over-analysing, as maybe the design philosophy is that you treat Unity scripts as lightweight (let Unity do the lifting).

Half these guys will still be muttering about reflection and benchmarking when I’m on holiday spending the cash from my finished games. None of the stuff you’re discussing matters one bit when it comes to actually finishing an enjoyable game. Really, head in sand.

I don’t optimize; I just finish it, then check the worst offenders and fix those. I don’t care about special code, only that it’s commented and does the job.

Largely, if you are working on projects with the sole goal of finishing the project and shipping, that’s exactly how you should approach the situation (get it playable, optimize only the parts that require it when performance issues pop up). However, if your major goal is to learn the engine, and best practices, so that in the future you can save time by doing things the “right” way from the beginning, why not ask questions and learn more?

Saying that understanding how different approaches to common systems impact your project’s overall performance is putting your “head in sand” is, in my opinion, a really “head in sand” way to approach development.

You certainly do not have to go out of your way to be rude to people trying to grow as programmers.

Not every project is created with the intent of shipping and profiting. Many are created with the intent of educating oneself, so that when said person is ready to start their first real project they have a solid foundation to start with. Pretending otherwise is really putting one’s head in the sand. An absolutely enormous part of being able to rapidly build a game and only optimize where/when needed is understanding basic system architecture, so that you can avoid many potential issues before they ever become issues. Never begrudge someone for trying to reach the point you are at now.

Usually this is also how it is done. You write your code, and when something is getting slow you hit the profiler; it says where it is slow, you fix it, and you go on…

But what you might not notice anymore, though, is that YOU probably write efficient code out of the box, like many advanced programmers do. Additionally, you will know what code has to be paid special attention to before even starting to write it. You can’t honestly tell me that you just type stuff in there with the sole aim that “it does its job”. In that case, especially on mobile platforms, you wouldn’t get anything running smoothly, at least if you have some more complex algorithms in there which really need to do things efficiently. So the need for such thinking might actually seem strange, but there are also those who just don’t know what might cause trouble, especially folks coming from scripting languages xD.

But what many people get wrong about optimization is that you usually don’t have to care about the low level… Those are only constant factors, and it is very unlikely that constant factors are bothering you. Much more likely is that polynomial costs will bother you, or even exponential ones if you get really lucky. And then it’s time to think about efficient algorithms, and not about exchanging divisions for multiplications ;), which in fact would potentially let the algorithm complete in one million years instead of two million. Great win!
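A hedged illustration of “the algorithm beats the micro-tweak” (the overlap-counting scenario is invented): replacing a nested scan with a hash lookup changes the cost class, which no amount of swapping divisions for multiplications can do.

```csharp
using System.Collections.Generic;

public static class OverlapCheck
{
    // O(n * m): every id in the first list is scanned against the whole second list.
    public static int CountSharedSlow(List<int> a, List<int> b)
    {
        int shared = 0;
        foreach (int id in a)
            if (b.Contains(id))   // linear search each time
                shared++;
        return shared;
    }

    // O(n + m): build a set once, then each lookup is (amortised) constant time.
    public static int CountSharedFast(List<int> a, List<int> b)
    {
        HashSet<int> lookup = new HashSet<int>(b);
        int shared = 0;
        foreach (int id in a)
            if (lookup.Contains(id))
                shared++;
        return shared;
    }
}
```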

Listen to this hippo. Ship! Ship! Ship! I have done one, and precisely one, optimisation in my current game to date: I optimised a data structure to reduce memory usage from 320MB to 12MB. The original 320MB data structure stayed throughout the entire development period until I was good and ready to do away with it, and I was willing to ship the PC and web player versions with it in there.

But all that said, it’s important to learn your tools, so if your goal is to do that, figure out best practices without concerning yourself with putting the ideas into the current game you are working on.

What amuses me is when people profile .NET. “I got blah blah down to blah blah.” Says who… the magical performance counter? Most of the time it’s… no, you haven’t got it more efficient, the GC is just collecting at a different time than it was 5 mins ago. I’ve been down so many rabbit holes with this type of thing, and not one single time that I can recall did it have any bearing on the program performance.

Wow, thanks Eric. I thought I read a performance test that said otherwise, but I can’t find it now and the docs clearly back you up. I don’t get why they would wrap the slower version when it was one line of code anyway. Confusing for new users! They should deprecate it. The docs are clear enough without the extra layer of interface, IMO.

Clear example of sqrMagnitude here in the docs:
http://unity3d.com/support/documentation/ScriptReference/Vector3-sqrMagnitude.html
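Following on from that doc page, the usual pattern: when you only need to compare distances against a threshold, compare squared values so the square root inside Distance()/magnitude is skipped. The Chase script below is just an invented illustration of the idiom.

```csharp
using UnityEngine;

public class Chase : MonoBehaviour
{
    public Transform target;
    public float attackRange = 5f;

    void Update()
    {
        // Equivalent to: Vector3.Distance(transform.position, target.position) < attackRange,
        // but without the square root: compare squared magnitudes instead.
        Vector3 offset = target.position - transform.position;
        if (offset.sqrMagnitude < attackRange * attackRange)
        {
            Attack();
        }
    }

    void Attack() { /* ... */ }
}
```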