System.Threading on iOS

So I am at a point where I could be taking chunks of processor-intensive code in my app and splitting it off to different threads. I actually have this working and haven’t run into an issue yet; I’m just curious to know if anyone else has experience doing this and what you have found.

Does System.Threading work cleanly on iOS devices? Any weird bugs or glitches you ran into? Did you find significant performance gains? Any tips on the subject or things you discovered?

Also, how exactly do you see how much each of the two processor cores is being utilized on an iPad? I can only figure out how to get one processor usage % bar through Instruments, which I assume is both cores clumped together. But I would really like to see how much core 1 is being used and how much core 2 is being used, to assess whether splitting my code into more threads actually has any benefit and whether it is doing anything. Does anyone know how to do that?

Lastly, I have a general question about threading in C#. I have an array of around 10,000 Vector3s that are essentially the positions of particles. On a separate thread I am doing some calculations and updating the Vector3s in the array. Then on the main thread I am reading these Vector3 positions into a particle system and updating the particle system on screen. This is working, but I am curious to know: is this the proper method? I was under the assumption that when threading I had to implement something to make sure a value isn’t accessed by one thread while another thread is changing it. Is it possible that my second thread could be halfway through updating a Vector3 when my main thread attempts to read that Vector3, and something bad happens? Can things like that happen? Do I need to take extra steps in code to check for something like that? I was under the impression I did, but the code seems to be running just fine. Does C# have something built in that automatically checks for that?
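Roughly what I’m doing right now, simplified (the real particle math is more involved and the names here are made up):

using System.Threading;
using UnityEngine;

public class ParticleSim : MonoBehaviour
{
    Vector3[] positions = new Vector3[10000];
    Thread worker;
    volatile bool running = true;

    void Start()
    {
        worker = new Thread(SimulateLoop);
        worker.IsBackground = true;
        worker.Start();
    }

    // worker thread: updates the positions continuously
    void SimulateLoop()
    {
        while (running)
        {
            for (int i = 0; i < positions.Length; i++)
                positions[i] += Vector3.up * 0.001f;   // placeholder for the real particle math
        }
    }

    // main thread: reads the same array straight into the particle system
    void Update()
    {
        for (int i = 0; i < positions.Length; i++)
        {
            // e.g. particles[i].position = positions[i];
        }
    }

    void OnDestroy()
    {
        running = false;
        worker.Join();
    }
}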

@tech, you definitely don’t want to be just willy-nilly accessing data from multiple threads. You are guaranteed to create hard-to-find, hard-to-reproduce bugs. Any data accessed by more than one thread needs to be “locked” by the thread accessing it. There are many, many ways to do this with C#. Mutexes are probably the easiest, but I would recommend you consult Google so you can learn about all the different ways to sync threads.
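For example, a minimal sketch using C#’s lock statement (which is Monitor under the hood); the class and names are just illustrative, not your actual code:

using UnityEngine;

public class SharedPositions
{
    readonly object sync = new object();               // one lock object guarding the array
    readonly Vector3[] positions = new Vector3[10000];

    // called from the worker thread
    public void Write(int i, Vector3 value)
    {
        lock (sync) { positions[i] = value; }
    }

    // called from the main thread
    public Vector3 Read(int i)
    {
        lock (sync) { return positions[i]; }
    }
}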

Thanks, I looked at Mutexes and Monitors some, but it seems they need to be mixed with try/catch blocks? Which slows performance, I believe, and this is a really performance-critical operation that’s occurring; it can’t possibly go any slower. Is there a method to open a variable up to being read by one thread and written to by another, without the need for try/catch blocks?
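For reference, the Monitor pattern I keep running into looks roughly like this (try/finally around the protected section, as far as I can tell):

using System.Threading;

class MonitorExample
{
    static readonly object sync = new object();
    static int shared;

    static void WriteShared(int value)
    {
        Monitor.Enter(sync);
        try
        {
            shared = value;          // protected region
        }
        finally
        {
            Monitor.Exit(sync);      // always released, even if the body throws
        }
    }

    static int ReadShared()
    {
        Monitor.Enter(sync);
        try { return shared; }
        finally { Monitor.Exit(sync); }
    }
}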

I have to bump this thread again and ask: are you (or anyone) sure you must use some kind of mutex or monitor on a variable that is accessed from different threads?

Because I have an array of ints; one thread changes the ints, another thread reads the ints. Both threads are directly accessing the array, and no problems at all are occurring thus far.

@tech, it is absolutely, without a shadow of a doubt, threading 101: accessing data from multiple threads without a lock/mutex/monitor/other mechanism can cause data corruption. Google is your friend. A simple search will get you thousands of hits and dozens of solutions.

Thank you for the insight, Prime.

One more question, if you could answer, as you seem knowledgeable about this; I suspect the answer may be specific to Mono, or even Unity itself, and I am having trouble finding it.

Is there overhead to taking a lock? I’ve decided to go with locking as the way to manage my multi-threaded variables. There are instances where I have a for loop that iterates 1000 times and alters around 20 different variables, but only one of those variables needs to be locked while it is altered. Should I lock the entire for loop, or should I lock just that one single variable change inside of the for loop?

Locking the entire for loop means the lock only has to be taken once, but since the loop is also altering 20 variables that don’t need to be locked, the lock may be held for longer than it needs to be.

However, if I lock just that one variable change inside of the for loop, there will be more time when the variable is unlocked and other threads can access it, but it means I will be locking and unlocking 1000 times in a single update. Is that a bad idea?
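To make the two options concrete, here is roughly what I mean (the names and the work inside the loop are placeholders):

public class LockGranularityExample
{
    readonly object sync = new object();
    float sharedValue;                       // the one variable another thread also touches

    // Option A: one lock around the whole loop (1 acquisition per update)
    public void UpdateLockedOutside()
    {
        lock (sync)
        {
            for (int i = 0; i < 1000; i++)
            {
                float local = i * 0.5f;      // stand-in for the ~20 thread-local variables
                sharedValue += local;        // the only write that actually needs protection
            }
        }
    }

    // Option B: lock only the shared write (1000 acquisitions per update)
    public void UpdateLockedInside()
    {
        for (int i = 0; i < 1000; i++)
        {
            float local = i * 0.5f;
            lock (sync)
            {
                sharedValue += local;
            }
        }
    }
}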

You don’t want to lock the entire loop. Just lock as little as you need to right before you need it. I don’t remember the reason why though. Go read a chapter in a C# book on threading for more info.

Well, to report back: I’ve discovered locking does have an overhead, a serious overhead.

If you have a for loop that iterates over 1000 values and you take the lock inside the for loop, you will encounter serious slowdowns. It is better to lock around the entire for loop: you actually want to take the lock as few times as you possibly can.

Given what I’ve found, I would actually say you want to avoid doing more than a single lock command in an Update loop, or maybe two.

Store all the data you need to update from the multithreaded variable into a temporary variable, reading into the temporary while locked. Then unlock, alter the temporary variable, and then lock again and copy it back into the multithreaded variable.
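In code, that pattern looks roughly like this (the class and field names are made up, and the temporary array is preallocated so it isn’t reallocated every call):

using System;
using UnityEngine;

public class CopyUnderLock
{
    readonly object sync = new object();
    readonly Vector3[] shared = new Vector3[10000];   // touched by both threads
    readonly Vector3[] temp = new Vector3[10000];     // private to this thread

    public void Step()
    {
        // 1. copy out of the shared array while holding the lock
        lock (sync)
        {
            Array.Copy(shared, temp, shared.Length);
        }

        // 2. do all the heavy work on the private copy, with no lock held
        for (int i = 0; i < temp.Length; i++)
            temp[i] += Vector3.up * 0.001f;           // placeholder for the real update

        // 3. copy the result back under the lock
        lock (sync)
        {
            Array.Copy(temp, shared, temp.Length);
        }
    }
}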

And actually, if one thread is very performance critical, you may purposely want to over-lock it. Meaning, just lock a huge chunk of the Update function, even if the lock extends past its actual usage in some instances. Because if the Update function has to lock a variable, do something else, then regain the lock, it is possible that regaining that lock could cause a pause. This does happen, and it was a big issue I found. The more locks you have in an Update function, the greater the chance your Update function is going to jitter and have to pause and wait.

For example, in my game I am locking a variable in my FixedUpdate and then locking the same variable in a function being run on another thread. The function on the other thread is performance critical, but since it doesn’t have immediate visual feedback, it can skip, jump and jitter without issues.

So what I have done is, in my secondary thread function, I lock and unlock the needed variable a bunch of times. This does make the secondary thread function go slower, but it also means the FixedUpdate function has a greater chance of being able to grab the lock when it needs it and not cause a jitter. The FixedUpdate then has just one single lock, and it is a large lock, which I am sure causes my secondary thread function to jitter. But again, the secondary thread function has no visual feedback, so it can jitter and skip without notice. I have found this setup to give the best performance, because it basically gives priority to the lock in the FixedUpdate. The FixedUpdate has more opportunities to grab the lock, and it holds and maintains the lock for longer. This gives me no noticeable drop in performance.
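Simplified, the shape of what I’m doing is something like this (the chunk size, array size and the actual work are placeholders):

using System;
using System.Threading;
using UnityEngine;

public class AsymmetricLocking : MonoBehaviour
{
    readonly object sync = new object();
    readonly Vector3[] shared = new Vector3[10000];
    volatile bool running = true;
    Thread worker;

    void Start()
    {
        worker = new Thread(WorkerLoop);
        worker.IsBackground = true;
        worker.Start();
    }

    void OnDestroy()
    {
        running = false;
        worker.Join();
    }

    // main thread: one large lock per FixedUpdate, held for the whole read
    void FixedUpdate()
    {
        lock (sync)
        {
            for (int i = 0; i < shared.Length; i++)
            {
                // read shared[i] into whatever the physics step needs
            }
        }
    }

    // worker thread: takes and releases the lock in small slices,
    // so FixedUpdate rarely has to wait long to grab it
    void WorkerLoop()
    {
        while (running)
        {
            for (int i = 0; i < shared.Length; i += 100)
            {
                lock (sync)
                {
                    int end = Math.Min(i + 100, shared.Length);
                    for (int j = i; j < end; j++)
                        shared[j] += Vector3.up * 0.001f;   // placeholder for the real work
                }
            }
        }
    }
}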

Is this a good design pattern, or is there a better way to do this?

And you know, I’m still not completely sold on whether or not this is necessary. I know all the documentation says to do it, but I think that documentation was written with a different type of application in mind. My game seems to run just fine without it, and I haven’t seen the lack of locks cause any issue at all. The data that is being locked and unlocked is not exactly super critical; it gets updated every frame, so if some erroneous data did get read in, the player wouldn’t even notice.

I’ve been researching, and the only real issue I have found with not using locks is a “race condition”, where one thread might alter the data while another thread is doing something to the data and expecting a result. But my code wouldn’t have any issues with a race condition. The multithreaded variable is really just a back-and-forth communication variable: one thread spits data into it, the other thread reads it. No race condition could occur.

And as far as I understand, at a deep internal level a CPU will not be editing and reading the same piece of memory at the same time. So even if I sent a call to read a block of memory and a call to write to that block of memory at the same time, at a deep internal level one of these operations will occur after the other; they can’t actually occur at the same time. And in my case it doesn’t matter which one occurs first or second, because it updates so fast you couldn’t even notice.

So really… I am thinking I don’t need to use locks. Is there something I may have missed? For anyone who knows more on this, any critique of my thought process would be much appreciated.

You don’t need to lock if one thread is only reading the data and the other thread’s write is atomic. An int is about as atomic as it gets: so long as the fact that one int isn’t set yet has no appreciable effect on anything else, the locking is irrelevant. There are very rare circumstances where locking is unimportant - you may possibly have one.

For example, a Vector3 is non-atomic: the X could be updated while the Y and Z are not yet - leading to an erroneous value - which might in itself be fine, if you know about it and don’t care. The X of the Vector3 on its own is atomic; it’s a float and will be set by a single processor instruction - it cannot be half set.
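To make that concrete, a small sketch assuming one writer thread and one reader thread with no locks (the names are just illustrative):

using UnityEngine;

public class AtomicityExample
{
    int sharedInt;          // a 32-bit int write is atomic: never seen half-written
    Vector3 sharedVector;   // a Vector3 write is three separate float writes

    // writer thread
    public void Publish(int i, Vector3 v)
    {
        sharedInt = i;       // a reader sees either the old or the new value, never a mix
        sharedVector = v;    // a reader can see the new X with the old Y and Z (a "torn" value)
    }

    // reader thread
    public void Consume(out int i, out Vector3 v)
    {
        i = sharedInt;
        v = sharedVector;    // possibly torn without a lock
    }
}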