Any reason ContainsKey is missing from NativeHashMap.ParallelWriter?
Lots of things in the NHM source code go over my head without really studying it. I see a lot of Interlocked and atomic operations. AFAIK there will never be 2 buckets with the same key.
So why is ContainsKey missing then? Is it something that just doesn't work, or was it simply never added?
The parallel writer's purpose is to write using multi-threading. It makes no sense to try to read from the collection while multiple threads are still writing to it.
If you want to check for key existence, use the collection without the parallel writer.
Or write to the collection from a single-threaded job; then you can check for existing keys in the same job.
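In other words, something like this split is the intended usage: write through the ParallelWriter in a parallel job, then do any ContainsKey checks on the map itself once that job has completed. A minimal sketch (job name, capacity and values are made up):

using Unity.Collections;
using Unity.Jobs;

// Sketch, not actual project code: write through the ParallelWriter in a
// parallel job, then read from the map itself once the job is complete.
struct WriteJob : IJobFor
{
    public NativeHashMap<int, int>.ParallelWriter Writer;

    public void Execute(int index) => Writer.TryAdd(index, index * 2);
}

public static class Example
{
    public static void Run()
    {
        var map = new NativeHashMap<int, int>(1024, Allocator.TempJob);

        var handle = new WriteJob { Writer = map.AsParallelWriter() }
            .ScheduleParallel(1024, 64, default);
        handle.Complete();

        // Safe now: no job is writing to the map anymore.
        bool has5 = map.ContainsKey(5);

        map.Dispose();
    }
}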
Well, it's not that I'm writing to the NHM all the time. The calculation of some data can either be instant or delayed to a later frame. When it's delayed, I'm using the UnsafeParallelBlockList from the Latios framework, because it works better for parallel writing than a plain NativeStream or NativeList.
The delayed frame number or tick is used as the key, so it's mostly just a lookup to get the correct UnsafeParallelBlockList, plus a one-time setup whenever a delayed tick has never been registered before.
I guess, for this case I can just use a normal NHM without a ParallelWriter.
Is it possible to identify your min and max tick values in advance? I usually do this and allocate a NativeList and a NativeReference to hold the base offset.
That would certainly be possible. There are quite a few data checks involved (getting the spell that is being cast, then reading from the spell data how many ticks it will be delayed), so I don't think it would end up being faster to do all of that twice. I'll measure later how much overhead the NHM lookup and the eventual write actually cost me. Maybe I can get away with it.
Do you use NativeList<NativeReference>? Why have you chosen a NativeReference for your case?
So if I know the min and the max, then I store the min in the NativeReference and allocate a NativeList for max - min.
You don't need the true min and max either. A theoretical min and max (or the current tick and the maximum possible ticks ahead) may compress your range enough to make the approach viable.
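Something along these lines, to illustrate the idea (Slot, minTick and maxTick are placeholders, not your actual types):

using Unity.Collections;

// Rough sketch of the range approach: the smallest possible tick becomes a
// base offset stored in a NativeReference, and the NativeList covers the
// whole [min, max] range so a tick maps to a plain index.
public struct Slot
{
    public int Dummy; // whatever gets stored per tick, e.g. a block list handle
}

public static class TickRange
{
    public static void Allocate(int minTick, int maxTick,
        out NativeReference<int> baseTick, out NativeList<Slot> slots)
    {
        baseTick = new NativeReference<int>(minTick, Allocator.Persistent);
        slots = new NativeList<Slot>(maxTick - minTick + 1, Allocator.Persistent);
        slots.Resize(maxTick - minTick + 1, NativeArrayOptions.ClearMemory);
    }

    public static Slot Lookup(NativeList<Slot> slots, NativeReference<int> baseTick, int tick)
    {
        // Direct index instead of a hashmap key.
        return slots[tick - baseTick.Value];
    }
}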
Ah, got it! Much cleaner than using a NativeArray of size 1. The base offset in my case is the tick, which I always know, so I wouldn't need the NativeReference. I have to think some more about the NativeList approach. Being able to use a direct index would be a lot better than having to look up an NHM key. I just have to figure out how to handle it without ending up with many empty elements when the delays are uneven, like +5, +11, etc.
What I can certainly do is find the maximum delay across all spells, which I could use as the forEach count of a NativeStream. Since the maximum delay of a spell won't be longer than 60-90 ticks, a NativeStream with such a low count would be the best solution, were it not for the problem that writing to the same index in parallel is problematic. But I could do the same thing with a NativeList and an UnsafeParallelBlockList, which should work in theory.
Well, as always, you've got my mind racing. Thanks for the input!
If you need determinism, NativeList is your best bet. If you don't, then NativeList is probably still your best bet. The biggest cost is going to be the preallocation. For low data counts you can zero the list and check whether each container is allocated. For high data counts, you may be better off just preallocating everything and not performing that check.
I see you have an alternate solution, but to answer the original question:
Because there is no way Unity could ever guarantee it would always be safe:
public NativeHashMap<int, int>.ParallelWriter Writer;

if (!Writer.ContainsKey(5)) // hypothetical: ContainsKey doesn't exist on the ParallelWriter
{
    // another thread may add key 5 before or while this block runs
}
If ContainsKey returned false, it would never be safe to rely on that result, because another thread could add key 5 to the hashmap between the check and whatever runs inside the condition.
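For what it's worth, TryAdd already covers the safe version of that pattern: the existence check and the insert happen as one atomic operation, and the return value tells you whether this thread actually inserted the key. A minimal sketch (job and field names are made up):

using Unity.Collections;
using Unity.Jobs;

struct AddJob : IJobFor
{
    public NativeHashMap<int, int>.ParallelWriter Writer;

    public void Execute(int index)
    {
        if (Writer.TryAdd(5, index))
        {
            // this thread won the race and inserted key 5
        }
        else
        {
            // another thread (or an earlier iteration) already added key 5
        }
    }
}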
Well, right now I'm still figuring out how to do it.
Without thread safety, sure. So why not make it thread safe? Interlocked.CompareExchange is already used in the parallel writer's TryAdd to avoid creating 2 buckets for the same key. (I think, correct me if I'm wrong.)
I see what you mean. How is it different with a NativeMultiHashMap, which can add to the same key? Doesn't it have the same problem there?
So one problem is that the bool returned by ContainsKey can be stored in a local variable and used later. That throws a wrench into this whole thing and would require some kind of context. Yeah, I'm starting to understand why ContainsKey doesn't exist.
I'm honestly just interested in how this could be solved. Locking the container from the outside sounds like a bad idea. Is it just not possible to have a conditional add in parallel for hash maps in general? TryAdd kind of underlines this, though. Still seems weird, but multi-threading and weird limitations go hand in hand.
So for thread safety I'm using a custom Interlocked.CompareExchange method, because Mono doesn't have a uint overload for whatever reason.
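(For reference, such a wrapper typically just reinterprets the uint as an int and forwards to the int overload that is available. A minimal sketch of what that might look like, not the actual custom method; requires unsafe code:)

using System.Threading;

public static class AtomicUtil
{
    // Sketch of a uint CompareExchange built on the int overload that Mono
    // does provide. The bit patterns are reinterpreted, not converted.
    public static unsafe uint CompareExchange(ref uint location, uint value, uint comparand)
    {
        fixed (uint* ptr = &location)
        {
            return unchecked((uint)Interlocked.CompareExchange(
                ref *(int*)ptr, (int)value, (int)comparand));
        }
    }
}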
After the parallel job I iterate over the createDelayedSpells_WriteArray, looking for a tick to calculate. Once one is found, the UnsafeParallelBlockList is read and disposed, and the WriteArray element is reset to default.
An alternative solution would be to pre-allocate a NativeList and compare the calculationTickDelay with its length. If the list is too short, fill it with empty elements until the length matches, then allocate the UnsafeParallelBlockList. I haven't measured this, but I think the Interlocked approach is more scalable than this alternative. This could all be much nicer without filling the NativeList with empty elements, but I have no real solution for that.
Such cases would need to work: on tick 265 a spell is cast with a delay of 10, and on tick 267 a spell with a delay of 3, for example.
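A rough sketch of that padding alternative, with a placeholder DelaySlot type standing in for whatever wraps the UnsafeParallelBlockList (names are illustrative, not the actual code):

using Unity.Collections;

public struct DelaySlot
{
    public byte Created;
    // e.g. the UnsafeParallelBlockList would live here
}

public static class DelayedSlots
{
    // Grow the list with default elements until the tick-delay index exists,
    // then allocate the real payload at that index on first use.
    // Single-threaded setup; doing this from a parallel job would need the
    // Interlocked approach discussed above.
    public static void EnsureSlot(NativeList<DelaySlot> slots, int calculationTickDelay)
    {
        while (slots.Length <= calculationTickDelay)
            slots.Add(default);

        if (slots[calculationTickDelay].Created == 0)
        {
            // allocate the UnsafeParallelBlockList (or other payload) here
            slots[calculationTickDelay] = new DelaySlot { Created = 1 };
        }
    }
}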