Native Collections with ParallelWriters throwing exceptions for capacity issues - is this ideal?

Obviously the intention is that you set the capacity of any such collection (e.g. NativeList, NativeParallelHashMap) before scheduling the parallel work which will write to it.

Right now, if you set the capacity too low you’ll just get an uncatchable exception from a (likely Burst-compiled) job.
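
To make that concrete, here’s a minimal sketch of the kind of job I’m talking about (the names and the filter condition are made up, and it’s a plain IJobParallelFor rather than an Entities job just to keep it self-contained):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

[BurstCompile]
struct CollectMatchesJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<int> Candidates;

    // Fixed-capacity writer: the list cannot grow from inside a parallel job.
    public NativeList<int>.ParallelWriter Matches;

    public void Execute(int index)
    {
        // Stand-in for the real "should this entity act?" condition.
        if (Candidates[index] % 1000 == 0)
        {
            // Throws (uncatchably, from inside the Burst-compiled job)
            // if Matches is already at capacity.
            Matches.AddNoResize(Candidates[index]);
        }
    }
}
```

With safety checks enabled, that AddNoResize call is where the exception comes from once the list is full, and because it happens inside the job there’s nothing you can catch on the main thread.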

This seems pretty painful from a robustness standpoint. For example, you may have a Job which traverses all Entities looking for the simulation-defined subset that meets some condition (e.g. deciding to shoot, or whatever). The typically tiny set of such entities (maybe 0.01%) is put into a NativeList via a ParallelWriter to be handled in some subsequent job.

Right now you need to set the capacity of the NativeList to match the full candidate set of Entities - i.e. all of them, maybe 10,000x more than is typically needed.
If you don’t do this, then there’s a small chance that eventually you’ll need more than your guesstimated number and your game dies.
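
In code, the “safe” sizing today looks roughly like this (continuing the sketch above; the counts are just illustrative):

```csharp
var candidates = new NativeArray<int>(100_000, Allocator.TempJob);

// Worst-case sizing: capacity covers every candidate matching,
// even though typically only a handful of entries ever land in it.
var matches = new NativeList<int>(candidates.Length, Allocator.TempJob);

new CollectMatchesJob { Candidates = candidates, Matches = matches.AsParallelWriter() }
    .Schedule(candidates.Length, 64)
    .Complete();

// ... hand matches off to the follow-up job ...

matches.Dispose();
candidates.Dispose();
```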

Such exceptional behaviour could just as easily be encoded as part of the TryAdd behaviour of these ParallelWriters - we’d get ‘false’ if the TryAdd failed due to a capacity issue, possibly with some second ref-bool or whatever to distinguish exactly why it failed.

This would allow a kind of manual exception handling - any parallel job that hits capacity issues would set a separate flag somewhere and return early. A later non-parallel job would check the flag and redo the work in the rare case where it’s needed.
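
Sketched against the job above, it might look something like the following. To be clear, TryAddNoResize doesn’t exist today - it’s a stand-in for whatever non-throwing add the API might grow - and the flag/fallback plumbing is just one possible way to wire it up:

```csharp
[BurstCompile]
struct CollectMatchesJobWithFallback : IJobParallelFor
{
    [ReadOnly] public NativeArray<int> Candidates;
    public NativeList<int>.ParallelWriter Matches;

    // Length-1 flag; any worker may set it, so disable the per-index restriction.
    [NativeDisableParallelForRestriction] public NativeArray<int> Overflowed;

    public void Execute(int index)
    {
        if (Candidates[index] % 1000 != 0)
            return;

        // Hypothetical non-throwing add - this method does not exist today.
        if (!Matches.TryAddNoResize(Candidates[index]))
            Overflowed[0] = 1; // remember that results were dropped
    }
}

// Rarely-needed slow path: a later single-threaded job redoes the filter
// with a list it is allowed to resize.
[BurstCompile]
struct CollectMatchesFallbackJob : IJob
{
    [ReadOnly] public NativeArray<int> Candidates;
    [ReadOnly] public NativeArray<int> Overflowed;
    public NativeList<int> Matches;

    public void Execute()
    {
        if (Overflowed[0] == 0)
            return;

        Matches.Clear();
        for (int i = 0; i < Candidates.Length; i++)
        {
            if (Candidates[i] % 1000 == 0)
                Matches.Add(Candidates[i]); // single-threaded, so growing is fine
        }
    }
}
```

The happy path stays fully parallel with a small capacity, and the single-threaded re-run only happens in the rare frame where the guess turned out to be wrong.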

Is such a thing out of the question?

Thanks, I like the idea of a fallback mechanism for the rare slow-path handling. I’ve made a note to follow up in our API review.

+1 for this kind of idea - I commonly need to massively over-allocate parallel containers for the same reasons.
