DynamicBuffer internal capacity, IJobChunk, performance and heap memory

Hello,

The documentation regarding dynamic buffers says:

I have an IJobChunk that casts a collider and stores the collision hits in a dynamic buffer attached to the entity.
The buffer is cleared before every cast and filled with all the hits found.

I tested this with a number of hits exceeding the internal capacity of the dynamic buffer and it works fine.

My question is: what is the magnitude of the performance impact of exceeding the internal capacity?
I understand the entire buffer is moved to heap memory when that happens; I assume this is done on a per-entity basis and not for the entire chunk.
Does it mean that the performance is impacted only for that particular entity, or for the entire chunk, when the dynamic buffer is read in a subsequent IJobChunk?
Does the fact that I clear the dynamic buffer before the collider-cast job move the buffer “back” from heap memory?
Or does it remain in heap memory but free the memory space up to the internal capacity?

Basically, once the internal capacity is exceeded, does the performance impact remain even if the buffer shrinks back below the internal capacity?

Sorry for the load of questions, but I would like to understand exactly how it works so that I can take the necessary precautions.


It means that when the buffer is moved to the heap, accessing this buffer will be a cache miss. It also means that accessing sequential buffers outside of the chunk will be cache misses, with no prefetching between buffers. The impact will depend on how you process the buffer and on its size.
Moving to the heap doesn’t free space in the chunk; that would mess with the offsets relative to the entity.
As for clearing the buffer: it didn’t used to shrink back, but I haven’t tested it recently.
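To make the threshold concrete, here is a minimal sketch of a buffer element with an explicit internal capacity, assuming the Unity.Entities package; the `CollisionHit` element type is a made-up example, not anything from the thread:

```csharp
using Unity.Entities;
using Unity.Mathematics;

// Up to 8 hits live inline in the chunk alongside the entity's other
// components; storing a 9th hit moves the whole buffer to a separate
// heap allocation. The 8 inline slots in the chunk stay reserved
// (but unused) after the move, which is the wasted-space drawback
// discussed above.
[InternalBufferCapacity(8)]
public struct CollisionHit : IBufferElementData
{
    public float3 Position;
    public float3 Normal;
}
```

With no attribute, the Entities package picks a default internal capacity based on the element size.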

[ ]'s

OK, so if I understand correctly, I have to find a good balance between reserving chunk memory that might never be used and risking the double drawback of a cache miss plus doubled memory usage if I exceed the internal capacity of my dynamic buffer, and then being stuck with those drawbacks until I remove the component from the entity.

One workaround I can think of is a job, run before the ray-casting one, that checks whether the internal capacity is exceeded and, if so, removes the buffer and adds it back to the entity through an entity command buffer. But my understanding is that this would move the entity out of and then back into the chunk, which may not be performant…

I used buffers in a collision system and it was pretty clear they would not fit in a chunk (I asked on the forums whether variable / user-defined chunk size is on the roadmap, and if I recall correctly Joachim responded that they are looking into it).

I then moved the buffer to the heap from the get-go (internal capacity 0), and since I process it via AsNativeArray() the access is linear.

Depending on your buffer size (and how frequently you exceed the internal capacity / overall chunk capacity), this might be a viable approach for you as well.
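A sketch of this "heap from the get-go" approach, again assuming the Unity.Entities package; the `CollisionHit` element type and `ReadHitsJob` are hypothetical names, and the exact `IJobChunk.Execute` signature varies between Entities versions (this one matches Entities 1.x):

```csharp
using Unity.Burst.Intrinsics;
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;

// Capacity 0 means the buffer data always lives on the heap, so it
// never competes with other components for chunk space and there is
// no inline-to-heap transition to worry about.
[InternalBufferCapacity(0)]
public struct CollisionHit : IBufferElementData
{
    public float3 Normal;
}

// Reading the buffers inside an IJobChunk via a BufferAccessor.
// Each buffer's storage is contiguous, so iterating over
// AsNativeArray() is still a linear walk; the cache misses are
// between buffers, not within one.
struct ReadHitsJob : IJobChunk
{
    [ReadOnly] public BufferTypeHandle<CollisionHit> HitsHandle;

    public void Execute(in ArchetypeChunk chunk, int unfilteredChunkIndex,
                        bool useEnabledMask, in v128 chunkEnabledMask)
    {
        BufferAccessor<CollisionHit> accessor = chunk.GetBufferAccessor(ref HitsHandle);
        for (int e = 0; e < chunk.Count; e++)
        {
            NativeArray<CollisionHit> hits = accessor[e].AsNativeArray();
            for (int i = 0; i < hits.Length; i++)
            {
                // process hits[i] ...
            }
        }
    }
}
```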

Thank you for your answer.
The reason I’m using this is that I have one system to move and one to jump; both need some information about the ground under the player, but they won’t use it in the same manner.
So, to avoid casting twice per frame (or more, if I come up with other systems that need this info), the plan was to cast once to get all the contact points and then let each system use them as needed.

The move system might want to consider only ground at a certain angle to handle a max-slope behaviour, whereas the jump system just wants to check for contact under the character, not on the sides.

A climb system could also use this data to check whether a wall can be climbed based on the contact points and the player’s direction, and so on…

So I cast/write once a frame (twice if you count clearing) and read multiple times a frame in a for loop (the buffer is accessed in an IJobChunk).

Guess I’ll have to work this out as I go and see whether performance is hit hard or not. No solution is ever perfect :wink:

I suggest you do some stress tests.

I generally prefer buffers to be on the heap, since in my case they are rarely just a few elements in size; they rather count in the hundreds. So keeping such data in the chunk, with many entities wanting to process it, may not be suitable. Depending on the application, of course.

I haven’t fully investigated the following approach yet, but so far it works: I set a low internal capacity using the attribute on the buffer element, then increase the capacity to the minimum required size (or an estimated size) at entity creation, so as to avoid resizing the buffer later at runtime as much as possible.
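A minimal sketch of that pre-sizing step, assuming the Unity.Entities package; `CollisionHit` is a hypothetical buffer element type and 64 is an arbitrary estimated hit count:

```csharp
using Unity.Entities;

public static class HitBufferSetup
{
    // EnsureCapacity reserves heap space up front, so later Add()
    // calls during gameplay don't trigger reallocations as long as
    // the estimate holds.
    public static void AddHitBuffer(EntityManager em, Entity entity,
                                    int estimatedHits = 64)
    {
        DynamicBuffer<CollisionHit> buffer = em.AddBuffer<CollisionHit>(entity);
        buffer.EnsureCapacity(estimatedHits);
    }
}
```

Note that EnsureCapacity only affects the heap allocation; the inline chunk space is still fixed by the attribute on the element type.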