“DynamicBuffers supersede fixed array support which has been removed.”
So this sounds to me like: if I want to use array-like structures on entities, they have to be Dynamic Buffers. Is that right? Or is there still a fixed-size variant available?
I am asking because intuitively I think that an (unmanaged) fixed-size variant should be more performant.
Maybe as an example question: I want to have an Entity containing exactly 100 byte values. What data structure should I use?
Use [InternalBufferCapacity(int)] to set the number of elements to store in the chunk. Adding more elements than that means an allocation is made and the buffer is stored outside the chunk. Using a value of 0 means the buffer is always stored outside the chunk, which is useful if you know the buffer frequently needs to hold many elements:
```csharp
// Reserve room for exactly 100 elements in the chunk, matching the
// example above; past 100, the buffer moves to a heap allocation.
[InternalBufferCapacity(100)]
public struct Element : IBufferElementData
{
    public byte Value;
}
```
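To make it concrete, here is a minimal sketch (assuming a managed SystemBase system; the name ByteBufferInitSystem is made up) of creating an entity that holds exactly 100 byte values:

```csharp
using Unity.Entities;

// Hypothetical bootstrap system: creates one entity whose buffer
// holds exactly 100 byte values, all stored inline in the chunk
// thanks to the capacity of 100 declared above.
public partial class ByteBufferInitSystem : SystemBase
{
    protected override void OnCreate()
    {
        var entity = EntityManager.CreateEntity();
        var buffer = EntityManager.AddBuffer<Element>(entity);
        for (byte i = 0; i < 100; i++)
            buffer.Add(new Element { Value = i });
    }

    protected override void OnUpdate() { }
}
```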
I don't know how Unity determines the maximum number of elements it will store in the chunk by default, though.
Thanks for the response. I think I understand better now. One follow-up question.
You said:
“… which is useful if you know the buffer frequently needs to hold many elements”.
By many elements, do you mean thousands or tens of thousands? Even if it is always the same amount n? Or would I just use [InternalBufferCapacity(n)] in that case?
It depends on the size of each element.
The thing is that the buffer capacity is reserved in the chunk. So if you allow a capacity that would fill a chunk with just a few entities, it would result in poor chunk utilization, which may lead to poor performance.
On the other hand, if you allow too little capacity, you will have frequent random memory access to get the buffer for the entities that exceed the capacity, and you still would not improve your chunk usage because the buffer capacity would still be reserved in the chunk.
You will have to profile.
If I remember correctly, the default capacity (when not specified) is equivalent to 128 bytes. So if your buffer element holds one float, the buffer will be in the chunk memory if it has 32 or fewer elements.
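If that 128-byte figure is correct, the implicit capacity simply follows from the element size. A hedged illustration (type names made up):

```csharp
using Unity.Entities;

// 4-byte element, no attribute: implicit in-chunk capacity would be
// 128 / 4 = 32 elements, matching the figure above.
public struct FloatElement : IBufferElementData
{
    public float Value;
}

// Explicit override: 64 elements * 4 bytes = 256 bytes reserved in
// the chunk for every entity of this archetype.
[InternalBufferCapacity(64)]
public struct WideFloatElement : IBufferElementData
{
    public float Value;
}
```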
One question regarding capacity: Does setting buffer.Capacity = 10 at runtime mean that the capacity on the chunk is changed?
And regarding the Profiler: am I reading this correctly?
I got 4 entities of this archetype. I could fit another 127 into the chunk containing them, and one entity takes up 124 B. These are split into some buffer component and Entity data (index, version, and something else, I guess?).
So this is actually pretty bad chunk usage.
I am wondering: if I have several components that just hold some data (perhaps changing at runtime), does it make sense to aggregate all of these into one entity, because then chunk usage will be better?
I have not tested it, but I think that will just resize the memory currently allocated for that specific buffer, not the way the chunk handles storing it in or out of the chunk.
I'm not sure how you would change the internal capacity at runtime, and I'm not sure it's wise to do so.
It would imply a structural change because the memory equivalent to the internal capacity is reserved in the chunk. If you double the internal capacity, you halve the number of entities that can fit in a chunk, so you would have to reorganize all the existing entities into new chunks.
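To illustrate the distinction, a hedged fragment (entityManager and entity are placeholders):

```csharp
// Assuming some entity with a DynamicBuffer<Element> on it:
var buffer = entityManager.GetBuffer<Element>(entity);

// If the explanation above is right, this only resizes the heap
// allocation used once the buffer has spilled out of the chunk.
// The in-chunk reservation from [InternalBufferCapacity] belongs
// to the archetype and is not affected.
buffer.EnsureCapacity(10);
```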
You are reading it correctly. It's both bad and not bad, IMO.
It's bad because you only have 4 entities in that chunk when you could have up to 131 (which matches a 16 KiB chunk at ~124 B per entity, minus a small chunk header), but if you only have 4 entities in your world, there is nothing you can or have to do about it.
It's good (or not that bad) because you could still fit over 128 entities in that chunk. Keep in mind that jobs mostly use chunk iteration, meaning that for every chunk you have the overhead of scheduling the chunk for processing. The higher the chunk capacity, the fewer chunks you have to schedule, and the lower your overhead is. For example, 10,000 entities at 131 per chunk means scheduling ~77 chunks, while the same entities at 8 per chunk means 1,250. It also depends on the complexity of the work you do with the data: if the time it takes to process a chunk is far greater than the overhead, a high chunk capacity becomes less important.
Yes, you will save memory because the size of the memory reserved for a chunk is constant; it's the capacity that differs.
So by merging entities with non-overlapping component sets, you create a bigger archetype which fits fewer entities in a chunk and therefore wastes less space. But again, that also increases the number of chunks you have to process for the same number of entities, so it may be worse in terms of time performance.
I do merge some of my configuration entities, but they hold blob asset references, they don't change at runtime (or only very rarely), and I don't have to iterate over them (it's a singleton entity), so I don't waste chunk memory space and am not impacted by iteration performance since I have no iteration logic on that entity.
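For what it's worth, a hedged sketch of that kind of singleton configuration component (LevelSettings and GameConfig are made-up names):

```csharp
using Unity.Entities;

// Blob data built once at initialization; the component only stores
// a reference, so it costs very little chunk space.
public struct LevelSettings
{
    public int MapWidth;
    public int MapHeight;
}

public struct GameConfig : IComponentData
{
    public BlobAssetReference<LevelSettings> Settings;
}
```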
For the rest of the game data, I like to try to split my entities by big features.
For a player I will have a combat entity with all the stats and abilities, a movement entity with movement speed, input direction, pathfinding profile, and so on, an inventory entity, and so forth.
They may have to communicate with each other; for example, an ability could reduce the movement speed, so you have to choose on which side you put that component. Speed will be read by the movement system more frequently than it is written to by the ability systems, so I would put it on the movement entity.
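As a hedged sketch of that split (component names are made up), the combat entity can simply hold a reference to the movement entity:

```csharp
using Unity.Entities;

// Lives on the movement entity: read every frame by the movement system.
public struct MovementSpeed : IComponentData
{
    public float Value;
}

// Lives on the combat entity: lets ability systems look up the
// movement entity and write MovementSpeed when an ability slows it.
public struct MovementEntityRef : IComponentData
{
    public Entity Value;
}
```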
To come back to the internal capacity, my advice would be: leave the default value, make your game work, and if you have performance issues, profile to see if they are due to the jobs that iterate over entities with buffers.
It may not be the job working on the buffer itself that has the issue; it may be a job that does simple work, but since the entity has a big buffer, it's spread across lots of chunks and you pay a lot of overhead to schedule that other job.
You can try to reduce the buffer capacity and see if it improves chunk usage without decreasing the performance of your other jobs.
Also, if your entities may have a highly varying number of elements in the buffer, I would suggest setting the internal capacity to either 0 or the maximum number of elements any entity could have (preferably 0, as it is more memory efficient).
It may not be the best for performance, but I think the performance will be more consistent.
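A hedged example of that last suggestion (PathWaypoint is a made-up element type):

```csharp
using Unity.Entities;
using Unity.Mathematics;

// Element counts vary wildly per entity, so keep the buffer entirely
// off-chunk: no reserved-but-unused chunk space, at the cost of one
// extra indirection when accessing the buffer.
[InternalBufferCapacity(0)]
public struct PathWaypoint : IBufferElementData
{
    public float3 Position;
}
```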
Hey Wayn, Thank you for the detailed response! This already helped a lot.
I have a follow up question that I’d love to hear your thoughts on, if you can find the time.
What if the internal buffer capacity corresponds to something like map dimensions, say 10x10, but in the next level it is 8x8 and lots of stuff now calculates with 8 instead of 10? Shouldn't I then somehow set the internal buffer capacity at initialization?
Actually, I realized that my question is really a case of unnecessary generalization. I will most likely end up with a static size anyway, and when figuring out what the ideal size is, I might just have to adjust the two relevant parts in the code.