I’m really missing an option to get pointers from ComponentDataFromEntity. Technically I can get them if I expose some internals, but maybe there’s a reason why you’re not supporting them? Cache misses are a given when using CDFE, so that’s not really a counter-argument. The only problem I see is a missed SetChangeVersion, but that could be called when getting the ref with a write flag. So problem solved?
CDFE is super fast, but for bigger components, like 10+ variables, the copy gets really slow, in which case getting a reference would be really beneficial! (In case you want to know what takes so many variables: RPG stats! And no, I will not use buffers!) Microsoft’s C# guidelines recommend passing structs by reference as good practice, for anything that has 8+ bytes really.
It would also be neat to write back in one go, without the additional overhead. It’s marginal with the lookup cache and just some pointer arithmetic, but even with that out of the way, the full struct has to be copied, and the bigger it is, the worse it gets again. Maybe we just want to change one single value.
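For illustration, a minimal sketch (the component name and fields are made up) of the by-value vs. by-reference difference that guideline is about:

```csharp
using Unity.Entities;

// Hypothetical RPG stats component, purely for illustration.
public struct CharacterStats : IComponentData
{
    public float Strength, Dexterity, Intellect, Stamina;
    public float CritChance, CritDamage, Haste, Armor;
    public float MoveSpeed, AttackPower; // 10 floats = 40 bytes copied on every by-value access
}

public static class StatsMath
{
    // Passing by value copies the whole struct on each call ...
    public static float AttackPowerByValue(CharacterStats stats)
        => stats.Strength * 2f + stats.AttackPower;

    // ... while `in` passes a read-only reference, which is what a
    // ref-returning CDFE accessor would allow end to end.
    public static float AttackPowerByRef(in CharacterStats stats)
        => stats.Strength * 2f + stats.AttackPower;
}
```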
Anyway, I hope to hear the thoughts from the devs.
It definitely shouldn’t exist on the indexer (that would break change filtering), but I agree that a
ref T ElementAt(Entity)
method, like DynamicBuffer has, would be useful.
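For reference, the ref-returning pattern DynamicBuffer already has today, which the suggested CDFE method would mirror (the element type is a made-up example):

```csharp
using Unity.Entities;

// Hypothetical buffer element, just to show the existing ref-returning pattern.
public struct StatModifier : IBufferElementData
{
    public float Value;
}

public static class BufferRefExample
{
    public static void ApplyDecay(DynamicBuffer<StatModifier> modifiers)
    {
        // ElementAt returns a ref, so this writes straight into chunk memory
        // without copying the struct out and back.
        ref StatModifier mod = ref modifiers.ElementAt(0);
        mod.Value *= 0.9f;
    }
}
```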
I would also like to see an ElementAt function that bumps the change version. As it stands today the API tends to trip up Burst and make it do a bunch of register juggling.
It would also be nice to have the same thing (like the ElementAt tertle suggested) for a NativeArray, without having to use NativeArrayUnsafeUtility.GetUnsafePtr and then UnsafeUtility.ArrayElementAsRef.
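For now that workaround can at least be wrapped up once; a minimal sketch using exactly those two calls (the extension method name ElementAt is my own):

```csharp
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;

public static class NativeArrayRefExtensions
{
    // Wraps the GetUnsafePtr + ArrayElementAsRef dance mentioned above.
    // No bounds or safety checks, so use with care.
    public static unsafe ref T ElementAt<T>(this NativeArray<T> array, int index)
        where T : struct
    {
        void* ptr = NativeArrayUnsafeUtility.GetUnsafePtr(array);
        return ref UnsafeUtility.ArrayElementAsRef<T>(ptr, index);
    }
}
```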
Same thing really. The first uses the standard read-only CDFE approach: under the hood it gets a ReadOnlyPtr and then memcpys into a struct that is returned.
The second uses a custom CDFE where I can get the ReadOnlyPtr directly and typecast it to the struct, so no memcpy (no UnsafeUtility.CopyPtrToStructure(ptr, out T data)) is involved.
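To make the comparison concrete, here is a hedged sketch of the two read paths with the CDFE internals replaced by a raw component pointer (helper names are mine, not the actual CDFE code):

```csharp
using Unity.Collections.LowLevel.Unsafe;

public static unsafe class ReadApproaches
{
    // Approach 1: memcpy the component out of chunk memory and return the copy.
    public static T ReadByCopy<T>(void* componentPtr) where T : struct
    {
        UnsafeUtility.CopyPtrToStructure(componentPtr, out T data);
        return data;
    }

    // Approach 2: reinterpret the pointer as a reference, no copy at all.
    public static ref readonly T ReadByRef<T>(void* componentPtr) where T : struct
    {
        return ref UnsafeUtility.AsRef<T>(componentPtr);
    }
}
```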
How much difference do you think there is between these two approaches?
The profiler is a little unstable when it comes to absolute numbers, but approach 2 completes roughly 30-50% faster, more on the 50% side: 250k iterations, 30 ms vs. 15-17 ms. (Yes, the profiler has huge overhead here.)
I’ve never tested and profiled this before but these numbers are absurd!
That’s only 1 part of my code. Replacing all CDFE usages will lead to huge improvements.
This is even worse when writing back to a CDFE without using ref. I don’t know why you built all these great native containers and then stopped halfway because of “safety” reasons. We don’t need hand-holding at the expense of performance, and a beginner won’t touch anything that involves a void* or even a ref anyway.
The story isn’t trivial, since structural changes can move memory, and a ref return can be held across a structural change without the safety system protecting you, leading to potential memory corruption.
That said, the downsides for performance and simpler code are massive.
So we need to figure out our story here. We are discussing internally what we will do about it.
Really happy to read that!
I realize the implications with structural changes are quite widespread. Before you start figuring out how to handle this in an elegant and safe way, I’d recommend implementing a quick fix.
I’d welcome either a compiler directive, or implementing ComponentDataFromEntity as a partial struct with a separate namespace, like Entities.Unsafe, that gives us access to the pointers and references.
Same goes for any NativeContainer that is missing those.
Looking forward to it! Thanks, Joachim and DOTS team!
Constructively, I would ask how you are using the stats in the system. I found that most stats in an RPG aren’t used frame by frame, and that realistically you’re trying to calculate a derived stat based on character stats, i.e. strength, speed and dexterity to calculate attack power. I fixed the problem you’re having by condensing my stats component to the stats actually needed frame by frame, then using a blob asset to update those stats on level up.
That is some good input, thanks! Stats are not completely static and they can be manipulated by buffs or certain spells, like an ability that has a 5% higher crit chance.
Still, bringing the base/non-changing stats to a blob could make sense. It would free up a lot of data in the spellcaster chunk.
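Not from the thread, but roughly what that split could look like, with made-up stat names: the immutable base values go into a blob asset and only the buff-adjusted values stay on the entity:

```csharp
using Unity.Collections;
using Unity.Entities;

// Hypothetical blob layout holding the non-changing base stats.
public struct BaseStatsBlob
{
    public float Strength;
    public float Dexterity;
    public float Intellect;
    public float BaseCritChance;
}

// What remains in the chunk: an 8-byte blob reference plus the mutable values.
public struct StatsReference : IComponentData
{
    public BlobAssetReference<BaseStatsBlob> Base;
    public float CurrentCritChance; // buff-adjusted, changes at runtime
}

public static class BaseStatsBuilder
{
    public static BlobAssetReference<BaseStatsBlob> Build(float str, float dex, float intel, float crit)
    {
        using (var builder = new BlobBuilder(Allocator.Temp))
        {
            ref BaseStatsBlob root = ref builder.ConstructRoot<BaseStatsBlob>();
            root.Strength = str;
            root.Dexterity = dex;
            root.Intellect = intel;
            root.BaseCritChance = crit;
            return builder.CreateBlobAssetReference<BaseStatsBlob>(Allocator.Persistent);
        }
    }
}
```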
I’ve tried a lot - bigger chunk data vs. smaller chunk data with random access. In the code above I had the stats on a different entity, hence the random access. Now they reside in the chunk, which lowers the time spent on random access but also increases overall time because the maximum entity count per chunk decreases. It’s really quite a balancing act where to put data that isn’t relevant every frame.
Unity did a talk at one of the Unite conferences about handling stat modifiers in DOTS. It’s a very good talk on handling things like poison, slow, etc. You might consider looking through the archive for it.
I’ve watched it, and while it’s elegant, that approach was basically my first iteration and had terrible performance.
I’m not recommending any of it, but it’s a good way to demonstrate ECS and ECB.
Chances are that when you started with ECB you didn’t yet fully understand what it involves, and that affected the design.
The main benefit of using ECB is in cases where you have, say, 100k enemies and want to apply e.g. poison over time to just 1k of them. Then you don’t need to run through all 100k enemies to apply these calculations; it is automatically filtered.
Although you probably don’t want to use ECB if some events change quickly and often, for example taking general damage when enemies fight frequently.
You could still, however, filter the damage system’s entities by adding e.g. an InCombat tag component, which is faster than component data with fields.
This way, with ECB, you can save a ton of performance and have many buff types and corresponding systems, avoiding unnecessary calculations when they are not needed.
You could have two large armies standing around with literally no CPU usage, as no system executes unnecessarily: no buffs, no damage calculations, at least until the enemies enter the combat state.
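As a rough sketch of the tag-filtering idea (Health, IncomingDamage and the system name are made up, not from this thread):

```csharp
using Unity.Entities;

// Zero-sized tag: adding/removing it only changes which queries an entity matches.
public struct InCombat : IComponentData { }

public struct Health : IComponentData { public float Value; }
public struct IncomingDamage : IComponentData { public float Value; }

public partial class DamageSystem : SystemBase
{
    protected override void OnUpdate()
    {
        // Only entities currently tagged InCombat are processed; two idle
        // armies never enter this loop at all.
        Entities
            .WithAll<InCombat>()
            .ForEach((ref Health health, in IncomingDamage damage) =>
            {
                health.Value -= damage.Value;
            }).ScheduleParallel();
    }
}
```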
Which part is supposed to be faster? Adding an empty tag component still triggers a structural change, and depending on the number of entities affected this is either irrelevant or a huge performance spike. The design is certainly not scalable enough to maintain stable framerates. A fixed component with fields outperforms it by miles; as long as we can’t enable/disable a component without a structural change, a fixed component will be the more scalable solution. InCombat is maybe not the best example as it’s not really doing anything, it’s more like a flag relevant to other systems that might then kick in, like, I don’t know, auto attacking, finding a target, etc. Having those systems run all the time and check for the combat state is certainly not the most elegant solution, but that problem can be solved with ChunkComponentData and/or a DidChange check (see the sketch below). No ECB or query needed.
Iteration and read time are mostly negligible even with 250k+ entities. Going to the extreme, the checking part could even be vectorized.
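A minimal sketch of that change-filter alternative, assuming a hypothetical CombatState component that stays on the entity permanently, so flipping it causes no structural change:

```csharp
using Unity.Entities;

// Hypothetical combat-state component; byte instead of bool to stay blittable.
public struct CombatState : IComponentData
{
    public byte InCombat; // 0 = out of combat, 1 = in combat
}

public partial class TargetingSystem : SystemBase
{
    protected override void OnUpdate()
    {
        Entities
            // Chunk-level change filter: the lambda only runs over chunks where
            // CombatState was written since this system last ran.
            .WithChangeFilter<CombatState>()
            .ForEach((Entity entity, in CombatState state) =>
            {
                // react to the new combat state here (start auto attacking, find a target, ...)
            }).ScheduleParallel();
    }
}
```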
With the archetype design it’s also better to have dedicated entities for buffs/debuffs/combat effects. That way, chunk capacity won’t be eaten up by data that’s not relevant to the calculation. Dedicated entities mean random memory access lookups, which is certainly one downside. The poison example is also short-sighted: what happens when there are more DoT effects, all with different amounts? The example is just way too simple for most games, I feel. Then you end up with a DynamicBuffer, and that brings systems and iterations to a crawl. The random lookups of the other solution won’t be a problem in comparison.
It depends on what you design and how. In most cases you won’t be changing states for 100k entities in one frame; mostly you’ll have around 1,000, maybe up to 10k entities changing state when some powerful magic spell goes off. Some of that may use adding a component tag. And yes, it was even confirmed by Joachim that it is better to use tag components with no fields rather than component data with fields.
I definitely don’t want 100 systems running, with multiple jobs each, all iterating through 100k entities. Doing that without filtering is a wasteful use of resources and limits what you can do. It reduces the range of hardware the game can run on, hence fewer customers.
You will realise that once you reach a huge number of jobs.
I’ll say this: there is a slight optimisation in using empty components, but it’s still a structural change, which means a memcpy of all affected entities.
Having that many systems seems like a huge design problem. I use two systems for spell casting and around 10 jobs which iterate over completely different things. Many have also come to the realization that not using Entities for everything gives much better performance. Entities is great, but really not a universal solution.
Again, it depends on how the design is made. Systems don’t execute when there are no matching queries, so it isn’t a big problem. Mixing various approaches is completely fine. However, yes, there is a small overhead to executing systems, so that needs to be considered in the design. The same applies to chunks when they are marked as changed.
Doing or not doing things with Entities is a separate aspect.
Indeed, that has been discussed many times over.
Also, your use case will be completely different from mine.
So I’d say it all depends on what needs to be achieved.