“I want to know what objects are within a given radius. What is faster: put all my objects (around 10) in an array and test them in Update? … Or do it with a circle collider (marked as a trigger) and track the objects that collide with it? Or is Physics.OverlapSphere() faster? What is the fastest way?”
For about 10 objects? Optimize the time it takes you to implement a solution. Seriously, 10 tests won't make much of a dent in performance; if they do, it most likely means your game is doing nothing other than those 10 containment tests. The point is that 10 containment tests will be dwarfed by all the other code that will be running.
Each solution has a roughly fixed overhead plus an execution time that varies with the number of objects.
For a few objects, it might be faster to loop through an array (the brute-force loop in the first sketch below).
For a lot of objects, it might be faster to use the physics system and get the benefit of its culling (the Physics.OverlapSphere call in the same sketch).
For specialized usage, it might be faster to use a different culling mechanism entirely (you mention circles; perhaps try a quadtree, for example). The solution you choose should fit your use case. For 10 tests, don't even bother…
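To make the first two options concrete, here is a minimal Unity C# sketch of both: a brute-force loop over an array of transforms, and the equivalent Physics.OverlapSphere call. The class name RadiusQuery and the fields targets and radius are hypothetical names for illustration, not anything from the question.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class RadiusQuery : MonoBehaviour
{
    public Transform[] targets;   // the ~10 objects to test (hypothetical field)
    public float radius = 5f;

    // Approach 1: brute-force distance check over a known array.
    // Compares squared distances to avoid a square root per test.
    public List<Transform> FindInRadius()
    {
        var results = new List<Transform>();
        float sqrRadius = radius * radius;
        foreach (Transform t in targets)
        {
            if ((t.position - transform.position).sqrMagnitude <= sqrRadius)
                results.Add(t);
        }
        return results;
    }

    // Approach 2: let the physics engine do the culling.
    // Requires the target objects to have colliders.
    public Collider[] FindInRadiusPhysics()
    {
        return Physics.OverlapSphere(transform.position, radius);
    }
}
```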
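And the trigger idea from the question, sketched with a 3D sphere trigger standing in for the circle collider: physics callbacks maintain a set of whatever is currently inside the radius. This assumes the tracked objects have colliders of their own and that at least one collider in each pair has a Rigidbody, which Unity requires for trigger events to fire.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Approach 3: a sphere collider marked as a trigger, sized to the query
// radius; enter/exit callbacks keep the set of objects currently inside.
[RequireComponent(typeof(SphereCollider))]
public class TriggerRadiusTracker : MonoBehaviour
{
    private readonly HashSet<Collider> inside = new HashSet<Collider>();

    // Whatever is currently within the trigger's radius.
    public IReadOnlyCollection<Collider> Inside => inside;

    void OnTriggerEnter(Collider other) { inside.Add(other); }
    void OnTriggerExit(Collider other)  { inside.Remove(other); }
}
```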
It’s impossible, or at least very hard, to give a concrete answer about which approach is fastest, because it depends on many factors. What is the fastest way? Well, I don’t know, but I assume you mean “in general”. There’s probably some clever patented algorithm somewhere that utilizes the hardware super efficiently, and that may be the fastest way: everything cache-friendly, all cores used for large sets, one core for small sets, the GPU for certain sets, and probably your sound card for some strange case. Check out Judy arrays, for example:
Judy arrays are designed to keep the number of processor cache-line fills as low as possible, and the algorithm is internally complex in an attempt to satisfy this goal as often as possible. Due to these cache optimizations, Judy arrays are fast, especially for very large datasets. On data sets that are sequential or nearly sequential, Judy arrays can even outperform hash tables.
The point is, perfect optimization is hard. Really hard. But do you need perfect optimization?
I believe the real question is “Is either solution fast enough for my current needs, or my reasonably expected needs?”.
The answer then would be “measure it, and measure it well”.
If it falls short of your performance requirements, try a different approach and compare it against your previous benchmark to see whether you’re making progress or making it worse.
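As a starting point, here is a minimal measurement sketch using System.Diagnostics.Stopwatch. RadiusQuery and FindInRadius are the hypothetical names from the earlier sketch, and the iteration count is arbitrary; it just needs to be large enough that timer resolution stops mattering.

```csharp
using System.Diagnostics;
using UnityEngine;

public class RadiusBenchmark : MonoBehaviour
{
    public RadiusQuery query;          // hypothetical component from the sketch above
    public int iterations = 100000;    // run many times so timer resolution matters less

    void Start()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            query.FindInRadius();
        sw.Stop();
        // Fully qualified to avoid clashing with System.Diagnostics.Debug.
        UnityEngine.Debug.Log($"Array loop: {sw.ElapsedMilliseconds} ms for {iterations} queries");
    }
}
```

Run the same loop against each candidate (the physics query, the trigger tracker) and keep the numbers around as a baseline for the next change.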