Hi guys, I started working with UnitySteer. Currently I am working on an app where agents move along a sequence of predefined paths. I set obstacles between the waypoints using UnitySteer's SteerForSphericalObstacleAvoidance behavior. I can see that my vehicle, which uses AutonomousVehicle, checks against several obstacles once they fall into its range; however, its avoidance behavior always relates to only one obstacle. If, for instance, I put two spherical obstacles so close to one another that there is no gap between them, the agent will calculate avoidance for only one of them and pass at its side, even if that side is overlapped by the second obstacle's collider. Is there a way to force the agent to treat an array of closely positioned obstacles as one, so that it will not try to pass between them?
Thanks a lot!
I haven't worked with UnitySteer at all, but my guess would be that it won't handle adjacent obstacles the way you'd like it to (although I could be wrong about that).
Generally speaking, getting around one sphere is a collision avoidance problem, while getting around two spheres side by side is a little more of a pathfinding problem.
That said, it should be possible to extend the obstacle avoidance behavior to work with other convex objects (although you might have to code it yourself if UnitySteer doesn't support it). For example, if you were to add support for capsules and represent the two spheres using that shape, the agents should then steer around them as desired.
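To make the capsule idea concrete, here is a minimal sketch of the geometry involved. UnitySteer itself is C#, so this is just a language-agnostic illustration in Python, and all the function names are mine: treat the two spheres as one capsule (the line segment between the two centers, plus the shared radius), find the closest point on that segment to the agent, and push the agent away whenever it gets inside the capsule plus a safety margin.

```python
import math

def closest_point_on_segment(a, b, p):
    """Closest point to p on the segment a-b (points are 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab_len_sq = abx * abx + aby * aby
    if ab_len_sq == 0:
        return a  # degenerate capsule: both centers coincide
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len_sq))
    return (ax + t * abx, ay + t * aby)

def capsule_avoidance_force(center1, center2, radius, agent_pos, margin=1.0):
    """Steering force pushing the agent away from a capsule formed by two
    sphere centers and their shared radius. Returns (0, 0) when clear."""
    cx, cy = closest_point_on_segment(center1, center2, agent_pos)
    dx, dy = agent_pos[0] - cx, agent_pos[1] - cy
    dist = math.hypot(dx, dy)
    danger = radius + margin
    if dist >= danger or dist == 0:
        return (0.0, 0.0)
    # Push directly away from the capsule axis, stronger the deeper
    # the agent has penetrated the danger zone.
    scale = (danger - dist) / dist
    return (dx * scale, dy * scale)
```

Because the closest point is computed on the whole segment, an agent sitting in the "crack" between the two spheres still gets pushed out sideways instead of finding a phantom gap between them.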
Maybe someone else will have a better answer though.
Can you just use a recheck? Do the same calculation twice: run the second pass on the result of the first, and when the two results are equal, or no difference in the environment was observed, treat the result as final, end the loop as it currently does, and complete the action. It seems to me that your problem stems from a workflow that assumes that once an incomplete but locally correct result is found, the move can be executed. If so, you should avoid changing the environment while that action is taking place, because such a system cannot deal with changes on the fly, unlike a continuous check that uses sensor input as updated feedback to determine a path. An event such as "I am on a new tile", for example, could simply ask the algorithm again for a yes/no list of options that are acceptable, depending on your preferred settings.
For example, you can use an A* algorithm to determine an admissible path; however, that path is a constant, and the move requires time t (seconds). If at 0.5t the environment changes (moving walls in a maze, say), the path becomes invalid and needs to be recalculated to get a new admissible result. This is impossible without a sensor-like system that detects the environment (let's assume a 2D tile-based grid of 1 x 1 tiles, in meters).
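The replan-on-change idea above can be sketched like this (plain Python rather than Unity C#, and the grid representation and helper names are mine, purely for illustration): run A* on the tile grid, keep the resulting path, and recompute only when a cell on the path has since become blocked.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (blocked) cells, 4-connected moves,
    Manhattan heuristic. Returns a list of (row, col) cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

def path_still_valid(grid, path):
    """True while every cell on the stored path is still free."""
    return all(grid[r][c] == 0 for r, c in path)
```

Each tick (or on each "I am on a new tile" event), you would check `path_still_valid` and call `astar` again only when it returns False.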
Weigh carefully how you want to balance your pathfinding frequency against the speed at which the environment can change.
If you recalculate the path every frame, your object reacts instantaneously but the performance cost is maximal. If you recalculate once a year, it's quite the opposite. So consider when the relevant events occur and how quickly the response to such an event must come, to find the best result-to-cost ratio.
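One common middle ground between "every frame" and "almost never" is event-driven replanning with a rate cap. A minimal sketch (Python, with invented names, not any engine's actual API): replan only when the environment has been flagged as changed, and never more often than a fixed minimum interval.

```python
import time

class Replanner:
    """Recomputes a path at most once per `min_interval` seconds, and only
    when the environment has been flagged as changed since the last plan."""

    def __init__(self, plan_fn, min_interval=0.25):
        self.plan_fn = plan_fn            # callable returning a fresh path
        self.min_interval = min_interval
        self.last_plan_time = float("-inf")
        self.dirty = True                 # environment changed since last plan
        self.path = None

    def notify_change(self):
        """Call this from whatever detects an environment change."""
        self.dirty = True

    def update(self, now=None):
        """Call once per frame; replans only when dirty and rate allows."""
        now = time.monotonic() if now is None else now
        if self.dirty and now - self.last_plan_time >= self.min_interval:
            self.path = self.plan_fn()
            self.last_plan_time = now
            self.dirty = False
        return self.path
```

Tuning `min_interval` is exactly the result-to-cost trade-off above: shorter means faster reaction to moving obstacles, longer means fewer pathfinding calls per second.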