AI Navigation WIP Demo

Hey guys. I’m really pleased with how my navigation system is working so I thought I’d share and get some feedback and thoughts.

There is no pathfinding or waypoints; the AI are simply looking for their targets, shown by the long lines (visible after I turned on debug).

I’d prefer not to use a pathfinding method and have them navigate by searching instead, as the game I’m designing will have a lot of level changes happening. It won’t be as crowded as this; this was a bit of a stress test.

Feedback welcome :slight_smile:

That’s awesome. Gonna bookmark this page.

Looks very natural, it’s really sweet!

This is something I’m working on as well. I love how it looks.

Are you going to release this in the future?

Need more info on how to do this!! Raycasts? How are you moving enemies forward? transform.Translate?

Thanks guys for your comments.

I may release this as a plugin in the future, but it’s nowhere near that stage yet.
I currently have the transform.Translate() call in the Navigate function, which is not the best way to organise the routine.
I’ll change it to return the direction and distance parameters and move the characters separately.
That way I can have Navigate calculate on a time basis instead of a per-frame basis as it currently does. If this ran on a slow computer I’m not sure how it would act, so time-based calculation is a better option, like the physics system.
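The per-frame vs. time-based split could be sketched roughly like this (Python for brevity rather than C#; `NavAgent`, `NAV_INTERVAL` and the placeholder `navigate()` are all invented names, not the actual system):

```python
# Hypothetical sketch: run the navigation calculation on a fixed time
# step, but apply movement every frame scaled by dt, so a slow machine
# covers the same ground per second as a fast one.

NAV_INTERVAL = 0.1  # seconds between navigation recalculations

class NavAgent:
    def __init__(self):
        self.nav_timer = 0.0
        self.direction = (1.0, 0.0)   # last result from navigate()
        self.position = [0.0, 0.0]
        self.speed = 2.0              # units per second

    def navigate(self):
        # placeholder for the real steering calculation
        return (1.0, 0.0)

    def update(self, dt):
        # Recompute steering only every NAV_INTERVAL seconds...
        self.nav_timer += dt
        while self.nav_timer >= NAV_INTERVAL:
            self.nav_timer -= NAV_INTERVAL
            self.direction = self.navigate()
        # ...but move every frame, scaled by dt, so distance per
        # second is frame-rate independent.
        self.position[0] += self.direction[0] * self.speed * dt
        self.position[1] += self.direction[1] * self.speed * dt
```

Running this with 100 tiny frames or 10 big frames of the same total duration gives the same travelled distance, which is the point of the change.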

The function also takes about 10 arguments to calculate the navigation. I have it set up as a class, so I can change how the system gets the data needed for each calculation. I’m thinking of passing any needed info to the class before calling the Navigate function. Any tips in this area would be welcome; I’m still deciding the best way. I’m fairly new to C# and am constantly learning new tricks that it offers. I come from a BASIC background, so OOP is doing my head in sometimes.
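One common way to tame a 10-argument function is a small context object that is filled in before the call. A hedged sketch (Python rather than C#; every name here is hypothetical):

```python
# Hypothetical sketch: bundle the navigation inputs into a context
# object handed to the class before Navigate runs, instead of a
# 10-argument call. All names and defaults are invented.

from dataclasses import dataclass

@dataclass
class NavContext:
    position: tuple
    target: tuple
    max_speed: float = 2.0
    ray_count: int = 3
    ray_angle: float = 45.0   # degrees between casts

class Navigator:
    def __init__(self):
        self.ctx = None

    def set_context(self, ctx: NavContext):
        # give the class its data before the calculation runs
        self.ctx = ctx

    def navigate(self):
        # trivial placeholder: head straight for the target
        dx = self.ctx.target[0] - self.ctx.position[0]
        dy = self.ctx.target[1] - self.ctx.position[1]
        length = (dx * dx + dy * dy) ** 0.5 or 1.0
        return (dx / length, dy / length)
```

The nice side effect is that new parameters become new fields with defaults, so call sites don’t break every time the system grows.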

@jrricky
Yes, this is purely raycast- and priority-based navigation. It calculates where it wants to go and how fast to move, and then adjusts this direction according to its surroundings.
Yes, it uses transform.Translate for the forward movement, but the complex part is deciding the direction to travel.
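The "calculate where it wants to go, then adjust for surroundings" idea could look something like this in sketch form (Python; `is_blocked` is a stand-in for a physics raycast, and the fan angles are made up, not the actual values):

```python
import math

# Hypothetical sketch of raycast-and-priority steering: try the heading
# straight at the target first, then alternate small left/right offsets,
# so headings closer to the goal always win priority.

def steer(agent_pos, target_pos, is_blocked, fan_deg=45, step_deg=15):
    """Return a heading angle in radians. is_blocked(angle) stands in
    for a raycast along that heading returning True on a hit."""
    desired = math.atan2(target_pos[1] - agent_pos[1],
                         target_pos[0] - agent_pos[0])
    offsets = [0]
    d = step_deg
    while d <= fan_deg:
        offsets += [d, -d]   # widen the fan symmetrically
        d += step_deg
    for off in offsets:
        angle = desired + math.radians(off)
        if not is_blocked(angle):
            return angle
    return desired + math.pi  # everything blocked: turn back
```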

Do you know how it fares performance wise vs other methods of pathfinding?

Interesting treatment.

What is the application for the navigation? (i.e. what sort of characters/objects are the AI agents?)

The reason I ask is, if they have an obvious facing (and/or “walk cycle”), are you worried their random ray seek might make them look quite “nervous”?

Years ago (when we were still unfortunately stuck in the land of Flash), we tried a presumably similar (but more primitive) ray-casting approach for AI enemies (bugs). It looked really silly, and we ended up having to spend a long time implementing an A* pathfinding approach (which did look very cool). It used a broad grid (with character positions marching over the grid); of course that affected our level design (no elements could be “thinner” than the grid size).

Look forward to seeing how it progresses.

A couple of comments:-

  1. Nice work … however…
  2. This sort of thing can be achieved with (and is similar to) OpenSteer … however
  3. I doubt this sort of thing will work in complicated 3D environments with things such as stairs, drops, etc., so it’s game specific.
  4. There is really no replacement for navmeshes or waypoint graphs.
  5. Hope I don’t sound negative - just trying to help.

I like it. How is performance? There seems to be a lot of raycasts going out. How would the system handle height differences?

If it’s raycast based, it should perform just fine on most post-2003 PCs. Handheld devices will struggle with more than a couple of agents. This is a fairly common (and easy-to-code) way of doing AI navigation.

If you won’t have any slopes or stairs, you’ll end up doing at least two raycasts per object (two forward casts at ±45 degrees). Otherwise you’ll end up using at least three (two forward at ±45° and one along -Vector3.up).
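That minimal ray set could be written out like so (a Python sketch using (right, up, forward) triples; the 45° figure is just the one mentioned above, and the function name is invented):

```python
import math

# Hypothetical sketch of the minimal per-agent ray set: two forward
# casts at +/-45 degrees on flat ground, plus a downward cast when the
# level has slopes or drops. Directions are (right, up, forward).

def avoidance_rays(flat_ground=True, angle_deg=45.0):
    a = math.radians(angle_deg)
    rays = [
        ( math.sin(a), 0.0, math.cos(a)),   # forward-right
        (-math.sin(a), 0.0, math.cos(a)),   # forward-left
    ]
    if not flat_ground:
        rays.append((0.0, -1.0, 0.0))       # straight down (-Vector3.up)
    return rays
```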

While this is all good and whatnot, the results will usually be mediocre with just 2 raycasts so you’ll end up implementing more of them anyway. Add to this the actual navigation script (which will have a couple of operations in Update) and you can see how this isn’t really suitable for anything other than PCs.

It works just fine, if done properly. However, as I previously stated, it’s more expensive than a good navmesh or waypoint system. The nice thing about a raycast approach is that it feels more natural if done well.

Nice movement result… from the top view it is very interesting to look at.

Hey MrMetwurst!

Looks very cool! Neat to see this thread. I have been working on an AI navigation system for the past week also!

I have included a video to share my results so far. As you can see, the system is intended specifically for walking characters and for a specific set of behaviors I am planning. It has a long way to go, but I have enjoyed trying to build it myself rather than using ready-made plugins, etc.

I noticed on the right side of your video there is a circle being formed by the objects. This is really cool to see also! You’ll see I am trying to do the same thing. I wound up adding a fallback that allows a character to “give up” and stand in a second row as long as he is close enough, but I was still trying to figure out the best way to have them close the circle completely. It looks like you have a circle forming perfectly there.

In your video, some of the very long green lines look like they are anchored to a place the vehicles are moving away from - is that supposed to be happening and what is the purpose? Other long green lines look like they are looking ahead, and if they detect a distant obstacle, the higher number of raycasts activate and fan out to watch for the obstacle, but they seem to allow getting very close when checking so far ahead initially. It does seem like a lot of raycasts.

I am doing two raycasts (yellow lines) at 45° from the character. There is a third raycast straight forward that is only used as a second check when the other two cannot be resolved (going into a corner, etc.). The blue lines from the heads are not raycasts; they just show where the character is looking (they look at each other when close enough, or at other objects of interest, or at random if not close to anything; the lines show up 30 seconds into the video). I am also triggering arm animations when they get close to each other. More (better) animations are planned as part of the behaviors. They wander randomly (a little too randomly during these tests) and occasionally are introduced to a waypoint so that they will collect in groups.
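The two-rays-plus-tiebreak decision described here could be sketched as a small function (Python; the turn convention, names, and the "turn away from the nearer hit" rule are all invented for illustration):

```python
# Hypothetical sketch: two 45-degree rays do the avoidance, and the
# third straight-ahead ray is only consulted when the side rays
# cannot resolve the situation (heading into a corner).

def pick_turn(left_hit, right_hit, forward_clear):
    """Return a turn direction: -1 left, +1 right, 0 straight ahead.
    left_hit / right_hit: hit distances from the 45-degree rays
    (None means no hit). forward_clear: the extra straight ray."""
    if left_hit is None and right_hit is None:
        return 0                       # path clear, keep going
    if left_hit is None:
        return -1                      # obstacle on the right: turn left
    if right_hit is None:
        return +1                      # obstacle on the left: turn right
    # Both side rays hit (a corner): the third, straight ray breaks
    # the tie - squeeze through if open, else turn away from the
    # nearer obstacle.
    if forward_clear:
        return 0
    return +1 if left_hit < right_hit else -1
```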

That is extremely twilight zone, pricap. Impressive work. :slight_smile:

Amazing work!

Thanks for all the comments guys. I really didn’t expect such an enthusiastic response.

Not really, as I haven’t tried any other methods yet. I really am tailoring this to my game plan. I imagine a well-designed waypoint or A* method would be faster. That said, I am getting around 500 fps in the stats window in the editor. I’m running an i7 920 @ 2.67 GHz.

The AI are going to be world builders. It’s going to be a life sim that surrounds another game plan.

No, the look function is smoothed, and the real-world application will be more open than this demo.
The grid problem is part of the reason I want to avoid A*. I’d like the AI to decide where to build/live, etc.

Yeah, OpenSteer does look impressive; I’m definitely going to look at that further. I checked it out ages ago, but I kind of wanted the satisfaction of doing it myself. That’s why I program :slight_smile:
Yes, it is game specific, but eventually I’d like to add multilevel elements. Baby steps.
4) is debatable and I appreciate all thought. You don’t sound negative at all :slight_smile:
Thanks.

Performance so far has been surprisingly good. When a long green line turns yellow, there is an object in the way and the more complex checks kick in. All-green lines tell it to simply move to the target. You can’t see it because of the top-down view, but the map does actually vary in height. If the AI can’t see a target over a hill, then, just as in real life, a person can’t see the target and won’t know that it’s there, so he won’t move to where he can’t see yet.

Thanks pricap. Your video looks awesome. The little touches like the characters waving to each other and looking at objects or the camera are very nice. I agree, there’s something special about building it yourself, right? :slight_smile:

I simply made the characters walk slower as they approach their target. That way the turn-to-forward ratio is increased. It doesn’t always work out so well; sometimes they wander round the group endlessly. Stupid AIs :smile:
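The slow-down-on-approach trick is essentially a speed ramp. A hedged sketch (Python; the radius and speed values are invented placeholders, not the actual tuning):

```python
# Hypothetical sketch: speed scales down with distance to the target
# inside a "slow radius", so the turn-to-forward ratio rises on
# approach. All parameter values are made up.

def approach_speed(distance, max_speed=2.0, slow_radius=5.0, min_speed=0.2):
    if distance >= slow_radius:
        return max_speed
    # linear ramp down inside the slow radius, clamped to a minimum
    # so the character never fully stalls short of the target
    return max(min_speed, max_speed * distance / slow_radius)
```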

Shhhh, you didn’t see that :slight_smile: Yeah, when two characters walk next to each other, sometimes they don’t allow each other to turn towards their destination until they are way off course, and then they have to navigate back to where they wanted to go. That’s why the AI outside the wall had to navigate his way back. It’s kind of a flaw, but kind of cool at the same time. I think in a bigger, more open world this won’t be such a problem, but I’ll probably add a speed-limit feature to help avoid other AI.

I’ve designed it so that further objects don’t have a strong influence on the direction. So yeah, they can get quite close before they panic and really start turning away drastically. Again, I think more open worlds won’t be so bad.

The start and end angles and the distance between casts are editable in the editor. I will of course cast the absolute minimum needed to make it feel natural as the game progresses. When the AI each have more specific goals and targets, I think the crowding problem will be less frequent and I can cut back on the rays.

I think for 3 rays your tests are working very well. Love ya work :slight_smile:

Interesting - I did the same thing but in the opposite way: the forward speed stays the same, but I increase the turning amount as they get closer so they will rotate in towards the goal faster. Sometimes they will go around the group until finding an opening, but I also turn off the raycast collision avoidance and rely on the character controller collisions when they get very close to the group, and sometimes they get stuck while aimed straight at the goal.
I think a big difference is when character animation is included: you can’t make them move too slowly or they seem to be in slow motion. I might add an additional animation cycle that could look like they are moving around the others.

Definitely one of the challenges is when one AI starts “herding” another away from its goal. Apparently they don’t share the same goal! Another example of the application determining the need. In my case, only one AI starts looking for a goal, and it is the nearest one to him. If a different goal becomes closer for some reason (herding), he just uses that goal instead. Once he has arrived, that goal becomes the goal for all other AIs within a range; since they all share a common goal they don’t tend to herd one another off in other directions, though they still do sometimes.

Either way, there is still the challenge of collision avoidance overriding goal seeking. I am translating the AI forward while interpolating its rotation towards the goal. The rotation amount is a member variable that continuously gets added to or subtracted from based on the combination of collision avoidance and goal seeking, so there is always a bias towards the goal in the collision avoidance “decision making”.
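That continuously adjusted rotation amount could be sketched like this (Python; the weights and clamp values are invented, the permanent bias toward the goal is the point):

```python
# Hypothetical sketch: one member variable accumulates the turn amount,
# pushed by collision avoidance and pulled toward the goal every step.
# The goal term is weighted higher, so the goal always biases the turn.

class TurnAccumulator:
    def __init__(self, goal_weight=1.0, avoid_weight=0.8, max_turn=3.0):
        self.turn = 0.0
        self.goal_weight = goal_weight
        self.avoid_weight = avoid_weight
        self.max_turn = max_turn

    def step(self, goal_error, avoid_push):
        """goal_error: signed angle toward the goal; avoid_push: signed
        steering impulse from collision avoidance (0 if nothing near)."""
        self.turn += self.goal_weight * goal_error
        self.turn += self.avoid_weight * avoid_push
        # clamp so one crowded frame can't spin the character around
        self.turn = max(-self.max_turn, min(self.max_turn, self.turn))
        return self.turn
```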

However, it still seems like your vehicle could continue to be herded by the other vehicle with the different goal, and with the long walls in the big world it would still get far from its fixed goal. Maybe once it has gone past a distance threshold and has kept avoiding things (e.g. on its right), it could change modes, stop seeking its goal, and steer to the left for some duration in order to turn around, and then start seeking its goal again.
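That suggested mode switch could be sketched as a tiny state machine (Python; the threshold, duration, and names are placeholders for whatever the game actually needs):

```python
# Hypothetical sketch: once an agent has drifted past a distance
# threshold while constantly avoiding on the same side, stop seeking
# the goal and steer the other way for a while to turn around.

class EscapeMode:
    def __init__(self, drift_limit=8.0, escape_steps=20):
        self.drift_limit = drift_limit
        self.escape_steps = escape_steps
        self.remaining = 0               # >0 while escaping

    def update(self, dist_to_goal, avoiding_same_side):
        if self.remaining > 0:
            self.remaining -= 1
            return "escape"              # steer away, ignore the goal
        if dist_to_goal > self.drift_limit and avoiding_same_side:
            self.remaining = self.escape_steps
            return "escape"
        return "seek"
```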

It’s looking very cool and I enjoyed watching all the different events occurring in the video. Good luck with the development of the game and keep going with it! I hope to see the progress.

@MrMetwurst
You could try adding “information sharing” about those blue targets, so if one agent finds a target that isn’t his destination, it gets added to a “shared pool of destinations/goals” → a hash table.
1. That way agents can share locations when they are in proximity, like a “smarter one” meeting a “dumber one”. That means less computation and primitive learning, and it adds up on efficiency.
2. When an agent changes to a new goal already in his pool, he knows where to look for it = less computation and less random movement. This looks more intelligent.
3. Agents can share a goal’s status [existent/non-existent] learned from agents that have already visited that place, without testing the location themselves, and move to the second goal on the list instantly after a “quick chat”.

  4. Add more video of this!!! I watch it 3 times a day… it makes me feel good somehow :smile:
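The shared-pool idea from points 1-3 could be sketched like this (Python; `Villager`, the dict layout, and the “later info wins” merge rule are all assumptions, not anyone’s actual design):

```python
# Hypothetical sketch: agents that meet merge what they know about goal
# locations, including goals marked as no longer existing, so others
# can skip a dead goal without testing the location themselves.

class Villager:
    def __init__(self, name):
        self.name = name
        self.known_goals = {}     # goal id -> {"pos": ..., "exists": bool}

    def discover(self, goal_id, pos):
        self.known_goals[goal_id] = {"pos": pos, "exists": True}

    def mark_gone(self, goal_id):
        if goal_id in self.known_goals:
            self.known_goals[goal_id]["exists"] = False

    def chat(self, other):
        # "quick chat" on proximity: both agents end up with the union
        # of their goal knowledge (the other agent's entry wins on
        # conflicts - a simplification for this sketch)
        merged = {**self.known_goals, **other.known_goals}
        self.known_goals = dict(merged)
        other.known_goals = dict(merged)
```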

I will be adding “memory” to the AI eventually so they know where something is once they have seen it.
Passing information on to each other is a good idea; I’ll consider this as well. I won’t make it a global table that all can read, though, because in this case that would defeat the purpose of my current game design. :slight_smile:
Nothing much more to show at this stage. I changed the navigation system last night to only run once every set time interval instead of every frame, and I have improved the navigation as the AI get close to their blue targets.

Then my RAID setup kept crashing my PC and halted all progress :frowning:

I’ll post some more when I feel there is something worth sharing :smile:

Cheers!!