So I got to the point in my many projects (because with ADHD, I of course cannot focus solely on any one game idea I have) where I have begun to look into AI behavior and pathfinding. I know that there is already a whole project dedicated to implementing A* in Unity, and as I dive into the subject at its core, I've come across many questions, one of which was answered in this post on Stack Overflow. As they put it, in terms of an NPC's awareness and knowledge, something like A* wouldn't quite seem right.
To be clear, I haven't researched the subject any more than skimming the Wikipedia pages for these algorithms, nor have I looked to see if any discussions/tutorials exist on using them.
Consider the following scenarios for an NPC:
A. A fellow adventurer who is discovering an ancient ruin.
B. A local baker going from their home, to the market, and to the shop.
C. A peddler taking their goods from town to town.
D. The enemy in front of you!
Among the three options A*, D*, and Lifelong Planning A*, which do you think would be most applicable to each of these scenarios?
Which of these do you think would be most performant? Assume different population sizes: 10, 50, or 1,000 NPCs per point of interest.
Are there any games that you know of that specifically use these algorithms or others?
I’m very curious and would like to know what you all think about it!
I've recently been enjoying the line-of-sight approach.
This guy doesn't actually pathfind, to a point, but he knows where he can't go, and if an obstacle appears he could easily be told where to go. He's like a robot vacuum cleaner, and that's why I spend a lot of time admiring it for its potential instead of over-developing it.
If you're working on a grid, and objects are confined to grid coordinates as they move, then pathfinding can be achieved iteratively. He only needs to know the next nearest square to his target. But this often includes an obstacle check: is the next nearest space clear? If not, how do I decide the next nearest point after that? Problems arise, but they're solvable problems. It's a matter of preference and game design; if you have a grid that objects attach to, it's convenient to use it.
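To make the "next nearest square" idea concrete, here's a minimal sketch of that greedy step in Python. Cells are `(x, y)` tuples and `blocked` is a set of impassable cells; all the names here are illustrative, not from any engine API.

```python
def next_step(pos, target, blocked):
    """Pick the neighbouring cell closest to the target that isn't blocked."""
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    open_cells = [c for c in neighbours if c not in blocked]
    if not open_cells:
        return pos  # boxed in on all four sides: stay put
    # "Nearest to target" measured with Manhattan distance
    return min(open_cells,
               key=lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1]))
```

Note this greedy step has exactly the problem the post describes: with no memory, it can oscillate in front of a concave obstacle, which is where the "which point next?" decisions come in.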
I myself am a fellow ADHDer (undiagnosed, and I'll never get diagnosed) who also cannot work on one project at a time, who doesn't know how to settle down on what to program with my superpowers, and who always comes up with a better, more fun, or more rewarding idea. I like giving him line of sight and having him respond live, but although this game is partially grid-based, neither the AI nor the player ever uses that grid for anything. The AI having LOS and using ray vision, same as the player controller, means he doesn't need to know about collision with walls or objects; because he's a robot vacuum cleaner, he will automatically avoid them precisely as he is told.
I disagree: line of sight helps him find a path by himself.
The limited GIF doesn't show the hours of wandering around the room in all directions. He can leave the room as well and navigate corridors.
If I tell him to move towards a location, he can find a path to it by himself quite simply.
On a grid-based system, you can track your next cell and keep a memory of cells: I couldn't navigate to that previous cell, so I won't go back and recheck it unless I fail to navigate another five cells. Do I navigate around an obstacle to the left or to the right? Well, whichever is closer to the target.
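One way to sketch that "memory of failed cells" idea: remember cells that couldn't be entered and skip them until several other moves have happened. The names and the forget-after-five threshold are illustrative choices, assuming the same greedy Manhattan-distance step as before.

```python
def step_with_memory(pos, target, blocked, failed, forget_after=5):
    """Greedy grid step that avoids recently failed cells.
    `failed` maps cell -> number of steps since it failed (mutated in place)."""
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    # Age the memory; forget a cell once enough other moves have happened
    for cell in list(failed):
        failed[cell] += 1
        if failed[cell] > forget_after:
            del failed[cell]
    candidates = [c for c in neighbours if c not in failed]
    # Try candidates from nearest-to-target outward, remembering failures
    for c in sorted(candidates,
                    key=lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1])):
        if c in blocked:
            failed[c] = 0  # remember this failure for a while
        else:
            return c
    return pos  # everything nearby blocked or remembered as failed
```

This still isn't full pathfinding, but the memory stops the agent from re-testing the same dead cell every tick, which is roughly the behavior described above.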
Did you have any solutions yourself? I only managed to get one line out of your comment.
Does your algorithm work with concave obstacles, rooms, or other dead-end situations?
Does it tell you whether a particular target is reachable?
Will it tell you how far away your target is (in path length)?
Will it work in a 3D (layered) level?
Personally, I'm using a navmesh and A* for pathfinding, and RVO for dynamic obstacle avoidance.
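For reference, the A* part can be sketched compactly on a plain grid (navmesh generation and RVO are out of scope here). This is a generic textbook version, not Unity's implementation; cells are `(x, y)` and `blocked` is a set of impassable cells. Usefully, it also answers two of the questions above: it reports unreachable targets (returns `None`) and gives you the path length.

```python
import heapq

def astar(start, goal, blocked, width, height):
    """A* on a 4-connected grid. Returns the cell path, or None if unreachable."""
    def h(c):  # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    frontier = [(h(start), start)]       # min-heap ordered by cost-so-far + heuristic
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []                    # walk predecessors back to the start
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in blocked:
                continue
            new_cost = cost[current] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None  # frontier exhausted: goal is unreachable
```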
You can slap on a bunch of heuristics, but it's going to be a losing battle. There will always be a pathological case: a situation where those heuristics wind up producing a bogus result!
I use a NavMeshAgent (which is some kind of A* derivative, I presume) for complex NPC behavior. You just need to implement your own systems to simulate the NPC’s awareness!
For example, the NPC doesn’t just know where the player is. Instead, the NPC has a list of sensations, like sound, sight, and touch. Each sensation has random noise added to the true source of the sensation. Therefore, faint sounds from a distance will give a very vague target, whilst bumping into something will give a nearly-perfect location.
Then, the NPC picks the most interesting sensation (touch, sight, sound, in descending order), breaking ties with sensation strength, and navigates towards the location it thinks that sensation came from.
It's all pathfinding to me: getting an object to move around your game world without glitching, while respecting obstacles.
Line of sight is more intelligent pathfinding than grid-based, and it can be more performant as well. Though navigating from A to B can require a plethora of decisions.
@chemicalcrux That's a pretty good idea. I was actually already trying to implement the beginnings of a similar system before I had even looked into pathfinding and made this post. I was inspired by RimWorld, where characters have stats related to their senses that can be improved or hampered by a number of conditions, like nightvision implants or having had an eye poked out by an enemy spear.
@StarBornMoonBeam I'm not too keen on using a system based entirely on line of sight, since for pathfinding on large maps (for instance, going from one town to the next) the NPC may have prior knowledge, or simply a map, to find their way. However, your grid-based example has me intrigued! From the looks of things your NPCs avoid colliding with each other; do they move from cell to cell until they're blocked, at which point they look for a new path? If that isn't the case, how do they avoid each other?