AI Scan Area over a Range

I was wondering if someone could help me out with a problem. I’m not really sure where to begin in implementing it, so this is more a question of concept than of actual code. If I know what I need to write, I’m usually decent at being able to do it. Any help would be great. Here’s the problem:

I want to be able to search for objects (like the player) over a preset 2D distance and shape based on the rotation of whatever is searching. I have three base VisionTypes that the AI will be able to have, which are the following:

  1. limitedFrontal: Can only see in a 90-degree angle (cone/triangle) in front of itself
  2. fullFrontal: Can only see in a 180-degree angle in front of itself
  3. complete: Can see everything around it.

Everything in my game is already in a grid based formation, similar to a chess game. Only one object can be on a single cell in the grid at a time, and it is turn based. Overall, what I want is that when it’s an AI’s turn, they look around based on their set VisionType and see if there is anything around that it cares about. If there is, then set itself up to move towards it, if possible. Obviously I would also need to take walls and other obstacles into account for the vision. The Pathfinding part is already done, so overall I just need it to be able to actually find something and then set a course based on the cell that the thing of interest is on.

I just don’t know how to do this. Is there some sort of Raycast function that casts over more than just a single point/line? If not, how would I obtain such a functionality?

Basically, limitedFrontal Vision would essentially be a 2D Triangle, where the upper tip is the point of origin, spreading outward over a range. fullFrontal Vision would essentially be a 2D Rectangle where the center of a side would be the point of origin, spreading outward over a range. Complete would be a 2D Square, where the point of origin would be the center of the square and it would spread outward in all directions.
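Those three shapes can be expressed as simple point-in-shape tests in the seer’s local 2D frame (forward = +y). This is just an illustrative sketch; the method names and the local-frame convention are assumptions, not anything from an existing API:

```csharp
// Does a point (x, y), expressed relative to the seer with forward = +y,
// fall inside each vision shape? range is the vision distance.
bool InLimitedFrontal(float x, float y, float range) =>
    y >= 0f && y <= range && Mathf.Abs(x) <= y;      // triangle: widens with distance
bool InFullFrontal(float x, float y, float range) =>
    y >= 0f && y <= range && Mathf.Abs(x) <= range;  // rectangle in front
bool InComplete(float x, float y, float range) =>
    Mathf.Abs(x) <= range && Mathf.Abs(y) <= range;  // square centred on the seer
```

You would transform a candidate’s world position into the seer’s local frame (e.g. with `transform.InverseTransformPoint`) before testing it.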

Anything caught inside of these would be inspected for interest based on the object that cast the view. For example, a monster wouldn’t really care about other monsters of the same type, but might care about a rival monster type or the player, and seek them out and attempt to attack them.

Any thoughts on how to implement this would be great. Thanks in advance.

Have the unit do a sphere check and gather everything it’s looking for within its range.
Check the angle between the direction to the target and your unit’s forward to determine whether it’s within the unit’s view. Finally, raycast to the target to check for obstructions.

The sphere check returns an array of colliders, so do the above for all returned possible targets, or stop at the first viable one… whatever is easier.
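A minimal sketch of those three steps, assuming Unity; `visionRange` and `visionAngle` are illustrative field names, not anything defined earlier in the thread:

```csharp
// Step 1: gather everything within range.
Collider[] hits = Physics.OverlapSphere(transform.position, visionRange);
foreach (Collider hit in hits)
{
    Vector3 toTarget = hit.transform.position - transform.position;

    // Step 2: is the target within the view angle? (visionAngle is the
    // full cone angle, e.g. 90 degrees for limitedFrontal.)
    if (Vector3.Angle(transform.forward, toTarget) > visionAngle * 0.5f)
        continue;

    // Step 3: raycast toward the target; if the first thing hit is the
    // target itself, nothing is obstructing the view.
    if (Physics.Raycast(transform.position, toTarget.normalized,
                        out RaycastHit info, visionRange) &&
        info.collider == hit)
    {
        // hit is visible; act on it, or stop at the first viable target.
    }
}
```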

Is a spherecast the best solution for a grid-based system? I fear you will run into some weird behavior if you do that. For instance, when the unit is standing at the edge of the sphere, the grid cell may be touching the sphere while the unit is not. Isn’t it better to take the rotation into account and cast raycasts on all the squares in range of the AI? It takes a bit more programming, but for a grid-based system the results would be more consistent.

If you’re using a grid-based system, why would you even need to worry about a spherecast or a raycast at all? If it’s grid-based, then you should already be storing all object locations in the grid. Why not write a script that checks the AI’s current position and direction, and then runs a vision check based on the grid system?

So if the AI is in grid location (5, 4) facing y-positive with limited vision, have it check grids (5, 5), (5, 6), and (5, 7). If it returns the player in (5, 6), then see if there is an obstruction in (5, 5). For a triangle view you would just widen each row by two cells over the previous one. So once again, if they are at (5, 4) facing y-positive, you would check (5, 5), then (4, 6), (5, 6), (6, 6), and finally (3, 7), (4, 7), (5, 7), (6, 7), (7, 7). And then I’m pretty sure you would know how to adapt that for an all-around view.
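That expanding-triangle pattern can be written as a plain loop over grid offsets. A sketch, assuming integer cell coordinates and a fixed y-positive facing (rotating the offsets handles the other three facings); the method and tuple names are illustrative:

```csharp
using System.Collections.Generic;

// Enumerate the cells a limited-vision unit can see when facing y-positive.
// origin is the unit's cell; range is how many rows ahead it looks.
List<(int x, int y)> ConeCells((int x, int y) origin, int range)
{
    var cells = new List<(int x, int y)>();
    for (int row = 1; row <= range; row++)
    {
        // Row r holds 2r - 1 cells centred on the facing line:
        // row 1 -> 1 cell, row 2 -> 3 cells, row 3 -> 5 cells, ...
        for (int dx = -(row - 1); dx <= row - 1; dx++)
            cells.Add((origin.x + dx, origin.y + row));
    }
    return cells;
}
```

From (5, 4) with range 3 this yields (5, 5), then (4, 6), (5, 6), (6, 6), then (3, 7) through (7, 7), matching the example above.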

If you are using a grid-based system, then you should try to hold as much information as you can in the grid and base as much of the game as you can around it. It will make everything a lot easier to code and work with in the long run.

Woof - talk about overblown solutions. OverlapSphere and a dot product is all you need.

enum VisionType { Cone, OneEighty, Full }
float visionRange = 10f;
VisionType visionType = VisionType.Cone;

Collider[] nearby = Physics.OverlapSphere(transform.position, visionRange);
Vector3 myForward = transform.forward; // already unit length
for (var i = 0; i < nearby.Length; i++)
{
    if (visionType == VisionType.Full)
    {
        Debug.Log("I see you!");
    }
    else
    {
        Vector3 toTarget = (nearby[i].transform.position - transform.position).normalized;
        float dot = Vector3.Dot(myForward, toTarget);
        // 0.7f ≈ cos(45°), i.e. half of a 90-degree cone;
        // dot >= 0 covers the 180-degree half-space in front.
        if ((visionType == VisionType.Cone && dot >= 0.7f) ||
            (visionType == VisionType.OneEighty && dot >= 0f))
        {
            Debug.Log("I see you!");
        }
    }
}

You got me with the dot being easier, heh… but in yours, how does he take walls/obstructions into account?

Everything is in a grid-like formation, but the parameters of the grid are never fixed. The word grid was used because it’s the best way to visualize it. The playable area is randomly generated and can extend in many different directions across the scene, depending on the generation parameters; however, all tiles placed by the terrain generation are 2x2 (or whatever two-dimensional square size I wish). It’s not a square grid, and the grid isn’t really assigned coordinates like that, because it’s not needed.

This is a far simpler solution than I had anticipated. As I expected, the main part of it is a function I never really knew about (because I’ve never really needed it before).

That’s actually not that big of a deal. I have base scripts for all non-PC objects that anything would be interested in. I could simply attempt to get that script; if that fails, then it’s not something the AI would care about. In the case of catching the player, I could easily look for the script that deals with player control, or something like that, to distinguish the player from a potential rival monster. As for taking literal vision into account, I can simply do a raycast from the AI to the object of interest. If it collides with something other than the OoI, or fails for some reason, the AI can’t see it, and I remove it from the list of potential targets.

Thanks a lot for the input guys, I really appreciate the assistance on this. If you guys have anything else to add, I’m all ears (eyes, technically).

Indeed - this doesn’t do that. It would be trivial to raycast each of the elements in nearby against a layer mask representing the obstacles, and only evaluate the dot products for those instances where the raycast does not return a hit.
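For completeness, that occlusion check might look like this inside the loop, before the dot-product test; `obstacleMask` is an assumed `LayerMask` field containing only the walls/obstacles layer:

```csharp
Vector3 toTarget = nearby[i].transform.position - transform.position;
// Skip this candidate if an obstacle sits between us and it. Because the
// raycast is limited to obstacleMask, it ignores the candidate itself.
if (!Physics.Raycast(transform.position, toTarget.normalized,
                     toTarget.magnitude, obstacleMask))
{
    // Line of sight is clear; evaluate the dot product as before.
}
```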