Intersection of bounds and cone-of-vision: Any ideas?

I am performing a relatively simple test to see if an object is within the cone of vision of an AI agent. I know there are several ways to achieve this. First I want to determine whether the object is within the cone of vision, and then raycast to see if it’s obscured or not.

The problem is:

1. Large objects that are partially in the cone of vision, but whose center is not.
2. Or what about when their center is obscured by a wall but part of them is showing?

I would like to hear how you have attempted to solve these problems.

You’re gonna have to decide how realistic you want your LOS rules to be, as there are a couple of variations I can think of:

Simplest:
Raycast from the enemy’s center to the player’s center, and treat anything obscuring the ray as breaking LOS.

More Complex:
Find objects within the LOS cone (with a dot product and a distance check). Then take any objects within the cone and do multiple raycasts back from the object to the enemy AI. This is where you have a choice about the complexity of LOS. You can take the box collider of the object you want to check, and raycast from all four sides of its nearest face (top, bottom, left, right, and possibly center). As soon as one ray doesn’t hit any walls, you have sight. You can inset the sample points from the edges of the face to add a small buffer to the LOS edges, so if he’s 1 pixel over the corner he won’t be seen. Or you can skip the bottom raycast if you don’t want the AI to see under things (while still being able to see over low cover).

Dunno how clear it was so I’ll recap:

1. Do Cone Check from Enemy to Objects
2. For each object in the cone, do X raycasts from the object back to the enemy, at offsets from the object’s center.
3. Tweak where to place the raycast offsets, and how many to do, based on performance/precision needs.
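The steps above can be sketched in plain Python (geometry only, as an illustration; `ray_blocked` here is a toy stand-in for your engine’s raycast, e.g. Physics.Raycast in Unity, with a single wall plane as the occluder, and all the names are hypothetical):

```python
def sample_points(center, half_extents, inset=0.1):
    """Offset sample points on the target's near face: center, top, bottom,
    left, right. `inset` pulls samples in from the edges so a target that is
    one pixel over a corner doesn't count as seen."""
    cx, cy, cz = center
    hx, hy, _ = half_extents
    return [
        (cx, cy, cz),                # center
        (cx, cy + hy - inset, cz),   # top
        (cx, cy - hy + inset, cz),   # bottom (drop this to ignore under-cover sight)
        (cx - hx + inset, cy, cz),   # left
        (cx + hx - inset, cy, cz),   # right
    ]

def ray_blocked(origin, target, wall_x, wall_top):
    """Toy occluder standing in for a real raycast: an infinite wall in the
    plane x == wall_x, reaching up to y == wall_top."""
    ox, oy, _ = origin
    tx, ty, _ = target
    if (ox - wall_x) * (tx - wall_x) >= 0:
        return False  # both endpoints on the same side of the wall
    t = (wall_x - ox) / (tx - ox)    # where the segment crosses the plane
    y_at_wall = oy + t * (ty - oy)
    return y_at_wall <= wall_top     # below the wall's top -> blocked

def can_see(enemy_pos, center, half_extents, wall_x, wall_top):
    # Sight is granted as soon as ONE offset ray gets through.
    return any(not ray_blocked(enemy_pos, p, wall_x, wall_top)
               for p in sample_points(center, half_extents))
```

With toy numbers (enemy at y = 1, target 10 units away with half-height 1), the top sample clears a wall whose top is at y = 1.4 but not one at y = 2.0, which is the “see over low cover” behaviour described above.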

2 quick ideas off the top of my head - a dot product between the viewing-cone direction and the direction from the agent to an object’s transform.position will give you the angle between them… since you define a cone with an angle, it will be simple to tell if that AI saw the object or not… that’s a very coarse approach though
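That coarse dot-product test, sketched in plain Python (engine-agnostic; in Unity you would use transform.forward and Vector3.Dot instead). Comparing cosines avoids an acos call:

```python
import math

def in_view_cone(agent_pos, agent_forward, target_pos, half_angle_deg):
    """Coarse cone check: is the target within half_angle_deg of the agent's
    (normalized) forward direction?"""
    dx, dy, dz = (t - a for t, a in zip(target_pos, agent_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0:
        return True  # target exactly on the agent: treat as visible
    # dot(forward, normalized direction-to-target) == cos(angle between them)
    cos_angle = (dx * agent_forward[0] + dy * agent_forward[1]
                 + dz * agent_forward[2]) / dist
    # compare cosines instead of angles (cosine shrinks as the angle grows)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A target 45° off-axis fails a 30° half-angle cone but passes a 50° one, as expected.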

if this is a PC game, you could try a novel approach and actually render the scene from the AI’s point of view?
Just render to a 128x128 (or smaller!) render texture and use RenderWithShader with an extremely cheap shader (just set the colour to black or something), but have the objects of interest use materials set to something like bright red. …if I remember correctly you can mask what RenderWithShader renders?

evaluate the pixels of the render texture you just captured, and if any come up red… the AI saw something of interest!
this takes care of large/small objects and pivots being hidden as it’s now all about what the AI really did see!

my 2c

@urgrund
The issue with that is that a low-res render target only really saves you on pixel shader cost and fill rate, whereas vertex shader work and draw call count are still high (especially if you have 100+ enemies, multiplying your draw call count 100 times).

Also, you would need slightly different colours per object if you wanted to determine exactly who was seen (for example, using ids 1-255 and a different red value for each id). And you would need special consideration for transparent objects.
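A toy sketch of that id-per-colour scheme in plain Python (hypothetical names; in Unity you would read the captured pixels back with something like Texture2D.ReadPixels, and `pixels` here is just a list of RGB tuples):

```python
def id_to_color(obj_id):
    """Encode an object id (1-255) in the red channel; green/blue unused."""
    assert 1 <= obj_id <= 255
    return (obj_id, 0, 0)

def seen_ids(pixels):
    """Scan captured pixels (RGB tuples, 0-255); every non-zero red value is
    the id of an object of interest the AI actually saw."""
    return {r for (r, _g, _b) in pixels if r > 0}
```

Duplicate pixels of the same object collapse into one id, so the result is the set of who was seen, not how much of them.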

It can be a good idea for a lower enemy count, but in the end it could limit the number of enemies available at any time, or drastically limit scene complexity, since each successive enemy’s cost depends on the current scene’s rendering complexity.

Thanks for the insight everyone. I am already using Vector3.Angle to determine if the enemies are in the cone of sight and currently just doing one Raycast to the center of the target. Very simple but suffers from both issues I mentioned above.

I will probably go with the idea of multiple raycasts and tweak it as you mentioned.

I like the novel approach that urgrund mentions, but as has been pointed out it would have its limitations.

Thanks for the input.

You could maybe use Mesh.bounds/Renderer.bounds to get a point on the extreme left, right, top and bottom edges of the mesh/renderer and test those points.

hmm, I would have thought the render-target approach would still be pretty cheap?
Consider that a ‘current-gen’ game by today’s standards, even on console, will have realtime water reflections. That requires a render target of at least 1/4 the screen buffer to get decent quality in the reflection, and it renders the geometry with at minimum a flat diffuse shader (you often pass simplified ‘reflection geometry’ to the reflection camera, though, to reduce vertex processing).

you could even draw the other agents as camera-aligned solid quads so you don’t have to skin the agents for the vis test?

anyway… just a random thought and would obviously need testing to see if it’s viable
if it were a high-end PC game I don’t think it’d be an issue… I wouldn’t do this on console or web
~~

One other thing is, what about using the Occlusion Culling data?
I haven’t actually played around with it in Unity, but it would be great if you could get a list of objects to be culled from a certain vantage point… since the static geometry is pre-calculated for visibility, it wouldn’t be that hard to say “From this AI agent’s camera frustum, give me the objects that I will draw”…

Again, just a random rambling… I’ve no idea how flexible or open Unity’s implementation of OC actually is.

Let us know how you go!