I’m currently working through one of the Unity stealth tutorials, where I’m coding enemy sight testing. It detects the player by doing a single straight raycast towards them, to see if anything is blocking the path.
I see a problem with this, and I think I’ll let this video demonstrate the issue:
I’d be willing to bet this problem has come up in a lot of games. If any object, no matter how tiny, blocks that raycast, the player is essentially invisible.
I’m wondering if there’s any standardised solution to this, maybe a commonly used code snippet. I’d think doing several raycasts towards various points on the player would prevent this problem, but why reinvent the wheel?
Raycasts are a pretty common approach to this. The more complex your geometry, the more casts you need to be accurate, but usually casting to the centre and the corners of the target’s bounding box is effective and not too costly.
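A minimal sketch of that idea in Unity C#, casting to the centre plus the eight bounding-box corners (the `CanSee` name and the `blockers` mask are my own, not from any standard API):

```csharp
using UnityEngine;

public static class SightCheck
{
    // Returns true if any cast from 'eye' to the target's centre or one of
    // its bounding-box corners reaches that point unobstructed.
    public static bool CanSee(Vector3 eye, Collider target, LayerMask blockers)
    {
        Bounds b = target.bounds;
        Vector3[] points =
        {
            b.center,
            new Vector3(b.min.x, b.min.y, b.min.z),
            new Vector3(b.min.x, b.min.y, b.max.z),
            new Vector3(b.min.x, b.max.y, b.min.z),
            new Vector3(b.min.x, b.max.y, b.max.z),
            new Vector3(b.max.x, b.min.y, b.min.z),
            new Vector3(b.max.x, b.min.y, b.max.z),
            new Vector3(b.max.x, b.max.y, b.min.z),
            new Vector3(b.max.x, b.max.y, b.max.z),
        };

        foreach (Vector3 p in points)
        {
            // Linecast against blocking layers only; if nothing sits between
            // the eye and this point, the target is at least partially visible.
            if (!Physics.Linecast(eye, p, blockers))
                return true;
        }
        return false;
    }
}
```

Note the bounding box is axis-aligned and slightly larger than the character, so for precision-critical cases you may want to cast at actual body parts instead (more on that below).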
Some of the error can actually make sense: if you only see a tiny bit of someone’s elbow, you may not recognise it as a person.
You may need special cases. For example, you could define a window zone slightly larger than the window itself; if any cast hits that zone, fire extra casts through the window, so a character whose edges are occluded but whose head is sitting in the middle of the window still gets spotted.
I attach GameObjects called ‘Aspects’ to all my entities that can be sensed. They’re just GameObjects with a script attached that carries some data.
These ‘Aspects’ register themselves with the AspectManager. This way I can search all known Aspects easily without expensive calls to search all GameObjects. Furthermore, I can use the AspectManager to organize the Aspects so I can quickly get the ones in a nearby region (basically partitioning a large scene).
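A rough sketch of the Aspect/AspectManager pairing described above (all names here are illustrative, and the region query is a naive linear scan rather than real spatial partitioning):

```csharp
using System.Collections.Generic;
using UnityEngine;

// An Aspect marks a point on an entity that sensors can detect.
public class Aspect : MonoBehaviour
{
    public GameObject owner;          // the entity this aspect belongs to

    void OnEnable()  { AspectManager.Register(this); }
    void OnDisable() { AspectManager.Unregister(this); }
}

public static class AspectManager
{
    static readonly List<Aspect> aspects = new List<Aspect>();

    public static void Register(Aspect a)   { aspects.Add(a); }
    public static void Unregister(Aspect a) { aspects.Remove(a); }

    // Cheap region query; a real implementation would back this with a
    // spatial grid or octree to partition a large scene.
    public static IEnumerable<Aspect> Near(Vector3 position, float radius)
    {
        float r2 = radius * radius;
        foreach (Aspect a in aspects)
            if ((a.transform.position - position).sqrMagnitude <= r2)
                yield return a;
    }
}
```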
There are then also ‘Sensors’ that can sense ‘Aspects’. These can have geometry associated with them describing how they see: cones, spheres, etc. First an overlap test is done between this geometry and the Aspects to see if they’re even within the sensor’s field of view.
Once they meet that requirement, that’s when I get more detailed. My ‘Sensor’ also has a flag on it to turn ‘line of sight’ on and off. This way I can have sensors that are speedier, and/or that emulate a different style of sense (hearing… smell… etc.).
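The cone variant of that overlap test might look like this (a hedged sketch; the class and field names are mine):

```csharp
using UnityEngine;

// A sensor with a vision cone. The line-of-sight flag lets the same
// component double as a cheaper "hearing"-style sense when turned off.
public class ConeSensor : MonoBehaviour
{
    public float range = 15f;
    public float halfAngle = 45f;      // degrees either side of forward
    public bool lineOfSight = true;

    // Stage one: is the point inside the cone at all?
    public bool InCone(Vector3 point)
    {
        Vector3 to = point - transform.position;
        if (to.sqrMagnitude > range * range) return false;
        return Vector3.Angle(transform.forward, to) <= halfAngle;
    }
}
```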
If line of sight is turned on, that’s when I raycast. I raycast directly at all the Aspects that overlapped my geometry. This is key: any entity can have more than one Aspect. That way a large entity could have Aspects on its torso, arms, legs, head, everything, so that if any part of the body is showing, it gets caught.
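The per-Aspect line-of-sight pass can then be as simple as one linecast each (again a sketch; `AnyAspectVisible` is a name I made up):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class LineOfSight
{
    // One cast per Aspect transform that passed the overlap stage; a clear
    // line to any single aspect (head, arm, leg...) counts as seeing the
    // entity, so a partially exposed body still gets caught.
    public static bool AnyAspectVisible(Vector3 eye,
                                        IEnumerable<Transform> aspects,
                                        LayerMask blockers)
    {
        foreach (Transform t in aspects)
            if (!Physics.Linecast(eye, t.position, blockers))
                return true;
        return false;
    }
}
```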
If I want greater detail, I could also shoot a radial spread of rays at them as well.
Furthermore, I have layers that get ignored by my raycast (configurable on the sensor), so I can put things like props on such a layer and not have them register. Take that can, for instance: I’d put it on an ignored layer so that it doesn’t get to act as cover.
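In Unity that exclusion is just a layer mask on the sensor, for example (assuming the project defines a “Props” layer; the name is hypothetical):

```csharp
using UnityEngine;

public class SensorMaskExample : MonoBehaviour
{
    public LayerMask blockers;

    void Awake()
    {
        // Everything except the "Props" layer can block line of sight,
        // so small props like the can never count as cover.
        blockers = ~(1 << LayerMask.NameToLayer("Props"));
    }
}
```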
OR
Another option is to put ‘Aspects’ on things that can be picked up, like that can. When they get picked up, the ‘owner’ of the Aspect gets set to the entity holding it, so it appears to be part of that entity when some other entity senses for it. You can then have the ‘sense’ routine throw out prop Aspects that don’t have an owner set.
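That ownership hand-off could be sketched like this (a hypothetical `PickupAspect`; the names are mine, not from the setup above):

```csharp
using UnityEngine;

// A prop's aspect: sensable only while an entity is actually holding it.
public class PickupAspect : MonoBehaviour
{
    public GameObject owner;                   // null while lying loose

    public void OnPickedUp(GameObject holder) { owner = holder; }
    public void OnDropped()                   { owner = null; }

    // Sense routines skip loose props; a held prop reads as part of
    // the entity carrying it.
    public bool Sensable { get { return owner != null; } }
}
```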