Which is more expensive: Physics.OverlapSphere, or a persistent collider component?

I’m working on my camera script, and I need the ability to detect all the colliders within an arbitrary distance of the camera so I can nudge it out of the way to prevent unwanted occlusion. The two obvious ways I can think of to handle this would be either running Physics.OverlapSphere several times per second through a coroutine, or placing a sphere trigger collider around the camera, caching nearby colliders in OnTriggerEnter, and removing them from my cache in OnTriggerExit.

Intuitively it seems like the latter option would be more performant, but it also occurs to me that a collider, even one without physics, is effectively doing the equivalent of an overlap check every FixedUpdate. So between my two options, is there an obviously superior choice, or are they effectively identical?
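For reference, the trigger-caching version I have in mind is roughly this (untested sketch; the class and field names are placeholders):

```csharp
// Assumes a SphereCollider with isTrigger = true on the camera object,
// sized to the detection radius. Note that trigger events only fire if
// at least one of the two objects has a Rigidbody (a kinematic one on
// the camera works).
using System.Collections.Generic;
using UnityEngine;

public class NearbyColliderCache : MonoBehaviour
{
    // Colliders currently overlapping the camera's trigger sphere.
    readonly HashSet<Collider> nearby = new HashSet<Collider>();

    void OnTriggerEnter(Collider other) { nearby.Add(other); }
    void OnTriggerExit(Collider other) { nearby.Remove(other); }
}
```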


Physics.OverlapSphere is more performant for sure, because there is no dual-geometry collision calculation happening. Normally, with a collision between two colliders, the system iterates through the vertices of both colliders to determine if they’ve hit and where (it does this as soon as the two colliders’ bounds come within each other). With OverlapSphere you’re simply iterating through nearby colliders to check whether this single sphere (a point plus a radius) intersects with those polys.

At least, that’s the theory; it should be more performant. The best thing to do when it comes to this stuff is to simply set up your own test scene and see how it’s performing, because the info you get from others may be based on outdated tests or info.


Ooh that’s interesting, I hadn’t even considered the fact that both colliders would test each other in the case of a persistent collision component… I know this is subjective, but for that matter how fast is Physics.OverlapSphere? Given that my camera behavior tries to ensure the resulting sphere returns zero colliders, is it something I could be running every update without thinking about it? Doing that doesn’t even begin to make a dent in the profiler, but I always get really leery of placing anything in Update that could possibly be cut down to even 5-10 updates/second in a coroutine.
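For context, the coroutine version I’m describing would look something like this (just a sketch; the names and the 1-unit radius are placeholders):

```csharp
// Inside the camera's MonoBehaviour; kicked off with
// StartCoroutine(OcclusionCheckLoop()) in Start().
// Requires "using System.Collections;" for IEnumerator.
IEnumerator OcclusionCheckLoop()
{
    const float checksPerSecond = 10f; // 5-10 checks/sec instead of every frame

    while (true)
    {
        Collider[] hits = Physics.OverlapSphere(transform.position, 1f);
        // ...nudge the camera based on what we found...
        yield return new WaitForSeconds(1f / checksPerSecond);
    }
}
```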

It’s perfectly fine to run every update, but you’ll want to put it in FixedUpdate() so that it’s tied not to the framerate but to the physics updates.

The cost of any collision operation, whether it’s OverlapSphere or two colliders, scales with the number of vertices involved. So the denser the collision meshes in your scene, the worse it will perform when the sphere enters those objects’ bounds. Either way, OverlapSphere is faster than using another collider to detect.
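One more thing worth knowing if you do run it every physics step: OverlapSphere allocates a new array on every call, so the non-allocating variant with a preallocated buffer avoids generating garbage. Something like this (the buffer size of 32 is an arbitrary choice):

```csharp
// Reusable buffer so the check doesn't allocate every physics step.
Collider[] buffer = new Collider[32]; // arbitrary capacity

void FixedUpdate()
{
    int count = Physics.OverlapSphereNonAlloc(transform.position, 1f, buffer);
    for (int i = 0; i < count; i++)
    {
        // Only the first `count` entries of the buffer are valid this step.
    }
}
```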


That’s super-duper helpful, thank you very much for clarifying how it operates on the back end :)


Alternatively, for this specific example you don’t even need to use a sphere; you could just use raycasts. If it’s purely for occlusion, a single ray in front of the camera would work, but to cover all cases you’d need to raycast sideways too.

Not sure whether a couple of raycasts would be faster than OverlapSphere, though.

There’s also the problem of running it in FixedUpdate. If you have a really high framerate, it’s actually possible for objects to be occluded, because the physics is running at a lower rate. If all your camera movement and rendering is in Update, it would likely make more sense to do the occlusion corrections in Update as well.

Not to turn this into a general FAQ on camera best practices, but I just ran into a related problem implementing this. My thinking is that the camera should only receive one change in position each frame, to prevent it from jittering or visibly fighting with itself as “move to your ideal position” fights with “don’t clip into any occluders”. Preventing clipping seems as simple as using the spherecast or raycast “whiskers” to find nearby collisions and generating a vector moving away from each collision, and getting the desired camera position is just a calculation based on the player’s position and facing. But combining them is weird: the collision-avoidance coordinates apply to the camera’s current position, while lerping towards the optimal position doesn’t take the collision into account:

    public Transform target; //the object we want the camera trying to keep in view

    public float cameraMovementSpeed; //speed at which the camera changes position

    public Vector3 GetDesiredPosition()
    {
        Vector3 desiredPosition = target.position - (target.forward * 5) + target.up * 2; //The camera wants to be behind and above the player; we'll edit this to let the mouse rotate it later
        return desiredPosition;
    }

    public Vector3 GetCollisionAvoidance(Vector3 center, float radius)
    {

        Vector3 collisionAvoidanceAdjustment = Vector3.zero;
        Collider[] hitColliders = Physics.OverlapSphere(center, radius);
        for (int i = 0; i < hitColliders.Length; i++) //For every collider we find, move directly away from that collider by one unit
        {          
            collisionAvoidanceAdjustment += (transform.position - hitColliders[i].ClosestPointOnBounds(transform.position)).normalized;          
        }

        return collisionAvoidanceAdjustment;
    }

    public void Update()
    {
        Vector3 optimalPosition = GetDesiredPosition(); //This is the "perfect" position for the camera, an arbitrary number of units straight behind the player
        Vector3 collisionAdjustedPosition = GetCollisionAvoidance(transform.position, 1); //This creates a 1-unit-long vector pointing away from each collider our overlap sphere finds

        Vector3 desiredPosition = optimalPosition + collisionAdjustedPosition; //This is silly and doesn't work
        Vector3 newPosition = Vector3.MoveTowards(transform.position, desiredPosition, cameraMovementSpeed * Time.deltaTime); //Move towards our destination; multiply by deltaTime so movement speed is framerate-independent

        transform.position = newPosition; //Apply final position, then make sure we're still looking at the player
        transform.LookAt(target);
    }

This obviously doesn’t work as intended, because adding collision adjustment and my optimal position together is worthless: the optimal position is a point straight behind the player, and all I’m doing by adding them together is reducing its position by a few units relative to nearby colliders. I could fix this by instantly applying each collider avoidance motion as I discover it, like this:

        for (int i = 0; i < hitColliders.Length; i++) //For every collider we find, move directly away from that collider by one unit
        {
            collisionAvoidanceAdjustment += (transform.position - hitColliders[i].ClosestPointOnBounds(transform.position)).normalized;

            transform.position = Vector3.MoveTowards(transform.position, transform.position + collisionAvoidanceAdjustment, cameraMovementSpeed * Time.deltaTime);
        }

However, that’s going to cause the aforementioned jitter, since there will be a small-yet-visible motion every frame for every occluder. Is there a happy compromise I’m missing?

Ah, in this case it might be better to scrap the “collider adjustment” entirely, and use a single raycast backwards from the player. So all you need to do to get the target position is:

Vector3 rayDirection = -target.forward * 5f + target.up * 2f;
RaycastHit hitInfo;
if (Physics.Raycast(target.position, rayDirection, out hitInfo, 5.5f))
{
    cameraTargetPos = hitInfo.point;
}
else
{
    cameraTargetPos = target.position + rayDirection;
}

Now we KNOW cameraTargetPos can see straight to the player, because the raycast travelled all the way from the player to that position. You could also offset it by subtracting the vector (rayDirection.normalized * someSmallFloat) if you want the camera a bit away from the collider. Now all you have to do is smoothly lerp to the target position. You can use something like this for simplicity:

transform.position += (cameraTargetPos - transform.position) * Time.deltaTime * lerpSpeed;

This will be OK, but like you said it can cause juddering effects, especially when the position is changing slowly and the camera is near the target point. In real life, things move smoothly because we’re applying a force to them, which in turn creates an acceleration; an object’s position depends on its momentum as well as the acceleration. Here we’re just changing the position directly. Instead, we can mimic real life like this:

// Should be in the range 0-1.
// A value of 1 will cause the velocity to always point at the target.
float smoothingFactor = 0.2f;

// Float controlling the speed the camera moves.
float lerpSpeed = 1f;

// Velocity of the camera.
Vector3 velocity;

// This is the vector that, when added, will move the camera to the target point.
Vector3 offset = (cameraTargetPos - transform.position);

velocity += (offset - velocity) * smoothingFactor;

transform.position += velocity * Time.deltaTime * lerpSpeed;

I haven’t tested this yet, I’m going to boot up my PC now. It may need some tweaking, like a drag variable to keep the camera speed from exploding.

Yup, as expected when you try to write code without testing it needed tweaking ;p

Full class I used after testing is here:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CameraFollow : MonoBehaviour {

    public Transform target;

    public float distanceFromTarget = 5.5f;
    public float lerpSpeed = 6;
    [Range(0f, 1f)]
    public float dampening = 0.6f;
    public float mouseSensitivity = 5;
    public float nudge = 0.2f;

    float xRot;
    float yRot;
    Vector3 velocity;
    Vector3 targetPos;
    RaycastHit hitInfo;
  
    // Update is called once per frame
    void Update () {
        xRot -= Input.GetAxis("Mouse Y") * mouseSensitivity;
        yRot += Input.GetAxis("Mouse X") * mouseSensitivity;

        Vector3 rayDirection = RotationToVector(xRot, yRot, distanceFromTarget);

        if(Physics.Raycast(target.position, rayDirection, out hitInfo, distanceFromTarget))
        {
            targetPos = hitInfo.point;
            targetPos -= rayDirection.normalized * nudge;
        }
        else
        {
            targetPos = target.position + rayDirection;
        }
      

        Vector3 offset = (targetPos - transform.position);

        velocity += (offset - velocity) * dampening;

        transform.position += velocity * Time.unscaledDeltaTime * lerpSpeed;

        transform.LookAt(target);

        //Debug for seeing stuff happen in Scene view
        Debug.DrawLine(transform.position, targetPos, Color.yellow);
        Debug.DrawLine(target.position, targetPos, Color.blue);
    }

    // Returns a vector of length <length> rotated <x> degrees around the x axis
    // and <y> degrees around the y axis.
    Vector3 RotationToVector(float x, float y, float length)
    {
        Vector3 vec = Vector3.back * length;
        vec = Quaternion.Euler(x, y, 0) * vec;
        return vec;
    }
  
}

You don’t need the extra function; just set rayDirection to your offset vector (-target.forward * 5 + target.up * 2).

The dampening factor needs to be bound to 0-1 or it’ll explode to infinity.

The values I used gave some pretty smooth results, but if you move the camera fast enough it will clip through the player while trying to lerp to the other side. That can be solved by just adding a minDistance and offsetting the camera if it gets too close, whatever suits your application.
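The minDistance fix could be as simple as this (sketch; minDistance is a new field you’d add to the class above):

```csharp
// Hypothetical tweak: keep the camera at least minDistance away
// from the target so it can't lerp through the player.
public float minDistance = 1f;

void ClampToMinDistance()
{
    Vector3 fromTarget = transform.position - target.position;
    if (fromTarget.magnitude < minDistance)
    {
        transform.position = target.position + fromTarget.normalized * minDistance;
    }
}
```

You’d call it at the end of Update, after the velocity step has moved the camera.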

Ooh that’s a really good idea. In my head I’d been visualizing its behavior as an orb following the player, but basing it on a backwards raycast makes it much cleaner, and it nicely handles a few future edge cases, like occlusion that can’t be maneuvered out of (e.g. the camera getting caught in a window behind the player or something).

I am curious, are you seeing kind of a weird jitter when you move? I threw together a quick obstacle course with the Unity default guy, and your implementation works perfectly when standing still, but the camera seems to buck forward every so often of its own volition; this Imgur clip is pretty typical of movement.

I’ve been scratching my head over what could cause this: there’s nothing behind the character the raycast could be colliding with, and the jitter isn’t the sort of thing you get when your timesteps are out of sync.

Huh, that’s weird. Looks like the animation is possibly messing with the transform. Check the Scene view for the debug lines being drawn; it should look like a blue rod with a yellow elastic band pulling the camera along. What part of the character is the target set to? I’ll add it to my scene now and see if I can replicate it.

Found the problem. Create an empty GameObject, parent it to the root of Ethan, and call it Camera Target or something. Set its local position to (0, 1.45, 0).

The raycast starts at the target’s position, and that happens to be very close to the floor at his feet, basically zero. So sometimes the raycast was hitting the floor, other times it went straight through.

The raycast should have a layer mask that ignores floors/characters anyway, only affected by ceilings. Also, it should be done in LateUpdate() to make sure any other camera transforms happen before it tries to do its thing.
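A layer-mask version of the raycast might look like this (the layer names are just examples, not ones defined in the project above):

```csharp
// Ignore the "Player" and "Floor" layers when checking occlusion.
// GetMask builds a bitmask from layer names; ~ inverts it so we
// hit everything EXCEPT those layers.
int occlusionMask = ~LayerMask.GetMask("Player", "Floor");

if (Physics.Raycast(target.position, rayDirection, out hitInfo,
                    distanceFromTarget, occlusionMask))
{
    targetPos = hitInfo.point - rayDirection.normalized * nudge;
}
else
{
    targetPos = target.position + rayDirection;
}
```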


Ooh no kidding, I never even considered the floor’s occlusion- I figured it was something specific to the model, like maybe his geometry was somehow clipping in front of the ray as his walk cycle animated the model!

So everything works perfectly now- this has considerably better gamefeel than my stand-by “child camera to player, pivot based on mouse input” prototype, and even the small issues I can see with this version will be really readily smoothed out by massaging the settings and checking for a few edge cases, so thank you!! I made a goofy obstacle course to check its behavior in worst-case scenarios, and it performs really swimmingly: http://i.imgur.com/2xG4i41.mp4

This is something I’ve heard before, and it makes sense (move the camera as late as you can, to prevent state/physics changes or weirdness from throwing the game and camera out of sync), but moving the camera in LateUpdate seems to consistently introduce a very subtle jitter to the world. Is the optimal route to do tests (like the spherecast/raycast/whiskers) in LateUpdate, cache the results, and actually apply camera motion in Update, where you correct it for deltaTime to keep everything framerate-independent?

Yea, if it introduces judder keep it in Update. LateUpdate runs after coroutines and animation, so those can introduce inconsistencies. That would probably be because deltaTime is now even more inaccurate, on top of the fact that it gives the last frame’s time, so you’re always one frame behind “perfection”. The raycast also can’t ignore floors/walls, because that’d sort of defeat the purpose lol

General rule of thumb:

  • If you’re operating on rigidbodies, put it in FixedUpdate.
  • If you’re using deltaTime, put it in Update.
  • If you’re not using deltaTime, and for algorithmic reasons you want update code to run after everything else, put it in LateUpdate.

Everything runs in the same thread anyway (it’s not like calling functions from Physics isn’t threadsafe), and we’re not really doing any physics operations here, just using the raycast function.

Actually yeah, now that you mention it, even running in Update I’m noticing a small amount of persistent jitter. It’s subtle, but much more noticeable at high levels of zoom, especially if you pop in some simple geometry for motion references. I thought that might be the editor’s overhead, but packaging it into an exe doesn’t do a lot to diminish it, and I’ve seen similar issues with my own implementations; is that an unavoidable side-effect of placing the camera outside of the character’s hierarchy?

Hmmm, that’s interesting. I think I see it too.

Running the profiler shows most of the frames are waiting for VSync, while occasionally dipping way down to about 0.3ms, but turning that off also doesn’t seem to fix it. I’m hitting a reasonably consistent 100 fps with it on, and about 2000 fps with it off. Obviously it’s still not going to render most of those frames; my monitor is only 75Hz.

Change the ‘dampening’ variable to somewhere around 0.1; it seems a little smoother. The velocity won’t change by as much, though. It has the same effect as if you increased the camera’s mass so it’s extremely heavy, so it’ll look more “floaty”.

It may just be how Unity’s cameras behave, I recall reading a thread a long time ago about this. Adding in a very subtle motion blur might work.

Hmm, I’m going to mess around with this and see if I can’t accidentally trip over what’s causing it. I know that persistent jitter is classically a sign that your camera and character are one frame out of sync, but I can’t see that being the case here: the camera is moving in Update, and I’m using the standard asset third-person controller, which does movement in FixedUpdate, but changing that to Update doesn’t change the behavior.
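One thing worth checking (assuming the controller really does move via a Rigidbody) is the Rigidbody’s interpolation setting, which is supposed to smooth exactly this kind of Update/FixedUpdate mismatch. It can be set in the inspector, or in code:

```csharp
// Interpolation smooths the rendered position of a Rigidbody moved in
// FixedUpdate between physics steps, the classic fix for camera jitter
// when the camera itself updates every frame.
void Start()
{
    GetComponent<Rigidbody>().interpolation = RigidbodyInterpolation.Interpolate;
}
```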

I’ve noticed it only applies on one axis, which is weird: moving straight forward and backwards doesn’t produce jitter, but strafing with A/D does; maybe that part’s just an optical illusion? I can tentatively confirm its root cause is the camera motion, because disabling the script and parenting a static camera to the character eliminates it, but absolutely nothing we’ve discussed or implemented should be causing this.

Let me pick at this and post back; it’s almost definitely something silly and minor that both of our eyes are just skipping over.

I may have found the reason.

In the camera script, add a Start function and set Time.timeScale to 0.1, then play the scene and move the character around. This essentially takes it to the extreme, where dozens of Update calls can run between a single FixedUpdate. I really can’t think of a way around this other than turning up the physics update speed on higher-end computers that allow it.
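For what it’s worth, the physics update speed is controlled by Time.fixedDeltaTime, so turning it up looks roughly like this (the 1/120 value is an arbitrary example, not something I’ve tuned):

```csharp
// Run physics at 120 steps per second instead of Unity's default 50.
// Smaller fixedDeltaTime = more FixedUpdates per rendered frame,
// at the cost of more CPU time spent on physics.
void Start()
{
    Time.fixedDeltaTime = 1f / 120f;
}
```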

Hmm, I found another workaround, but it weirds me out because it makes no sense: if you run the camera’s logic in FixedUpdate instead of Update, does that smooth it out for you? I think it does on my end; at the very least I don’t see any of the sharp jitters I can elicit when it’s in Update.