Stopping camera follow when wall is in view

Hey everyone! Hope you are having a great day.

I am working on a top-down shooter game, in which the camera follows the player. I am doing this with a very, very simple CameraFollow script, in which I Lerp to the player’s position:

using UnityEngine;

public class CameraFollow : MonoBehaviour
{
    public Transform target;

    public float smoothSpeed = 30f;
    public Vector3 offset;

    void FixedUpdate()
    {
        // Smooth camera
        Vector3 smoothedPos = Vector3.Lerp(transform.position, target.position + offset, smoothSpeed * Time.fixedDeltaTime);
        transform.position = smoothedPos;
    }
}

Here is what is currently happening:

[GIF attachment: 172565-ezgifcom-gif-maker.gif]

What I want is for the camera to stop once the wall is in view (so the player is no longer centered on screen and can instead move toward that wall). Could somebody please tell me how I could do this?

Note: The walls each have a 2D box collider on them. Not everything outside the play area is a wall; only the part touching the play area is a wall, and the rest of the screen is just the camera’s default color. The play area itself is a sprite colored black.

I would just raycast an arbitrary length in the 4 cardinal directions and offset the camera’s position by a factor of the RaycastHit2D.distance. This prevents you from having to ‘freeze’ the camera when near a wall, which would cause issues in a scenario where, say, you had to move north or south while pressed against an east or west wall; I imagine you’d still want the camera to follow there.

Below is a pretty quick and simple implementation of this technique. I put the four BoxCollider2Ds I wanted to block camera motion on a layer called “CameraBounds” and then used it as a LayerMask for the Physics2D.Raycast, so the camera isn’t offset by everything the player runs into. I’ve attached an example and the script used, which was attached to the camera with the knob as the target transform.

[GIF attachment: 172577-ezgifcom-gif-maker.gif]

[GIF attachment: 172578-ezgifcom-gif-maker-1.gif]

using UnityEngine;

public class RaycastCamera2D : MonoBehaviour
{
    public Transform target;
    public Rigidbody2D rb;                              // kinematic Rigidbody2D on the camera
    [Range(0.1f, 10f)] public float smoothSpeed = 1f;
    [Range(1, 10)] public float rayLength = 5f;
    private float angleStep = 4;                        // number of rays (4 = the cardinal directions)
    private Vector3 offset;
    private LayerMask layerMask;

    private void Start()
    {
        layerMask = LayerMask.GetMask("CameraBounds");
    }

    void FixedUpdate()
    {
        offset = Vector3.zero;
        for (int i = 1; i <= angleStep; i++)
        {
            // Rotate the target's up vector around its forward axis to get each ray direction.
            var rayAngle = Quaternion.AngleAxis((360f / angleStep) * i, target.forward) * target.up;
            RaycastHit2D hit = Physics2D.Raycast(target.position, rayAngle, rayLength, layerMask);
            if (hit)
            {
                // Accumulate how far the ray fell short of its full length;
                // that is how much the camera should be pushed away from the bound.
                offset += (rayAngle * rayLength) - (rayAngle * hit.distance);
                Debug.DrawRay(target.position, rayAngle * hit.distance);
            }
            else
                Debug.DrawRay(target.position, rayAngle * rayLength);
        }
        Debug.Log(offset);

        // Follow the target, shifted by the accumulated offset, keeping the camera's own z.
        Vector3 smoothedPos = Vector3.Lerp(transform.position, new Vector3(target.position.x - offset.x, target.position.y - offset.y, transform.position.z), smoothSpeed * Time.fixedDeltaTime);
        rb.MovePosition(smoothedPos);
    }
}

If you wanted to optimize this code, you could add a CircleCollider2D with a radius equal to your raycast length and use it as a trigger to check whether you’re even close enough to a wall to bother raycasting, squeezing out a little more performance.
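A minimal sketch of what that gate might look like (the component name and counter below are illustrative, not from the original project). It would sit on the target next to a CircleCollider2D set as a trigger with its radius matching rayLength, and it assumes the target has a Rigidbody2D so trigger events fire against the static colliders on the “CameraBounds” layer:

using UnityEngine;

// Illustrative helper: lives on the target alongside a CircleCollider2D trigger
// whose radius equals the camera's rayLength.
public class CameraBoundsProximity : MonoBehaviour
{
    private int boundsInRange; // counter rather than a bool, in case two bounds overlap the trigger

    public bool NearBounds => boundsInRange > 0;

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.gameObject.layer == LayerMask.NameToLayer("CameraBounds"))
            boundsInRange++;
    }

    void OnTriggerExit2D(Collider2D other)
    {
        if (other.gameObject.layer == LayerMask.NameToLayer("CameraBounds"))
            boundsInRange--;
    }
}

RaycastCamera2D could then hold a reference to this component and skip the raycast loop entirely (just lerping straight to the target) whenever NearBounds is false.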

UPDATE: Probably worth noting that I implemented this with a kinematic Rigidbody2D attached to the camera so I could utilize Rigidbody2D.MovePosition, but you could probably accomplish it just as well by setting transform.position directly.
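If you skip the Rigidbody2D, the only change to the script above (a sketch, everything else staying the same) would be the last line of FixedUpdate:

// In place of rb.MovePosition(smoothedPos):
transform.position = smoothedPos;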

UPDATE 2: To answer OP’s question in the comments about dynamically raycasting to the frustum’s bounds, yes, you absolutely can, and you’re on the right path with your thoughts about Camera.orthographicSize. Since implementing an example of how to do this introduced some slightly complicated math, I decided to go ahead and make the example more robust. Unfortunately I can’t post a video, but I’ll post the code and explain how it works:

using System.Collections.Generic;
using UnityEngine;
using System.Linq;

public class RaycastCamera2D : MonoBehaviour
{
    public Transform target;
    public Rigidbody2D rb;                                      // kinematic Rigidbody2D on the camera
    [Range(0.1f, 10f)] public float smoothSpeed = 1f;
    public float angleStep = 4;                                 // number of rays spread around the target
    private List<Vector3> contacts = new List<Vector3>();
    private LayerMask layerMask;
    private float height => 2f * Camera.main.orthographicSize; // world-space height of the orthographic view

    private void Start()
    {
        layerMask = LayerMask.GetMask("CameraBounds");
    }

    void FixedUpdate()
    {
        contacts.Clear();
        for (int i = 0; i < angleStep; i++)
        {
            // Build a ray whose endpoint lies on an ellipse spanning the view:
            // half the view height vertically, half the view width (height * aspect) horizontally.
            var rayHeight = (height / 2) * Mathf.Sin(Mathf.Deg2Rad * ((360 / angleStep) * i));
            var rayLength = (height / 2) * Mathf.Cos(Mathf.Deg2Rad * ((360 / angleStep) * i)) * Camera.main.aspect;
            var ray = new Vector3(rayLength, rayHeight, 0f);
            var hypotenuse = Mathf.Sqrt((ray.x * ray.x) + (ray.y * ray.y));
            RaycastHit2D hit = Physics2D.Raycast(target.position, ray, hypotenuse, layerMask);
            if (hit)
            {
                // Store how far the ray fell short of its full length, along its direction.
                contacts.Add(ray - ray * (hit.distance / hypotenuse));
                Debug.DrawRay(target.position, ray * (hit.distance / hypotenuse));
            }
            else
                Debug.DrawRay(target.position, ray);
        }

        // Average the shortfall vectors (if any rays hit) to get the camera's offset from the target.
        Vector2 avgOffset = Vector2.zero;
        if (contacts.Count > 0)
        {
            avgOffset = new Vector2(
                contacts.Average(c => c.x),
                contacts.Average(c => c.y));
        }
        Vector3 smoothedPos = Vector3.Lerp(transform.position, new Vector3(target.position.x - avgOffset.x, target.position.y - avgOffset.y, transform.position.z), smoothSpeed * Time.fixedDeltaTime);
        rb.MovePosition(smoothedPos);
    }
}

So, in order to make this a bit more dynamic, some trig had to be introduced. We set rayHeight to half the screen height multiplied by the sine of the current angle in the loop, and similarly we set rayLength to half the screen height multiplied by that angle’s cosine and the camera’s aspect ratio. This lets us dynamically generate raycasts that span the orthographic camera’s frustum, not just in the 4 cardinal directions but in as many angleSteps as we want. We’ve also gathered enough information about each ray to give it a direction and a hypotenuse (length) along that direction. Once we raycast and determine whether or not a boundary has been hit, we take the portion of the ray beyond the hit point (the full ray minus the ray scaled by hit.distance / hypotenuse) and add it to the List<Vector3> contacts. We then average all of the points that hit something and offset the camera’s x and y by that value.
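To make the trig concrete, each ray in the loop ends at

    ( (height / 2) * cos(θ) * aspect , (height / 2) * sin(θ) ),  where θ = (360° / angleStep) * i and height = 2 * Camera.main.orthographicSize.

For example, with orthographicSize = 5 and a 16:9 camera, i = 0 gives roughly (8.89, 0), exactly half the view width to the right, while a quarter of the way around the loop gives (0, 5), half the view height straight up.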

The one drawback I’ve noticed with this method is that it will mathematically draw in near convex corners and push out near concave corners, but with 8 to 16 angleSteps the transition is actually pretty nice. Anyways, I hope this is helpful!

This might sound weird and impractical for multiple walls, but you could have trigger colliders on each of the walls that extend outward. Then have a bool that is set to true in OnTriggerEnter2D() for any of the walls. Finally, only run your camera follow in FixedUpdate if the bool is false.

private bool NearWall = false;

void FixedUpdate()
{
    if (!NearWall)
    {
        // Camera follow
    }
}

void OnTriggerEnter2D(Collider2D other)
{
    if (other.gameObject.CompareTag("Wall"))
    {
        NearWall = true;
    }
}

void OnTriggerExit2D(Collider2D other)
{
    if (other.gameObject.CompareTag("Wall"))
    {
        NearWall = false;
    }
}
