# How to set 2D boundaries to a moving camera

I’m making a 2D R-Type/Gradius style shmup in C#. I’ve set the boundaries of the player’s movement to the camera using the following script attached to the player:

```csharp
// Restrict the Player to the camera's boundaries
Vector3 pos = transform.position;

// Top boundary
if (pos.y + shipBoundaryRadius > Camera.main.orthographicSize) {
    pos.y = Camera.main.orthographicSize - shipBoundaryRadius;
}
// Bottom boundary
if (pos.y - shipBoundaryRadius < -Camera.main.orthographicSize) {
    pos.y = -Camera.main.orthographicSize + shipBoundaryRadius;
}

// Get screen height and width and calculate the aspect ratio (supports multiple resolutions).
// Camera.main.orthographicSize only gives the view's half-height, not its width,
// so the width is derived from the aspect ratio.
float screenRatio = (float)Screen.width / (float)Screen.height;
float widthOrtho = Camera.main.orthographicSize * screenRatio;

// Right boundary
if (pos.x + shipBoundaryRadius > widthOrtho) {
    pos.x = widthOrtho - shipBoundaryRadius;
}
// Left boundary
if (pos.x - shipBoundaryRadius < -widthOrtho) {
    pos.x = -widthOrtho + shipBoundaryRadius;
}

// Update position of Player
transform.position = pos;
```

This works fine, but the problem shows up once I try to move the camera, for example with this script attached to it:

```csharp
public int cameraSpeed = 1;

// Update is called once per frame
void Update() {
    Vector3 pos = transform.position;
    Vector3 velocity = new Vector3(cameraSpeed * Time.deltaTime, 0, 0);
    pos += transform.rotation * velocity;
    transform.position = pos;
}
```

It appears that the boundaries set in the first script are absolute and don’t follow the camera once it starts moving, leaving the player behind.

I feel like I’m missing something obvious here, but any help is appreciated.

I might be wrong, but isn’t your widthOrtho variable giving you the entire screen width, when you may only want half of it if your player is centered on the screen?
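For what it’s worth, the width math can be checked in isolation. This is a plain C# sketch of the same calculation the question performs (the Unity-specific names are replaced by ordinary parameters; the resolution values are made up): orthographicSize is the view’s half-height in world units, so multiplying it by the aspect ratio yields the view’s half-width.

```csharp
using System;

public class OrthoWidthCheck
{
    // Camera.main.orthographicSize is the HALF-height of the view in world units,
    // so half-height times the aspect ratio gives the HALF-width, not the full width.
    public static float HalfWidth(float orthographicSize, float screenWidth, float screenHeight)
    {
        return orthographicSize * (screenWidth / screenHeight);
    }

    public static void Main()
    {
        // An orthographicSize of 5 means the view is 10 units tall;
        // at 1920x1080 the visible half-width is 5 * 16/9, about 8.89 units.
        Console.WriteLine(HalfWidth(5f, 1920f, 1080f));
    }
}
```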

Clamp the values. I’m too lazy to work out the math for multiple screen sizes at the moment, but once you’ve calculated the boundaries, just clamp the x component of the player’s position to them, offset by the camera’s current position so the bounds follow it.
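Something like this sketch, in plain C# so the numbers can be checked outside Unity (in the actual player script you would use Mathf.Clamp, read the center from Camera.main.transform.position, pass shipBoundaryRadius as the radius, and apply it to both axes; all values below are made up):

```csharp
using System;

public class CameraClampSketch
{
    // Clamp a 1D position to the camera's view on that axis.
    // center     = the camera's coordinate on the axis;
    // halfExtent = half the view size on the axis (orthographicSize vertically,
    //              orthographicSize * aspect ratio horizontally);
    // radius     = the ship's boundary radius.
    // Because min and max are built from 'center', the bounds move with the
    // camera instead of being absolute world coordinates.
    public static float ClampToCamera(float pos, float center, float halfExtent, float radius)
    {
        float min = center - halfExtent + radius;
        float max = center + halfExtent - radius;
        return Math.Clamp(pos, min, max);
    }

    public static void Main()
    {
        float halfWidth = 5f * (16f / 9f);   // orthographicSize * screenRatio
        float radius = 0.5f;

        // The camera has scrolled to x = 10; a player lagging at x = 2 gets
        // pulled along to the camera's left edge instead of being left behind.
        Console.WriteLine(ClampToCamera(2f, 10f, halfWidth, radius));
    }
}
```

The key change from the original script is subtracting/adding around the camera’s position rather than around zero.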