Calculating world positions of the screen's corners?

Is this the correct way to do this?
To add some context, these positions will be used to place box colliders around the screen.

// Screen corner locations in screen coordinates
	var upperLeftScreen = Vector3(0, Screen.height, ship.transform.position.y);
	var upperRightScreen = Vector3(Screen.width, Screen.height, ship.transform.position.y);
	var lowerLeftScreen = Vector3(0, 0, ship.transform.position.y);
	var lowerRightScreen = Vector3(Screen.width, 0, ship.transform.position.y);
	
	//Corner locations in world coordinates
	var upperLeft = camera.ScreenToWorldPoint(upperLeftScreen); 
	var upperRight = camera.ScreenToWorldPoint(upperRightScreen);
	var lowerLeft = camera.ScreenToWorldPoint(lowerLeftScreen);
	var lowerRight = camera.ScreenToWorldPoint(lowerRightScreen);

Looks right to me!

Hmm, it doesn’t work though…

In what way is it not working? The z component of your Vector3 variables should be the distance from the camera in world units, i.e. it is relative to the camera rather than an absolute world position.

See the docs: Unity - Scripting API: Camera.ScreenToWorldPoint

If you use camera.nearClipPlane for the z component, like in the example, you will get the exact world positions of the screen corners on the near clip plane.
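
To make that concrete, here is a minimal C# sketch of the near-clip-plane version; the cam variable is just an assumed reference to whichever camera you are using:

    // Hedged sketch: world positions of two screen corners on the near clip plane.
    // "cam" is assumed to be the camera you actually render with.
    Camera cam = Camera.main;
    Vector3 lowerLeftWorld  = cam.ScreenToWorldPoint(new Vector3(0, 0, cam.nearClipPlane));
    Vector3 upperRightWorld = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, cam.nearClipPlane));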

Fixed it!

Here’s what I did:

// Screen corner locations in screen coordinates
	var upperLeftScreen = Vector3(0, Screen.height, depth);
	var upperRightScreen = Vector3(Screen.width, Screen.height, depth);
	var lowerLeftScreen = Vector3(0, 0, depth);
	var lowerRightScreen = Vector3(Screen.width, 0, depth);
	
	//Corner locations in world coordinates
	var upperLeft = camera.ScreenToWorldPoint(upperLeftScreen); 
	var upperRight = camera.ScreenToWorldPoint(upperRightScreen);
	var lowerLeft = camera.ScreenToWorldPoint(lowerLeftScreen);
	var lowerRight = camera.ScreenToWorldPoint(lowerRightScreen);
	upperLeft.y = upperRight.y = lowerLeft.y = lowerRight.y = ship.transform.position.y;

where

depth = (ship.transform.position.y - camera.transform.position.y);
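
Putting it together with the original goal of walling the screen off with colliders, here is a rough C# sketch of the same idea. The ship field, the top-down camera looking along the world Y axis, the Mathf.Abs call that keeps the depth positive, and the wall sizing are all assumptions about this particular setup rather than part of the solution above:

    using UnityEngine;

    // Sketch only: computes the screen-corner world positions on the ship's Y plane
    // and uses them to drop four thin box colliders around the visible area.
    // Assumes a camera looking straight down the world Y axis, with screen x
    // mapping to world x and screen y mapping to world z.
    public class ScreenBounds : MonoBehaviour
    {
        public Transform ship;           // assumed reference to the player ship
        public float wallThickness = 1f; // assumed wall size

        void Start()
        {
            Camera cam = Camera.main;
            // Distance from the camera to the ship's plane along the view direction.
            float depth = Mathf.Abs(ship.position.y - cam.transform.position.y);

            Vector3 lowerLeft  = cam.ScreenToWorldPoint(new Vector3(0, 0, depth));
            Vector3 upperRight = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, depth));
            lowerLeft.y = upperRight.y = ship.position.y;

            float width  = upperRight.x - lowerLeft.x;
            float height = upperRight.z - lowerLeft.z;
            Vector3 center = (lowerLeft + upperRight) * 0.5f;

            // One wall per screen edge, just outside the visible area.
            CreateWall("LeftWall",   new Vector3(lowerLeft.x - wallThickness * 0.5f, center.y, center.z), new Vector3(wallThickness, 1f, height));
            CreateWall("RightWall",  new Vector3(upperRight.x + wallThickness * 0.5f, center.y, center.z), new Vector3(wallThickness, 1f, height));
            CreateWall("BottomWall", new Vector3(center.x, center.y, lowerLeft.z - wallThickness * 0.5f), new Vector3(width, 1f, wallThickness));
            CreateWall("TopWall",    new Vector3(center.x, center.y, upperRight.z + wallThickness * 0.5f), new Vector3(width, 1f, wallThickness));
        }

        void CreateWall(string name, Vector3 position, Vector3 size)
        {
            var wall = new GameObject(name);
            wall.transform.position = position;
            wall.AddComponent<BoxCollider>().size = size;
        }
    }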

Thanks for sharing your solution.
I came up with an alternative way of computing the same thing, though I'm not sure which one is lighter on the CPU.
Also, my solution only works if you’re using an orthographic camera in 2D.

        var upperLeft = camera.transform.position + new Vector3(-camera.aspect * camera.orthographicSize, camera.orthographicSize);
        var upperRight = camera.transform.position + new Vector3(camera.aspect * camera.orthographicSize, camera.orthographicSize);
        var lowerLeft = camera.transform.position + new Vector3(-camera.aspect * camera.orthographicSize, -camera.orthographicSize);
        var lowerRight = camera.transform.position + new Vector3(camera.aspect * camera.orthographicSize, -camera.orthographicSize);

If your camera's aspect and orthographic size don't change during the game, you can precompute the corner offsets once and save a bit more on computation; if the camera never moves either, the full corner positions can be cached as well.
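
As a rough C# sketch of that caching idea (assuming the aspect and orthographic size really are constant at runtime), you can store the corner offsets once and add the camera position whenever you need the corners, which also keeps it correct if the camera moves:

    // Hedged sketch: cache the half-extents once; they only depend on aspect and orthographicSize.
    Camera cam = Camera.main;
    float halfHeight = cam.orthographicSize;
    float halfWidth  = cam.aspect * halfHeight;

    Vector3 upperLeftOffset  = new Vector3(-halfWidth,  halfHeight);
    Vector3 upperRightOffset = new Vector3( halfWidth,  halfHeight);
    Vector3 lowerLeftOffset  = new Vector3(-halfWidth, -halfHeight);
    Vector3 lowerRightOffset = new Vector3( halfWidth, -halfHeight);

    // Later, whenever the corners are needed (e.g. each frame if the camera moves):
    Vector3 upperLeft = cam.transform.position + upperLeftOffset;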