If you’re only dealing with bounding boxes, all you need is the camera rect and the UI element rect in the same coordinate space, then you can do some box intersection math to find the amount of overlap.
Thanks, Jeffery!
Can I use transform.GetWorldCorners and camera.pixelRect in order to do that? Or should I also use camera.WorldToScreenPoint?
Also - would that work for all canvas render modes and camera projection types?
You just need to get both sets of corners into the same coordinate space. Since this is about visibility on-screen, you’ll probably want to use screen space for the comparison.
This is probably not the most efficient way to get this data, but one straightforward solution is to take the result of GetWorldCorners and convert each corner to screen space using Camera.WorldToScreenPoint.
Then you can get the corners of the camera viewport in screen space with Camera.ViewportToScreenPoint. Viewport space runs from 0 to 1, so the four inputs would be (0,0), (0,1), (1,0), and (1,1).
With those two sets of corners, you can apply an axis-aligned bounding box (AABB) overlap check.
That should account for any camera setup, even if the camera rect is not taking up the full screen space.
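The steps above could be sketched roughly like this (a hypothetical `OverlapsCamera` helper, not code from this thread; it needs the Unity runtime, and it assumes the RectTransform isn’t rotated relative to the camera, so the projected corners form an axis-aligned box):

```csharp
using UnityEngine;

public static class OverlapCheck
{
    // Returns true if the RectTransform's screen-space bounding box
    // overlaps the camera's screen-space rect (an AABB overlap test).
    public static bool OverlapsCamera(RectTransform rt, Camera cam)
    {
        var corners = new Vector3[4];
        rt.GetWorldCorners(corners);

        // Project each world-space corner to screen space and track
        // the min/max extents of the resulting box.
        Vector3 p = cam.WorldToScreenPoint(corners[0]);
        float xMin = p.x, xMax = p.x, yMin = p.y, yMax = p.y;
        for (int i = 1; i < 4; i++)
        {
            p = cam.WorldToScreenPoint(corners[i]);
            xMin = Mathf.Min(xMin, p.x); xMax = Mathf.Max(xMax, p.x);
            yMin = Mathf.Min(yMin, p.y); yMax = Mathf.Max(yMax, p.y);
        }

        // Camera viewport corners in screen space: viewport (0,0) and (1,1).
        Vector3 camMin = cam.ViewportToScreenPoint(new Vector3(0f, 0f, 0f));
        Vector3 camMax = cam.ViewportToScreenPoint(new Vector3(1f, 1f, 0f));

        // Standard AABB overlap test.
        return xMin < camMax.x && xMax > camMin.x &&
               yMin < camMax.y && yMax > camMin.y;
    }
}
```

This also works when the camera only occupies part of the screen, because ViewportToScreenPoint maps (0,0)/(1,1) to the actual camera rect rather than the full screen.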
The thing I don’t understand is - why do I need to use WorldToScreenPoint? I wrote some code using only transform.GetWorldCorners and camera.pixelRect and it seemed to work great. Isn’t that enough?
Tried it only in Overlay render mode and for the orthographic camera. Maybe that’s why?
I’m interested in actual pixels by the way - does that make a difference?
I’m not sure how that worked, because pixelRect is a screen-space rect with (0,0) at the bottom-left corner of the screen, but GetWorldCorners gives you coordinates in Unity units relative to the world origin.
pixelRect could work for the camera side, but you’ll still have to convert the world-space corners to screen space for the comparison.
The only way I could see that working is if your camera is positioned so that the screen origin matches the world origin and all your art uses PixelsPerUnit = 1, in which case world units would equal pixels.
To me the most logical thing to do is convert the GetWorldCorners result to screen-space corners and compare those with pixelRect to find the overlap.
Something is weird…
I’m getting the object’s world corners and then converting them to screen space using camera.WorldToScreenPoint.
Then I’m converting the camera corners to screen space using camera.ViewportToScreenPoint(new Vector3(0, 0)) (as an example).
But it looks like the ViewportToScreenPoint result matches the original world-corner values and not the WorldToScreenPoint values.
To me it looks like:
Render Mode = Screen Space - Overlay: compare the object to screen pixels
Render Mode = Screen Space - Camera: compare the object to the camera’s pixel rect
Render Mode = World Space: get both sets of points into world space and then compare
Okay, so I’ve confirmed my assumptions about these values, and it is as I thought.
You don’t have to worry about the canvas render mode. pixelRect always gives you the correct screen-space rect for your camera, and GetWorldCorners gives you the world-space corners of a RectTransform, regardless of the Canvas render mode. So you can indeed convert the result of GetWorldCorners with WorldToScreenPoint and then compare with pixelRect.
This will show you in the inspector what the two values look like:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class test : MonoBehaviour {
    public Vector3[] cameraCorners;
    public Vector3[] objectCorners;
    Canvas canvas;
    RectTransform self;

    void Start () {
        self = transform as RectTransform;
        canvas = GetComponentInParent<Canvas>();
        cameraCorners = new Vector3[4];
        objectCorners = new Vector3[4];
    }

    private void Update() {
        // Camera rect corners in screen space (x component first, then y).
        Rect pr = Camera.main.pixelRect;
        cameraCorners[0] = pr.min;                        // bottom-left
        cameraCorners[1] = new Vector3(pr.xMin, pr.yMax); // top-left
        cameraCorners[2] = pr.max;                        // top-right
        cameraCorners[3] = new Vector3(pr.xMax, pr.yMin); // bottom-right

        // RectTransform corners, converted from world space to screen space.
        self.GetWorldCorners(objectCorners);
        for (int i = 0; i < 4; i++) {
            objectCorners[i] = Camera.main.WorldToScreenPoint(objectCorners[i]);
        }
    }
}
So in this example, you have “cameraCorners”, which are the pixel-coordinates of the screen – from (0,0) to (resolutionX, resolutionY). Then you have “objectCorners” which are the pixel coordinates of the RectTransform in the same coordinate space as “cameraCorners”.
You can use these eight corner values to determine how much they overlap.
So at first it seemed like I could have just compared the world points with the pixel rect and that WorldToScreenPoint was unnecessary, but here’s the data:
Canvas is Screen Camera and World Space:
objectCorners[0] = {(1.7, 1.1, 91.8)}
objectCorners[2] = {(7.3, 5.0, 91.8)}
WorldToScreenPoint(objectCorners[0]) = {(981.0, 658.0, 101.8)}
WorldToScreenPoint(objectCorners[2]) = {(1586.0, 1086.0, 101.8)}
Camera.main.pixelRect.min = {(0.0, 0.0, 0.0)}
Camera.main.pixelRect.max = {(1586.0, 1086.0, 0.0)}
If the camera moves, then in the Screen Camera case the values remain the same, and in the World Space case the WorldToScreenPoint results change.
So in both of these cases it seems like using WorldToScreenPoint is necessary and how I should compare the two.