# Raycasting from Camera to a Sphere Collider attached to Camera fails to detect RaycastHits.

Here’s a GIF of me playing around with the main camera. While doing so, I started to notice something odd about how RaycastHit reports its hit points.

The main camera has a sphere collider attached to it, and a script on the main camera shoots 4 rays outward from the camera toward the sphere collider, which should be the first thing the rays hit.

If the main camera points toward the horizon, or points up, funky world coordinates start to show up. The world coordinates are pointed out in the second GIF below:

EDIT:

Could someone explain why the world coordinates are sometimes funky? Funky, as in, sometimes the world coordinates are all zeroes, or the bottom-right corner is the same as the top-right. They are not consistent.

2388986–162695–Bug.zip (259 KB)

Are the zeroes happening when the camera points at the sky and the raycast hits nothing?

Yes. But this is where it gets confusing.

The raycasts should always hit the sphere collider, because the main camera is completely inside it. They should at least hit that collider instead of shooting endlessly into the sky.

Raycasts against colliders are actually sensitive to direction: the sphere collider's surface faces outward, so raycasts originating inside it will not collide with it.

Similarly, if you had a plane (mesh collider) facing one way, a ray that passed through it in the same direction as its normal would not collide.
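To see why "inside the sphere" behaves like hitting a backface, here is a plain-Python sketch (not Unity's actual physics code) of a ray-sphere test that culls backfaces. The only forward intersection for a ray starting inside the sphere is the exit point, where the outward normal points the same way as the ray, so a backface-culling query discards it and reports no hit:

```python
import math

def ray_sphere_hit(origin, direction, center, radius, cull_backfaces=True):
    """Return the distance t to the first hit, or None.

    A hit counts as a backface when the sphere's outward normal at the
    hit point faces the same hemisphere as the ray direction (dot > 0).
    """
    # Solve |origin + t*direction - center|^2 = radius^2 (direction is unit length).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # the line misses the sphere entirely
    for t in sorted([(-b - math.sqrt(disc)) / 2, (-b + math.sqrt(disc)) / 2]):
        if t < 0:
            continue  # intersection is behind the ray origin
        hit = [o + t * d for o, d in zip(origin, direction)]
        normal = [(h - c) / radius for h, c in zip(hit, center)]
        if cull_backfaces and sum(n * d for n, d in zip(normal, direction)) > 0:
            continue  # backface: discarded, mirroring the behavior described above
        return t
    return None

# Ray starting at the sphere's center: the backface-culled test misses...
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 0), 5.0))         # None
# ...but without culling, the ray exits the sphere at t = radius = 5.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 0), 5.0, False))  # 5.0
```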

How do you “flip” or “invert” the sphere collider so that the raycast will hit it? Or is it failing because the ray's direction points the same way as the surface normal at the would-be hit point, so no “collision” or “hit” can be detected?

And I thought Raycast() could just give me where the rays would stop if they hit a collider, and hand me a RaycastHit…

I guess I need help finding out where a raycast stops in the world scene when it touches something (colliders, objects, etc.). Does anyone know? Thanks.

EDIT:

One workaround might be to swap the sphere collider for a cube collider. But then again, Raycast() is sensitive to direction, and a raycast shot from inside a collider would come back with nothing in many cases. (By "nothing" I mean I wouldn't get any RaycastHit info, or any info on where the raycast ends in the world scene.)

If the collider's normal at the point where your ray meets it faces into the same hemisphere as the ray's direction, it won't collide. That goes for all colliders, both mesh-based ones and primitive/computational ones like spheres: colliders only register rays impinging on their "outward" surface.

If you expect the sphere to always be there and know its size and want to see where a ray from your camera touches it, you can simply compute that point of contact based on the unit vector in the direction of the ray multiplied by the radius of the sphere and skip the raycast entirely.
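A minimal sketch of that suggestion in plain Python, assuming the sphere collider stays centered on the camera (it's attached to it): the contact point is just the camera position plus the normalized ray direction scaled by the radius. The helper name is made up for illustration:

```python
import math

def sphere_contact(camera_pos, ray_direction, radius):
    """Point where a ray starting at the sphere's center meets a sphere
    of the given radius centered on the camera (no physics query needed)."""
    length = math.sqrt(sum(d * d for d in ray_direction))
    unit = [d / length for d in ray_direction]  # normalize, in case it isn't already
    return [p + u * radius for p, u in zip(camera_pos, unit)]

# Camera at the origin looking down +Z, sphere radius 5 -> contact at (0, 0, 5).
print(sphere_contact([0.0, 0.0, 0.0], [0.0, 0.0, 2.0], 5.0))  # [0.0, 0.0, 5.0]
```

Unlike a raycast, this never "misses": it always yields a well-defined point on the sphere, even when the camera points at the sky.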

For the unit-vector approach, the only problem I have is calculating the precise directions through the 4 corners of the camera's viewport, i.e., the player's screen at viewport coordinates (0f, 0f), (1f, 0f), (0f, 1f), and (1f, 1f). Or rather, the precise angles I need to rotate the unit vectors by, so that when extended by the length of the sphere collider's radius, the 4 unit vectors line up with the camera's 4 corners.
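The corner directions can be derived without any rotation angles, assuming a symmetric perspective projection with a known vertical field of view and aspect ratio: in camera space the corners lie at (±aspect·tan(fov/2), ±tan(fov/2), 1), normalized; rotating them into world space is then just applying the camera's orientation. A sketch of the camera-space part:

```python
import math

def corner_directions(vertical_fov_deg, aspect):
    """Unit direction vectors (in camera space) through the four viewport
    corners of a symmetric perspective camera looking down +Z."""
    t = math.tan(math.radians(vertical_fov_deg) / 2)  # half-height of the view at z = 1
    corners = []
    for y in (-t, t):                 # bottom row first, then top row
        for x in (-t * aspect, t * aspect):
            length = math.sqrt(x * x + y * y + 1)
            corners.append((x / length, y / length, 1 / length))
    return corners  # order: bottom-left, bottom-right, top-left, top-right

for d in corner_directions(60, 16 / 9):
    print(tuple(round(c, 3) for c in d))
```

Each result is a unit vector, so multiplying by the sphere's radius gives a point exactly radius units from the camera, along a viewport corner.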

Raycasting from the camera is easier, because you can just supply viewport coordinates and the camera's projection matrix handles the angles. And I'm not familiar with extracting the projection math from the camera as it rotates on the fly…

EDIT:

Ray.direction seems to be what I'm looking for, to use as unit vectors. Am I right? It's really hard to tell when experimenting with it, because I don't have anything else to compare against for accuracy.

How are you raycasting in the first place? Do you know about the camera helpers that let you raycast from any arbitrary point on your view plane?

This is the function: `Camera.ViewportPointToRay`, which returns a ray going from the camera through the given viewport point.

Lower left of the viewport is (0,0) and upper right is (1,1).

I think that ray will start from your near clipping plane, NOT from the camera’s location. You can simply offset it.

This is what I used for raycasting. The CameraView class is the only thing I changed. I cannot get any trapezoid shape in the minimap as I angle my main camera. Maybe it's because the corners just so happen to form a rectangle…

```csharp
//CameraView - This is where I used raycasting.

using UnityEngine;
using System.Collections;

public class CameraView : MonoBehaviour {
    public Collider sphereCollider;
    public Vector3 bottomLeft;
    public Vector3 bottomRight;
    public Vector3 topLeft;
    public Vector3 topRight;

    public void Update() {
        // Rays from the main camera through the four viewport corners.
        Ray bottomLeftRay = Camera.main.ViewportPointToRay(new Vector3(0f, 0f));
        Ray bottomRightRay = Camera.main.ViewportPointToRay(new Vector3(1f, 0f));
        Ray topLeftRay = Camera.main.ViewportPointToRay(new Vector3(0f, 1f));
        Ray topRightRay = Camera.main.ViewportPointToRay(new Vector3(1f, 1f));
    }
}
```
```csharp
//DrawLines - This is where I used the obtained RaycastHit hit points for rendering lines in the minimap camera.

using UnityEngine;
using System.Collections;

public class DrawLines : MonoBehaviour {
    public CameraView mainCameraView;
    public Camera orthographicCamera;
    public Vector3 topLeft, topRight, bottomLeft, bottomRight;

    public void Start() {
        if (this.orthographicCamera == null) {
            GameObject obj = GameObject.FindGameObjectWithTag("Orthographic");
            if (obj != null) {
                this.orthographicCamera = obj.GetComponent<Camera>();
            }
        }
        if (this.mainCameraView == null) {
            GameObject obj = GameObject.FindGameObjectWithTag("MainCamera");
            if (obj != null) {
                this.mainCameraView = obj.GetComponent<CameraView>();
            }
        }
    }

    public void Update() {
        // Convert the world-space corner points into the minimap camera's viewport space.
        this.topLeft = this.orthographicCamera.WorldToViewportPoint(this.mainCameraView.topLeft);
        this.topRight = this.orthographicCamera.WorldToViewportPoint(this.mainCameraView.topRight);
        this.bottomRight = this.orthographicCamera.WorldToViewportPoint(this.mainCameraView.bottomRight);
        this.bottomLeft = this.orthographicCamera.WorldToViewportPoint(this.mainCameraView.bottomLeft);

        this.topLeft.z = -1f;
        this.topRight.z = -1f;
        this.bottomLeft.z = -1f;
        this.bottomRight.z = -1f;
    }

    public void OnPostRender() {
        GL.PushMatrix();
        {
            // The vertices below are viewport coordinates (0..1), so load an
            // orthographic 0..1 projection before drawing.
            GL.LoadOrtho();
            GL.Begin(GL.LINES);
            {
                GL.Color(Color.red);
                GL.Vertex(this.topLeft);
                GL.Vertex(this.topRight);
                GL.Vertex(this.topRight);
                GL.Vertex(this.bottomRight);
                GL.Vertex(this.bottomRight);
                GL.Vertex(this.bottomLeft);
                GL.Vertex(this.bottomLeft);
                GL.Vertex(this.topLeft);
            }
            GL.End();
        }
        GL.PopMatrix();
    }
}
```

The other way would be to do this instead:

```csharp
//CameraView - The only thing changed.

using UnityEngine;
using System.Collections;

public class CameraView : MonoBehaviour {
    public Collider sphereCollider;
    public Vector3 bottomLeft;
    public Vector3 bottomRight;
    public Vector3 topLeft;
    public Vector3 topRight;

    public void Update() {
        Ray bottomLeftRay = Camera.main.ViewportPointToRay(new Vector3(0f, 0f));
        Ray bottomRightRay = Camera.main.ViewportPointToRay(new Vector3(1f, 0f));
        Ray topLeftRay = Camera.main.ViewportPointToRay(new Vector3(0f, 1f));
        Ray topRightRay = Camera.main.ViewportPointToRay(new Vector3(1f, 1f));

        RaycastHit hit;
        // When a ray hits nothing (e.g. it points at the sky), the
        // previous corner value is kept unchanged.
        if (Physics.Raycast(bottomLeftRay, out hit)) {
            this.bottomLeft = hit.point;
        }

        if (Physics.Raycast(bottomRightRay, out hit)) {
            this.bottomRight = hit.point;
        }

        if (Physics.Raycast(topLeftRay, out hit)) {
            this.topLeft = hit.point;
        }

        if (Physics.Raycast(topRightRay, out hit)) {
            this.topRight = hit.point;
        }
    }
}
```