Hi… I am trying to select objects in my 3D world via a touch event.
I am using this fairly simple piece of code to cast a ray from the camera through the touched screen point using ScreenPointToRay:
private var hit : RaycastHit;
private var ray : Ray;

function FixedUpdate () {
    if (Input.touchCount == 1) {
        // build a ray from the camera through the touched screen point
        ray = Camera.main.ScreenPointToRay(Input.touches[0].position);
        // draw the ray for debugging (this is the line that behaves unexpectedly, see below)
        Debug.DrawLine(ray.origin, ray.direction * 10);
        // raycast against the scene and log whatever we hit
        if (Physics.Raycast(ray.origin, ray.direction * 10, hit)) {
            Debug.Log(hit.transform.name);
        }
    }
}
This works perfectly fine with a perspective camera, but problems arise as soon as I switch the camera to orthographic mode.
The ray's origin does move according to the touch position, while its direction remains fixed - this seems fine, as I need a ray parallel to my (orthographic) camera's z axis.
BUT: the visible ray drawn by DrawLine is not parallel to the camera's z axis - it always ends at the same fixed vanishing point, regardless of its (screen point) origin. This is weird, as intuitively I'd say a ray's direction vector should always be relative to its origin's local space. Instead, direction seems to be a fixed point in world space (normalized, somewhere around -0.5, -0.7, 0.4).
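To illustrate what I mean: if direction really is a world-space vector rather than a point, then I would guess the line ought to be drawn from the origin to origin + direction instead. This is just my guess at a fix, though, so I may well be misunderstanding DrawLine here:

// my guess: treat direction as an offset from the origin, not as an end point
Debug.DrawLine(ray.origin, ray.origin + ray.direction * 10);
// or, if I read the docs correctly, Debug.DrawRay should be equivalent:
Debug.DrawRay(ray.origin, ray.direction * 10);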
Furthermore, using Unity Remote, I seem to be able to select objects as expected, i.e. the ray cast with Physics.Raycast does point straight along my orthographic camera's view direction and is not identical to the converging rays drawn with DrawLine.
Is it possible that Raycast and DrawLine behave differently when given the same input variables? Do I have to call TransformDirection on the ray's direction? (I can't seem to make this work.)
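For reference, this is roughly what I tried with TransformDirection (assuming the camera's transform is even the right one to use); it didn't visibly change the lines drawn by DrawLine:

// my attempt: run the ray's direction through the camera's transform
var worldDir : Vector3 = Camera.main.transform.TransformDirection(ray.direction);
Debug.DrawLine(ray.origin, worldDir * 10);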
It's entirely possible that I have some misconceptions about the relationship between camera space, screen space, and viewport space, as well as about the difference between points and directions in general. Suggested reading to help me get up to speed on this would be highly appreciated as well.
Thank you!