I’m using Camera.ScreenPointToRay like so:
Ray ray = cam.ScreenPointToRay(touch.position);
Touch and cam are both set ahead of time. This works fine on the PC, but when I build to iOS from a Mac, it always gives me back the same ray - specifically, one at the position of the camera, pointing in the camera’s direction.
I’m sure I’m missing something stupid, but I’ve double-checked that the camera’s stats look valid and that the touch position looks right (by which I mean, they look the same as they do on the PC, where it’s working).
Has anyone seen a problem like this before?
Thanks in advance!
Edit with more info:
cam is simply a cached reference to the MonoBehaviour’s camera, which is set in Awake().
touch is defined (in the previous line) as such:
Touch touch = Input.touches[i];
where i is simply an iteration variable (checking rays from each touch).
In addition, from my debugging these all seem to have appropriate values - but even replacing the touch position with various hardcoded Vector2s and Vector3s has no effect on the resulting ray.
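For completeness, the whole thing boils down to roughly this (a simplified sketch, not my exact code - the class name and the way cam is grabbed are placeholders):

using UnityEngine;

public class TouchRayTest : MonoBehaviour
{
    private Camera cam;

    void Awake()
    {
        // cam is cached once here, as described above
        cam = GetComponent<Camera>();
    }

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.touches[i];
            Ray ray = cam.ScreenPointToRay(touch.position);
            Debug.Log("touch " + i + " at " + touch.position + " -> " + ray);
        }
    }
}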
I can’t speak to raycasting on iOS specifically, but I think another method might get the job done for detecting touch “hits”. It’s something I mostly came up with myself, with some inspiration from a few touch-related posts on the forums and here.
Instead of doing the raycast at all, convert the screen point into world space (i.e. where your touch lands in world space) and then write an if statement something like this:
// touchWorld is the touch position converted into world space
// (e.g. via Camera.ScreenToWorldPoint), not the raw Touch struct
if ((touchWorld.x - objectithit.transform.position.x) < 1.5f &&
    (touchWorld.y - objectithit.transform.position.y) < 2.0f)
{
    // do my logic because I just touched the object on the screen...
    domystuff = true;
}
else
{
    // don't do my logic, cancel any bools or whatnot
    domystuff = false;
}
touchWorld is your touch position in world space, and objectithit is just a private (or public, I guess) GameObject that is your “touch-able” object. It could be buttons, enemy targets, whatever.
That’s the approach I’ve been going with (on Android, though): no rays needed, and it seems simpler to me. The logic usually needs some bool to track whether the object is in a “been hit” state, and false if it has yet to be touched - but I guess the same would go for raycasting too.
Now, if for some reason you have to keep the raycasting, this won’t work for you - and if you don’t have a mostly 2D game, this approach gets more problematic, I think.
One last thing: that snippet is only about half the logic. You would also need something like touchWorld.x - objectithit.x GREATER THAN -whateverNumber (or just take Mathf.Abs of the difference) so the check doesn’t trigger across a whole half of the screen, but I figure that’s enough to get the idea across.
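Putting it all together, here’s roughly what I have in mind (an untested sketch - your camera reference, object names, and tolerances will differ; I’m assuming the camera looks down the z-axis at the object):

using UnityEngine;

public class TouchHitCheck : MonoBehaviour
{
    public GameObject objectithit;   // your "touch-able" object
    public float xTolerance = 1.5f;
    public float yTolerance = 2.0f;

    private bool domystuff = false;

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.touches[i];

            // Turn the screen point into a world-space point at the object's depth
            float depth = Mathf.Abs(Camera.main.transform.position.z - objectithit.transform.position.z);
            Vector3 touchWorld = Camera.main.ScreenToWorldPoint(
                new Vector3(touch.position.x, touch.position.y, depth));

            // Mathf.Abs handles both sides of the object, so the separate
            // "greater than -whatever" check isn't needed
            if (Mathf.Abs(touchWorld.x - objectithit.transform.position.x) < xTolerance &&
                Mathf.Abs(touchWorld.y - objectithit.transform.position.y) < yTolerance)
            {
                domystuff = true;
            }
            else
            {
                domystuff = false;
            }
        }
    }
}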