I’m trying to use the new input system, and so far I absolutely hate it.
I’m making an AR project on an Android tablet.
I’m simply trying to click on objects in the scene.
So all I want is to know where and when the user taps on the screen.
For some reason this basic thing isn’t well documented, and on the forum here I found I had to do this:
```csharp
[SerializeField]
private InputActionAsset _inputMap;

private InputAction _click;
private InputAction _pos;

void Start()
{
    _click = _inputMap.FindAction("Click");
    _pos = _inputMap.FindAction("Position");
    _click.performed += Tap;
    // actions do nothing until they (or the whole asset) are enabled
    _inputMap.Enable();
}

void OnDestroy()
{
    _click.performed -= Tap;
}

private void Tap(InputAction.CallbackContext callback)
{
    // the pointer position comes from the separate "Position" action
    Vector2 screenPos = _pos.ReadValue<Vector2>();
    Ray ray = Camera.main.ScreenPointToRay(screenPos);
    if (Physics.Raycast(ray, out RaycastHit hit))
    {
        //do stuff
    }
}
```
with these settings:
Which is surprisingly complex for such a simple thing.
But the problem is that while this works perfectly on PC, it works badly on an actual touchscreen: it only registers some of the touches, which makes it very annoying to use (and impossible for drag functionality).
And it also fails to detect what the actual primary touch is.
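For what it’s worth, the Touchscreen device itself does expose a primary touch directly, so this is the kind of thing I’d expect to be able to do instead (an untested sketch polling in Update rather than going through actions; class name is mine):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PrimaryTouchProbe : MonoBehaviour
{
    void Update()
    {
        var touchscreen = Touchscreen.current;
        if (touchscreen == null)
            return; // no touchscreen device present (e.g. running in the editor)

        // primaryTouch is the touch that started the current touch sequence
        var primary = touchscreen.primaryTouch;
        if (primary.press.wasPressedThisFrame)
        {
            Vector2 screenPos = primary.position.ReadValue();
            Ray ray = Camera.main.ScreenPointToRay(screenPos);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // do stuff with hit.collider
            }
        }
    }
}
```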
I tried just getting all touches, and letting all of these touches register, but that doesn’t work at all.
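The all-touches attempt was based on the EnhancedTouch API and looked roughly like this (a sketch from memory; class and variable names are mine):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.EnhancedTouch;
// alias to avoid clashing with the legacy UnityEngine.Touch
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

public class AllTouchesProbe : MonoBehaviour
{
    void OnEnable()
    {
        // EnhancedTouch is off by default and must be enabled explicitly
        EnhancedTouchSupport.Enable();
    }

    void OnDisable()
    {
        EnhancedTouchSupport.Disable();
    }

    void Update()
    {
        foreach (Touch touch in Touch.activeTouches)
        {
            // fully qualified because UnityEngine also defines a TouchPhase
            if (touch.phase == UnityEngine.InputSystem.TouchPhase.Began)
            {
                Ray ray = Camera.main.ScreenPointToRay(touch.screenPosition);
                if (Physics.Raycast(ray, out RaycastHit hit))
                {
                    // register a tap on hit.collider
                }
            }
        }
    }
}
```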
So my question: how are you actually supposed to do this? Because I don’t really believe this is the correct way.