I am building a game which is supposed to have Clash Royale-like UI swiping with a snap. I am not a complete newbie to Unity per se, but when I code input controls I am always kind of confused about how to approach the issue. So although I have a plan for how to code this, I am unsure what the best way to detect the swipe is. (I don’t like lifting code 1:1 from the internet, as I feel I lose my grasp of the issue, so that is not an option in this case.)
Do I use Input.GetTouch and Input.GetMouseButtonDown to cover both mobile and editor, and not touch EventSystems at all?
Do I use an EventTrigger (added to the canvas?) that reacts to BeginDrag?
Do I create a script that implements IDragHandler to receive the drag event?
Are these all viable options? Drag events, I think, already take both mobile and mouse input into account, so that seems like the most practical way, but Input.GetTouch is how I usually see people handle mobile input, so I'm kind of confused here. Appreciate any feedback!
There is an extension package for the UI which includes scripts like a snapping UI. Swiping itself should work with a ScrollRect, but as ScrollRect does not provide snapping yet, you have to load that UI extension: UnityUIExtensions / Unity-UI-Extensions / wiki / Home — Bitbucket
This was my best workflow for now; I just had to do some tweaks here and there to get the results, but it's quite useful, and not just for snapping ScrollRects.
IPointerDown etc. require a Canvas / UI element, so you will need a RectTransform that covers the click area, or a GameObject that does the same, but then you'll have to add a PhysicsRaycaster to your camera.
If you want to take advantage of the UI system's RectTransform features, then you probably want to use EventSystem pointer events.
IDragHandler (together with IBeginDragHandler and IEndDragHandler) is part of this same UI event system and can be used to automatically detect drag events on a UI element. However, it doesn't move the object automatically; you get the position data supplied by the UI system in PointerEventData, and you can then move your RectTransform with that knowledge.
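A minimal sketch of that idea (assumptions: this is attached to a UI element under a Canvas with a GraphicRaycaster, and you only want horizontal movement):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Receives UI drag events and moves its own RectTransform with the
// PointerEventData the event system supplies; nothing moves automatically.
public class SwipePanel : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler
{
    RectTransform rect;
    Vector2 startAnchoredPos;

    void Awake()
    {
        rect = GetComponent<RectTransform>();
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        startAnchoredPos = rect.anchoredPosition;
    }

    public void OnDrag(PointerEventData eventData)
    {
        // eventData.delta is the pointer movement since the last frame.
        rect.anchoredPosition += new Vector2(eventData.delta.x, 0f);
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        // Decide here whether to snap back to startAnchoredPos
        // or on to the next page, e.g. based on total drag distance.
    }
}
```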
EventTrigger is just a component that exposes all the IPointerDown-style pointer events so you can assign listeners to them from the Inspector.
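The same listeners you would wire up in the Inspector can also be added from code; a small sketch (the Debug.Log callback is just a placeholder):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Adds an EventTrigger at runtime and hooks a BeginDrag listener to it.
public class TriggerSetup : MonoBehaviour
{
    void Start()
    {
        var trigger = gameObject.AddComponent<EventTrigger>();

        var entry = new EventTrigger.Entry { eventID = EventTriggerType.BeginDrag };
        entry.callback.AddListener(data =>
            Debug.Log("Begin drag at " + ((PointerEventData)data).position));

        trigger.triggers.Add(entry);
    }
}
```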
Input.GetMouseButton etc. work with a mouse click and also with a single touch on touch devices. These work without any specific object, so you can detect a click without any (target) objects. To detect the move direction, store the current position as the previous position and take the delta from it in the next frame.
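A sketch of that "store the previous position, take the delta" approach, using only Input.GetMouseButton so it needs no target object:

```csharp
using UnityEngine;

// Tracks per-frame pointer movement with the old Input API.
// Works with the mouse in the editor and with a single touch on devices.
public class MouseSwipeDetector : MonoBehaviour
{
    Vector3 previousPos;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            previousPos = Input.mousePosition;
        }
        else if (Input.GetMouseButton(0))
        {
            Vector3 delta = Input.mousePosition - previousPos;
            previousPos = Input.mousePosition;
            // delta.x / delta.y now describe this frame's swipe movement.
        }
    }
}
```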
Input.touches gets you an array of the current touches, so you can handle multiple touches, but this pretty much only works on touch devices supporting multi-touch AFAIK. You can also create a touch yourself (touch = new Touch()) and use mouse input (GetMouseButton etc.) to set your touch state. This way you could have Input.touches or GetTouch for the "real" touches, and then this alternative version, Input.fakeTouches, that you can use to get (at least) one touch on PC with similar values (fingerId, phase, deltaPosition).
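The multi-touch side of this might look like the following sketch, which just iterates Input.touches on a device and reads the fields mentioned above:

```csharp
using UnityEngine;

// Reads every active touch each frame; each Touch struct carries
// fingerId, phase and deltaPosition, so fingers can be told apart.
public class TouchReader : MonoBehaviour
{
    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Moved)
            {
                Vector2 delta = touch.deltaPosition;
                // Handle per-finger movement here, keyed by touch.fingerId.
            }
        }
    }
}
```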
To change a view/page with a swipe, you probably just need to know the user touched the screen, then detect whether they swiped (the position at TouchPhase.Ended differs from what you got at TouchPhase.Began) and in what direction, and then lerp / animate from one screen to the other.
@eses Thanks so much! Upon reading your explanations, for a more general input like a screen swipe it seemed better to manually detect the input with a combination of Input.GetMouseButton and Input.touches; I went with that and it works great.