Hi, I have the following setup:
EventSystem with a Touch Input Module, a Canvas with a Graphic Raycaster, and a Camera with a Physics 2D Raycaster.
There is a small UIImage on screen and a bunch of 2D Sprites with Polygon Collider 2D components in front of an Orthographic camera.
According to Unity - Manual: Touch Input Module, I expect to receive the following events when a touch starts inside the UIImage:
1. On touch begin:
a) UIImage script :: OnBeginDrag
2. While the touch continues:
a) parent script :: OnDrag
3. On touch release:
a) parent script :: OnDrop
b) parent script :: OnEndDrag
In fact I don't get 3.a (OnDrop), and I have to implement a blank IDragHandler on the UIImage even though I only need IBeginDragHandler; otherwise UIImage::OnBeginDrag is never called.
Is this something I am doing wrong, or a bug in the Unity engine/documentation?
I have a UIImage in the Canvas with a script like this:
public class ShapeDraggingScript : MonoBehaviour, IPointerExitHandler, IBeginDragHandler, IDragHandler
{
    public GameObject parent;

    public void OnDrag(PointerEventData eventData)
    {
        // Intentionally empty: without an IDragHandler on this object,
        // OnBeginDrag is never called at all.
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        parent.SetActive(true);
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        // Code to destroy/create children on parent
        eventData.pointerDrag = parent;
    }
}
And the parent referenced by that script is a GameObject with the following script:
public class DragScript : MonoBehaviour, IDragHandler, IDropHandler, IEndDragHandler
{
    public void OnEndDrag(PointerEventData eventData)
    {
        Debug.Log("END DRAG");
    }

    public void OnDrop(PointerEventData eventData)
    {
        Debug.Log("DRAG DROP");
    }

    public void OnDrag(PointerEventData eventData)
    {
        //Debug.Log("PARENT DRAG");
    }
}
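For context, my current reading of the EventSystem (which may itself be part of the confusion) is that OnDrop is dispatched to the raycast-hit object under the pointer at release time, not to the dragged object stored in eventData.pointerDrag. If that is right, a separate drop-target script would be needed; a minimal sketch, with DropTargetScript being a name of my own and not part of the scripts above:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical drop target: attached to whatever object the touch
// is released over, assuming that object is hit by a raycaster.
public class DropTargetScript : MonoBehaviour, IDropHandler
{
    public void OnDrop(PointerEventData eventData)
    {
        // eventData.pointerDrag is the object that was being dragged
        // (here: the redirected parent), not this drop target.
        Debug.Log("Dropped " + eventData.pointerDrag.name
                  + " onto " + gameObject.name);
    }
}
```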