I have a UI Image set as the parent, with several UI Images as children. I am setting my parent's dimensions through its Canvas Group object and a Layout Element. The thing is, I want to implement the IDragHandler interface for this whole object, and I've succeeded in doing so. But the interface doesn't only recognize the parent as the object to drag; it recognizes the child images as well. And of course, the child images (which have sprites) have different dimensions than the parent itself. So what happens is that the object gets dragged even when the click/touch doesn't visually land on it.
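For context, the drag script on the parent is essentially the standard pattern, simplified here (my actual script does a bit more, but this shows the setup):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Simplified drag script sitting on the parent card object.
// It moves the whole RectTransform by the pointer delta and toggles
// blocksRaycasts on the Canvas Group so a drop zone underneath can
// receive the raycast while the card is being dragged.
public class CardDrag : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler
{
    private RectTransform rectTransform;
    private CanvasGroup canvasGroup;

    private void Awake()
    {
        rectTransform = GetComponent<RectTransform>();
        canvasGroup = GetComponent<CanvasGroup>();
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        // Let raycasts pass through the card so IDropHandler targets see them.
        canvasGroup.blocksRaycasts = false;
    }

    public void OnDrag(PointerEventData eventData)
    {
        rectTransform.anchoredPosition += eventData.delta;
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        canvasGroup.blocksRaycasts = true;
    }
}
```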
Below is an example where my object, in this case a card composed of many different images (each with its own sprite), will be dragged even when I begin dragging from the blue spot, which is inside my background's dimensions. (I know many will say it may be better practice for a card to be a single UI Image sprite rather than a whole layered, 3D-ish object, but this was my take on the subject and it seemed to be working well.)
The reason I want to use the IDragHandler interface is that it communicates with the IDropHandler interface. I have managed to solve half of the problem by writing a drag script of my own that uses the OnMouse functions and a BoxCollider2D, but I can't use IDropHandler that way. Is there a way to limit what IDragHandler perceives as "draggable", or should I stick with the workaround I found and implement my own version of a drop zone?
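My workaround looks roughly like this (simplified): the BoxCollider2D, sized to the card's background, limits where a drag can start, but since the OnMouse callbacks bypass the EventSystem entirely, no IDropHandler ever fires when the card is released over a drop zone.

```csharp
using UnityEngine;

// Simplified version of my own drag script. The BoxCollider2D restricts
// where the drag can begin, but OnMouse* events do not go through the
// EventSystem, so IDropHandler targets are never notified.
[RequireComponent(typeof(BoxCollider2D))]
public class ColliderDrag : MonoBehaviour
{
    private Vector3 offset;

    private void OnMouseDown()
    {
        offset = transform.position - MouseWorldPosition();
    }

    private void OnMouseDrag()
    {
        transform.position = MouseWorldPosition() + offset;
    }

    private Vector3 MouseWorldPosition()
    {
        Vector3 screen = Input.mousePosition;
        // Keep the card at its current depth in front of the camera.
        screen.z = Mathf.Abs(Camera.main.transform.position.z - transform.position.z);
        return Camera.main.ScreenToWorldPoint(screen);
    }
}
```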
I have also tried to limit the drag by raycasting against my object. It works when there is only one card, but not when there are several in the hand, because clicking on one of them enables the IDragHandler for all of them.
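The raycast check was something along these lines (simplified; `background` here stands for a reference to the card's background image, which I assign in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Roughly my raycast-limiting attempt: only allow the drag when the
// object under the pointer at the start of the drag is this card's
// background. Works with a single card, breaks with several in hand.
public class RaycastLimitedDrag : MonoBehaviour, IBeginDragHandler, IDragHandler
{
    [SerializeField] private GameObject background; // the card's background image

    private bool dragAllowed;

    public void OnBeginDrag(PointerEventData eventData)
    {
        dragAllowed = eventData.pointerCurrentRaycast.gameObject == background;
    }

    public void OnDrag(PointerEventData eventData)
    {
        if (!dragAllowed) return;
        ((RectTransform)transform).anchoredPosition += eventData.delta;
    }
}
```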
I had concluded that it is better to implement the built-in UI interfaces rather than roll my own, but I would like to know whether there is a way to fix this before I find myself coding a solution from scratch.
Thanks a lot!