Multiple Camera Layers Touch Tracking

I'm hoping one of you smart people out there could help me out with this little problem I've been struggling with for the last couple of days.

My game/GUI is a composite of multiple cameras using culling masks and layers. There are portions of my game that can be a composite of up to 5-6 cameras at the same time in order to get the proper layering of elements on screen. Some of these layers contain the buttons, some contain the game environment. My different elements don't even live in the same place in the scene… the game is centered around the origin, while the GUI elements are off to the side.

This technique has worked really well for layering the GUI over the game, and I'm sure some of you are using the very same method in your games.

Now here is where I'm falling over… I'm currently placing code on my game camera to track clicking on objects or swiping your finger to pan around, and at the same time I have code on my GUI cameras to track clicks on buttons, etc. Since both pieces of code run independently of each other based on their own camera, sometimes they both get triggered, even though I want the button to take priority over clicks in the scene.

I thought about making a touch manager that each script would poll for its touch states; that way I could prioritize the scripts. But before I go down this road… am I missing something? What method are you using to manage your touch states?
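Something like this is what I had in mind. Just a rough sketch, and the class and method names are made up:

```csharp
using UnityEngine;

// Rough idea: a central place that scripts poll instead of reading Input directly.
// Higher-priority scripts (GUI) ask first and can claim a touch, so lower-priority
// scripts (the game camera) leave it alone.
public class TouchManager : MonoBehaviour
{
    public static TouchManager Instance { get; private set; }

    private bool[] claimed = new bool[0];

    void Awake()
    {
        Instance = this;
    }

    void Update()
    {
        // Reset claims each frame for the current set of touches.
        claimed = new bool[Input.touchCount];
    }

    // A GUI script calls this to take ownership of a touch it handled.
    public void ClaimTouch(int index)
    {
        if (index >= 0 && index < claimed.Length)
            claimed[index] = true;
    }

    // The game camera script only reacts to touches nobody else claimed.
    public bool IsTouchFree(int index)
    {
        return index >= 0 && index < claimed.Length && !claimed[index];
    }
}
```

The catch is that the GUI scripts would need to run before the game camera script each frame (Script Execution Order) so the claims are already in place, which is partly why I'm wondering if there's a cleaner way.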

Thanks in advance

-K

I am doing kind of the same thing, with fewer cameras. I am using a few strategies:
  • During swipe handling in the main view, disable some other scripts so they don't process the touches.
  • Check the touch phase; it helps work out the logic.
  • For GUITexture buttons, write a method like isTouchOverButton() which can be checked from another script (rough sketch below).
That works for me, but if it gets too complex I guess you would need some kind of touch manager and dispatcher class!
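For example, a rough sketch of the isTouchOverButton() idea, assuming the button has a GUITexture (the names and setup are just how I happen to do it):

```csharp
using UnityEngine;

// Attach to a GUITexture button. Other scripts can call IsTouchOverButton()
// before deciding to handle a touch themselves.
public class ButtonTouchCheck : MonoBehaviour
{
    private GUITexture buttonTexture;

    void Awake()
    {
        buttonTexture = GetComponent<GUITexture>();
    }

    // Returns true if any current touch (or a mouse click, for editor testing)
    // is over this button's screen rect.
    public bool IsTouchOverButton()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            if (buttonTexture.HitTest(Input.GetTouch(i).position))
                return true;
        }
        // Fallback so it also works with the mouse in the editor.
        return Input.GetMouseButton(0) && buttonTexture.HitTest(Input.mousePosition);
    }
}
```

Then the swipe/pan script on the main camera checks that before it starts handling the touch.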

Say you have 3 cameras; then you are using 3 different depths for them. So I think you can do something like this (rough code sketch at the end of this post):

  • Start with the highest-depth camera [the one with the GUI layer].
  • Find the intersection for each camera at the specific touch position.
  • As soon as one returns true, SendMessage to that GUI component as OnMouseOver(), or Up or Down, whatever you need, and stop.
  • The camera will intersect the higher-depth GUI first [Z].

In that way we can ensure:

  • Each GUI element will trigger only once.
  • A generic way to find the touched GUI element across all active cameras.
  • The touch will activate the higher-Z GUI first.
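A rough sketch of that idea (the message name and the use of Camera.allCameras are just examples; you might keep your own ordered list of cameras instead):

```csharp
using System.Linq;
using UnityEngine;

// Rough sketch: dispatch a touch to the topmost object under it, checking
// cameras from highest depth (GUI) down to lowest (game).
public class TouchDispatcher : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            if (touch.phase == TouchPhase.Began)
                Dispatch(touch.position);
        }
    }

    void Dispatch(Vector2 screenPosition)
    {
        // Highest-depth camera first, so the GUI layers win over the game layer.
        var cameras = Camera.allCameras.OrderByDescending(c => c.depth);

        foreach (Camera cam in cameras)
        {
            Ray ray = cam.ScreenPointToRay(screenPosition);
            RaycastHit hit;

            // Only test the layers this camera actually renders.
            if (Physics.Raycast(ray, out hit, Mathf.Infinity, cam.cullingMask))
            {
                // Tell the hit object it was touched; use whatever message you need.
                hit.collider.SendMessage("OnMouseDown", SendMessageOptions.DontRequireReceiver);
                return; // first hit wins, so each element triggers only once
            }
        }
    }
}
```

Because the loop returns on the first hit, a GUI element in front always wins over the game scene behind it.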