I'm hoping one of you smart people out there could help me out with a little problem I've been struggling with for the last couple of days.
My game/GUI is a composite of multiple cameras using culling masks and layers. Some portions of my game composite up to 5-6 cameras at the same time in order to get the proper layering of elements on screen: some of these layers contain the buttons, some contain the game environment. The different elements don't even live in the same place in the scene... the game is centered around the origin, while the GUI elements are off to the side.
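For reference, the stacking boils down to something like this. This is just an illustrative sketch; the layer names, depths, and clear flags here are placeholders, not my actual project settings:

```csharp
using UnityEngine;

// Illustrative version of the camera compositing described above.
public class CameraStackSetup : MonoBehaviour
{
    public Camera gameCamera;
    public Camera guiCamera;

    void Start()
    {
        // Game camera draws first and clears the screen.
        gameCamera.depth = 0;
        gameCamera.clearFlags = CameraClearFlags.Skybox;
        gameCamera.cullingMask = 1 << LayerMask.NameToLayer("Default");

        // GUI camera draws on top, clearing only the depth buffer so the
        // game render underneath stays visible.
        guiCamera.depth = 1;
        guiCamera.clearFlags = CameraClearFlags.Depth;
        guiCamera.cullingMask = 1 << LayerMask.NameToLayer("GUI");
    }
}
```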
This technique has worked really well for layering the GUI over the game, and I'm sure some of you are using the very same method in your games.
Now here is where I'm falling over. I currently have code on my game camera to track clicks on objects and finger swipes to pan around, and at the same time I have code on my GUI cameras to track clicks on buttons, etc. Since both pieces of code run independently, each raycasting from its own camera, sometimes they both get triggered, even though I want the button to take priority over clicks in the scene.
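For context, both pieces of code boil down to roughly this per-camera raycast. This is a simplified sketch with placeholder names, not my exact scripts:

```csharp
using UnityEngine;

// Rough shape of what each camera currently does on its own,
// attached to that camera. Nothing here knows about the other cameras,
// which is why two of these can fire on the same click.
public class PerCameraClickHandler : MonoBehaviour
{
    Camera cam;

    void Awake()
    {
        cam = GetComponent<Camera>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = cam.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            // Only test against the layers this camera actually renders.
            if (Physics.Raycast(ray, out hit, Mathf.Infinity, cam.cullingMask))
            {
                hit.collider.SendMessage("OnHit", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```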
I thought about making a touch manager that each script would poll for its touch state, so that I could prioritize the scripts. But before I go down this road... am I missing something? What method are you using to manage your touch states?
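To be concrete, here's the rough shape I have in mind for the manager. It's just a sketch with placeholder names (TouchManager, camerasByPriority), not something I've tested:

```csharp
using UnityEngine;

// Sketch of the touch manager idea: cameras are listed in priority order
// (GUI cameras first, game camera last). Each frame the manager raycasts
// them in order and remembers which camera "claimed" the click, so the
// other scripts poll this instead of raycasting on their own.
public class TouchManager : MonoBehaviour
{
    public Camera[] camerasByPriority;   // assign in the Inspector, GUI cameras first

    public Camera ClaimingCamera { get; private set; }
    public RaycastHit LastHit;

    void Update()
    {
        ClaimingCamera = null;

        if (!Input.GetMouseButtonDown(0))
            return;

        foreach (Camera cam in camerasByPriority)
        {
            Ray ray = cam.ScreenPointToRay(Input.mousePosition);
            // Restrict each test to the layers that camera renders.
            if (Physics.Raycast(ray, out LastHit, Mathf.Infinity, cam.cullingMask))
            {
                ClaimingCamera = cam;
                break;   // first hit wins; lower-priority cameras never see the click
            }
        }
    }
}
```

The game-camera script would then only pan or select when ClaimingCamera is the game camera, and the button script would only fire when it's one of the GUI cameras. The manager would have to run before the scripts that poll it (Script Execution Order), and on device the same idea would use Input.touchCount / Input.GetTouch(0) instead of the mouse calls.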
Thanks in advance
-K