Here is my screen, and I plan to divide it dynamically into different divisions; in this case, 4.
Now, how can I detect touches in these 4 regions and run a different piece of code for each region?
I guessed that I should use a raycast to identify the object, but here my whole screen acts as the input, not an object under the raycast.
So presumably I have to check with my camera, which is orthographic, and after getting its size divide my screen into 4 equal regions.
How am I supposed to do that?
Define the areas of the screen in a percentage of screen space as Rects, then check if each Rect contains the touch position.
Scripting references: Rect, Rect.Contains, Input.touchCount, Input.GetTouch, Touch.phase.
Example (C#):
using UnityEngine;

public class TouchZones : MonoBehaviour
{
    // zones defined as a percentage of screen space (x, y, width, height in 0..1)
    public Rect zoneRed = new Rect( 0f, 0f, 0.2f, 1f );
    public Rect zoneYellow = new Rect( 0.2f, 0f, 0.3f, 1f );
    public Rect zoneOrange = new Rect( 0.5f, 0f, 0.3f, 1f );
    public Rect zoneBlue = new Rect( 0.8f, 0f, 0.2f, 1f );

    // calculated rects in screen pixel values
    private Rect rectRed;
    private Rect rectYellow;
    private Rect rectOrange;
    private Rect rectBlue;

    void Start()
    {
        float screenWidth = (float)Screen.width;
        float screenHeight = (float)Screen.height;

        rectRed = new Rect( zoneRed.x * screenWidth, zoneRed.y * screenHeight, zoneRed.width * screenWidth, zoneRed.height * screenHeight );
        rectYellow = new Rect( zoneYellow.x * screenWidth, zoneYellow.y * screenHeight, zoneYellow.width * screenWidth, zoneYellow.height * screenHeight );
        rectOrange = new Rect( zoneOrange.x * screenWidth, zoneOrange.y * screenHeight, zoneOrange.width * screenWidth, zoneOrange.height * screenHeight );
        rectBlue = new Rect( zoneBlue.x * screenWidth, zoneBlue.y * screenHeight, zoneBlue.width * screenWidth, zoneBlue.height * screenHeight );
    }

    void Update()
    {
        for ( int i = 0; i < Input.touchCount; i++ )
        {
            Touch touch = Input.GetTouch( i );
            if ( touch.phase == TouchPhase.Began )
                OnTouchBegan( touch.position );
        }
    }

    void OnTouchBegan( Vector2 pos )
    {
        // check each rect to see if it contains the touch position
        if ( rectRed.Contains( pos ) )
        {
            Debug.Log( "RED was touched" );
            // do stuff for RED; no need to check further rects
            return;
        }
        if ( rectYellow.Contains( pos ) )
        {
            Debug.Log( "YELLOW was touched" );
            // do stuff for YELLOW
            return;
        }
        if ( rectOrange.Contains( pos ) )
        {
            Debug.Log( "ORANGE was touched" );
            // do stuff for ORANGE
            return;
        }
        if ( rectBlue.Contains( pos ) )
        {
            Debug.Log( "BLUE was touched" );
            // do stuff for BLUE
            return;
        }
    }
}
This seems like a logic problem to me more than a coding one, so I will answer in that manner.
Why don't you split those divisions into four different game objects with colliders and name them differently? Then you just have to raycast and check which game object you hit. This way the sprite does not need to be cut in four.
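A minimal sketch of that idea, assuming you have placed four invisible colliders in front of the orthographic camera; the zone names ("RedZone", etc.) and the camera field are placeholders, not anything from the original post:
using UnityEngine;

public class ZoneRaycaster : MonoBehaviour
{
    public Camera cam; // assign the orthographic camera in the Inspector

    void Update()
    {
        for ( int i = 0; i < Input.touchCount; i++ )
        {
            Touch touch = Input.GetTouch( i );
            if ( touch.phase != TouchPhase.Began )
                continue;

            // cast a ray from the touch position into the scene
            Ray ray = cam.ScreenPointToRay( touch.position );
            RaycastHit hit;
            if ( Physics.Raycast( ray, out hit ) )
            {
                // the collider names below are placeholders for whatever you name your zone objects
                switch ( hit.collider.gameObject.name )
                {
                    case "RedZone":    /* do stuff for RED */    break;
                    case "YellowZone": /* do stuff for YELLOW */ break;
                    case "OrangeZone": /* do stuff for ORANGE */ break;
                    case "BlueZone":   /* do stuff for BLUE */   break;
                }
            }
        }
    }
}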
Alternatively, you can give each region of the image a different alpha value and check in Unity, by raycasting and sampling the texture's alpha, whether it matches the value you set in the image. If it matches, run the logic you want in a switch statement.
I would prefer the first method because it is easier to set up.
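A rough sketch of the texture-alpha approach, assuming the image sits on a mesh with a MeshCollider (RaycastHit.textureCoord only works with MeshColliders) and its texture has Read/Write enabled in the import settings; the alpha thresholds are placeholders for whatever values you paint into the image:
using UnityEngine;

public class AlphaZoneDetector : MonoBehaviour
{
    public Camera cam; // assign the orthographic camera in the Inspector

    void Update()
    {
        for ( int i = 0; i < Input.touchCount; i++ )
        {
            Touch touch = Input.GetTouch( i );
            if ( touch.phase != TouchPhase.Began )
                continue;

            RaycastHit hit;
            if ( !Physics.Raycast( cam.ScreenPointToRay( touch.position ), out hit ) )
                continue;

            Renderer rend = hit.collider.GetComponent<Renderer>();
            Texture2D tex = rend != null ? rend.material.mainTexture as Texture2D : null;
            if ( tex == null )
                continue;

            // sample the alpha at the hit point's UV coordinates
            float alpha = tex.GetPixelBilinear( hit.textureCoord.x, hit.textureCoord.y ).a;

            // placeholder thresholds; match them to the alpha values painted into the image
            if      ( alpha < 0.25f ) { /* do stuff for region 1 */ }
            else if ( alpha < 0.5f )  { /* do stuff for region 2 */ }
            else if ( alpha < 0.75f ) { /* do stuff for region 3 */ }
            else                      { /* do stuff for region 4 */ }
        }
    }
}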