Just a simple question…
Can something like this work?
void OnGUI()
{
    if (_touchi)
    {
        if (GUI.Button(new Rect(Screen.width * .5f, Screen.height * .5f, Screen.width * .1f, Screen.height * .1f), "Hello World"))
        {
            if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Stationary)
            {
                animator.SetInteger("Direction", 1);
                this.transform.Translate(Vector3.left * 7.0f * Time.deltaTime);
            }
        }
    }
}
What are you trying to achieve?
Whether the code is “correct” and whether it does what you want are often two very different things.
That is a little piece, inside a bigger script that controls a 2D spaceship…
Right now, I have it set up with the keyboard, so it moves left and right and fires…
I want, if “_touchi” is true (the player selected the touch control), to have 3 buttons (I am testing with one first) that are going to be arrows and a circle button to control the ship.
Edit: I misread, let me try that again…
I don’t think that you need to care about the touch phase, unless you specifically want to ignore input for a moving finger (which I suspect would make controls somewhat unreliable).
Also, rather than assuming that touch 0 is the one that matters, I’d capture the finger ID when the button is initially pressed and find the corresponding touch each frame.
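Something like this, as a rough and untested sketch of the finger-ID idea (_activeFingerId and TryGetTrackedTouch are names I’m making up here, not anything from your script):
private int _activeFingerId = -1; // -1 means no finger is currently being tracked

// Store touch.fingerId in _activeFingerId at the moment you detect the initial press.
// Then, each frame, look that finger up again instead of assuming it's still touch 0:
bool TryGetTrackedTouch(out Touch tracked)
{
    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch touch = Input.GetTouch(i);
        if (touch.fingerId == _activeFingerId)
        {
            tracked = touch; // same finger, whatever index it happens to have this frame
            return true;
        }
    }

    tracked = default(Touch); // finger lifted, or the touch was dropped from the array
    return false;
}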
In any case, run the code and try it out. Tweak it based on what it actually does, rather than based on what random people on the internet can guess.
People around here are happy to help, but help is easier to give and more useful to you if you can ask concrete questions about specific problems.
Alright… let’s see:
I have my Spaceship… and, instead of the GUI, I have a GameObject (a left arrow) which, right now, is located in the bottom left corner of the screen (later I will make sure it is never out of bounds, regardless of the screen size)…
This GO has a script that is supposed to check whether it is being touched or not, setting a bool to true or false:
using UnityEngine;
using System.Collections;

public class TouchButtonTalk : MonoBehaviour
{
    public static bool _touchingLeftArrow;

    void Update()
    {
        // Only read touches when at least one exists - GetTouch(0) throws otherwise.
        if (Input.touchCount > 0)
        {
            if (Input.GetTouch(0).phase == TouchPhase.Stationary)
            {
                _touchingLeftArrow = true;
            }

            if (Input.GetTouch(0).phase == TouchPhase.Ended)
            {
                _touchingLeftArrow = false;
            }
        }
    }
}
And then there is the (let’s call it) “Master” script, which controls what the Spaceship does. After checking that the user wants the touch control, it instantiates the left arrow and calls another function, MovingWithTouchs(), that will handle the moving and shooting of the ship. Right now I have:
void MovingWithTouchs()
{
    if (TouchButtonTalk._touchingLeftArrow)
    {
        animator.SetInteger("Direction", 1); // for the animation of the ship
        this.transform.Translate(Vector3.left * 7.0f * Time.deltaTime); // for the movement
    }
}
My question is, since I do not have a touch device to try it out on (I do have a touch device, but I don’t think I can test on it, since it is a BBQ10)… will this code properly make the ship move while the left arrow is pressed, and stop moving / animating when it is released?
Thanks!!!
Ahh, right, so you’re not so much asking how to code it as you’re asking whether what you’ve coded is touch compatible?
I could be missing something, but from what I can see I think your best bet is going to be trying it out on a device and working through it. For instance, in TouchButtonTalk you’re checking if there are touches (Input.touchCount > 0) and if the first touch is stationary (touch 0’s phase), but you’re not doing anything to detect if the touch is actually on the arrow.
You’re also not handling all of the potential touch phases (TouchPhase.Canceled, for instance), or the possibility of a touch being dropped from the array without going through an end phase (shouldn’t happen, but you should still deal with it).
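As a rough, untested sketch of what I mean for that script (note it still doesn’t check whether the touch is actually on the arrow - that’s a separate step):
void Update()
{
    bool sawTouch = false;

    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch touch = Input.GetTouch(i);
        sawTouch = true;

        switch (touch.phase)
        {
            case TouchPhase.Began:
            case TouchPhase.Moved:
            case TouchPhase.Stationary:
                _touchingLeftArrow = true;
                break;
            case TouchPhase.Ended:
            case TouchPhase.Canceled:
                _touchingLeftArrow = false;
                break;
        }
    }

    // If the touch vanished from the array without ever reporting Ended, clear the flag anyway.
    if (!sawTouch)
        _touchingLeftArrow = false;
}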
This stuff is all easy enough once you’ve worked with it a bit, but it’ll be difficult to get right without a device to test on and play with.
Know anyone who’s got an Android you could borrow?
I think I can get an Android device… is there any “guide” on how to get it working with the Unity Remote (or is it just a matter of installing it and hooking any Android device up to the editor)?
(And, yeah… the first script (with the public static bool) is a component of the arrow game object… I was thinking of doing something similar for the other arrow as well.)
If you don’t want to actually deploy, from memory Remote is pretty easy - make sure both devices are on the same network with open access to one another and let 'em do their thing. When I was using it, though, it wasn’t especially reliable - it’d miss some touch events, and a busy wifi network meant choppy performance at best. Still a useful tool, but there’s some stuff I’d still want to deploy for.
There’s a guide in the docs to deploy to an Android device - there’s a fair bit of downloading and installing you need to do before plugging the device in, so I’d do that before I actually got the device if I were to borrow one.
Regarding the input, just being a component of a given GameObject doesn’t mean it knows if a touch collides with that object. It’s a check you have to perform yourself.
Also, though it’s not really related, I can’t see any reason for the use of static variables here. Unless you have a clear and specific reason to make something static, don’t.
Alright… I did not know about the statics… may I ask why? (it is easier to write it that way instead of FindObjectOfType()._touchingLeftArrow)
While we’re at it, let me ask you something else… in a GUITextField… what code should I use to make a touch device display the virtual keyboard, so the users can type in their names? (or is it automatic?)
It’s a check you have to perform yourself >>>> There you got me… how do I perform a collider check from a touch input?
I believe that selecting a GUI.TextField prompts Unity to open the keyboard itself. Haven’t worked with OnGUI on mobile for ages, though, so I could be wrong.
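If it turns out it doesn’t open on its own, TouchScreenKeyboard lets you open it yourself - roughly like this (untested sketch, the field and method names are mine):
private TouchScreenKeyboard _keyboard;
private string _playerName = "";

void OpenNameKeyboard()
{
    // Opens the on-screen keyboard pre-filled with the current name.
    _keyboard = TouchScreenKeyboard.Open(_playerName, TouchScreenKeyboardType.Default);
}

void Update()
{
    // While the keyboard is open, mirror whatever the player has typed so far.
    if (_keyboard != null)
        _playerName = _keyboard.text;
}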
As for statics, they introduce “global scope” which is typically considered a bad practice. They also make your code far less flexible in many ways, because different instances of the script all share one value. So instead of being able to make, say, a GenericTouchButton and query each individual button to see if it’s pressed, you’re going to have to make a LeftTouchButton, a RightTouchButton, a DownTouchButton and an UpTouchButton, all of which will be the same except for the different names so that your static variables don’t collide. Which is far more work than a GetComponent<…>, anyway.
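To make that concrete, roughly what I have in mind (untested; the names are just placeholders):
using UnityEngine;

// One component reused for every button. Each instance keeps its own flag,
// so there's nothing static and nothing to collide.
public class GenericTouchButton : MonoBehaviour
{
    private bool _isPressed;

    public bool IsPressed { get { return _isPressed; } }

    void Update()
    {
        // However you end up detecting the touch (Rect check, raycast, ...),
        // write the result into this instance's own flag.
        _isPressed = CheckIfTouched();
    }

    bool CheckIfTouched()
    {
        return false; // placeholder - see the touch-position checks below
    }
}
The master script then just holds a reference to each button (or grabs it with GetComponent) and reads something like leftArrow.IsPressed, rightArrow.IsPressed and so on.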
You have to see if the position of the touch is inside the button on the screen. If it’s a GameObject, attach a collider and do a raycast into it. If it’s a thing drawn in the GUI, just check the position of the touch against the Rect used to draw the object.
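For example, something along these lines (untested; the helper names are mine):
// GUI case: check the touch against the Rect you drew the button with.
// GUI rects measure y from the top of the screen, touch positions from the bottom.
bool TouchIsInsideRect(Touch touch, Rect buttonRect)
{
    Vector2 guiPosition = new Vector2(touch.position.x, Screen.height - touch.position.y);
    return buttonRect.Contains(guiPosition);
}

// GameObject case: give the arrow a collider and raycast from the touch position.
bool TouchHitsObject(Touch touch, GameObject target)
{
    Ray ray = Camera.main.ScreenPointToRay(touch.position);
    RaycastHit hit;

    if (Physics.Raycast(ray, out hit))
        return hit.collider.gameObject == target;

    return false; // for 2D colliders you'd use Physics2D.GetRayIntersection(ray) instead
}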
Considering that you’re asking this type of question, I suggest checking out the Learn section. I know that’s way less cool than crackin’ away at your own game, but getting some of the fundamentals down sooner rather than later will be a huge boon in the long run.
Great!!! Thanks a lot… Yes, I am a beginner with Unity and C#… kind of in a hurry, since I have to have this game set up for the 12th (because of a “class” I took online)… but they have just explained the basics, like
if (Input.GetTouch(0).position.x > Screen.width / 2)
{
    this.transform.Translate(Vector3.right * 7.0f * Time.deltaTime);
}
But since I don’t think that is the way I would like my game to be made, I am trying to go a step further.