Well, it sounds like you're just talking about a normal button. If you set it up in your UI and it works with a mouse click, then it will also work with an Android touch/tap.
Frankly, I tried using that tutorial’s method, but I don’t understand how to use buttons under the canvas. Their position is in pixels, and I’m not sure how to easily place them symmetrically in the editor against objects that aren’t under the canvas and show in world units in the editor. I also don’t understand how to use that “on click” component the canvas button has.
I suppose you don’t have to put a clickable button inside the canvas and could program one outside of it, yes?
I already have a class that controls my game’s shooting with a mouse click. What I want is to have it work with touch (with the ability to charge your shot if you hold your finger down longer), and to restrict the touch-to-shoot area to the part of the screen where the button is.
I think it shouldn’t be difficult - I read you can use Vector2.Distance from the center of an object to effectively confine something to a circle’s area. What I don’t get about touch input is that the documentation shows you need to tell it which touch index (in the sequence, when you have several) to check. My game should use two inputs, one for aiming and one for shooting, but the order of the touches shouldn’t matter - I want it to react when you press the shoot button or the aiming area, whether or not another finger is already elsewhere on the screen.
EDIT: Perhaps I can simply use something like
if (IsInsideCircle(Input.GetTouch(0).position) || IsInsideCircle(Input.GetTouch(1).position))
(where IsInsideCircle is a function that finds whether the touch is placed inside the circle) and then, if one of them is true, run the loading timer?
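A minimal sketch of that idea, looping over all active touches so the finger order doesn’t matter (the button center, radius, and the IsInsideCircle helper are assumed names, and the Fire call is a placeholder for your own shoot method):

```csharp
using UnityEngine;

public class ShootButton : MonoBehaviour
{
    // Assumed values: center of the on-screen button and its radius, in pixels.
    public Vector2 buttonCenter = new Vector2(900f, 150f);
    public float buttonRadius = 100f;

    private float chargeTime;

    void Update()
    {
        bool held = false;
        // Check every active touch, so it works no matter which finger landed first.
        for (int i = 0; i < Input.touchCount; i++)
        {
            if (IsInsideCircle(Input.GetTouch(i).position))
            {
                held = true;
                break;
            }
        }

        if (held)
        {
            chargeTime += Time.deltaTime;  // load the shot while a finger stays down
        }
        else if (chargeTime > 0f)
        {
            // Finger released: fire with the accumulated charge, then reset.
            // Fire(chargeTime);  // placeholder for your existing shoot method
            chargeTime = 0f;
        }
    }

    bool IsInsideCircle(Vector2 screenPos)
    {
        // Vector2.Distance confines the clickable area to a circle, as you described.
        return Vector2.Distance(screenPos, buttonCenter) <= buttonRadius;
    }
}
```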
“I suppose you don’t have to put a clickable button inside the canvas and could program one outside of it, yes?”
UI needs to be inside a canvas object in order to work. It can be a few layers/child objects down, so you can still organize things, but eventually there has to be a parent canvas. So for your game, are you trying to click on objects in the scene? Like you click on an object and that becomes your character’s target?
The UI canvas has two main settings that I commonly change.
Canvas Scaler - Scale with screen size. X: 1920 Y: 1080
Canvas - Render Mode - Screen Space - Camera, or World Space if I want the UI to be in the world. Think enemy health bar above their heads.
As for the touch input, you should be able to loop through it and use touch.position to figure out where on the screen the touch came from. I believe you can then use that information to shoot a raycast into the game world and see what it hits.
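A short sketch of that loop, assuming a stationary main camera and 3D colliders (for a 2D game you’d swap in Physics2D.GetRayIntersection instead of Physics.Raycast):

```csharp
using UnityEngine;

public class TouchRaycaster : MonoBehaviour
{
    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            // Only react the moment the finger lands.
            if (touch.phase != TouchPhase.Began) continue;

            // Convert the screen-space touch position into a ray through the world.
            Ray ray = Camera.main.ScreenPointToRay(touch.position);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                Debug.Log("Touched: " + hit.collider.name);
            }
        }
    }
}
```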
If it’s a button that isn’t tied to the game world, like an open-inventory button that’s overlaid onto the screen, I would still recommend the normal UI.
As for OnClick on the button, you can hit the + icon and then assign an object. Then you can pick a script + public function to run when the click is triggered. So if you have a public Shoot function, you can hook it up that way.
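The same wiring can also be done from code, which is sometimes easier to keep track of. A sketch, where Shooter and its public Shoot method stand in for your existing shooting class:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ShootButtonHookup : MonoBehaviour
{
    public Button shootButton;   // assign the UI Button in the Inspector
    public Shooter shooter;      // hypothetical script with a public Shoot() method

    void Start()
    {
        // Equivalent of adding an entry to the button's OnClick list by hand.
        shootButton.onClick.AddListener(shooter.Shoot);
    }
}
```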
Also, use Debug.Log calls to see if your input/code is actually running.
Hope these nuggets of information get you going in the right direction.
Thanks for the reply, though I probably don’t completely understand yet.
What is the definition of UI? Can’t I define a clickable area outside of the canvas object that would trigger an action already written in one of my classes, such as counting the seconds the gun is being loaded (finger held down)?
For the enemy health bar I did it without the UI canvas, I simply created two overlapping planes, one red and under it a black one, and the red (as a child of an empty parent object to the side of it) scales down according to the HP status of the enemy. Pretty simple.
And it seems that no matter the setting I use, the position it shows for canvas objects is in pixels (measured from its center, so if the reference size is 1080×1920, x: 540 is all the way to the right).
UI = User Interface
Why do you need to define a clickable area outside of the canvas? The canvas should cover your whole screen.
As for holding it down, yeah, OnClick only fires when the user first presses the button. For that you’ll want to use a button with the Event Trigger component. See how it’s set up in the picture here.
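Instead of the Event Trigger component in the Inspector, the same press/release events can be caught in code with the pointer handler interfaces. A sketch of timing a charged shot this way (the Fire call is a placeholder for your own method):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Attach to the UI Button. Implements the same press/release events the
// Event Trigger component exposes, but directly in a script.
public class ChargeButton : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    private float pressedAt = -1f;

    public void OnPointerDown(PointerEventData eventData)
    {
        pressedAt = Time.time;   // start timing the charge
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        if (pressedAt < 0f) return;
        float heldFor = Time.time - pressedAt;
        pressedAt = -1f;
        Debug.Log("Charged for " + heldFor + " seconds");
        // Fire(heldFor);  // placeholder for your shoot method
    }
}
```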
Ah yeah that health bar works.
With your canvas object, make sure the anchors are set up correctly. UI should always use the RectTransform component, which is set up automatically when you create a new UI object under the canvas. If you drag something in from the scene, it won’t automatically convert to a RectTransform. The RectTransform gives you access to the anchors. Left-click the little box to the left and set it to stretch - it’s the lower-right one in the menu, and it looks like a blue four-way arrow. Hold down Shift and Alt before you click it; then that object should always be full screen. Then right-click it, make a child UI Button, and you should be able to set its anchor to lower left, if I remember correctly. Then X and Y should work the way you are thinking.
Also, it sounds like you need a crash course on the Unity UI system in general.
Made this ages ago, but it should still have some good info. It’s from a stream, so it’s not the most condensed form of information, but you’ll get to see everything as I do it.
I meant that the clickable button won’t be a UI object under the canvas, not that it’s outside of the camera view. The camera in my game is stationary, so I’m not sure what the difference is if I design a sprite as an icon and add a touch-related function to the screen area it covers. Another thing: the loading-power graphic I made involves a bar-shaped sprite with a color gradient matching the different shots you get for each loading period, masked by a second sprite that moves upwards with the loading time until you reach maximum power and the gradient is completely revealed.
From what I can see, UI objects don’t use the sprite renderer and thus don’t have that masking option, so I don’t know how I could recreate that if the power bar were a UI element.
But I’ll try your positioning method and check your tutorial later. Thanks.
I guess if your camera is stationary, then it wouldn’t matter if it was screen space or world space. Can I see a screenshot of your game?
UI does have a mask; google or youtube a bit before you make assumptions. Good habit to get into. I know people who coded up whole systems/features that were built into Unity without knowing they already existed.
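For this particular bottom-up reveal effect there’s also a simpler UI option than a Mask component: a UI Image can be set to type “Filled” and driven by its fillAmount property. A sketch, where maxChargeTime is an assumed tuning value:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the gradient power bar as UI: set the Image's type to "Filled"
// (Fill Method: Vertical, Fill Origin: Bottom) in the Inspector, then drive
// fillAmount from the charge timer.
public class PowerBar : MonoBehaviour
{
    public Image gradientImage;      // the gradient sprite as a UI Image
    public float maxChargeTime = 2f; // assumed: seconds to reach full power

    public void SetCharge(float chargeSeconds)
    {
        // 0 = fully hidden, 1 = gradient completely revealed.
        gradientImage.fillAmount = Mathf.Clamp01(chargeSeconds / maxChargeTime);
    }
}
```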
I meant: if the camera is stationary, why would it matter whether an element that could be considered UI is rendered as a regular sprite object?
Of course there will be a source image - .png, .bmp, etc. But the engine will treat the source image differently depending on how it’s set up. So if you set it up as a sprite, then that image is in the world. When you move the camera away it gets smaller; when you get closer it gets bigger.
If you set it up as a UI image on the screen overlay or in camera space, it will scale based on screen size, and you are basically working in percentages because of the reference resolution, with anchors so that the UI scales correctly across different screen resolutions and ratios.
So would it matter? Yes, because they have different functionality and different ways to interact with them. So I’ll revise my opinion: earlier I said “I guess it wouldn’t matter” for that specific case, but once you start dealing with different resolutions and screen ratios, things would be off.
Well, my game is 2D, and in this specific case the camera doesn’t move away or anything. My idea for making it work across resolutions (in this case: have the background fill the entire screen if the phone has a taller resolution, and have the enemies spawn further up and adjust their speed) was this: the elements lower on the screen (the player and the shooting icons) would be placed away from the screen’s edges according to a ratio of the width in pixels (to my understanding, if I design my game vertically I can tell Unity to have elements scale with the width when there are more or fewer pixels). If there are more vertical pixels, the placement of the enemies would involve translating the y = 0 pixel row (that should be the uppermost row, yes?) to a world-space value and adding to that value so they’re situated just above the screen.
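One note on that last point: in Unity’s screen coordinates y = 0 is the bottom row, not the top; the top edge is at Screen.height. A sketch of converting the top of the screen to a world-space spawn height, where margin is an assumed offset in world units:

```csharp
using UnityEngine;

// Sketch: find the world-space y just above the top edge of the screen so
// enemies spawn off-screen regardless of the phone's resolution.
public class EnemySpawner : MonoBehaviour
{
    public float margin = 1f;  // assumed extra offset in world units

    public float TopSpawnY()
    {
        // Screen y runs from 0 at the bottom to Screen.height at the top.
        Vector3 topEdge = Camera.main.ScreenToWorldPoint(
            new Vector3(Screen.width / 2f, Screen.height, 0f));
        return topEdge.y + margin;
    }
}
```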
When I run a game in the Unity editor, does it automatically register mouse clicks as taps? I tried writing a tap script, possibly incorrectly, but either way using the mouse doesn’t do anything.
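As far as I know, the old Input.GetTouch API does not report mouse clicks in the editor (Input.touchCount stays 0), so a common workaround is to fall back to the mouse there. A sketch, with HandleTap standing in for whatever your tap script does:

```csharp
using UnityEngine;

public class TapInput : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0)
        {
            // Real touch input on a device.
            HandleTap(Input.GetTouch(0).position);
        }
        else if (Input.GetMouseButtonDown(0))
        {
            // Editor/desktop fallback: treat a left click as a tap.
            HandleTap((Vector2)Input.mousePosition);
        }
    }

    void HandleTap(Vector2 screenPos)
    {
        Debug.Log("Tap at " + screenPos);
    }
}
```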
I tried running it on my Xiaomi phone but it doesn’t work for some reason. I installed the packages they tell you to install and enabled USB debugging on my phone but it still doesn’t detect it.