I’ve been looking through the documentation, a few Unity examples, and external examples like this one: http://technology.blurst.com/iphone-multi-touch/, and I gather that it isn’t possible to have more than five touches tracked on the screen, or to read the touched area itself, for example if I split the screen in two pieces with a finger.
Am I right? Or is there any way to determine the area of pressure?
The Unity manual states “The iPhone and iPod Touch devices are capable of tracking up to five fingers touching the screen simultaneously.” Whether more than that is strictly impossible I don’t know, but I assume it is at least not pleasant to do.
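For reference, here is a minimal C# sketch (written against the old Unity iPhone API with iPhoneInput/iPhoneTouch; later Unity versions renamed these to Input/Touch) that just logs whatever the device reports each frame — you will see the reported count never go above five:

```csharp
using UnityEngine;

// Logs every touch the device reports each frame. On the iPhone the
// reported count tops out at 5 no matter how many fingers are down.
public class TouchLogger : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < iPhoneInput.touchCount; i++)
        {
            iPhoneTouch touch = iPhoneInput.GetTouch(i);
            Debug.Log("finger " + touch.fingerId + " at " + touch.position);
        }
    }
}
```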
Well, I believe it’s possible to handle the finger touches even if you have more than five at the same time. What bothers me most is the fact that, to register a touch, the iPhone takes the group of pixels being touched together, and then, I’d bet, builds a square around it and finds its center.
My question is whether this behavior was created by the Unity guys or comes from the iPhone API itself. Either way, I would still like to know if it is possible to get at the first step of this, the raw set of pixels being pressed.
The iPhone does not give you any information about the touched area.
Also, as mentioned, the 5 touches are a hardware limit; that’s the max you can get, whether there is 1 player or 100 around the phone.
The iPhone will just not register more than 5 touches, so the 6th touch and beyond are not recognized at all. What’s worse, over the following touch updates that can have pretty ugly side effects, because touches are held alive across frames so the current position can be compared against the previous one to get the delta. If that bookkeeping gets messed up, you can forget about gestures and the like; it won’t be possible to use them at all.
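To illustrate what “held alive” means, here is a rough C# sketch (same old iPhoneInput API assumed) of that bookkeeping: positions are remembered per fingerId so the next update can compute a delta. Unity already exposes this as touch.deltaPosition; the sketch only shows why a dropped or reordered 6th touch can corrupt gesture code that depends on it.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Remembers each finger's last position by fingerId so the current
// position can be compared against it to get a per-frame delta.
// If a touch silently disappears (e.g. a 6th finger confused the
// hardware), its entry is dropped and any gesture built on it breaks.
public class TouchDeltaTracker : MonoBehaviour
{
    private Dictionary<int, Vector2> lastPositions = new Dictionary<int, Vector2>();

    void Update()
    {
        Dictionary<int, Vector2> current = new Dictionary<int, Vector2>();

        for (int i = 0; i < iPhoneInput.touchCount; i++)
        {
            iPhoneTouch touch = iPhoneInput.GetTouch(i);

            Vector2 last;
            if (lastPositions.TryGetValue(touch.fingerId, out last))
            {
                Vector2 delta = touch.position - last;
                Debug.Log("finger " + touch.fingerId + " moved " + delta);
            }
            current[touch.fingerId] = touch.position;
        }

        lastPositions = current; // fingers no longer reported are forgotten
    }
}
```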
The other thing is that 5 touches already mean 60%+ of the screen is hidden behind the hands (not just the touch points, the rest of the fingers are there too, never forget that), so the game would be totally unplayable.
Even 2-player games where each player uses more than 1, or at worst 2, touches are worthless, because there is no visible area left.
Once you get beyond two touches you run into another potential issue: lost touches. The screen senses touch positions along the x and y axes, so if you have three touches as follows (excuse the ASCII art):
X1
X2 X3
Where X1 and X2 are “close” along one axis and X2 and X3 are “close” along the other, you will lose one touch. It is most likely going to be X2, since X1 and X3 each still produce a unique reading, while X2’s coordinates are already covered by X1 on one axis and X3 on the other. So once you get beyond two touches you have potential dead zones for further touches.
5 touches is a limitation of the iPhone itself, and the 6th touch will actually reset the touch values (according to the iPhone docs).
Well CedarParkDad, that is exactly why I was thinking of drawing a square around X2 and X3 when they get too close, or around X1 and X2. The idea was to find an area that the two fingers would certainly be inside, even if the multi-touch believes there is only one touch.
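Something like this hypothetical sketch, assuming a hand-tuned closeThreshold (not anything the API provides): when two reported touches come very close, treat the rectangle spanning them as the area the fingers must be in.

```csharp
using UnityEngine;

// Hypothetical approach from the post above: when two touches get
// closer than some tuned threshold, build an axis-aligned box around
// both reported positions and treat that whole box as "touched".
public class TouchAreaGuess : MonoBehaviour
{
    public float closeThreshold = 80.0f; // pixels; a guess, tune per device

    void Update()
    {
        if (iPhoneInput.touchCount >= 2)
        {
            Vector2 a = iPhoneInput.GetTouch(0).position;
            Vector2 b = iPhoneInput.GetTouch(1).position;

            if (Vector2.Distance(a, b) < closeThreshold)
            {
                Rect area = Rect.MinMaxRect(
                    Mathf.Min(a.x, b.x), Mathf.Min(a.y, b.y),
                    Mathf.Max(a.x, b.x), Mathf.Max(a.y, b.y));
                Debug.Log("assume both fingers are somewhere inside " + area);
            }
        }
    }
}
```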
The Blurst article linked above says “Avoid iPhoneTouchPhase, which is unreliable (particularly with the remote)”.
Whaa? I have not found the touch phase to be unreliable. I feel that any meaningful kind of touch interaction is much easier to code using the touch phase (maybe impossible without it).
Perhaps Unity Remote is unreliable if your wireless network is short on bandwidth or something, but that would be a different issue.
Thoughts?
In my experience the touch phase is unreliable if you are using the mouselook scripts from the warehouse occlusion demo and there is a lot of multitouch action. It loses the touch ID when a finger slides off the screen, and it also gets confused about which touch it is tracking. The biggest problem, though, is that it doesn’t release the touch, so buttons will “stick” down and not reset.
I added some code to the Blurst page that helped me get my controller working a lot better… it’s there if anyone needs it.
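That code isn’t reproduced here, but the gist of one fix for sticking buttons looks roughly like this sketch (old iPhoneInput API assumed; button rectangle values are just examples): release on Ended/Canceled, and also release when the finger that was holding the button simply vanishes from the touch list.

```csharp
using UnityEngine;

// Sketch of a "sticking button" workaround: remember which fingerId is
// holding the button, release on Ended/Canceled, and ALSO release if
// that fingerId just disappears from the touch list (which is what
// happens when a finger slides off the edge of the screen).
public class UnstickableButton : MonoBehaviour
{
    public Rect buttonArea = new Rect(0, 0, 128, 128); // screen-space hit area (example values)
    private int holdingFinger = -1;

    public bool IsPressed { get { return holdingFinger != -1; } }

    void Update()
    {
        bool holderStillReported = false;

        for (int i = 0; i < iPhoneInput.touchCount; i++)
        {
            iPhoneTouch touch = iPhoneInput.GetTouch(i);

            if (holdingFinger == -1 &&
                touch.phase == iPhoneTouchPhase.Began &&
                buttonArea.Contains(touch.position))
            {
                holdingFinger = touch.fingerId;
            }

            if (touch.fingerId == holdingFinger)
            {
                holderStillReported = true;
                if (touch.phase == iPhoneTouchPhase.Ended ||
                    touch.phase == iPhoneTouchPhase.Canceled)
                {
                    holdingFinger = -1;
                }
            }
        }

        // The finger never reported an Ended phase but is gone: release anyway.
        if (holdingFinger != -1 && !holderStillReported)
            holdingFinger = -1;
    }
}
```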
My first-person controller script doesn’t use the phases either, and it works fine.
In my view, the phases are great if you use them for gestures, because that’s where they shine.
But for move/look type controls, the position delta of the touches is more than enough information; there’s no need to waste a phase check when you couldn’t care less about the phase.
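A minimal sketch of that phase-free style (the sensitivity value is just an assumption to tune): for a look control, the per-frame deltaPosition of the first touch is all you need.

```csharp
using UnityEngine;

// Phase-free look control: only the per-frame movement of the finger
// matters, and deltaPosition already provides it.
public class DragLook : MonoBehaviour
{
    public float sensitivity = 0.1f; // degrees per pixel of drag (tuning value)

    void Update()
    {
        if (iPhoneInput.touchCount > 0)
        {
            Vector2 delta = iPhoneInput.GetTouch(0).deltaPosition;
            // Drag right -> yaw right, drag up -> pitch up.
            transform.Rotate(-delta.y * sensitivity, delta.x * sensitivity, 0.0f);
        }
    }
}
```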
dreamora - I think you are right on the money. I was just playing the KOTR Jedi game demo on the iPhone earlier, and the phase would probably work well for that kind of system, i.e. one gesture at a time.