Hi everyone,
I wanted to start a little discussion about gestures in XR, as I've started getting into them over the last few weeks.
I've seen a few assets on the Asset Store that were focused on this, but they weren't quite what I was looking for, at least from what I tried.
A bit of backstory:
I recently got into XR development after the place I work at got an HTC Vive along with an old, half-finished demo project someone had made in Unity 5.6.x, and I was asked to piece all the parts together.
One part of the project was recognizing gestures the user makes with the controllers in hand and then triggering an action from them. The demo used VR Infinite Gesture to show that gestures could be used, but it was basically just the VR Infinite Gesture example project rather than an actual use of it…
Since this was an old Unity version, I thought about trying newer options, and I found AirSig's 3D Motion Gesture and Signature Recognition (for HTC Vive) asset, which also builds on the SteamVR plugin for its tracking.
Both this and VR Infinite Gesture require holding a button to trigger the gesture recognition, since you need to hold it for the full gesture. For my current project that's good enough, but I was looking for pure gesture recognition, no buttons whatsoever.
For example, take the VR Infinite Gesture 2 example from their demo video, but without pressing any button to start the gesture; just the gesture itself.
Also, with the Valve Knuckles controllers and the new VRfree gloves by Sensoryx, we can now capture more complex hand gestures. For example, making a shape with your hands could activate an action, like forming a heart symbol with your hands to spawn a heart balloon or something like that. A rough sketch of how that kind of pose matching could look is below.
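Something like this, as a minimal sketch: match a static pose by comparing per-finger curl values (0 = straight, 1 = fully bent) against a stored template. The heartPose values and GetFingerCurls() are placeholders I made up for illustration, not any real Knuckles or VRfree API; a real version would read SteamVR Skeletal Input or the glove SDK.

```csharp
using UnityEngine;

// Sketch: detect a static hand pose by comparing per-finger curl values
// against a stored template. GetFingerCurls() is a placeholder; a real
// implementation would read SteamVR Skeletal Input or the glove SDK.
public class HandPoseMatcher : MonoBehaviour
{
    // Template pose: curl per finger (thumb, index, middle, ring, pinky),
    // 0 = fully extended, 1 = fully curled. Values are illustrative only.
    static readonly float[] heartPose = { 0.2f, 0.1f, 0.8f, 0.9f, 0.9f };

    const float tolerance = 0.15f; // max average curl error to still count as a match

    void Update()
    {
        if (AverageError(GetFingerCurls(), heartPose) < tolerance)
            Debug.Log("Heart pose detected - spawn the balloon here");
    }

    static float AverageError(float[] a, float[] b)
    {
        float sum = 0f;
        for (int i = 0; i < a.Length; i++)
            sum += Mathf.Abs(a[i] - b[i]);
        return sum / a.Length;
    }

    // Placeholder: replace with real skeletal/glove input.
    float[] GetFingerCurls() { return new float[] { 0f, 0f, 0f, 0f, 0f }; }
}
```

(For a two-handed shape like a heart you'd probably also check the relative positions of the two hands, but the per-finger comparison is the core idea.)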
How would you go about starting something like this?
Disclaimer: I don't expect anyone to build this for me; it's more about brainstorming the right way to think when working on something like this.
I kind of liked how VR Infinite Gesture did things, starting and recording an action. I don't even mind a button being used for the recording, just so I can capture the gestures. But at runtime, how do I detect that a gesture is being made? (A sketch of the recording side is below.)
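For the recording side, here's a minimal sketch under some assumptions: IsButtonHeld() is a placeholder for whatever input system you use, and the recorded path is resampled to a fixed point count (the same trick the $1 recognizer family uses) so every template has the same length regardless of how fast the user moved.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: record a gesture path while a button is held, then resample it
// to a fixed number of evenly spaced points so templates are comparable
// regardless of movement speed. IsButtonHeld() is a placeholder input.
public class GestureRecorder : MonoBehaviour
{
    public Transform controller;   // the tracked controller transform
    public int templateSize = 32;  // points per stored template

    readonly List<Vector3> rawPath = new List<Vector3>();
    bool wasHeld;

    void Update()
    {
        bool held = IsButtonHeld();
        if (held)
            rawPath.Add(controller.position);
        else if (wasHeld && rawPath.Count > 1)
        {
            Vector3[] template = Resample(rawPath, templateSize);
            Debug.Log("Recorded a template with " + template.Length + " points");
            rawPath.Clear();
        }
        wasHeld = held;
    }

    // Respace the raw samples evenly along the path's total arc length.
    public static Vector3[] Resample(List<Vector3> path, int n)
    {
        var pts = new List<Vector3>(path); // copy so the caller's list is untouched
        float interval = PathLength(pts) / (n - 1);
        float accrued = 0f;
        var result = new List<Vector3> { pts[0] };

        for (int i = 1; i < pts.Count; i++)
        {
            float d = Vector3.Distance(pts[i - 1], pts[i]);
            if (accrued + d >= interval && d > 0f && result.Count < n)
            {
                Vector3 q = Vector3.Lerp(pts[i - 1], pts[i], (interval - accrued) / d);
                result.Add(q);
                pts.Insert(i, q); // q starts the next segment
                accrued = 0f;
            }
            else accrued += d;
        }
        while (result.Count < n) result.Add(pts[pts.Count - 1]);
        return result.ToArray();
    }

    static float PathLength(List<Vector3> pts)
    {
        float total = 0f;
        for (int i = 1; i < pts.Count; i++)
            total += Vector3.Distance(pts[i - 1], pts[i]);
        return total;
    }

    bool IsButtonHeld() { return Input.GetKey(KeyCode.Space); } // placeholder
}
```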
Do I keep track of the controller positions at all times and compare my whole gesture database against the latest position samples to see if there's a match? That sounds a bit heavy when you have a lot of gestures to work with. (One way to keep it manageable is sketched below.)
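The cost worry is fair, but it can be kept down: sample at a fixed modest rate instead of every frame, keep only a short sliding window of recent positions, normalize the window the same way the templates were normalized, and only then compare. This is a sketch of that idea, not a tuned solution; in practice you'd also want to resample the window by arc length like the templates (or use something like dynamic time warping) so speed differences don't break the match.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: continuous (button-free) gesture spotting with a sliding window.
// Samples the controller at a fixed rate into a short buffer, normalizes
// a copy of the window, and compares it against each stored template.
public class ContinuousGestureSpotter : MonoBehaviour
{
    public Transform controller;
    public float sampleRate = 30f;       // samples per second, not per frame
    public int windowSize = 32;          // must match the template point count
    public float matchThreshold = 0.1f;  // avg point distance after normalizing

    public List<Vector3[]> templates = new List<Vector3[]>(); // pre-normalized

    readonly Queue<Vector3> window = new Queue<Vector3>();
    float nextSampleTime;

    void Update()
    {
        if (Time.time < nextSampleTime) return;
        nextSampleTime = Time.time + 1f / sampleRate;

        window.Enqueue(controller.position);
        if (window.Count > windowSize) window.Dequeue();
        if (window.Count < windowSize) return;

        Vector3[] candidate = Normalize(window.ToArray());
        foreach (Vector3[] template in templates)
        {
            if (AverageDistance(candidate, template) < matchThreshold)
            {
                Debug.Log("Gesture matched - fire the action here");
                window.Clear(); // avoid re-triggering on the same motion
                break;
            }
        }
    }

    // Translate to the centroid and scale to unit size so matching is
    // position- and size-invariant (apply the same step to templates).
    public static Vector3[] Normalize(Vector3[] pts)
    {
        Vector3 centroid = Vector3.zero;
        foreach (Vector3 p in pts) centroid += p;
        centroid /= pts.Length;

        float maxDist = 1e-5f;
        foreach (Vector3 p in pts)
            maxDist = Mathf.Max(maxDist, Vector3.Distance(p, centroid));

        var result = new Vector3[pts.Length];
        for (int i = 0; i < pts.Length; i++)
            result[i] = (pts[i] - centroid) / maxDist;
        return result;
    }

    static float AverageDistance(Vector3[] a, Vector3[] b)
    {
        float sum = 0f;
        for (int i = 0; i < a.Length; i++)
            sum += Vector3.Distance(a[i], b[i]);
        return sum / a.Length;
    }
}
```

A cheap pre-check also helps here: store each template's path length and skip the full comparison when the window's path length is far outside that range, so most templates get rejected in a couple of operations.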
Should it have user training that adds examples to the gesture database, so recognition becomes more user-specific? But then the database gets heavier when you have more than one user. (One mitigation for that is sketched below.)
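One thing worth noting: the database growing with more users doesn't have to make matching heavier, because at runtime you only ever search the active user's templates. A minimal sketch of that kind of storage (the names are mine, not from any asset):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: per-user gesture storage. The database grows with each user,
// but matching only searches the active user's templates, so runtime
// cost scales with one user's gesture count, not the total.
public class GestureDatabase
{
    // user -> gesture name -> recorded samples (several per gesture = training)
    readonly Dictionary<string, Dictionary<string, List<Vector3[]>>> byUser =
        new Dictionary<string, Dictionary<string, List<Vector3[]>>>();

    public void AddTemplate(string user, string gestureName, Vector3[] points)
    {
        Dictionary<string, List<Vector3[]>> gestures;
        if (!byUser.TryGetValue(user, out gestures))
            byUser[user] = gestures = new Dictionary<string, List<Vector3[]>>();

        List<Vector3[]> samples;
        if (!gestures.TryGetValue(gestureName, out samples))
            gestures[gestureName] = samples = new List<Vector3[]>();

        samples.Add(points);
    }

    // Only the active user's templates become match candidates at runtime.
    public Dictionary<string, List<Vector3[]>> TemplatesFor(string user)
    {
        Dictionary<string, List<Vector3[]>> gestures;
        return byUser.TryGetValue(user, out gestures)
            ? gestures
            : new Dictionary<string, List<Vector3[]>>();
    }
}
```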
And as an added thinking point: could we use the new DOTS to make all this less heavy and get better results? (A rough sketch of the Jobs side is below.)
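The template comparisons are independent of each other, so they map naturally onto the C# Job System side of DOTS (this sketch doesn't touch ECS). A rough sketch under the assumption that all templates are flattened into one NativeArray:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Sketch: score every gesture template against the current window in
// parallel. Template i occupies [i * pointCount, (i + 1) * pointCount)
// in the flattened templates array; lower score = closer match.
[BurstCompile]
public struct ScoreTemplatesJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> window;    // normalized, pointCount long
    [ReadOnly] public NativeArray<float3> templates; // all templates, flattened
    public int pointCount;
    public NativeArray<float> scores;                // one score per template

    public void Execute(int templateIndex)
    {
        int offset = templateIndex * pointCount;
        float sum = 0f;
        for (int i = 0; i < pointCount; i++)
            sum += math.distance(window[i], templates[offset + i]);
        scores[templateIndex] = sum / pointCount;
    }
}
```

Scheduling would look something like `new ScoreTemplatesJob { ... }.Schedule(templateCount, 8).Complete()`, then you take the index of the lowest score. That said, whether it's worth it depends on scale: a few dozen per-point comparisons per frame is cheap even on the main thread, so I'd only reach for Jobs/Burst once the template count or point count actually gets big.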
Again, this is more of a brainstorming discussion about what the right way to go about it should be, so feel free to throw in all your takes on it; I'd love to hear them all. Sorry in advance for my English, and thanks for any input and ideas.