BodyLanguageVR - Use gestures to trigger input in VR.

Asset Store Page

Finding traditional forms of input unintuitive in VR? Sick of reaching for a keyboard, or of gamepad button presses breaking your players' immersion, when all you wanted was for them to answer “Yes” to an option? Why not use a gesture, like shaking your head!

With this asset your players can use gestures to trigger input!


Hello! This is an asset I originally created and mostly finished a few years ago, but had to put on the shelf before it was ready. Better late than never, I hope, and I'm happy to be bringing it out now! This marks my 3rd asset released on the store, and with this one I've tried to step things up a bit by creating something more advanced. Advanced in the skill it took me to create, but it may also take a more intermediate Unity dev to utilize. Still, thanks to my experience with my other assets, I've done my best to make this as easy to use as possible for even the newest Unity devs.

The purpose of this asset is simply to let you get away from immersion breaking forms of input in VR, like button presses. In real life you have a form of non-verbal communication in common gestures and body language. Simply waving hello at an NPC to trigger an event seems more immersive and intuitive than walking up to them and pressing a button.

I started development of this asset for a project of my own that never materialized, but thankfully I planned from day one to make this available as an asset. I designed it to let you create your own custom input gestures, but I do include 3 main example gestures to help you get started: head shake Yes, head shake No, and hand wave Hello!

Over time, depending on how well this goes, I have plenty of ideas for future improvements and stock input gestures. Enough to hopefully make this asset more and more valuable over time! For now I hope to refine what I have, and I welcome feedback and ideas to help guide the direction of the asset.

This supports pretty much ALL VR devices, as it's not tied to any one device/SDK. Naturally, if you are targeting, say, a mobile device, hand based gestures may be out of the question due to a lack of motion tracked controllers. You can still use it with head based gestures, like the 2 stock ones, but it will be more limited; it truly shines with a more complete VR hardware setup. Of course, if you can conceive of a use for this not based on gestures, it can do other things. It currently supports detection of position changes based on distance, rotation changes based on degrees, and whether a tracked object is facing a given direction.
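For those curious, here's a minimal sketch of what those three kinds of checks boil down to in plain Unity terms. This is purely illustrative and is not the asset's actual API; all the names and thresholds here are made up for the example.

```csharp
using UnityEngine;

// Illustrative only -- NOT the asset's real API. Shows the three kinds of
// checks described above, done by hand against a tracked Transform.
public class TrackedObjectChecks : MonoBehaviour
{
    public Transform tracked;                 // e.g. the HMD or a motion controller
    public float distanceThreshold = 0.2f;    // metres moved to count as a position change
    public float degreesThreshold = 30f;      // degrees turned to count as a rotation change
    public Vector3 facingDirection = Vector3.forward; // world direction to test facing against
    public float facingAllowance = 15f;       // max angle (degrees) still counted as "facing"

    Vector3 lastPosition;
    Quaternion lastRotation;

    void Start()
    {
        lastPosition = tracked.position;
        lastRotation = tracked.rotation;
    }

    void Update()
    {
        // Position change based on distance.
        if (Vector3.Distance(tracked.position, lastPosition) >= distanceThreshold)
        {
            Debug.Log("Moved far enough");
            lastPosition = tracked.position;
        }

        // Rotation change based on degrees.
        if (Quaternion.Angle(tracked.rotation, lastRotation) >= degreesThreshold)
        {
            Debug.Log("Rotated far enough");
            lastRotation = tracked.rotation;
        }

        // Facing a given direction, within an angular allowance.
        if (Vector3.Angle(tracked.forward, facingDirection) <= facingAllowance)
        {
            Debug.Log("Facing the target direction");
        }
    }
}
```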




Hi, I saw your asset.

Is it able to recognize hands using only a smartphone camera? I'm working on an AR app and I need it to be able to recognize hands using the camera.

Thanks

First, the part in the opening post about hand based gestures needing motion tracked controllers should answer this.

But to further elaborate: no. To use this with hands requires your device/SDK to support hands. It simply tracks the movement of a selected object. How that object's position is changed is irrelevant to this asset, though generally it's assumed to be changed by the VR device's positional tracking of a headset or controllers.

If you had an external solution that tracks hands with a camera and updates the position of an object in the scene, then it probably could. But this asset does not provide such a thing.
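In other words, something like the sketch below could bridge the two: an external tracker (hypothetical here, since no such integration ships with the asset) writes a position onto a Transform each frame, and the asset would just see a moving object.

```csharp
using UnityEngine;

// Hypothetical bridge, for illustration only. If some external system
// (e.g. a camera-based hand tracker) gave you a hand position each frame,
// you could drive a scene object with it and point the asset at that object.
public class ExternalHandDriver : MonoBehaviour
{
    // Imagine this is filled in each frame by your camera-based tracking solution.
    public Vector3 latestHandPosition;

    void Update()
    {
        // The asset only sees this Transform moving; it doesn't care how.
        transform.position = latestHandPosition;
    }
}
```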

New update to the asset.

I had a little more I wanted to do for this update, but I decided to release early instead. I want to make some changes that will really shift the direction of the asset, and I don't want to rush them. Some of the changes in this version are related to that.

To explain and preview: at the moment this asset is focused on motion sequences. I'd like to expand its scope a little to also encompass singular motions. It's also centered on what I currently refer to as a “DoThen” style of input. What that means is, the user Does a motion, Then a value is returned. This is as opposed to what I'd like to additionally support, which I currently refer to as “WhileDo”: return a value While the user Does a motion/sequence.

Previously this asset was purely based around returning a boolean value for a single frame, and I now want to additionally support returning a more analog type of value, while also supporting such values for more than a single frame. Since this asset is about replacing traditional input (digital buttons and analog sticks/buttons) with VR movement, I want to focus in on that goal a bit more. The original setup works well enough for the stock motions, but overall does not reflect that goal. It's like pressing a button for a single frame.
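To make the contrast concrete, here's a rough sketch of how the two styles compare to traditional Unity input. The method names (DetectGestureThisFrame, GestureProgress) are invented stand-ins for the idea, not the asset's real API.

```csharp
using UnityEngine;

// Illustrative contrast of the two input styles described above.
// Method names are made up for this example -- not the asset's API.
public class InputStyleExamples : MonoBehaviour
{
    void Update()
    {
        // "DoThen": the user Does the motion, Then a bool is returned for
        // a single frame -- analogous to Input.GetButtonDown.
        if (DetectGestureThisFrame("NodYes"))
        {
            Debug.Log("Player nodded yes");
        }

        // "WhileDo": an analog value is returned While the user Does the
        // motion -- analogous to reading Input.GetAxis every frame.
        float progress = GestureProgress("WaveHello"); // e.g. 0..1 through the wave
        if (progress > 0f)
        {
            Debug.Log($"Waving... {progress:P0} complete");
        }
    }

    // Stubs standing in for whatever detection the asset performs.
    bool DetectGestureThisFrame(string gestureName) { return false; }
    float GestureProgress(string gestureName) { return 0f; }
}
```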

Hi, I bought this asset, but when testing it the single frame output is sending more than one execution. Even in the example, the more you rotate your head in the last step, the more debug messages about execution you get. I tested different steps on my own and the same thing happens.

Is it possible to use this for tracking a hand moving in a circular motion, or is it meant for back-and-forth type gestures?

Maybe. It depends on what you mean.

If you simply mean rotating your wrist, then yeah it can do that.

If you mean drawing a circle in the air with your hand… well, technically still yes. For that you would have to set up trigger points/distances in a circle pattern. It would be up to you to specify how many steps constitute a circle, but let's say 3 points/steps in a triangle formation would work. First you would want a step that detects going from the top down to the right; I'd set the angle allowance to, say, 45 degrees. Then repeat for right to left, and again for left to top. 5 points might be more ideal to feel like a true circle.

The first 2 steps of the hand wave Hello gesture should work as a good basis. Just deselect detect inverse, add 1-3 more steps, and set them up as needed to form a complete circle.
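If it helps, the per-step check behind such a circle sequence roughly boils down to the math below. This is just a sketch of the idea (movement direction vs. an expected direction, within an angle allowance); the names are illustrative, not the asset's API.

```csharp
using UnityEngine;

// Rough sketch of one "step" in a circle sequence: has the hand moved far
// enough since the last trigger point, and in roughly the expected direction?
// Illustrative only -- not the asset's actual implementation.
public static class CircleStepCheck
{
    // expectedDirection: e.g. a down-right vector for the first step (top -> right).
    public static bool StepTriggered(Vector3 from, Vector3 to,
                                     Vector3 expectedDirection,
                                     float minDistance, float angleAllowance)
    {
        Vector3 movement = to - from;
        if (movement.magnitude < minDistance)
            return false; // hasn't travelled far enough yet

        // Within the angle allowance (e.g. 45 degrees) of the expected direction?
        return Vector3.Angle(movement, expectedDirection) <= angleAllowance;
    }
}
```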

I just wanted to thank you for creating this asset. I purchased it and I'll play around with it after work today!