Kinect, ZigFu and Unity 3D

Hi,

I’m a newbie in game development (but an experienced Java developer) and I need advice, guidelines, good tutorials, examples, etc. for creating a simple real-time game. The game is very simple: I need to create a human avatar character that interacts with a ball (a person shoots a ball). Something like this video: http://www.youtube.com/watch?v=k1RWAiaK9YE
So far I’ve installed the Kinect, Unity 3D and ZigFu, and everything is working properly. Since I’m a bit limited on time, I’m wondering what reading I should do and where I can find good sample projects.
Any help is appreciated, thanks

Pegishon

Nice start! :slight_smile:

Anyone else with suggestions on how to start, or best practices, apart from the sarcastic Italians?

If you are new to Unity then I would suggest going through various basic Unity tutorials etc. before you go anywhere near the Kinect-related stuff. In my case this involved spending a lot of time messing with various samples, and buying a book (can’t remember which book right now, but it was from Packt Publishing).

If you are short on time then you can try jumping straight into the Zigfu samples, but there is quite a chance that this will be confusing and there is no real replacement for spending quality time learning a variety of Unity aspects.

There aren’t a huge number of quality sample projects that use the Kinect right now. For those looking to use Kinect gestures to trigger traditional game actions, there is a nice AngryBotsNI sample available which is far more feature-rich than most Kinect Unity samples, but it uses the OpenNI-Unity bindings that OpenNI offer themselves rather than the ZigFu version, and I believe the OpenNI wrapper needs a couple of small changes in order to work properly with Unity 3.5. http://arena.openni.org/OpenNIArena/Applications/ViewApp.aspx?app_id=586

Thanks elbows, I failed to mention that I will be joining a group of very experienced 3D and CG artists, so they’ll cover me for that part…

Good. But since most of the work inside Unity is about utilising such 3D art assets rather than creating them, this won’t actually shave much off your Unity learning curve.

What you could do is ignore many aspects of Unity to start with, and go straight to the areas you need to understand in order to get the core skeleton interaction working. In this case that is stuff like physics and collision detection, and instantiating prefabs. If you don’t mind your character being made of primitive objects to start with, rather than being a fully rigged character, then you have a reasonably straightforward mission.

You’ll need to learn how to connect primitive objects to the Kinect stuff so that when a skeleton joint moves, the primitive object moves. Then you’ll need to look at examples of how to instantiate a prefab, e.g. a sphere that will be your ball, which you will want to instantiate at the arm position and apply some force to so it shoots in a certain direction. This is not too far away from more basic tutorials you will find dealing with subjects such as how to make a bullet fire out of a gun. Then you will have to work out what exactly will be used as a trigger to tell the system that you want a ball to fire out of the person’s arm at that particular moment.
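To make the bullet-out-of-a-gun analogy concrete, here is a minimal sketch of the instantiate-and-apply-force part. The handJoint transform and the key-press trigger are placeholders for whatever your skeleton tracking and gesture detection actually provide; this is not ZigFu API code:

```csharp
using UnityEngine;

// Minimal sketch: fire a ball prefab from a tracked hand.
// "handJoint" is a placeholder Transform assumed to be driven by
// your skeleton tracking; it is not a real ZigFu API name.
public class BallShooter : MonoBehaviour
{
    public Rigidbody ballPrefab;   // sphere prefab with a Rigidbody
    public Transform handJoint;    // moved by the skeleton tracking
    public float shootForce = 500f;

    void Update()
    {
        // Placeholder trigger; in a real project this would be a
        // gesture or pose check rather than a key press.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Rigidbody ball = (Rigidbody)Instantiate(
                ballPrefab, handJoint.position, handJoint.rotation);
            // Push the ball along the direction the hand is facing.
            ball.AddForce(handJoint.forward * shootForce);
        }
    }
}
```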

By the way, I’m not sure that kenshin was being sarcastic; it’s quite possible he misread your post and thought that the video you posted was of your own work.

elbows is absolutely right. Setting up Kinect + Unity is the easy part; taking things further may be complicated. I was also running a few tests a while back, trying to apply physics and kinematic forces. I got my objects flying nicely, but finding the right balance proved to be quite a challenge: 9 times out of 10 everything worked perfectly, but every now and then I got some strange results.
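One pattern worth trying against those strange results is to make the joint-driven colliders kinematic and move them with MovePosition(), so the physics engine infers a velocity between frames instead of seeing teleporting objects. A rough sketch (the joint transform is again a placeholder for whatever your tracking drives):

```csharp
using UnityEngine;

// Rough sketch: drive a kinematic rigidbody from a tracked joint so
// collisions with dynamic objects (like the ball) pick up sensible
// impact velocities. "trackedJoint" is a placeholder Transform.
public class JointFollower : MonoBehaviour
{
    public Transform trackedJoint; // updated by the skeleton tracking

    void Start()
    {
        // Kinematic: we move it ourselves; physics only pushes others.
        rigidbody.isKinematic = true;
    }

    void FixedUpdate()
    {
        // MovePosition/MoveRotation let the physics engine work out a
        // velocity between frames, instead of the object teleporting
        // and producing erratic collision responses.
        rigidbody.MovePosition(trackedJoint.position);
        rigidbody.MoveRotation(trackedJoint.rotation);
    }
}
```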

And I would not worry about 3D models, at least not yet. First get your game working the way you want by just using primitive models (like the ones in the video you showed); then you can start to change the geometry and make it better looking. Also keep in mind that rigging a 3D model for the Kinect may be a challenge if you have no previous experience. Make sure that your CG artists prepare the joints and their pivot points properly, or otherwise you will have a mess when you press the play button in Unity.

Not sure how much experience you have in Unity, but if you are totally new to it then I suggest you read at least one book on the subject. There are many good books about Unity. This would help you avoid some of the obvious mistakes that come with game development and would also help you speed up your progress.

That’s my 2 cents.

Good luck :slight_smile:

Regarding the strange results you mention, I think this hints at the broader issues that tech such as the Kinect faces at this moment in time. There are very few examples (if any) of the Kinect being used beyond the very casual ‘party’ games stuff, even on the Xbox 360. The tech is very exciting, but its limitations become apparent very quickly, and require careful thought to overcome. Some of the issues can’t really be overcome; they have to be avoided instead, which is mostly what has led to only a few kinds of gameplay mechanics really working for commercial Xbox 360 titles to date.

In some ways Microsoft’s early demos managed to make the Kinect look more impressive than it really is, which probably explains the curious reality where the Kinect was the fastest-selling commercial hardware launch, but where the game realities probably don’t begin to live up to people’s imaginations when they were first attracted to buying the hardware.

It will be fascinating to see how much this changes. Until something such as the delayed Star Wars game actually arrives and delivers enjoyable results, I won’t be making the mistake of thinking that we now live in a world where it’s cheap and easy for people to run around in a 3D game using their whole body and have an experience with a lot of depth and fun.

Issues such as lag and skeleton quality can be improved somewhat over time, and have improved to a certain extent since OpenNI first came out. But the larger barriers against creating compelling Kinect experiences, such as the inability to track fingers, should not be underestimated.
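Some of the jitter side of skeleton quality can also be mitigated at the application level, at the cost of a little extra lag, with simple exponential smoothing of joint positions. A quick sketch (rawJoint is a placeholder for whatever transform your tracking drives):

```csharp
using UnityEngine;

// Sketch: exponential smoothing for a jittery tracked joint.
// Trades a little extra lag for steadier motion. "rawJoint" is a
// placeholder Transform fed by the skeleton tracking.
public class JointSmoother : MonoBehaviour
{
    public Transform rawJoint;
    public float smoothing = 0.5f; // 0 = raw, closer to 1 = smoother

    void Update()
    {
        // Keep a fraction `smoothing` of last frame's position and
        // take the rest from the raw joint position.
        transform.position = Vector3.Lerp(
            rawJoint.position, transform.position, smoothing);
    }
}
```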

Having said all of that, this also means there is an opportunity here for people who can come up with core gameplay mechanics that fit well with the data Kinect (and Asus Xtion) sensors can provide. And if the ideas don’t go much deeper than casual or party games, then that’s actually a pretty good fit with what a lot of people are doing with Unity. But it’s clearly not dead easy, because otherwise I’m sure we would have seen more interesting demos from people over the last year than has actually been the case. You’ll see something from me eventually, but I don’t know if it will be a game as such, since I’m mostly using Unity for real-time visuals and interactive installations at this point. That’s territory where the Kinect can potentially be used in a way that’s fairly forgiving of glitches, unlike a game, where a glitch costing you a life is super frustrating and ruins the experience.

Thanks guys,

This kind of reply is what I hoped for. In a few months I’ll let you know how we got on with the project.

Once again, thanks, and sorry kenshin if I misunderstood you.

Oh hi! I’m the guy in that Unity Kinect video! I can’t believe it’s been more than a year since we started hacking the Kinect. Anyway, ZigFu will be releasing a new version with Web Player and Microsoft Kinect SDK support.

You should join the mailing list at http://unitykinect.com. We are totally obsessive about support, and I’m working on a series of videos to demonstrate how to use ZigFu. For example, here’s a video I made that shows how to use our still-unreleased bindings to do a character mapping and build to the Web Player: http://www.youtube.com/watch?v=UwCyEzqAEBY

We are also working on a ton of demo scenes as a starting point for developers. For example, I made a game for the Global Game Jam to demonstrate one method of shooting stuff (video, instructions and downloads at the link):
http://globalgamejam.org/2012/infinite-robot

The source, including all the art assets, will be up here when I update to the new version:
https://github.com/tinkerer/InfiniteRobot

Thanks for everything; looking forward to your demos.

Hi Amir,
I’ve been focused on Kinect + Unity 3D for a couple of days but still cannot get my own 3D model to sync with my body… :-|
So confused… I work on Mac OS X Lion 10.7.3 with OpenNI + NITE + Sensor installed via MacPorts.
The user radar and depth viewer both work fine, and the 3D model goes from the “T” pose to the “Psi” pose, but it still doesn’t work,
and the log says:

Initing OpenNI UnityEngine.Debug:Log(Object)
OpenNIContext:Awake() (at Assets/OpenNI/Scripts/OpenNIContext.cs:102)

Yours

Hello elbows, I’m now running the game named AngryBotsNI and I’m really excited to play it, but there is one question that keeps confusing me. I’d be glad if you could do me a favor and explain the principle behind the control part of this game. Simply put: how can Unity read the gestures and treat them as commands?

Sorry, I haven’t had any time to play with Kinect stuff recently, so I can’t answer your AngryBotsNI question.

@kinksid: here’s a demo video of how to bind a skeleton with ZigFu (model from Mixamo): http://www.youtube.com/watch?v=UwCyEzqAEBY

BTW, I uploaded an example with features similar to the AngryBotsNI example:
https://github.com/zigfu/Infinite-Robot
(baseScene2.unity is the main game scene; it depends on Pro features)

The game .exe/.app can be downloaded from here:
http://globalgamejam.org/2012/infinite-robot

Instructions are on that site: touch your shoulder with your hand to reload, reach out to shoot, and put your hands behind your back (elbows up) to take out the swords.

video:
http://www.youtube.com/watch?v=WSXx_WHk01A

The game has code for basic gesture recognition. The git project is a bit of a mess, but I’m working on a series of tutorials explaining how to use ZigFu to make content like this. All the game content is free, and the source here uses the watermarked ZigFu ZDK Unity 3D bindings. We’re upgrading the bindings, and I’ll update the game as well.
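If it helps while the tutorials are in progress, most of these gestures boil down to distance or direction checks between joints, latched so they fire once per pose. A minimal sketch of the idea (the joint transforms are placeholders, not the actual ZDK bindings):

```csharp
using UnityEngine;

// Minimal sketch of a distance-based pose check behind a gesture
// like "touch shoulder to reload". The joint transforms are
// placeholders driven by the skeleton tracking, not ZDK API names.
public class ReloadGesture : MonoBehaviour
{
    public Transform rightHand;
    public Transform leftShoulder;
    public float touchThreshold = 0.15f; // world units; tune to taste
    bool wasTouching;

    void Update()
    {
        bool touching = Vector3.Distance(
            rightHand.position, leftShoulder.position) < touchThreshold;

        // Latch: fire once on the transition into the pose,
        // not on every frame while it is held.
        if (touching && !wasTouching)
            SendMessage("Reload", SendMessageOptions.DontRequireReceiver);

        wasTouching = touching;
    }
}
```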

Amir

I don’t know if this is the right forum. I have a question about making a video file from the Kinect video stream…
I’m using a Mac now…

Thank you.

Hey!

I’m not sure if this is the right topic, but I will try =)

I’m currently doing some research with the Kinect (ZigFu framework) and Unity 3D (version 3.5.1).
The project that I’m working on needs networking capabilities (this is working for now). But my problem is that when I run the project in the browser I can’t see the depth image, so I can’t receive the data from the image and control the avatar…
I don’t know yet if the problem is with the Unity Web Player or the ZigFu framework.
If someone is having the same problem, or if anyone could help me with this, I would appreciate it =)
Or, if there is another framework that I could use to get data from the Kinect into Unity 3D…

It’s really important for me!

Thanks in advance,

Paula

I am completely new to the Kinect and am wondering what version of it I would need to test the ZigFu samples.

The common Xbox Kinect sensor?
The new Microsoft Kinect for Windows sensor, which costs twice as much?

Will it work on both PC and Mac?

I tried the web-based examples and the examples in the legacy Unity bindings (the link on the ZigFu web site is broken: remove “site.” from the URL) with a Kinect for Xbox on Windows XP and Mac OS X 10.6 today. All of this worked reasonably well. I also tried the ZDK examples with Unity on Windows XP, but decided that I don’t like the large ZigFu logo and the limited resolution of the video stream (160x120); thus, I didn’t try this on Mac OS X.

Hi, we used Kinect, Unity and ZigFu in an installation for walking around in a house and painting the walls and furniture.

We started developing on a PC and switched to Mac when the 3D model got larger. The plugin worked fine on both platforms. The examples were also very helpful for understanding how the plugin works.