Trouble with Kinect and Unity

Hi everyone,

I’m working on a university project that requires the use of Kinect. I’ve installed all the necessary software and have managed to get OpenNI and Zigfu working. The project is basically a virtual dressing room, and therefore needs the users to see themselves while trying on virtual clothing. I have tried the “SimpleViewer” sample scene in Zigfu and the “VisualizationNative” scene from the project I found here:
http://forum.unity3d.com/threads/67982-Kinect-plugin/page13
but neither of them seems to work properly.

The OpenNI project is very unstable and Unity crashes almost every time I try to change something.
The Zigfu scene is more stable and doesn’t crash as much, but if I try to make a build out of a scene with a simple plane showing the image data from the camera, the build crashes immediately after opening it.

I’ve read about alternative ways of showing the image data with plugins written in C++ and OpenGL, but I don’t know how that works, as I’ve never worked with OpenGL or C++.

Could someone offer some guidance?
Thank you for reading!

Well, the implementation using the C++ SDK from February 2012 is pretty straightforward… You might have a few issues when you copy the data from the unmanaged part of memory to the managed part…

I’m sorry, I don’t understand. The C++ SDK in Unity?

Use the C++ Kinect SDK from February 2012 to write a Unity plugin in which you initialize the sensor, get the image buffers, etc. Then connect to your plugin via C# and use Marshal.Copy to copy your images from the unmanaged part of memory to the managed part.
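Roughly, the native side of that could look like the sketch below. This is only an illustration, not code from an actual plugin: the function names (GetFrameSize, GetColorFrame) are made up, and a generated test pattern stands in for the real Kinect SDK calls so the sketch compiles on its own.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical plugin-side sketch. A real plugin would pull the frame from the
// Kinect SDK (NuiInitialize and the image-stream functions); here a generated
// test pattern stands in so the sketch is self-contained.
static const int kWidth = 640, kHeight = 480, kBpp = 4; // 32-bit color frame
static uint8_t g_frame[kWidth * kHeight * kBpp];

extern "C" int GetFrameSize() { return kWidth * kHeight * kBpp; }

// C# passes a managed byte[] here; the marshaller pins it for the duration of
// the call, so a plain memcpy moves the frame from unmanaged to managed memory.
extern "C" void GetColorFrame(uint8_t* dst, int dstLen) {
    for (int i = 0; i < kWidth * kHeight * kBpp; ++i)
        g_frame[i] = (uint8_t)(i & 0xFF);   // stand-in for the real Kinect frame
    int n = GetFrameSize();
    if (dstLen < n) n = dstLen;             // never overrun the caller's buffer
    std::memcpy(dst, g_frame, n);
}
```

On the C# side you would declare the export with `[DllImport]`, call it with a `byte[]` of `GetFrameSize()` bytes, and load the result into a Texture2D; if the plugin instead hands back an IntPtr, `Marshal.Copy(ptr, bytes, 0, len)` does the unmanaged-to-managed copy.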

The second solution, which I developed just for the heck of it: I created a C++ console application in which I initialized the Kinect, etc., and then wrote a C# DLL that read directly from the unmanaged array via a pointer…
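The pointer-reading variant boils down to the native side owning the buffer and only handing out its address. A minimal sketch, with a made-up export name and a stand-in value instead of real Kinect data:

```cpp
#include <cstdint>

// Hypothetical sketch of the zero-copy variant: the native side owns the frame
// buffer and exposes only its address. The C# DLL keeps the returned address as
// an IntPtr and reads pixels through it directly (unsafe pointer arithmetic or
// Marshal.ReadByte), so no per-frame copy is needed.
static const int kWidth = 640, kHeight = 480, kBpp = 4;
static uint8_t g_frame[kWidth * kHeight * kBpp];

extern "C" const uint8_t* GetFramePointer() {
    g_frame[0] = 42;  // stand-in: a real build would refresh g_frame from the Kinect
    return g_frame;
}
```

The trade-off versus Marshal.Copy is that the managed side must never read the buffer while the native side is writing the next frame, so some synchronization (double buffering or a lock) is needed in practice.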

Hmm… as I mentioned in the original post, I’ve never worked with C++ before. Are any plugins like these already finished and available online?

Sorry, I don’t know… I just wrote my own.

With Zigfu and the Kinect SDK you cannot run the Unity Editor and the compiled application at the same time, because the Kinect SDK can only work with one process at a time. Quit the Unity Editor and try it.

The method you are talking about passes a Texture2D’s GetNativeTextureID() into glBindTexture and then calls glTexSubImage2D to fill the GPU texture with data directly. This gets around the SetPixels/Apply API that Unity provides, which is really slow. On Windows you need to run Unity with -force-opengl for this to work. We probably have an example of this in Zigfu’s legacy OpenNI package.
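A minimal sketch of that native upload path is below. To keep it self-contained, the GL types and entry points are declared inline as recording no-op stand-ins; a real plugin gets them from <GL/gl.h> and the driver, and on Windows must run under -force-opengl as noted above.

```cpp
#include <cstdint>

// Hypothetical sketch of uploading a camera frame straight into a Unity texture.
// GL declarations below are no-op stand-ins so the sketch compiles on its own;
// in a real plugin they come from <GL/gl.h>.
typedef unsigned int GLenum;
typedef unsigned int GLuint;
typedef int          GLint;
typedef int          GLsizei;
#define GL_TEXTURE_2D    0x0DE1
#define GL_RGBA          0x1908
#define GL_UNSIGNED_BYTE 0x1401

static GLuint g_lastBoundTexture = 0;   // recorded by the stand-in for inspection
static void glBindTexture(GLenum /*target*/, GLuint tex) { g_lastBoundTexture = tex; }
static void glTexSubImage2D(GLenum, GLint, GLint, GLint, GLsizei, GLsizei,
                            GLenum, GLenum, const void*) {}

// Unity side passes texture.GetNativeTextureID() plus the camera's RGBA pixels.
extern "C" void UploadFrame(GLuint texId, const uint8_t* rgba, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}
```

Note that the real call has to happen while the texture’s GL context is current, i.e. on Unity’s render thread (e.g. via GL.IssuePluginEvent), not from an arbitrary script callback.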

Thanks for your reply! I have tried running the compiled application with the Unity Editor closed. It still crashes right after opening (in Zigfu I get a crash error; in OpenNI the application just closes).

I have also tried forcing OpenGL to test the OpenGL viewer, but it doesn’t work for some reason. There are no console errors, and I get the tag in Unity showing it is using OpenGL; it just doesn’t work.

Why could this be happening?

bump?