I’ve got a Unity plugin for Kinect working. Here’s a screenshot. For simplicity’s sake, the plugin is stuffing the depth info into the alpha channel of the texture.
Next I’ll work on polygonizing the depth buffer… although a shader might be simpler. Does anyone have a parallax shader that uses one RGBA texture, where the alpha channel is depth?
Are you sure you want to map it to a parallax shader? It doesn’t seem like that would give you the full representation of depth – more importantly, you could just use the depth values from the image to script objects or particles into the scene, etc. If you have the plugin reading the camera images into a texture, that’s the basic functionality for any number of implementations after that. I would love to see what you’ve got working.
I have been working on getting Kinect into Unity this weekend also. I got the motor and the LED lights working, can read the serial, and can turn the cameras on and off, but I am having difficulty figuring out how to read the image data from the pointer in the DLL.
I wrote a plugin that uses the OpenKinect library. Seemed simplest at the time.
The plugin will also give you the texture without alpha, as well as an array of depth values at full precision. I added the depth to the texture alpha solely so I could quickly determine whether things were working – I’m not sure how long it will take me to write code to polygonize the depths. (And it seemed like a no-brainer to add depth in a shader rather than churning out fifty thousand polygons.) It’s only three bits less precision, and it avoided the pain of passing arrays around to native code, which I always find annoying.
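For anyone curious, the packing boils down to something like this sketch – not the plugin’s actual code, just the idea. Shifting an 11-bit depth right by 3 is where the “three bits less precision” comes from:

```csharp
using UnityEngine;

public static class DepthPacker
{
    // Pack 11-bit Kinect depths (0..2047) into the 8-bit alpha channel
    // of an RGB pixel array. Dropping the low three bits maps the range
    // onto 0..255.
    public static void PackDepthIntoAlpha(Color32[] rgbPixels, ushort[] depths)
    {
        for (int i = 0; i < rgbPixels.Length; i++)
            rgbPixels[i].a = (byte)(depths[i] >> 3); // 11 bits -> 8 bits
    }
}
```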
I’m not sure how that library works, but the depths aren’t bytes – they are 11-bit integers, and OpenKinect stores them as unsigned 16-bit ints. Mono maps those to the ushort type, so maybe you need to make your array ushort instead of byte.
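If it helps, here’s a rough sketch of pulling the 16-bit depths across from a native pointer in C#. The entry point name and buffer size are assumptions about your DLL, not anything real – and note that Marshal.Copy has no ushort overload, so you go through short[] and reinterpret:

```csharp
using System;
using System.Runtime.InteropServices;

public static class DepthReader
{
    // Hypothetical entry point -- substitute whatever your DLL exports.
    [DllImport("KinectPlugin")]
    private static extern IntPtr GetDepthBuffer();

    const int Width = 640, Height = 480; // Kinect depth resolution

    public static ushort[] ReadDepths()
    {
        IntPtr src = GetDepthBuffer();
        // Marshal.Copy has no ushort overload, so copy into short[]
        // first, then reinterpret each value as unsigned.
        short[] raw = new short[Width * Height];
        Marshal.Copy(src, raw, 0, raw.Length);
        ushort[] depths = new ushort[raw.Length];
        for (int i = 0; i < raw.Length; i++)
            depths[i] = unchecked((ushort)raw[i]);
        return depths;
    }
}
```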
Alright – thanks for the tip. I will keep at it. Do you have plans to share your plugin?
I will if I get it working, but I suspect it will be pretty messy.
thanks
I’d like to get something more interesting working before I make any releases. Right now I feel confident it works well, but until I can really visualize the depth in 3D, I’m not 100% certain.
Your first question was about a parallax shader that uses one RGBA texture, where the alpha channel is depth. I started to look into that, but wondered: why not use the built-in Parallax Diffuse shader and assign the one texture you are writing to both the Base and the Heightmap? The Heightmap input uses the A of the image and the Base uses the RGB. What are you looking for that would be different from that?
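In script form that assignment might look like the sketch below. The property names are the ones I believe the built-in parallax shaders use (_MainTex, _ParallaxMap, _Parallax), and kinectTexture stands in for whatever texture your plugin writes:

```csharp
using UnityEngine;

public class ParallaxDepth : MonoBehaviour
{
    public Texture2D kinectTexture; // RGB image with depth in A

    void Start()
    {
        // Built-in Parallax Diffuse: Base (RGB) and Heightmap (A)
        // can both point at the same RGBA texture.
        Material mat = new Material(Shader.Find("Parallax Diffuse"));
        mat.SetTexture("_MainTex", kinectTexture);
        mat.SetTexture("_ParallaxMap", kinectTexture);
        mat.SetFloat("_Parallax", 0.05f); // parallax height scale
        GetComponent<Renderer>().material = mat;
    }
}
```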
What might also be interesting is to use something like the “heightmap generator” scene in the procedural mesh examples in the Resources section of the Unity site. It takes a texture and displaces the vertices of a mesh.
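Something along these lines, for instance – not the actual example code, just the idea, assuming a readable RGBA texture with depth in the alpha:

```csharp
using UnityEngine;

public class DepthDisplace : MonoBehaviour
{
    public Texture2D depthTexture; // RGBA texture, depth in A
    public float heightScale = 2f;

    void Update()
    {
        // Push each vertex of a grid mesh up by the alpha (depth)
        // value sampled at its UV coordinate.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < verts.Length; i++)
        {
            float depth = depthTexture.GetPixelBilinear(uvs[i].x, uvs[i].y).a;
            verts[i].y = depth * heightScale;
        }
        mesh.vertices = verts;
        mesh.RecalculateNormals();
    }
}
```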
Really cool, man. I made some tests with openFrameworks and OpenKinect and was planning on doing something to integrate it with Unity too, but you got there first. Congrats.
I do have a question about Kinect and C#: is there any way to convert a Unity game into a Visual C# project and then put it on XNA? If you are working with Kinect and C#, how hard would that be?
My reason for asking is that I have an XNA license, so I can deploy games to my Xbox 360, and I wonder whether anybody has had the idea of converting a game made with Unity and putting it on Xbox Live.
Slightly more interesting example: a grid of spheres offset by their distance values, with the texture drawn on the plane below. I pointed the camera at the wall to remove the background noise. The shader on the plane draws the depths as greyscale, so you can see how the depth values match the pixel values (and you can see why the Xbox is always telling you to back up).
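In case it’s useful, the sphere grid boils down to something like this – the stride, scale factors, and method name are just illustrative, and depths is the full-precision array from the plugin:

```csharp
using UnityEngine;

public class SphereGrid : MonoBehaviour
{
    public int step = 16;           // sample every 16th depth pixel
    public float depthScale = 0.005f;

    // Build one sphere per sampled depth value, offset along Z by depth.
    void Build(ushort[] depths, int width, int height)
    {
        for (int y = 0; y < height; y += step)
        for (int x = 0; x < width; x += step)
        {
            GameObject s = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            s.transform.localScale = Vector3.one * 0.2f;
            float z = depths[y * width + x] * depthScale;
            s.transform.position = new Vector3(x * 0.01f, y * 0.01f, z);
        }
    }
}
```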
I heard that XNA licenses don’t include Kinect support, so you might want to ask Microsoft directly about that. If they did, you wouldn’t have to be hacking around with open-source drivers like the rest of us.