[Release] Blob Tracking with Realsense

Hi Everyone,

At BlackBox Realities, we create interactive media for personal and group use. One of our clients asked us to create a floor-projection game for their new venue and, while we have worked with floor-projection toolkits in the past, we found that no one had created an easy-to-use asset for building blob-tracking games inside of Unity. To effectively track users in poorly lit environments and to separate them from the background, our solution requires a RealSense camera and is compatible with Intel's newest Unity wrapper. (We are currently looking into ways of supporting simple RGB cameras, stay tuned.)
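For anyone curious what "blob tracking" means here: a depth-based blob tracker typically thresholds the depth image to the tracking range and groups the remaining pixels into connected regions. This is a minimal plain-Python sketch of that general technique (it is not the asset's actual code, and all names here are ours):

```python
from collections import deque

def find_blobs(depth, near=0.5, far=2.5):
    """Label connected regions of a depth image that fall inside [near, far].

    depth: 2D list of depth values in meters (0 = no reading).
    Returns a list of dicts with a blob id and its pixel (point) count.
    """
    rows, cols = len(depth), len(depth[0])
    labels = [[0] * cols for _ in range(rows)]
    blobs, next_id = [], 1
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] or not (near <= depth[r][c] <= far):
                continue
            # Flood fill (4-connectivity) to collect this blob's pixels.
            count, queue = 0, deque([(r, c)])
            labels[r][c] = next_id
            while queue:
                y, x = queue.popleft()
                count += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not labels[ny][nx]
                            and near <= depth[ny][nx] <= far):
                        labels[ny][nx] = next_id
                        queue.append((ny, nx))
            blobs.append({"id": next_id, "points": count})
            next_id += 1
    return blobs
```

The per-blob ID and point count returned here correspond to the information the Simple Demo exposes when you click a tracked blob.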


Asset Store - http://u3d.as/1EFd

We’ve included demo scenes to show how to make blobs interact with 2D and 3D elements as well as rigidbodies. We are creating a video series that walks through each of the demo scenes. More information below:

Simple Demo
Can be used to refine your blob-tracking settings. After hitting Play, you can click on a tracked blob to get more information about it, such as the number of tracked points and its ID.

2DInput
Shows how tracked blobs can interact with 2D UI elements. It also includes a demo script to match Unity's in-game camera to the RealSense device.
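The camera-matching script ships with the asset; the underlying idea is standard pinhole-camera geometry. As a sketch (the function name is ours, not the asset's), the vertical field of view that Unity's `Camera.fieldOfView` property expects can be derived from the depth stream's intrinsics:

```python
import math

def vertical_fov_degrees(image_height_px, fy_px):
    """Vertical field of view implied by a pinhole camera's intrinsics.

    image_height_px: depth image height in pixels.
    fy_px: vertical focal length in pixels (from the stream intrinsics).
    Unity's Camera.fieldOfView is the vertical FOV in degrees, so a
    value computed this way can be assigned to it directly.
    """
    return math.degrees(2.0 * math.atan(image_height_px / (2.0 * fy_px)))
```

For example, a 720-pixel-tall image with a 360-pixel focal length implies a 90° vertical FOV.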

3DInput
Shows how to interact with 3D canvas elements or 3D objects.
*Limited interaction because depth is poorly perceived on a 2D monitor.

Physics Input
Shows how tracked blobs can push objects off of a table.
*Physics interactions are approximate, and cannot be used for fine-motor skills like grabbing.

This is BlackBox’s first Unity asset and we’d love to get the community’s feedback. If you’ve purchased the asset and have a feature request, or if you have questions before purchasing, please leave a comment or email us.

That looks exactly like what I’m looking for right now, but the link to the package is broken.

Sorry, the asset has been submitted for approval on the Asset Store, and I assumed it would have been approved by now. It shouldn’t be too much longer; I can ping you once I get the email telling me it’s on the store.

The package is now live on the Asset Store.

I have a 5 m sq. area where I want to track people from above. I only need the location of each body, and I can rig cameras 3-4 m overhead. I would need multiple cameras to cover the area; is that possible?

Hi - I’ve been building an interactive projection table in Unity for a show in Ireland, using the RS415 camera as my depth sensor.

After a lot of testing I’ve sacked it off (in favour of a Kinect2) because after a certain distance (1.8 m) the camera was creating so much noise around the outside of the interactive area that blob detection was a bit useless. But I’m not happy using the Kinect and would like to go back to a camera that’s still available first-hand, like the RealSense.

I’ve attached a picture of the setup and I’d like to know if your blob tracking would be able to pick out hands from the table surface they were resting on. Also, would your blob tracking be able to tell where the hand was, as opposed to my efforts, which couldn’t separate the hand from the elbow?

Here’s the set-up

Here’s what’s being projected on the table - note the things you’re selecting are pretty small, Bru na Boinne : Solar Cycle table by Elbow - itch.io

And here’s the depth image created in our studio. The RealSense is pointing at the screen and it’s picking up a lot of excess noise. I did improve on this in the end, but sensing close to the surface always caused issues.

OK, I look forward to your response.

Cheers,

Jerry

Whoops - here are those images, properly linked.


and here’s a screenshot of the depth data it sees at 1.8 m.

OK - hope you can help. Cheers.

Just a small issue: I noticed that your asset conflicts with the NUITrack SDK (the PointCloud class is duplicated). Using a namespace or an assembly definition would be wise.

How open is this package?
Is it possible for me to feed the blob tracking with Kinect’s depth image?

The FPS is too low. How can I optimize it?


Hello, after 2 meters there is a lot of noise. How can I connect a filter (such as a temporal filter) to it?

Can it work with Kinect v2?

Hello BlackBox Team!
How difficult do you think it would be to feed your system with the Kinect’s depth image?

Does this Asset allow the use of 2 - 4 realsense cameras simultaneously?

OK, I got this working with one RealSense and will attempt multiples now. Since I’ve run 4 of them using the SDK 2.0 wrapper in Unity before, I’m fairly confident this will work.

Can I use it on Android phones? Can I build it right away?

Did you finally get it to work with multiple cameras?

Unfortunately, no.
It is deeply tied to the RealSense SDK and does not easily shift over to the Kinect SDK.

Depending on the model of RealSense camera, there are minimum and maximum recommended distances within which it captures at the highest fidelity; outside those parameters, image quality breaks down. For most use cases (blob detection or basic gesture detection) the data is still sufficient to process, noise or not. You can modify the depth cutoff/registration range within the asset. We have had reliable success tracking subjects at 3-4 meters with RealSense cameras.
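To illustrate what a depth cutoff does, and what a basic temporal filter looks like, here is a minimal Python sketch. It is not the asset's code (the names and parameters are ours), and note that the RealSense SDK also ships its own temporal filter that can be applied to frames before they reach Unity:

```python
def smooth_depth(prev, current, alpha=0.4, near=0.3, far=4.0):
    """One step of a simple temporal (exponential moving average) filter.

    Readings outside [near, far] are treated as invalid and zeroed out,
    which approximates a depth cutoff/range setting.
    alpha: weight of the new frame; lower values smooth more aggressively,
    at the cost of more lag on fast-moving blobs.
    """
    out = []
    for p_row, c_row in zip(prev, current):
        row = []
        for p, c in zip(p_row, c_row):
            if not (near <= c <= far):
                row.append(0.0)   # outside the cutoff range: discard
            elif p == 0.0:
                row.append(c)     # no history yet: take the new value
            else:
                row.append(alpha * c + (1.0 - alpha) * p)
        out.append(row)
    return out
```

Averaging each pixel against its own history this way damps the frame-to-frame flicker that shows up past the camera's recommended range, while the cutoff simply drops readings that are too far away to be a user.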