Well, because of a project I might be doing in the future, I needed some experience with webcam tracking. The “problem” is that I don’t have Unity Pro, but I managed to find a way to communicate with Unity.
The detection is done by a custom script made with Processing, but as Processing's webcam support and the web don’t seem to play nicely together, I used Flash as an interface between the two (debugging was quite hard…)
I’ve spent 2 days on the Processing and Flash/HTML part, and hmmm… 1 hour on the Unity part (proof once again that Unity is simple to use).
It’s just a simple Unity physics example of what you can do with motion detection…
There’s always a solution to the problem at hand right?
Was it a requirement on your side to run Unity in a browser? If not, maybe you could communicate with Processing through a network connection and skip the HTML/Flash part?
There’s the Network Cursor example on the wiki, and Processing also has some network examples available. I'm not sure if you need to set up a separate server or if it’s possible to connect directly, so maybe skipping one step just adds another. Might be worth checking out.
Haha no, that’s in my office (yeah, I look younger than 28… I know. hehe)
Well… I didn’t know about the Network Cursor example… I really have to take a look at that. But I did use a network port for the Processing→Flash communication…
We’ve been doing some work with webcam object control and we used the Processing-to-Flash technique as well… are you using Flosc (Flash OpenSound Control)?
We took the webcam blob-detection example and fed its data output through Flosc straight into Flash. We had some garbage-collection issues with data packets backing up, which required some cleanup on both the Flash and Processing ends…
I was wondering if this was possible with Unity3D. If Processing can get to Flash, it should be possible to go directly into Unity3D without the secondary step… I guess you’re using Flash to send the data-point info to the HTML page for Unity to read/consume?
It’s great to see the working prototype, looks very promising.
I didn’t use Flosc, but a socket server with a Flash socket client. And indeed, JavaScript is used as the bridge between Flash and Unity. I never thought the communication speed would be so fast.
The centers (x/y positions) of the matching blobs are sent from Processing to Flash. I was thinking of controlling/manipulating sounds etc. with the webcam, but that just isn’t possible with the current sound engine… But maybe in the near future?
(Or if I ever get an endorser… with Unity Pro and an external audio SDK this should work, hehe)
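For anyone wondering what actually goes over the wire: not much. Here's a minimal stand-in sketch of the server side in Python (my real sketch is Processing, and the "x,y" newline framing here is just an illustrative assumption, not my exact protocol):

```python
import socket
import threading

def serve_blobs(blobs, host="127.0.0.1"):
    """Open a socket server and stream each blob centre as an "x,y" line,
    the way the tracking sketch streams its data to the Flash client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()   # wait for the (Flash) client to connect
        with conn:
            for x, y in blobs:
                conn.sendall(f"{x},{y}\n".encode("ascii"))
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

# A client (standing in for the Flash side) reads the stream back:
port = serve_blobs([(160, 120), (200, 80)])
with socket.create_connection(("127.0.0.1", port)) as cli:
    data = cli.makefile().read()
```

One plain text line per blob centre is really all the Flash client needs to drive things on the Unity side.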
Hi - hope you’re still getting notifications from this old thread!
Would you be able to give a little more information about how you achieved this? It looks like you’re running the Unity build as a webplayer. Could the Flash file also be on the HTML page as a SWF? Would it have to be visible on the page, or could it be in a hidden element?
Any information you can give on this would be great.
Well, the Flash is indeed on that page, and no… it does not have to be visible.
What the Flash does is in fact open a socket to Processing and communicate with Unity.
Processing opens a socket server and sends all the tracking data to the Flash socket client on the web page.
But you could do all the tracking in Flash, so there would be no need for the socket server.
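On the receiving end, all the Flash client really does is parse those messages and hand the values on (via JavaScript) to Unity. Here's a rough Python stand-in for that parsing loop, assuming a newline-delimited "x,y" message format (an illustrative assumption, not my exact protocol; the callback is made up too):

```python
import socket

def read_blobs(sock, on_blob):
    """Parse newline-delimited "x,y" messages from the tracker socket and
    hand each centre point to a callback -- in the real setup the Flash
    client does this and forwards the values to Unity through JavaScript."""
    buf = ""
    while True:
        chunk = sock.recv(1024)
        if not chunk:            # tracker closed the connection
            break
        buf += chunk.decode("ascii")
        while "\n" in buf:       # a recv() may contain several messages
            line, buf = buf.split("\n", 1)
            x, y = line.split(",")
            on_blob(int(x), int(y))

# Feed it a fake tracker stream through a local socket pair:
a, b = socket.socketpair()
a.sendall(b"12,34\n56,78\n")
a.close()
points = []
read_blobs(b, lambda x, y: points.append((x, y)))
b.close()
```

The buffering matters: a single recv() can contain several messages (or half of one), so you split on the delimiter rather than assuming one message per packet.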
Great job, Kaaj!
I have a few questions: is there any API or plug-in that allows Unity to read or receive camera-tracking data directly? If not, is there any other software that can receive the data instead of Processing? And if you could share a tutorial about this demo, that would be great!