Sometimes I work for an internet TV channel where we use a virtual set. We shoot on a chroma key and add the virtual elements. Those are stills; I mean, no camera movement is possible.
I am trying to figure out whether it is theoretically possible to build a system like that in Unity.
I think what I need is a third-party application that takes the video in and splits it into an RGB signal and a grayscale matte, keyed from the green background, to use as alpha. Unity should then receive those two signals and merge them onto a square polygon as a texture. Then I would place that polygon in a virtual environment to simulate the stage.
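If that approach is workable, the Unity-side merge might look something like this rough sketch. The 640x480 size, the buffer layout, and how the frames arrive are all assumptions, and the material would need a shader that actually uses alpha (a transparent one, for instance):

```csharp
using UnityEngine;

// Hypothetical sketch: combine an RGB frame and a grayscale matte
// into one RGBA texture shown on this object's material.
public class KeyedVideoQuad : MonoBehaviour
{
    Texture2D output;

    void Start()
    {
        // Assumed frame size; must match what the capture app sends.
        output = new Texture2D(640, 480, TextureFormat.RGBA32, false);
        GetComponent<Renderer>().material.mainTexture = output;
    }

    // Called once per incoming frame; 'rgb' and 'matte' are assumed
    // to come from the external capture app, one value per pixel.
    public void ApplyFrame(Color32[] rgb, byte[] matte)
    {
        for (int i = 0; i < rgb.Length; i++)
            rgb[i].a = matte[i];   // grayscale matte becomes alpha
        output.SetPixels32(rgb);
        output.Apply();            // upload the result to the GPU
    }
}
```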
The advantage for me would be to have a real 3D set, with pan and zoom movement for the cameras, and the ability to add virtual characters, for example.
It’s the weekend, things are usually slow, compounded by the fact that the eastern United States is under 4 ft of snow.
You can get more real-time support if you log onto the IRC channel.
You can stream movies to a texture over the web, but Unity requires that they be in Ogg format. There doesn’t appear to be a way to grab and manipulate the pixel data, though, at least not from looking at the documentation.
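For reference, a minimal sketch of that streaming path, assuming the legacy MovieTexture API (a Unity Pro feature) and an Ogg Theora file at a placeholder URL. Note it only plays the movie onto a texture; it exposes no pixel data for keying:

```csharp
using UnityEngine;
using System.Collections;

// Sketch: stream an Ogg Theora movie onto this object's material.
// The URL is a placeholder; this plays the movie but gives no
// per-pixel access, which is why it's a dead end for chroma keying.
public class StreamedMovie : MonoBehaviour
{
    IEnumerator Start()
    {
        WWW www = new WWW("http://example.com/feed.ogv");
        MovieTexture movie = www.movie;
        while (!movie.isReadyToPlay)
            yield return null;                 // wait until playable
        GetComponent<Renderer>().material.mainTexture = movie;
        movie.Play();
    }
}
```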
What you’re looking for is possible, but it would require Unity Pro and a C++ plugin. I seem to remember someone, around a year ago, writing a plugin that would interface with Apple’s iSight camera. It would certainly be easier to accomplish on a Mac than on Windows, with its thousand different cameras and drivers to choose from.
There are, though, some hacks you can think about.
You can use the texture plugin on the wiki to interface with a web app that pushes your movie one frame at a time.
Alternatively, you can use TCP/IP to have Unity talk to a separate app, running on the same machine, that feeds it the data you want.
If I had to make a choice in terms of speed, ease of use, and flexibility, I’d go with option #3, the TCP/IP approach. It wouldn’t even require Unity Pro.
That really depends on what your hardware configuration is going to be. Mac or Windows? Built-in webcam or third-party?
For example, winblows has a nifty socket interface via the WinSock OCX, which makes things really simple. However, writing the webcam portion might be a little more involved, and would be easier on a Mac.
As for the Unity side of the equation, there are a good number of simple socket examples if you look around on the forums and the wiki.
A quick glance at the wiki shows a few new examples that were not there the last time I looked. One looks like it might be a Unity equivalent of WinSock, or at least the beginnings of one.
It might look daunting at first, but it’s really not. You’re simply opening a connection and sending or receiving a string. A socket is a socket, and the application protocol will be put together by you, so Unity and your app will be speaking the same language.
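As a rough sketch of the Unity side, assuming the capture app listens on localhost port 9000 and sends newline-terminated strings (both arbitrary choices that you'd define in your own protocol):

```csharp
using UnityEngine;
using System.IO;
using System.Net.Sockets;

// Sketch of the Unity side of the socket: connect to a capture app
// on the same machine and read newline-terminated messages. The port
// and line-based framing are assumptions, not a fixed convention.
public class FrameReceiver : MonoBehaviour
{
    TcpClient client;
    StreamReader reader;

    void Start()
    {
        client = new TcpClient("127.0.0.1", 9000); // capture app, same machine
        reader = new StreamReader(client.GetStream());
    }

    void Update()
    {
        // Poll once per frame; a real setup would read on a
        // background thread so a partial line can't stall rendering.
        if (client.Available > 0)
        {
            string message = reader.ReadLine();
            Debug.Log("Got frame data: " + message.Length + " chars");
        }
    }

    void OnDestroy()
    {
        reader.Close();
        client.Close();
    }
}
```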
OK, so you mean the first step is to find a way to send the video signal over TCP/IP, and then it will be easy to grab it from Unity and use it as a texture?
Well, you can’t exactly “send a video signal” over TCP/IP. Here’s a basic rundown of the steps you need to take (sketched in code after the list):
1. Use your card’s SDK to capture one frame of video.
2. Develop some sort of application protocol to encode the data as a string.
3. Send that string to Unity via TCP.
4. Go to step 1.
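Here’s one way that loop could look on the capture side. CaptureFrame() is a stand-in for whatever your card’s SDK actually provides, and base64 with one frame per line is just one possible encoding:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Sketch of the capture-side loop (steps 1-4). CaptureFrame() is a
// placeholder for your capture card's SDK; the protocol here
// (base64-encoded bytes, one frame per line) is only one option.
class FrameSender
{
    static byte[] CaptureFrame()
    {
        // Placeholder: return one frame of RGBA pixel data from the SDK.
        return new byte[640 * 480 * 4];
    }

    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 9000);
        listener.Start();
        using (TcpClient unity = listener.AcceptTcpClient())     // wait for Unity
        using (StreamWriter writer = new StreamWriter(unity.GetStream()))
        {
            while (true)
            {
                byte[] frame = CaptureFrame();                   // step 1
                string encoded = Convert.ToBase64String(frame);  // step 2
                writer.WriteLine(encoded);                       // step 3
                writer.Flush();
            }                                                    // step 4: loop
        }
    }
}
```

Base64 inflates the data by about a third, so a raw binary protocol would be faster; a string protocol is just the simplest thing to get working first.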
Where you manipulate the color data is up to you. If you want to encode blue or green as alpha, and that will never change, it might be more efficient to do it in the application you write rather than in Unity.
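For instance, a crude green key could be computed in the capture app before the frame is ever sent; the 1.5 dominance factor here is an arbitrary starting threshold you’d tune for your lighting:

```csharp
// Crude chroma key, done in the capture app: pixels where green
// clearly dominates red and blue become fully transparent. The 1.5
// factor is an arbitrary threshold to tune against your green screen.
static void KeyGreenToAlpha(byte[] rgba)
{
    for (int i = 0; i < rgba.Length; i += 4)
    {
        byte r = rgba[i], g = rgba[i + 1], b = rgba[i + 2];
        bool isBackground = g > r * 1.5f && g > b * 1.5f;
        rgba[i + 3] = isBackground ? (byte)0 : (byte)255;
    }
}
```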