I’m new to Unity. For the last year I used C++ (server side) and ActionScript 3 in AIR to create applications.
Now I’m joining the 3D world with Unity. And I won’t simply create ordinary applications: I will use my NVIDIA 3D Vision system (with a 120Hz monitor and shutter glasses).
Well, here is my question:
I have some 3D photos (taken with my FinePix W3 3D camera). Each photo has a left-eye and a right-eye image; otherwise it wouldn’t be 3D, of course.
How can I use a 3D photo as the texture for a wall?
The problem is that the texture has to display the left-eye image while the monitor displays the left frame, and the right-eye image while the monitor displays the right frame.
How does the monitor do this? By flashing each image alternately? If that is the case you could write some code to alternate the texture at whatever speed. You could either alternate between loading two different texture maps, or put both images on one texture map and just alternate the position of the texture in relation to the UVs, so as to avoid reloading two separate images.
Unless I’ve missed the mark totally and it’s one of those red and blue 3D glasses dealeys, in which case you can just use Maya to render out a single 3D image compatible with those glasses.
For that sort of 3D you could look into this sort of tech http://www.youtube.com/watch?v=Jd3-eiid-Uw; with Kinect and newer technologies coming out there’s a lot of cool stuff yet to be done.
Well, just shifting the same image along the UVs wouldn’t create a 3D effect, because it wouldn’t change the angle from which you see the scene depicted in the image, which is what creates the 3D effect in the first place. Two separate images is the only way to achieve a 3D effect.
Having said that, I have no idea how to solve the problem myself. My best guess would be to try to somehow get the two images to swap out with one another at the same rate as the refresh rate of the monitor. I have no idea how to actually do that, but it’s a start.
Well, what I meant is: if one were to combine the two images in Photoshop, say image1 on the left and image2 on the right, you could then slide the UVs back and forth, showing image1 and image2 alternately. That simulates swapping the images in and out without actually loading and unloading separate image files (although the texture would be bigger), provided you can even match the refresh rate. It wouldn’t take long at all to try, anyway.
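The side-by-side texture idea above boils down to simple offset arithmetic: each half of a double-width image is selected by a horizontal UV offset of 0.0 or 0.5, alternating with the frame parity. A minimal sketch in plain Python (the `offset_for_frame` helper is my own name, not any Unity API):

```python
def offset_for_frame(frame_index, swap=False):
    """Horizontal UV offset selecting one half of a double-width
    side-by-side stereo texture: 0.0 shows the left half, 0.5 the
    right half, alternating every frame."""
    show_right_half = (frame_index % 2 == 0) != swap
    return 0.5 if show_right_half else 0.0

# Alternates every frame; 'swap' inverts the pairing in case the
# shutter sync happens to start on the wrong eye.
offsets = [offset_for_frame(f) for f in range(4)]
# offsets == [0.5, 0.0, 0.5, 0.0]
```

In Unity terms, the returned value would feed a texture offset on the material each frame, with the x tiling set to 0.5.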
@Fourthings: No, I don’t use red/blue glasses. I have active shutter glasses with a 120Hz monitor, so the monitor shows the left image 60 times and the right image 60 times per second.
More or less, I only need to know which image the monitor is currently displaying, so I can change the texture on the wall. The problem is: how?
My 3D photo is a two-image photo, of course. So I have one photo for the right eye and another for the left eye.
I also found something about quad buffering, but I have no real idea how to use it.
I’ve been thinking about the same thing here. The closest I got to 3D in Unity is, theoretically, to switch between two cameras very fast. In your case, you could attach image1 to object1 and image2 to object2, and give them different layers. Then use camera switching, each camera showing one layer. It is possible it would work.
Sounds nice. Does Unity switch the cameras for me, or do I have to write a script that receives information from the graphics card telling it which camera to show?
var Camera1 : Camera;
var Camera2 : Camera;
var cameraSwitch : boolean;

function Update () {
    if (cameraSwitch) {
        Camera2.enabled = true;
        Camera1.enabled = false;
        cameraSwitch = false;
    }
    else {
        Camera1.enabled = true;
        Camera2.enabled = false;
        cameraSwitch = true;
    }
}
If you attach this code to an empty game object and assign the cameras to this script, it should switch cameras every frame, turning one camera on and the other off each frame. Now, all you can do to improve this is to limit the framerate to 120 fps, or use Time.time to switch cameras per time segment; but that last option could result in weird rendering if the framerate drops low or goes too high, making the switch skip some frames. The whole thing must be perfectly in sync with the shutter glasses. I’m no expert in optics, but you will probably see how it works best when implementing. And you probably won’t need any 3D textures if you’re texturing objects. The illusion should work well on its own if you set up the cameras right. Like looking at real objects in the real world: each camera is one eye.
I finished testing your idea. The result: the camera switch is visible to both eyes, so I see 30 fps flickering in the left and in the right eye. The application runs at a maximum of 60 fps in 3D mode and 120 fps in normal mode (shutter glasses inactive).
So I fear I have to put one camera into the buffer for the left eye and the other camera into the buffer for the right eye.
The question is now: how do I access the graphics card’s buffers?
@celinscak: Thanks. I found a feature vote for quad buffering and gave it my maximum of 3 votes, so hopefully active stereoscopic handling will make it in at some point.
If I understand correctly, quad buffering is a doubled “double buffer” construction. Normally a “classic” output has two buffers (a front and a back buffer) and the camera uses them (or better said: the camera’s output is written into the back buffer).
In active 3D mode you need two double buffers, one for each camera. In Unity that currently looks impossible.
What I can do for the moment is use passive 3D. That means the graphics card constructs the second camera itself. In this mode it is impossible to put active 3D textures on the wall, so all textures are 2D in passive 3D mode.
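My understanding of the quad-buffer bookkeeping, sketched as a plain-Python model (this is a conceptual illustration, not any real graphics API):

```python
class QuadBuffer:
    """Conceptual model of quad-buffered stereo: a front and a back
    buffer per eye, swapped as a pair. The driver then presents
    front-left and front-right alternately at 120Hz."""
    def __init__(self):
        self.front = {"left": None, "right": None}  # being displayed
        self.back = {"left": None, "right": None}   # being rendered into

    def draw(self, eye, image):
        # Render into one eye's back buffer (in OpenGL terms this
        # would be selecting GL_BACK_LEFT or GL_BACK_RIGHT).
        self.back[eye] = image

    def swap(self):
        # Both eyes' buffers flip together, so left and right frames
        # always belong to the same rendered moment.
        self.front, self.back = self.back, self.front

qb = QuadBuffer()
qb.draw("left", "L0")
qb.draw("right", "R0")
qb.swap()
# qb.front == {"left": "L0", "right": "R0"}
```

The point of the pairwise swap is exactly what the frame-switching experiment lacks: the driver, not the game loop, guarantees which eye each frame belongs to.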
I’ll create two videos in the next days to demonstrate the different effects.
PS: In passive 3D mode the post-rendering effects are often incorrect, in Unity too. Shadows, ambient occlusion and dynamic lights are wrong in passive 3D, because the program doesn’t “know” that the graphics card has switched into a 3D mode.
3D mode == always 2 cameras
2D mode == normal display of a 3D world with one camera.
I made a scene with 2 cameras (left + right eye). I put each at a distance of 0.15 from the center point with an angle of 2° to create a parallax.
Then I wrote a script that turns each camera off and on every frame, so I see the left-eye and right-eye images “flickering” alternately.
Activation:
set the monitor to 120Hz
open any program that activates the USB emitter in windowed mode (I used Stereoscopic Player)
run my project in OpenGL(!) (option: -force-opengl) and in windowed mode
Now I see perfect 3D.
Problems:
I can’t detect which eye is left and which is right. If it’s wrong, I have to move my game window and hope that the camera switching restarts in the correct order.
The game has to run at 120 fps. One fps less and we lose the synchronisation.
This is all extremely experimental, but it demonstrates that Unity already has the technology inside. Now we only need a function to activate the 3D Vision USB emitter and to ask which eye is displayed on the next frame (for synchronisation).
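As a sanity check on the rig described above (each camera 0.15 units from the center, toed in by 2°), the distance at which the two camera axes converge follows from simple trigonometry. A quick plain-Python check, assuming 0.15 is the per-camera offset from the center:

```python
import math

offset = 0.15     # each camera's distance from the rig center
toe_in_deg = 2.0  # inward rotation of each camera

# The two camera axes cross where tan(angle) = offset / distance.
convergence = offset / math.tan(math.radians(toe_in_deg))
print(round(convergence, 2))  # prints 4.3
```

So objects roughly 4.3 units in front of the rig would sit at the screen plane; nearer objects pop out, farther ones recede.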
I used:
NVIDIA GeForce GTX 295
NVIDIA 3D Vision
Technical joke: NVIDIA doesn’t support 3D Vision with OpenGL on GeForce cards. But my experiment works only in OpenGL.
Edit: I have now tested it with photos too. And… it works!
using UnityEngine;

public class CameraOnOff : MonoBehaviour {
    public Camera cam;
    public bool isLeft;
    public bool swap;

    // Update is called once per frame
    void Update () {
        // "u" inverts the eye/frame pairing, in case the shutter
        // sync happens to start on the wrong eye.
        if (Input.GetKeyDown("u"))
            swap = !swap;

        // Show this camera only on every second frame: one eye on
        // even frames, the other on odd frames.
        bool evenFrame = Time.frameCount % 2 == 0;
        cam.enabled = (isLeft == swap) ? !evenFrame : evenFrame;
    }
}
Texture switcher:
using UnityEngine;

public class FlipTexByCam : MonoBehaviour {
    public Camera right;
    public bool passive;

    // Update is called once per frame
    void Update () {
        // "p" toggles passive mode (always show the same half).
        if (Input.GetKeyDown("p"))
            passive = !passive;

        // The material uses a double-width side-by-side texture with
        // x tiling 0.5; the horizontal offset selects which half shows.
        Vector2 offset = (!right.enabled || passive)
            ? new Vector2(0f, 0f)
            : new Vector2(0.5f, 0f);
        renderer.material.SetTextureOffset("_MainTex", offset);
    }
}
The texture is a double image: the left 50% of the width is the right-eye image, the rest is the left-eye image. So the x tiling is 0.5.
Don’t change the camera angle (toe-in setup); that results in wrong 3D for the brain due to vertical parallaxes.
Better to use the off-axis setup as shown in this PDF: http://noeol.de/s3d/stereoscopic_rendering.pdf (it’s in German; try Babelfish to translate). You can do it with the camera’s projection matrix.
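The off-axis idea mentioned above can be sketched in plain Python: keep both cameras parallel, shift each one sideways by half the eye separation, and shift the frustum’s left/right planes in the opposite direction so both frusta converge on the same screen rectangle. The function and variable names here are my own, not taken from the PDF:

```python
def off_axis_frustum(eye_offset, screen_dist, screen_width, screen_height, near):
    """Left/right/bottom/top of the near-plane frustum for one eye of
    a parallel (off-axis) stereo pair. eye_offset is this eye's signed
    horizontal shift from the center, e.g. -0.075 / +0.075."""
    # Scale the screen rectangle from the screen plane back to the near plane.
    scale = near / screen_dist
    left = (-screen_width / 2 - eye_offset) * scale
    right = (screen_width / 2 - eye_offset) * scale
    bottom = -screen_height / 2 * scale
    top = screen_height / 2 * scale
    return left, right, bottom, top

# With zero eye offset this degenerates to the usual symmetric frustum:
l, r, b, t = off_axis_frustum(0.0, 10.0, 4.0, 3.0, 1.0)
# l == -0.2, r == 0.2 (symmetric)
```

In Unity these four planes would go into a custom Camera.projectionMatrix (the same role glFrustum plays in OpenGL), instead of rotating the cameras; only the horizontal planes differ between the eyes, so no vertical parallax is introduced.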
I see you are some steps ahead of me. How do I actually implement the camera axis and the image axis (sorry, I can’t remember the English names correctly ;))? I assumed both were the same, which is how I came to the idea of rotating the cameras, the way our eyes do when we look at a near object.
Ohh… I overlooked it. I’ll take a look inside. It’s a very nice tech demo. Can I use it to continue my research?
PS: We could also meet on TS3 sometime, or open a dialogue by mail to exchange experiences. If interested: unity@itbock.de