Runtime 3D vs. Video

I have a couple of questions regarding a GUI system I’m developing.

I’ve attached a design screenshot to make things clearer.

I want to present an object as part of a GUI system that is displayed in front of a 3D space.
The object is a 3D projection with rotation and sometimes animation, and it will be displayed in between GUI layers (which have different GUI.depth values).
The object is not interactive (the user does not need to control the rotation or animation).

So my questions are:

  1. Would you display this object as a 3D runtime render or as a video projection?
    Notes:
  • The world and GUI around the object should remain visible, so if I use a video it will have to have an alpha channel.
  2. Which of the solutions is faster performance-wise (higher FPS, lower memory usage, etc.)?
  3. Which of the solutions would be quicker to implement?
  4. Can I display a 3D rendered object at runtime in between GUI elements?

Thanks!

  1. Without your own GUI system, neither, or at very best through Render Textures / multiple cameras.
    GUI.depth will not allow you to get the GUI behind the rest of the world; it only defines the ordering between multiple components using OnGUI (see the sketch below this list).

Important: the video will not have an alpha channel!
On import, the video is converted to Theora and restricted to RGB, with no alpha!

  2. A video clearly needs more data stored than a simple 2D / 3D object. Which is faster depends on what you do with it and on the target system.

  3. Depends on your background. It won’t be a three-liner regardless of the approach you take. You will need experimentation and a broad read-up on the GUI boards to get an idea of how to get it done at all.

  4. With multiple cameras and thus multiple OnGUIs.
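
To illustrate the GUI.depth point from answer 1, a quick sketch with two separate scripts (the names are just examples). It orders the two OnGUI layers relative to each other, but both boxes are still drawn on top of everything the camera renders in 3D:

```csharp
using UnityEngine;

// Drawn behind FrontHud because of the higher depth value,
// but still on top of all 3D content.
public class BackHud : MonoBehaviour {
    void OnGUI () {
        GUI.depth = 2; // higher value = further back among OnGUI scripts
        GUI.Box(new Rect(10, 10, 220, 120), "Back GUI layer");
    }
}

// Drawn on top of BackHud because of the lower depth value.
public class FrontHud : MonoBehaviour {
    void OnGUI () {
        GUI.depth = 1; // lower value = drawn on top
        GUI.Box(new Rect(40, 40, 220, 120), "Front GUI layer");
    }
}
```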

I will probably be using a shader developed in-house to display a video with transparency, but this means using two videos, one for the content and one for the alpha channel (a black-and-white video). How performance-intensive would that be?
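
For reference, the script side of what I have in mind would look roughly like this; “_AlphaTex” is only a placeholder for whatever property name the in-house shader ends up exposing:

```csharp
using UnityEngine;

// Rough sketch: feed two synced movie textures into one material.
public class DualMoviePlayer : MonoBehaviour {
    public MovieTexture colorMovie; // RGB content video
    public MovieTexture alphaMovie; // black-and-white mask video

    void Start () {
        renderer.material.SetTexture("_MainTex", colorMovie);
        renderer.material.SetTexture("_AlphaTex", alphaMovie); // assumed property name
        colorMovie.Play();
        alphaMovie.Play(); // started together, but keeping the two in sync is a risk
    }
}
```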

Generally though, which solution would you prefer to develop, and why, had it been your assignment?

I personally would go with real 3D and two cameras.

One for the UI layer behind the 3D object and one for the 3D object and the UI layer in front of it.
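
A minimal sketch of that camera split (the layer names here are just examples):

```csharp
using UnityEngine;

public class TwoCameraSetup : MonoBehaviour {
    public Camera backCamera;  // renders the UI layer behind the 3D object
    public Camera frontCamera; // renders the 3D object and the UI in front of it

    void Start () {
        backCamera.depth = 0; // rendered first
        backCamera.cullingMask = 1 << LayerMask.NameToLayer("BackUI");

        frontCamera.depth = 1;                           // rendered second, on top
        frontCamera.clearFlags = CameraClearFlags.Depth; // keep the back camera's image
        frontCamera.cullingMask =
            (1 << LayerMask.NameToLayer("ObjectLayer")) |
            (1 << LayerMask.NameToLayer("FrontUI"));
    }
}
```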

The reason is the flexibility, the consistent feel with the rest of the game, and the artistic freedom: you can use whatever you have on the model, you can use any shader you use elsewhere, and should you decide to alter something, you can do it right there and test it in real time.

With a video you are forced to alter the video itself: potentially rework it if you had specific cuts, potentially rework the sound if it had any, and lastly split out the two videos again to get the alpha and non-alpha versions.
It completely kills the fast iteration approach that works so nicely in Unity.

An additional point is that this approach works with Indie as well as Pro, should only an Indie license be at hand right now (video playback through movie textures is a Pro feature).

Performance: that would be something that needs to be tested. But it would cost at least twice the amount of the movie texture itself, plus x, where x depends on the shader and the target hardware…

(and the video would be stuck at 30 fps, whereas 3D could go well above that)

That’s not a possible option for me, as it is a given that the GUI layers are displayed on one camera (the GUI behind the object and the GUI above it have to be on the same camera).

Well, then you must use GUITextures and a video.
Otherwise you cannot get UI behind it, as UI is always on top of all 3D on the same camera!
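
If you go that route, a rough sketch: the movie plays on a GUITexture, and the transform’s z position orders it between the other GUI textures:

```csharp
using UnityEngine;

// Requires a GUITexture component on the same GameObject;
// the transform's position.z decides where it sits among the other GUI elements.
public class MovieGuiLayer : MonoBehaviour {
    public MovieTexture movie;

    void Start () {
        guiTexture.texture = movie;
        movie.loop = true;
        movie.Play();
    }
}
```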

That’s why you need two cameras (they both render into the same area, so visually it’s still one camera, just to clarify in case that confused you).