using video for skybox - efficient method?

Hi, I am trying to create an animated skybox using Blender to be used in Unity. What I currently have going is this method:

1. Create a 3D sky scene in Blender with animated clouds and sun
2. Render the animation from 6 different angles to represent the 6 directions of a cubemap
3. Create a Video Player with a video for each of the six views
4. Set each Video Player's target to its own render texture
5. Create a skybox material with each side's texture as one of the 6 render textures
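For reference, the per-face render step (step 2) can be driven from the command line roughly like this. It's a dry-run sketch: the camera names and file paths are placeholders for however the .blend file is actually set up, and the leading "echo" just prints each command instead of running Blender.

```shell
# Dry-run sketch: one background render per cubemap face, switching the
# active camera each time. "sky.blend" and the "cam_*" object names are
# placeholders. Remove the leading "echo" to actually invoke Blender.
for cam in left front right back top bottom; do
  echo blender -b sky.blend \
    --python-expr "import bpy; bpy.context.scene.camera = bpy.data.objects['cam_$cam']" \
    -o "//${cam}_" -a
done
```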

This method works - I have a skybox that is animated and looks correct in my scene - however, it slows down my game a lot. I'm thinking there must be a more efficient way to do this. How can I create a single video that captures my Blender scene in one cubemap layout (like the T shape of an unwrapped cube), rather than making an individual video for each face? I know Unity can take a texture with the T-shape layout and turn it into a cubemap, but how do I get a video in this format?

Thanks for any insight.

Hi!

Indeed, Unity supports various projection types so that you can do what you want with a single video. You can also explore rendering with other projections (e.g. equirectangular) if you want to maximize pixel usage, since the currently supported cubemap layout does not use all available pixels in the image.
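In case it helps, reasonably recent ffmpeg builds can also convert between projections with the v360 filter, so a cubemap-layout video can be turned into an equirectangular one in a single pass. A sketch (the input name "cubemap_3x2.mp4" is hypothetical, and a 3x2 layout with square faces is assumed):

```shell
# Convert a 3x2 cubemap-layout video into an equirectangular one.
# "cubemap_3x2.mp4" is a hypothetical input; v360 needs a recent ffmpeg.
ffmpeg -i cubemap_3x2.mp4 \
  -vf "v360=input=c3x2:output=equirect" \
  -c:v libx264 -y equirect.mp4
```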

Here is the manual section that introduces the feature: Unity - Manual: Panoramic video

Hope this helps, let us know if you have more questions!

Dominique Leroux
A/V developer at Unity

Hi again!

Just realized you were asking how to generate a cubemap video (and not whether Unity supports them, which of course I’m always quick to point out…). There are probably higher-level tools that can generate them, or maybe you can tweak your renderer so it produces frames already in this layout.

But assuming you’re working from an already generated set of 6 movies, I adapted this recipe that uses ffmpeg to generate a 2x2 mosaic of movies:

https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos

and came up with this script that generates a cubemap movie:

```
ffmpeg \
-f lavfi -i color=color=red:duration=2:size=320x320 \
-f lavfi -i color=color=green:duration=2:size=320x320 \
-f lavfi -i color=color=blue:duration=2:size=320x320 \
-f lavfi -i color=color=yellow:duration=2:size=320x320 \
-f lavfi -i color=color=white:duration=2:size=320x320 \
-f lavfi -i color=color=purple:duration=2:size=320x320 \
-filter_complex "
color=color=black:size=1280x960 [base];
[0:v] setpts=PTS-STARTPTS [left];
[1:v] setpts=PTS-STARTPTS [center];
[2:v] setpts=PTS-STARTPTS [right];
[3:v] setpts=PTS-STARTPTS [back];
[4:v] setpts=PTS-STARTPTS [top];
[5:v] setpts=PTS-STARTPTS [bottom];
[base][left] overlay=shortest=1:y=320 [tmp1];
[tmp1][center] overlay=shortest=1:x=320:y=320 [tmp2];
[tmp2][right] overlay=shortest=1:x=640:y=320 [tmp3];
[tmp3][back] overlay=shortest=1:x=960:y=320 [tmp4];
[tmp4][top] overlay=shortest=1:x=320 [tmp5];
[tmp5][bottom] overlay=shortest=1:x=320:y=640
" \
-y -c:v libx264 cubemap.mp4
```

You can save this to a "generate_cubemap.sh" or "generate_cubemap.bat" script file (depending on your platform) and run it. The example is synthetic, so it uses a solid-color source for each face. To use your own sources, replace each "-f lavfi -i color=..." line with "-i path/to/your/left_face.mp4" and so on. The expected face order appears in the "filter_complex" section (left, center, right, back, top, bottom).

I've also assumed 320x320 faces; feel free to change this to match your sources (and adjust the x and y offsets in the overlay coordinates in the last 6 lines of the filter_complex accordingly). This is what the resulting movie looks like:
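To make adjusting those offsets less error-prone, note that every coordinate in the filtergraph is a multiple of the face size. A tiny shell-arithmetic sketch (using the 320-pixel faces from the example):

```shell
# FACE is the edge length of one square cubemap face, in pixels.
FACE=320
# The 4x3 cubemap canvas is 4 faces wide and 3 faces tall.
echo "canvas: $((4 * FACE))x$((3 * FACE))"
# The middle row (left/center/right/back) starts one face down,
# and its columns step one face at a time.
echo "row y: $FACE  xs: 0 $FACE $((2 * FACE)) $((3 * FACE))"
# The bottom face sits two faces down.
echo "bottom y: $((2 * FACE))"
```

With FACE=320 this reproduces the 1280x960 canvas and the 320/640/960 offsets used above; change FACE once and re-derive the rest.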

![5911778--631226--Screen Shot 2020-05-28 at 2.24.35 PM.png|1278x939](upload://jO0wBx4TYcZTvdoJTfDzYl9pzMp.png)

Hope this helps a bit more than my previous answer!

Dominique

Hi, revisiting this issue again as I never found a suitable solution for my situation. I have a 4096x2048 video: an equirectangular view of the sky created in Blender. I have my own shader made with Amplify Shader Editor. If I take a single frame from this video, use it as a texture (again, 4096x2048), set that texture’s “Texture Shape” property to “Cube”, and then apply the texture to my material, it works perfectly. I want to reiterate that my shader takes textures in the “Cube” format only.

I tried to use the video component to send this 4096x2048 video to a Render Texture. However, when the render texture “dimension” property is set to cube, it does not look the same way as the previously mentioned texture looked (when set to “cube”). Also, the RT only allows a 1:1 ratio for size when set to cube.

How can I get this equirectangular video formatted as a render texture the same way textures are formatted when the “Texture Shape” property is set to “Cube”?