Background:
We are collecting simulation data in Unity using the HDRP scriptable render pipeline. From multiple cameras, we would like to serialize the color, depth, and motion-vector textures to the file system.
Here is a sample (good) output from a single camera (Imgur link):
Approach:
We have a camera script (attached) that allocates three RenderTextures (for color, depth, and motion vectors) and uses the RenderPipelineManager.endCameraRendering callback to blit the camera’s active texture into them, then calls ReadPixels to copy each render texture into a Texture2D for serialization to a file. The camera is configured with SetTargetBuffers and has the Depth and MotionVectors flags set in its depthTextureMode. During the blit, a material/shader pair for each of the depth and motion-vector passes copies _CameraDepthTexture and _CameraMotionVectorsTexture into the target render texture. The associated DepthShader and MotionShader are also provided.
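For readers without the attachment, here is a minimal sketch of the flow described above. This is not the attached script; resolution, texture formats, output paths, and the depthMaterial / motionMaterial fields (which would hold materials built from the attached DepthShader / MotionShader) are all placeholders:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of the capture pipeline: render into explicit buffers,
// then copy depth / motion vectors out in endCameraRendering.
public class CaptureSketch : MonoBehaviour
{
    public Material depthMaterial;   // shader samples _CameraDepthTexture
    public Material motionMaterial;  // shader samples _CameraMotionVectorsTexture

    Camera cam;
    RenderTexture colorRT, depthRT, motionRT;
    Texture2D readback;

    void OnEnable()
    {
        cam = GetComponent<Camera>();
        colorRT  = new RenderTexture(1920, 1080, 24, RenderTextureFormat.ARGB32);
        depthRT  = new RenderTexture(1920, 1080, 0,  RenderTextureFormat.RFloat);
        motionRT = new RenderTexture(1920, 1080, 0,  RenderTextureFormat.RGFloat);
        readback = new Texture2D(1920, 1080, TextureFormat.RGBAFloat, false);

        // Explicit color + depth buffers instead of a targetTexture,
        // plus a request for depth and motion-vector textures.
        cam.SetTargetBuffers(colorRT.colorBuffer, colorRT.depthBuffer);
        cam.depthTextureMode = DepthTextureMode.Depth | DepthTextureMode.MotionVectors;

        RenderPipelineManager.endCameraRendering += OnEndCameraRendering;
    }

    void OnDisable() => RenderPipelineManager.endCameraRendering -= OnEndCameraRendering;

    void OnEndCameraRendering(ScriptableRenderContext ctx, Camera renderedCam)
    {
        if (renderedCam != cam) return;

        // Blit through the depth / motion materials so their passes copy
        // _CameraDepthTexture / _CameraMotionVectorsTexture into our RTs.
        Graphics.Blit(null, depthRT, depthMaterial);
        Graphics.Blit(null, motionRT, motionMaterial);

        Save(depthRT, "depth.exr");
        Save(motionRT, "motion.exr");
    }

    void Save(RenderTexture rt, string path)
    {
        var prev = RenderTexture.active;
        RenderTexture.active = rt;
        readback.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        readback.Apply();
        RenderTexture.active = prev;
        File.WriteAllBytes(path, readback.EncodeToEXR());
    }
}
```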
Problems Found:
- The script / shaders / materials (essentially the whole pipeline) currently work only on macOS (Metal). On Linux (Vulkan) and Windows (DX11 or experimental DX12), the motion-vector and depth outputs are all black.
- Planar Reflection Probes in the scene corrupt the depth / motion-vector output (example attached); the only fix we have found is to disable the planar reflection probes. The first frame of RGB (both what is shown in the Game window and what is saved) is corrupted the same way.
Initial frames corrupted (Imgur link):
Subsequent depth / motion-vector frames are still corrupted, but RGB is now good (Imgur link):
If we disable the planar reflection probes, all frames are good, including the initial ones. We do have other (non-planar) reflection probes in the scene, and everything works fine with them.
- If we set the camera’s targetTexture instead of calling SetTargetBuffers, we get no depth or motion vectors (and yes, in this case we ensure the targetTexture has a depth buffer).
- The cameras do not render the correct view: all views (front, back, left, right) are rendered, but they are not associated with the correct image data; e.g. the front image shows the back view, the right image shows the front view, etc. In the Scene view, each camera is oriented properly, and its preview shows the correct rendering.
imgur post with some images: Unity HDRP Multi-Camera rendering bugs - Album on Imgur
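For reference, a minimal sketch of the two camera-output configurations compared in the third problem above; resolution, format, and the component setup are placeholders:

```csharp
using UnityEngine;

// Contrast of the two configurations: explicit buffers vs. targetTexture.
public class TargetSetupSketch : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        var rt = new RenderTexture(1920, 1080, 24, RenderTextureFormat.ARGB32);

        // Variant A (works for us, on macOS only): explicit color + depth buffers.
        cam.SetTargetBuffers(rt.colorBuffer, rt.depthBuffer);

        // Variant B (depth / motion vectors come back black for us, even
        // though the RenderTexture was created with a depth buffer):
        // cam.targetTexture = rt;

        cam.depthTextureMode = DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
    }
}
```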
5576731–575767–UnityCamera.cs (13.5 KB)
5576731–575773–DepthShader.shader (1.38 KB)
5576731–575776–MotionShader.shader (1.55 KB)