I’m facing a challenge in our Unity project (initially on 2020.3, now on 2021.3 LTS) involving the Pico 4 headset and WebRTC integration, using the Built-in Render Pipeline. Our aim is to stream real-time coaching sessions from within our app, but we’re struggling to correctly capture the camera view on Android with OpenGLES3. The WebRTC part is fully functional; it’s RenderTextures that seem unreliable and buggy on the headset.
Here’s a summary of our attempts and issues:
ScreenCapture.CaptureScreenshotIntoRenderTexture: Works in the editor, but results in a black texture on Android. I’ve tried many combinations of RenderTexture settings without success.
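For reference, this is roughly how we call it; a minimal sketch assuming a coroutine that captures after WaitForEndOfFrame (the class and field names here are just for illustration):

using System.Collections;
using UnityEngine;

public class ScreenshotCapture : MonoBehaviour
{
    private RenderTexture captureRT;

    private IEnumerator Start()
    {
        // The docs want a texture matching the screen dimensions.
        captureRT = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
        var frameEnd = new WaitForEndOfFrame();
        while (true)
        {
            // Capture once the frame is fully rendered; fine in the editor,
            // comes back black on the Pico 4.
            yield return frameEnd;
            ScreenCapture.CaptureScreenshotIntoRenderTexture(captureRT);
        }
    }

    private void OnDestroy()
    {
        if (captureRT != null)
            captureRT.Release();
    }
}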
OnRenderImage with a new RenderTexture: Works on Android, but causes a ~55% performance drop and had a culling bug (fixed in 2021.3 LTS). Strangely, even creating a RenderTexture once without ever using it cuts performance in half. Setting volumeDepth and vrUsage was what made the RenderTexture at least visible on Android (see the sketch below).
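A minimal sketch of that variant (attached to the camera); the volumeDepth/vrUsage values are the ones that made it visible for us, everything else is an assumption, and I’m not certain Graphics.Blit handles the per-eye array copy correctly under Multiview:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class OnRenderImageCapture : MonoBehaviour
{
    private RenderTexture captureRT;

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (captureRT == null)
        {
            captureRT = new RenderTexture(source.width, source.height, 0, source.format)
            {
                // These two settings made the capture visible on device at all:
                dimension = TextureDimension.Tex2DArray,
                volumeDepth = 2,                  // one slice per eye under Multiview
                vrUsage = VRTextureUsage.TwoEyes,
            };
            captureRT.Create();
        }
        Graphics.Blit(source, captureRT);     // copy for the WebRTC track
        Graphics.Blit(source, destination);   // keep the normal render chain intact
    }

    private void OnDestroy()
    {
        if (captureRT != null)
            captureRT.Release();
    }
}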
CommandBuffer Approach: Works in the editor, but produces a gray texture on Android. (My unconfirmed suspicion is that BuiltinRenderTextureType.CurrentActive no longer points at a readable target by CameraEvent.AfterEverything on the tile-based mobile GPU.)
// Blit whatever render target is currently active once the camera has finished:
commandBuffer = new CommandBuffer { name = "Capture Camera" };
commandBuffer.Blit(BuiltinRenderTextureType.CurrentActive, renderTexture);
cam.AddCommandBuffer(CameraEvent.AfterEverything, commandBuffer);
Our goal is to capture the camera view into a Texture2D or RenderTexture on the Pico 4 without compromising performance. We don’t need a high resolution, and we’re using Multiview for efficiency. Any insights or solutions for sharing the camera view with minimal performance impact would be incredibly valuable.
I’ve been tackling this for a week and would greatly appreciate your expertise and suggestions. My graphics and command buffer skills are not that strong, we have customers waiting for this feature, and I’m feeling a lot of pressure. I really hope you can help me out!
Quick update and request for assistance: I’ve managed to get the CommandBuffer approach working for camera rendering in Multi Pass mode, but not yet in Single Pass Instanced (Multiview on Android). I’m still on Unity’s Built-in Render Pipeline.
Does anyone know how to make a CommandBuffer that blits the camera to a RenderTexture in a way that is compatible with Single Pass Instanced? I tried a custom shader for blitting, but no luck so far (there’s a per-eye copy idea sketched below the full script).
Here is my current full code:
using Unity.WebRTC;
using UnityEngine;
using UnityEngine.Rendering;

public class VRCommandBuffer : MonoBehaviour
{
    public Material targetMaterial;

    private CommandBuffer commandBuffer;
    private Camera vrCamera;
    private RenderTexture vrRenderTexture;

    private void Start()
    {
        Setup();
    }

    private void Setup()
    {
        vrCamera = GetComponent<Camera>();

        // The ARGB32 passed to the constructor is immediately overridden below
        // with a format WebRTC can actually consume on this device.
        vrRenderTexture = new RenderTexture(vrCamera.pixelWidth, vrCamera.pixelHeight, 0, RenderTextureFormat.ARGB32)
        {
            // QualitySettings.antiAliasing can be 0; RenderTexture requires at least 1.
            antiAliasing = vrCamera.allowMSAA ? Mathf.Max(1, QualitySettings.antiAliasing) : 1,
            format = WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType),
        };

        commandBuffer = new CommandBuffer { name = "VRCommandBuffer" };
        commandBuffer.Blit(BuiltinRenderTextureType.CurrentActive, vrRenderTexture);
        vrCamera.AddCommandBuffer(CameraEvent.AfterSkybox, commandBuffer);

        if (targetMaterial != null)
            targetMaterial.mainTexture = vrRenderTexture;
    }

    private void OnDestroy()
    {
        vrCamera.RemoveCommandBuffer(CameraEvent.AfterSkybox, commandBuffer);
        // Created with new, not taken from CommandBufferPool, so release it directly.
        commandBuffer.Release();
        if (vrRenderTexture != null)
            vrRenderTexture.Release();
    }
}
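One direction I haven’t verified on device yet: under Single Pass Instanced / Multiview the camera renders into a Texture2DArray with one slice per eye, so instead of an XR-aware blit shader it might be enough to copy a single eye slice with CommandBuffer.CopyTexture. A minimal sketch of what Setup() would do instead of the Blit, assuming vrRenderTexture exactly matches the eye buffer’s size and format (CopyTexture requires that):

// Copy the left-eye slice (element 0) of the active eye texture array
// into slice 0 of our target, instead of blitting through a shader.
commandBuffer = new CommandBuffer { name = "VRCommandBuffer" };
commandBuffer.CopyTexture(BuiltinRenderTextureType.CurrentActive, 0, vrRenderTexture, 0);
vrCamera.AddCommandBuffer(CameraEvent.AfterEverything, commandBuffer);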
Not solved quite yet. We’re currently using a double-camera setup, but it still causes a significant performance hit. We’ve released the feature in this state and have been focusing on optimization since. I’ll soon have time to explore other methods; getting our project running on Vulkan might also yield different results.