Using only one camera with WebRTC

Hello.
We’ve been using this package for a long time now and are wondering whether it is possible to use only one Camera in our scene, without having to duplicate it to feed the WebRTC stream.

We are using VR headsets, and if we stream from our main Camera, the image freezes in place in the headset as soon as streaming is enabled, while it works fine in the web browser.

The solution we found was to add another Camera to the scene as a child of the main one, but we don’t think this is suitable: we want to use overlay Cameras, and that conflicts with duplicating our Cameras.

Thanks in advance.

This is the first time I’ve heard of that issue, and unfortunately I can’t think of a cause.
Are you using the WebRTC package directly to develop the VR app? I’ve never tested the ScreenCapture.CaptureScreenshotIntoRenderTexture method on a VR device; does it work well there?

We are using the Unity Render Streaming package. The issue is also visible when running Unity in Play Mode: when clicking Play, the Game view shows “Display 1 No cameras rendering”:
[attached screenshot: upload_2023-9-20_15-21-8.png]
As soon as the WebRTC feed is closed, the message disappears. It behaves like the audio bug I raised before: audio or a camera stream sent through WebRTC is no longer usable in the Unity app itself. We have to duplicate everything.

In the latest version of Unity Render Streaming, when using Camera mode in VideoStreamSender, Unity Render Streaming renders the camera image to a texture via Camera.targetTexture. By design of the Unity runtime, a camera with Camera.targetTexture set does not render to the display. Therefore, if you want to render content on the display and stream video concurrently, you need to use ScreenCapture.CaptureScreenshotIntoRenderTexture.
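For example, a minimal sketch of that approach could look like the following (untested on VR devices; the class and member names here are just placeholders):

using System.Collections;
using Unity.WebRTC;
using UnityEngine;

public class ScreenStreamSource : MonoBehaviour
{
    private RenderTexture screenTexture;
    public VideoStreamTrack Track { get; private set; }

    private void Start()
    {
        // The texture should match the screen size, and the format
        // must be one the WebRTC package supports.
        screenTexture = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.BGRA32);
        Track = new VideoStreamTrack(screenTexture);
        StartCoroutine(CaptureLoop());
    }

    private IEnumerator CaptureLoop()
    {
        var wait = new WaitForEndOfFrame();
        while (true)
        {
            // The capture is only valid at the end of a frame.
            yield return wait;
            ScreenCapture.CaptureScreenshotIntoRenderTexture(screenTexture);
        }
    }
}

The camera keeps rendering to the display as usual; the track streams whatever ends up on screen.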

Hi, thanks for your reply. I’ve been creating a RenderTexture and sending it to WebRTC, which lets me keep my single camera. But this only works on PC. When building for Android, the screen is black and I get this error:
[attached screenshot of the error: upload_2023-9-25_15-54-19.png]
I tried tinkering with the numerous color format options, but couldn’t make it work; there are a lot of settings on the Render Texture asset. Can you reproduce the error on an Android device? It only works on the computer.
[attached screenshot: upload_2023-9-25_15-55-17.png]
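The only other idea I have (untested, and the helper below is hypothetical) is to ask the WebRTC package itself for a format the current graphics device supports, rather than picking one by hand:

using Unity.WebRTC;
using UnityEngine;

public static class StreamTextureUtil
{
    // Hypothetical helper: create the RenderTexture in a format the WebRTC
    // package reports as supported for the active graphics API, which may
    // matter on Android. GetSupportedRenderTextureFormat is the com.unity.webrtc
    // 2.x API; newer versions expose WebRTC.GetSupportedGraphicsFormat instead.
    public static RenderTexture Create(int width, int height)
    {
        var format = WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType);
        return new RenderTexture(width, height, 0, format);
    }
}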

Let me know if anybody has managed to use CaptureScreenshotIntoRenderTexture on a VR headset. Having multiple cameras in my scene currently halves my FPS, without even turning Render Streaming on. Our hardware is not powerful. Thanks in advance.

Bump.

Can you investigate where the performance bottleneck is?
Does the FPS improve if you make the video resolution smaller?
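For reference, a sketch of reducing the streamed resolution (the exact API depends on your Unity Render Streaming version; SetTextureSize is from recent releases, so treat this as an assumption):

using Unity.RenderStreaming;
using UnityEngine;

public class StreamQualityTuner : MonoBehaviour
{
    [SerializeField] private VideoStreamSender videoStreamSender;

    private void Start()
    {
        // Stream at a smaller resolution to reduce the encoding cost.
        videoStreamSender.SetTextureSize(new Vector2Int(640, 360));
    }
}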

Hello,
The bottleneck is not Render Streaming itself; it appears when adding another camera to our scene. The headset really isn’t made for that: adding just one extra camera, without even importing the Render Streaming package, already costs us quite a lot of FPS. That is why streaming from a texture, without having to duplicate the Camera, was the perfect solution, but it doesn’t work in an Android build :(

How about Vulkan for graphics API?

Piggybacking on this: is Vulkan supposed to work with WebRTC?

Hi everyone. So is there a way to use WebRTC with only one camera? We need to stream gameplay including the GUI, but using a world-space canvas is not suitable for our setup.

The performance isn’t that great, but the code you often see looks like this:

using Unity.WebRTC;
using UnityEngine;

public class Peer : MonoBehaviour
{
    [SerializeField] private Camera cam;
    private RenderTexture rt;
    private RTCRtpSender sender;

    private void CreatePeer()
    {
        var peer = new RTCPeerConnection();
        // ....
        // The format must be one the WebRTC package supports (BGRA32 is common).
        rt = new RenderTexture(1920, 1080, 24, RenderTextureFormat.BGRA32, 0);
        var videoTrack = new VideoStreamTrack(rt);
        sender = peer.AddTrack(videoTrack);
    }

    private void Update()
    {
        // Render the camera a second time into the streaming texture,
        // then restore its target so it still renders to the display.
        var tmp = cam.targetTexture;
        cam.targetTexture = rt;
        cam.Render();
        cam.targetTexture = tmp;
    }
}
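Note that with this pattern the camera renders twice every frame: once to the display as usual, and once more into rt for the stream, which is where the performance cost comes from.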