Hello.
We’ve been using this package for a long time now and are wondering whether it is possible to use only one Camera in our scene, without having to duplicate it to feed the WebRTC stream.
We are using VR headsets, and if we use our main Camera, the image freezes in place in the headset as soon as streaming is enabled, although it works fine in the web browser.
The workaround we found was to add another Camera to the scene as a child of the main one, but we don’t think this is suitable: we want to use overlay cameras, and that conflicts with duplicating our Cameras.
This is the first time I’ve heard of that issue, and unfortunately I can’t guess its cause.
Are you using the WebRTC package directly to develop the VR app? I’ve never tested the ScreenCapture.CaptureScreenshotIntoRenderTexture method on a VR device; does it work well for you?
We are using the Unity Render Streaming package. The issue is also visible when running Unity in Play Mode: as soon as we press Play, the Game view shows “Display 1 No cameras rendering”:
As soon as the WebRTC feed is closed, the message disappears. It behaves like the audio bug I raised before: once audio or a camera stream is sent through WebRTC, it is no longer usable in the Unity app itself. We have to duplicate everything.
In the latest version of Unity Render Streaming, when using Camera mode in VideoStreamSender, the package assigns Camera.targetTexture to render the camera image into a texture. By design of the Unity runtime, a camera does not render to the display while Camera.targetTexture is set. Therefore, if you want to render content to the display and stream video concurrently, you need to use ScreenCapture.CaptureScreenshotIntoRenderTexture.
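In case it helps, here is a minimal sketch of that approach: a script that copies the screen into a RenderTexture at the end of every frame, which you can then use as the texture source of your stream. The class name is illustrative, and assigning the texture to VideoStreamSender assumes a package version whose sender accepts a Texture source.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch: copy the screen into a RenderTexture once per frame
// so the camera keeps rendering to the display while the texture feeds the
// stream. CaptureScreenshotIntoRenderTexture expects the texture to match
// the screen dimensions.
public class ScreenToRenderTexture : MonoBehaviour
{
    // Use this texture as the stream source (e.g. the Texture source mode
    // of VideoStreamSender, if your package version provides it).
    public RenderTexture target;

    IEnumerator Start()
    {
        if (target == null)
        {
            target = new RenderTexture(Screen.width, Screen.height, 0,
                                       RenderTextureFormat.BGRA32);
            target.Create();
        }

        var endOfFrame = new WaitForEndOfFrame();
        while (true)
        {
            // Must run after rendering for the frame has finished.
            yield return endOfFrame;
            ScreenCapture.CaptureScreenshotIntoRenderTexture(target);
        }
    }
}
```

Note that on some graphics APIs the captured image can come out vertically flipped, so you may need to compensate with a blit or in the material.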
Hi, thanks for your reply. I’ve been creating a Render Texture and sending it to the WebRTC stream, which lets me keep my camera. But this only works on PC. When building for Android, the screen is black and I get this error:
I tried tinkering with the numerous Color Format options of the Render Texture, but couldn’t make it work. Can you reproduce the error on an Android device? It only works on the computer.
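For reference, here is a check I plan to run on the device to see whether the format itself is the culprit. It only uses core Unity APIs; the helper name is mine:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// Probes whether a RenderTexture format is actually renderable on the
// current device, and falls back to a compatible one when it is not.
public static class FormatProbe
{
    public static GraphicsFormat PickRenderableFormat(GraphicsFormat wanted)
    {
        if (SystemInfo.IsFormatSupported(wanted, FormatUsage.Render))
            return wanted;

        var fallback = SystemInfo.GetCompatibleFormat(wanted, FormatUsage.Render);
        Debug.LogWarning($"{wanted} is not renderable on this device, using {fallback}");
        return fallback;
    }
}
```

Creating the texture as `new RenderTexture(Screen.width, Screen.height, 0, FormatProbe.PickRenderableFormat(GraphicsFormat.B8G8R8A8_SRGB))` should at least reveal whether an unsupported format is causing the black screen.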
Let me know if anybody has managed to use CaptureScreenshotIntoRenderTexture on a VR headset. Having multiple cameras in the scene currently halves my FPS, even without enabling Render Streaming. Our hardware is not powerful. Thanks in advance.
Hello,
The bottleneck is not Render Streaming itself; it is adding another camera to our scene. The headset is really not made for that, and adding a second camera, even without importing the Render Streaming package, makes us lose quite a lot of FPS. That is why streaming from a texture without duplicating the Camera was a perfect solution, but it doesn’t work in an Android build.
Hi everyone. So is there a way to use WebRTC with only one camera? We need to stream gameplay with the GUI included, but a world-space canvas is not suitable for our setup.