Multiplayer position sync and live webcam screen based on Mirror

Hello guys, I’m new to Unity and Mirror. I want to implement a multiplayer VR task where each user roams a 3D terrain on a virtual bicycle character, networked with the Mirror plugin. The most important part is that I need to add a live webcam screen above the head of each user’s virtual bicycle character: when a multiplayer game starts, each user should be able to see the other players’ bike positions change in VR, and the overhead screen should show that user’s real-time webcam footage (ideally with sound as well).

In the early stage, I used Mirror’s NetworkManager, dragged the bike character prefab into it as the player prefab, and attached a NetworkTransform component to the prefab so that position movement is synchronized in multiplayer (as shown in the figure below).
[197994-qq截图20220725171103.png|197994]
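For reference, a minimal sketch of the kind of movement script that would sit on that prefab; Mirror’s NetworkTransform then replicates the resulting position to the other clients. The class and field names here are illustrative, not part of the original setup:

```csharp
using Mirror;
using UnityEngine;

// Drives the local player's bike. A NetworkTransform component on the
// same prefab takes care of syncing the position to other clients.
public class BikeController : NetworkBehaviour
{
    [SerializeField] float moveSpeed = 5f;   // m/s
    [SerializeField] float turnSpeed = 60f;  // deg/s

    void Update()
    {
        // Only the owning client should move its own bike;
        // remote copies are updated by NetworkTransform.
        if (!isLocalPlayer) return;

        float forward = Input.GetAxis("Vertical") * moveSpeed * Time.deltaTime;
        float turn = Input.GetAxis("Horizontal") * turnSpeed * Time.deltaTime;

        transform.Translate(Vector3.forward * forward);
        transform.Rotate(Vector3.up, turn);
    }
}
```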

Now it’s time to add the live webcam screen above the bike. My idea is to add a Canvas to this prefab and create a RawImage that captures the webcam feed and displays it (as shown in the picture below).
[197995-qq截图20220725172025.png|197995]
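For the local capture-and-display part, Unity’s built-in WebCamTexture can be assigned directly to a RawImage. A minimal sketch (component name is illustrative); note this only shows the player’s own camera locally, and says nothing yet about getting the frames to other clients:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Displays the local webcam on a RawImage. Attach to the RawImage
// on the overhead Canvas of the bike prefab.
public class WebcamDisplay : MonoBehaviour
{
    WebCamTexture webcamTexture;

    void Start()
    {
        if (WebCamTexture.devices.Length == 0) return; // no camera found

        // Request a modest resolution and frame rate to keep things light;
        // the device may deliver something close to, but not exactly, this.
        webcamTexture = new WebCamTexture(WebCamTexture.devices[0].name, 320, 240, 15);
        GetComponent<RawImage>().texture = webcamTexture;
        webcamTexture.Play();
    }

    void OnDestroy()
    {
        if (webcamTexture != null) webcamTexture.Stop();
    }
}
```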
But I don’t know how to capture the webcam, or whether Mirror can support a multiplayer game in which each user sees the live webcam view above the other players’ bicycle characters’ heads, in addition to seeing their movement, all inside VR glasses.
This is just my rough idea. Do you have any thoughts or suggestions on it?

Well, first of all, having any kind of live video stream will kill networking performance. Video streams are usually sent directly from a server to a client, and they take a lot of bandwidth. Your idea can be done using a video streaming SDK, but it may need some kind of cloud servers as well; hopefully whatever SDK you find will already provide that as a service. There is an Agora SDK on the Unity Asset Store, for example.

If you mean that the live camera of each individual player goes to every other player, one possibility is to turn down the frame rate of the video stream and use lossy compression. If you just send a low frame rate (around 10 fps) with compression, it might work better. Whatever SDK you use for streaming should allow you to change these settings. I’m not sure which SDK would work best; you will want to search around. The SDK should let you set up channels (or something similar) and handle everything for you: taking streamed input from multiple sources and relaying the output to multiple clients.
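To make the bandwidth problem concrete: if you did try to push low-frame-rate, JPEG-compressed frames through Mirror itself rather than a streaming SDK, a rough sketch might look like the following. This is an untested illustration (all names are mine, not from any SDK), and it will only scale to a handful of players at best; transports also have a maximum message size, so the resolution and JPEG quality must stay low:

```csharp
using Mirror;
using UnityEngine;
using UnityEngine.UI;

// Rough sketch: capture the local webcam at ~10 fps, JPEG-compress each
// frame, and relay it through the server to all other clients.
// Bandwidth-heavy; a dedicated streaming SDK is the more scalable route.
public class WebcamRelay : NetworkBehaviour
{
    [SerializeField] RawImage display;          // overhead RawImage on the prefab
    [SerializeField] float sendInterval = 0.1f; // ~10 fps

    WebCamTexture webcam;
    Texture2D sendBuffer;
    Texture2D receiveBuffer;
    float nextSend;

    void Start()
    {
        receiveBuffer = new Texture2D(2, 2); // LoadImage resizes it per frame
        if (!isLocalPlayer) return;
        webcam = new WebCamTexture(320, 240, 10);
        webcam.Play();
    }

    void Update()
    {
        if (!isLocalPlayer || webcam == null || Time.time < nextSend) return;
        nextSend = Time.time + sendInterval;

        // The device may not honor the requested resolution, so size the
        // CPU-side buffer from the actual webcam dimensions.
        if (sendBuffer == null || sendBuffer.width != webcam.width)
            sendBuffer = new Texture2D(webcam.width, webcam.height,
                                       TextureFormat.RGB24, false);

        sendBuffer.SetPixels32(webcam.GetPixels32());
        byte[] jpg = sendBuffer.EncodeToJPG(40); // low quality = small payload
        CmdSendFrame(jpg);
    }

    [Command] // client -> server
    void CmdSendFrame(byte[] jpg) => RpcReceiveFrame(jpg);

    [ClientRpc] // server -> all clients
    void RpcReceiveFrame(byte[] jpg)
    {
        if (isLocalPlayer) return; // the local player already sees its own feed
        receiveBuffer.LoadImage(jpg);
        display.texture = receiveBuffer;
    }
}
```

Even at 10 fps and quality 40, each player uploads every frame once and the server re-sends it to every other client, so traffic grows with the square of the player count. That is exactly why the answer above points toward a streaming SDK with its own relay servers.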