WebRTC package FAQ

I think you can add camera footage on a track-by-track basis without using Unity Render Streaming, using only the base WebRTC for Unity package.
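Something along these lines should work with just the base package (an untested sketch; the capture API differs a bit between package versions, and the peer connection / signaling setup is assumed to already exist elsewhere):

```csharp
using UnityEngine;
using Unity.WebRTC;

// Minimal sketch: stream one camera as its own track with only com.unity.webrtc.
// `peerConnection` is assumed to be created and negotiated by your own signaling
// code; older package versions also require WebRTC.Initialize() up front.
public class CameraTrackSender : MonoBehaviour
{
    [SerializeField] Camera sourceCamera;
    RTCPeerConnection peerConnection;   // set up elsewhere
    VideoStreamTrack videoTrack;
    RenderTexture renderTexture;

    void Start()
    {
        // Render the camera into a texture and wrap that texture in a video track.
        renderTexture = new RenderTexture(1280, 720, 0, RenderTextureFormat.BGRA32);
        sourceCamera.targetTexture = renderTexture;
        videoTrack = new VideoStreamTrack(renderTexture);

        // Adding the track to the connection triggers renegotiation.
        peerConnection.AddTrack(videoTrack);

        // The encoder only produces frames while the WebRTC.Update() coroutine runs.
        StartCoroutine(WebRTC.Update());
    }

    void OnDestroy()
    {
        videoTrack?.Dispose();
        if (renderTexture != null)
            renderTexture.Release();
    }
}
```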

I created a few video tutorials regarding video streaming with Unity WebRTC without the render streaming package. Maybe these help :slight_smile:

https://www.youtube.com/watch?v=dUbLh8mXEkg

https://www.youtube.com/watch?v=Jp8SrA3Dixw

Hi!

Will there be updates to the documentation for WebRTC and Unity Render Streaming for newer versions of Unity?

Additionally, the samples are a little tricky to understand as they are now. The code base, and what is happening while a sample runs, could use a more detailed explanation.
Also, it would be great to see how to connect two or more different builds/devices and have the samples interact with each other over LAN as well as over the web.

Finally, would there be support for WebGL in the future?

Thanks for all the support! :slight_smile:

Please forgive the repost, but this remains an open problem for us:

We have an application that runs on a Pico Neo 3 Pro Eye. Our web portal establishes a connection with our HMD, and our HMD will create two MediaStreams, each with a VideoStreamTrack attached (one for each of the inward-facing camera feeds). This seems to work beautifully. We can support 50 connections (all sharing the same two MediaStreams), no problem. They can disconnect, and new connections can form.
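For context, the setup is roughly the following (a simplified sketch; the two textures stand in for our inward-facing camera feeds, and all signaling/negotiation is omitted):

```csharp
using UnityEngine;
using Unity.WebRTC;

// Simplified sketch of our per-eye setup: two MediaStreams created once, each
// carrying one VideoStreamTrack, and both added to every new viewer connection.
public class EyeStreams
{
    public MediaStream LeftStream { get; }
    public MediaStream RightStream { get; }
    readonly VideoStreamTrack leftTrack;
    readonly VideoStreamTrack rightTrack;

    public EyeStreams(Texture leftEyeTexture, Texture rightEyeTexture)
    {
        leftTrack = new VideoStreamTrack(leftEyeTexture);
        rightTrack = new VideoStreamTrack(rightEyeTexture);
        LeftStream = new MediaStream();
        RightStream = new MediaStream();
        LeftStream.AddTrack(leftTrack);
        RightStream.AddTrack(rightTrack);
    }

    // Called for every new viewer connection; all viewers share the same two tracks.
    public void AttachTo(RTCPeerConnection pc)
    {
        pc.AddTrack(leftTrack, LeftStream);
        pc.AddTrack(rightTrack, RightStream);
    }
}
```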

We can successfully stop and tear down our MediaStreams and connections and then create new connections with new MediaStreams. Everything is hunky dory until we've had about 16-20 connections (over 1-20 separate pairs of MediaStreams). The next time we try to create new MediaStreams and stream our videos with any number of connections, even though we appear to successfully add our tracks to our connection(s), we don't send any image data.

  • Our teardown process is by-the-book (see the code sketch below):

      • Stop all tracks

      • Close data channels in all connections

      • Remove tracks from all connections

      • Close all connections

      • Wait for connections to close

      • Remove callbacks from our Connection wrapper

      • Remove data channel callbacks

      • Dispose of channels in all connections

      • Dispose of senders in all connections

      • Dispose of transceivers in all connections

      • Dispose of VideoStreamTracks and MediaStreams

      • Remove callbacks from all connections

      • Dispose of all connections

      • Remove callbacks from Signaling Servers

  • Even when the videos aren't sending, we're able to send data through data channels (we successfully renegotiate upon adding our tracks)

  • The VideoStreamTracks exist on both sides of the connection

  • We get no exceptions unless we force our app to use a codec other than VP8, in which case we get similar behavior but also a seg fault in the WebRTC.Update() method in batch.Submit()

  • We're using UniTask (GitHub - Cysharp/UniTask: Provides an efficient allocation free async/await integration for Unity.)

  • Our targetBitrate for failed video streams is 144880, whereas it's 600000 for successful streams

  • Sometimes only one of the two MediaStreams will fail

  • The textures are 200x200

  • Once we have a failure, we need to restart our application before we can successfully send tracks again

  • We're using Unity 2020.3 and WebRTC 3.7

  • We were able to reproduce this bug on the Sample Scene Samples/WebRTC/3.0.0-pre.7/Example/VideoReceive by adding a few input tweaks to adapt for Pico. Because it's only a single track, you'll need to Add and Remove a track about 30 times while occasionally hanging up and calling again. Eventually the receiving texture will just turn black.
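For reference, here is a condensed sketch of that teardown order (the collections are just placeholders for our own bookkeeping, and the exact Stop/Dispose calls available differ slightly between package versions):

```csharp
using System.Collections.Generic;
using System.Linq;
using Unity.WebRTC;

// Condensed sketch of the teardown order listed above. Callback removal and
// the "wait for close" step are application-specific and only marked by comments.
static class Teardown
{
    public static void Run(List<RTCPeerConnection> connections,
                           List<RTCDataChannel> dataChannels,
                           List<MediaStreamTrack> localTracks,
                           List<MediaStream> streams)
    {
        foreach (var track in localTracks)
            track.Stop();                              // stop all tracks (may be folded into Dispose() in some versions)

        foreach (var channel in dataChannels)
            channel.Close();                           // close data channels

        foreach (var pc in connections)
            foreach (var sender in pc.GetSenders().ToArray())
                pc.RemoveTrack(sender);                // remove tracks from connections

        foreach (var pc in connections)
            pc.Close();                                // close connections, then wait for the Closed
                                                       // state and detach our wrapper/data channel callbacks

        foreach (var channel in dataChannels)
            channel.Dispose();                         // dispose channels

        foreach (var pc in connections)
        {
            foreach (var sender in pc.GetSenders().ToArray())
                sender.Dispose();                      // dispose senders
            foreach (var transceiver in pc.GetTransceivers().ToArray())
                transceiver.Dispose();                 // dispose transceivers
        }

        foreach (var track in localTracks)
            track.Dispose();                           // dispose VideoStreamTracks...
        foreach (var stream in streams)
            stream.Dispose();                          // ...and MediaStreams

        foreach (var pc in connections)
            pc.Dispose();                              // remove remaining callbacks, dispose connections

        // finally: remove callbacks from the signaling servers (application-specific)
    }
}
```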

Please let me know what else I can give you. Any guidance will be greatly appreciated!

Hi :slight_smile: hope this thread isn't dead!

Just wanted to ask: is there a feasible sample where Unity WebRTC is used with "dynamic" clients? For instance, I have a working RTC data channel connection over a STUN server with 2 clients exchanging data. Now a 3rd client connects to the signaling server.

How can I tell WebRTC to extend the existing setup so that the 3rd client can send/receive data to/from clients 1 and 2, while maintaining the existing connection between clients 1 and 2?

I played around a lot, but the only "solution" I have so far is that all clients must already be connected to the signaling server before I start the WebRTC connections. This works okay-ish so far, but isn't feasible for a production environment. ^^
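To make it concrete, what I am after is roughly this kind of per-peer bookkeeping, where a newly announced client only negotiates its own RTCPeerConnection with each existing client and the existing pair stays untouched (a rough sketch; everything named Signaling* is a placeholder for my own signaling code):

```csharp
using System.Collections.Generic;
using Unity.WebRTC;

// Rough sketch of a data channel mesh: one RTCPeerConnection per remote peer.
// When the signaling server announces a new client, only a connection for that
// new pair is negotiated; connections between existing clients are untouched.
public class PeerMesh
{
    readonly Dictionary<string, RTCPeerConnection> peers = new Dictionary<string, RTCPeerConnection>();
    readonly Dictionary<string, RTCDataChannel> channels = new Dictionary<string, RTCDataChannel>();

    // Called whenever the signaling server announces a newly joined client.
    public RTCPeerConnection OnPeerJoined(string peerId, bool initiateOffer)
    {
        var config = new RTCConfiguration
        {
            iceServers = new[] { new RTCIceServer { urls = new[] { "stun:stun.l.google.com:19302" } } }
        };
        var pc = new RTCPeerConnection(ref config);
        peers[peerId] = pc;

        // ICE candidates are relayed to this specific peer via the signaling server.
        pc.OnIceCandidate = candidate => SignalingSendCandidate(peerId, candidate);

        if (initiateOffer)
            channels[peerId] = pc.CreateDataChannel("data");          // joining side opens the channel
        else
            pc.OnDataChannel = channel => channels[peerId] = channel; // existing side receives it

        // The offer/answer exchange (CreateOffer/CreateAnswer, Set*Description) then
        // runs for this pair only, exactly as in the original two-client case.
        return pc;
    }

    // Placeholder: forward the candidate to `peerId` through the signaling server.
    void SignalingSendCandidate(string peerId, RTCIceCandidate candidate) { /* ... */ }
}
```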

br,
Max

nvm, forget my previous post!

I actually implemented my own wrapper for the WebRTC package, automated it as far as I could, and made it free and open source. If anyone is interested, here they are:

Hope this'll help you build WebRTC stuff!

br,
Max