Greetings.
I am developing a system for studying and conducting research on multi-participant collaboration on Quest 2 (and Quest 3 when it becomes available) for my PhD thesis. In its early phase, the application must mirror every player's transform and all of their audio to some accessible cloud storage (currently Firebase Storage) for later retrieval and analysis. Note that this is not purely server-side or authoritative: the number of records grows as a Cartesian product of participant interactions, since we must capture both the original data and its network-reflected counterparts. This redundant capture allows us to analyse perceived versus actual communication amongst participants, which forms a graph of bilateral connections by the nature of the UGS APIs.
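For illustration, here is a minimal sketch of the kind of per-pair record each client would write; every name in it (CaptureRecord, observerId, isNetworkReflected, etc.) is hypothetical and not part of any SDK:

```csharp
using System;
using UnityEngine;

// Hypothetical per-sample record written by each client: for every (observer, observed)
// participant pair we keep both the locally produced data and the copy that arrived over
// the network, so perceived vs. actual communication can be compared offline.
[Serializable]
public struct CaptureRecord
{
    public string observerId;        // participant doing the recording
    public string observedId;        // participant being observed (may equal observerId)
    public double localTimestamp;    // time on the recording client
    public Vector3 position;         // transform as seen by the recording client
    public Quaternion rotation;
    public bool isNetworkReflected;  // false = original/local data, true = network-reflected copy
}
```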
I am now embarking on the implementation of audio recording and storage. I am already aware of server-side recording, which is close to what we want but does not fulfil our requirements. We do have access to the capture-source and participant audio taps; serialisation and upload will necessarily remain separate per service, but encoding the audio twice seems wasteful. Ideally, we would simply serialise and upload the pre-encoded audio data without going through the hoop of OnAudioFilterRead. However, if that is not possible, we would still like to use the exact codec logic Vivox uses, or at least approximate the same audio quality.
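In case the decoded-PCM route ends up being the only option, this is roughly what we have in mind: a component sitting on the same GameObject as the AudioSource driven by a Vivox audio tap, buffering the decoded PCM via OnAudioFilterRead for later encoding and upload. This is only a sketch; the tap wiring itself is assumed and not shown:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: buffers the decoded PCM that Unity feeds through the AudioSource associated
// with a Vivox tap. Attach to the GameObject whose AudioSource plays the tapped
// participant (or capture-source) audio. OnAudioFilterRead runs on the audio thread,
// so the buffer is locked before it is drained on the main thread.
public class TapPcmRecorder : MonoBehaviour
{
    private readonly List<float> _samples = new List<float>();
    private readonly object _lock = new object();

    public int Channels { get; private set; }
    public int SampleRate { get; private set; }

    private void Awake()
    {
        SampleRate = AudioSettings.outputSampleRate;
    }

    private void OnAudioFilterRead(float[] data, int channels)
    {
        Channels = channels;
        lock (_lock)
        {
            _samples.AddRange(data); // interleaved floats in [-1, 1]
        }
    }

    // Drains the accumulated PCM; the caller encodes it (WAV, Opus, ...) and uploads it.
    public float[] Drain()
    {
        lock (_lock)
        {
            float[] copy = _samples.ToArray();
            _samples.Clear();
            return copy;
        }
    }
}
```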
According to this post, Vivox supports multiple codecs, Opus being one of them: https://support.unity.com/hc/en-us/articles/4418031177620-Vivox-Codec-comparisons-by-bitrate-CPU-usage-sample-rate-and-quality. However, as of version 16.3.0 there does not seem to be a documented way to switch codecs, or even to access the internal codec logic when working through VivoxService.Instance, apart from the old documentation or that of the Vivox Core SDK. Being redirected to non-technical, non-API documentation, even from the home page, is incredibly discouraging, and it appears that I have to go directly through the developers, who are also incredibly busy people, leading to delays that should have been avoidable.
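If accessing Vivox's own encoder really is impossible, our fallback would be to re-encode the tapped PCM with a standalone Opus implementation at settings matching the comparison table linked above. A rough sketch using the managed Opus port Concentus (its API as I understand it; the 48 kHz / mono / 20 ms / 32 kbps values are my own assumptions for illustration, not numbers read out of Vivox):

```csharp
using Concentus.Enums;
using Concentus.Structs;

// Sketch only: re-encode tapped PCM with Opus via the Concentus managed port, using a
// VoIP-style configuration. 48 kHz mono, 20 ms frames (960 samples) and 32 kbps are
// assumed values for illustration, not settings taken from Vivox.
public static class OpusReencoder
{
    private const int SampleRate = 48000;
    private const int Channels = 1;
    private const int FrameSize = 960; // 20 ms at 48 kHz

    public static OpusEncoder CreateEncoder()
    {
        return new OpusEncoder(SampleRate, Channels, OpusApplication.OPUS_APPLICATION_VOIP)
        {
            Bitrate = 32000 // assumed target bitrate
        };
    }

    public static byte[] EncodeFrame(OpusEncoder encoder, short[] pcmFrame)
    {
        var packet = new byte[1275]; // maximum size of a single Opus packet
        int written = encoder.Encode(pcmFrame, 0, FrameSize, packet, 0, packet.Length);
        var result = new byte[written];
        System.Array.Copy(packet, result, written);
        return result;
    }
}
```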
I am interested in the Opus codec used by Vivox: accessing it and/or the Opus-encoded audio stream, and bringing that data into the managed world for upload to Firebase Storage (or perhaps I will have to bypass the Firebase SDK as well, but that is a topic for another thread).
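For completeness, the upload side with the Firebase Unity SDK would look roughly like this; the storage path scheme is our own placeholder convention:

```csharp
using System.Threading.Tasks;
using Firebase.Storage;

public static class AudioUploader
{
    // Uploads an encoded audio blob to Firebase Storage. The path scheme
    // (sessions/{sessionId}/{participantId}/...) is our own convention, not an SDK requirement.
    public static async Task UploadAsync(byte[] encodedAudio, string sessionId, string participantId, string extension)
    {
        StorageReference reference = FirebaseStorage.DefaultInstance
            .GetReference($"sessions/{sessionId}/{participantId}/audio_{System.DateTime.UtcNow.Ticks}.{extension}");

        await reference.PutBytesAsync(encodedAudio);
    }
}
```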
Tagging: @mhakala @NickFromVivox @vgauth