[FaceCapture] Latest features


It's been a long time since I last tested the Face Capture solution. I read in the changelogs that some new features have arrived.

We now have audio and video recording, which is great! I just don't know where the recordings are stored or how to play them back. I assumed we could play back the audio in the timeline along with the take animation. How is it managed and used? I have my Sony XM3 headset connected via Bluetooth, but I can't find any sound recorder accessible from the UI (PC or iPhone).

EDIT: OK, I found them on my iPhone, but I can't play them back with the animation for the moment. I guess that's planned for later? Maybe instead of streaming the sound from the device, we could copy it to the PC and play it from there.

We can select a take on the Apple device to play it back, but I can't find how (I guess it works like the Virtual Camera). The docs say a Play button appears at the bottom-right, but it doesn't on my iPhone 11, even after recording several takes. https://docs.unity3d.com/Packages/com.unity.live-capture@2.0/manual/take-system-recording.html

I haven't tried timecode synchronization yet, but if I understand correctly, it's for synchronizing multiple devices when live streaming or recording takes, right?

EDIT: I'm not a fan of the new system with Evaluators. It was easier at the beginning (in the GitHub repo face-ar-remote), when every blend shape (BS) setting could be changed in one place. Now we have to create an Evaluator, select the Face Mapper, assign the Evaluator to the BS, then reselect the Evaluator to change its values, and repeat that for each BS. It was easier to tweak the values at runtime like before. I suppose even the curves could be edited in the inspector of the Face Mapper. Or maybe, because the Face Mapper is a ScriptableObject, every piece of data needs to be a ScriptableObject as well (which would explain the Evaluator ScriptableObject).

Thanks for your help!

As you found, the audio and video are just stored on the local device for now. Streaming them back to the Editor is something we have considered, but it's low priority at the moment.

Yes, timecode synchronization is used for live scenarios and for aligning recordings in a take when multiple devices are used.

If you don't assign an Evaluator ScriptableObject to a mapping, one is created and saved directly in the mapper. See the docs here for an example of this. You only need to create Evaluator ScriptableObjects if you want to share evaluators across multiple mappings or Mapper Assets.
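If it helps to picture the fallback described above, here's a minimal sketch of that pattern in plain Unity C#. These type names are my own invention for illustration, not the actual Live Capture classes: a mapping holds an optional reference to a shared Evaluator asset, and when none is assigned it falls back to an inline default stored with the mapper itself.

```csharp
// Illustrative sketch only -- not the real Live Capture API.
using UnityEngine;

public abstract class Evaluator : ScriptableObject
{
    public abstract float Evaluate(float blendShapeValue);
}

[CreateAssetMenu(menuName = "Example/Simple Evaluator")]
public class SimpleEvaluator : Evaluator
{
    [Range(0f, 2f)] public float multiplier = 1f;

    public override float Evaluate(float blendShapeValue)
    {
        return Mathf.Clamp01(blendShapeValue * multiplier);
    }
}

[System.Serializable]
public class BlendShapeMapping
{
    // Optional shared asset; leave null to use the inline default instead.
    public Evaluator sharedEvaluator;

    // Inline default, serialized as part of the mapper rather than
    // as a separate project asset.
    [SerializeField] SimpleEvaluator inlineEvaluator;

    public float Apply(float value)
    {
        if (sharedEvaluator != null)
            return sharedEvaluator.Evaluate(value);

        if (inlineEvaluator == null)
            inlineEvaluator = ScriptableObject.CreateInstance<SimpleEvaluator>();

        return inlineEvaluator.Evaluate(value);
    }
}
```

With this shape, a shared Evaluator asset is only worth creating when several mappings (or several Mapper assets) should reference the same settings; otherwise the inline instance keeps everything editable in the mapper's own inspector.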