[FaceCapture] Feedback

Hey (me again),

I tried the face capture. It actually reminds me of the facial-ar-remote project that I currently use at work (we can discuss it if you’re interested in motion capture for cartoon animation). This is much better, since it works out of the box, and in Editor mode too (like the Virtual Camera, so cool!). I’m really impressed by how smoothly it runs in the Editor.

I already have a feature request. Being able to record the facial animation and the sound from a microphone at the same time would be awesome; it’s a critical feature for our needs at work, for example. Using the microphone in a live session is not easy in Unity, since you need to set a duration before you start recording. Plus, it would be great to follow the same workflow you created for the facial animation, and to be able to play the sound from the device to the computer and vice versa.
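
To illustrate the duration issue (just a minimal sketch; the class name is mine and the length/frequency values are arbitrary): Unity’s Microphone.Start needs a fixed clip length up front, so a live session typically records into a looping clip:

```csharp
using UnityEngine;

// Minimal sketch of the limitation described above: Microphone.Start
// requires a fixed clip length up front. A common workaround for live
// sessions is to record into a looping clip so the buffer wraps around.
public class MicCapture : MonoBehaviour
{
    AudioClip clip;

    void OnEnable()
    {
        // Args: device (null = default mic), loop, length in seconds, sample rate.
        clip = Microphone.Start(null, true, 300, 44100);
    }

    void OnDisable()
    {
        Microphone.End(null);
    }
}
```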

Thanks for reading! I’m looking forward to using it and tweaking it!

It looks pretty easy to set up, but the ARKit app seems to crash after about 30 seconds. I’ve tried an iPhone 11 Pro and an iPad Pro so far.

@GeniusKoala Thanks for the request. We plan to add on-device audio recording.

@benmurray2 We’re looking into this and will try to have a fix as soon as possible. Is there any more information you can provide about your setup, your iOS version in particular? Also, does it say “Development Build” in the bottom corner of the app?

It’s worth noting that we have seen crashes on iOS versions other than the latest (14.5). Please update to the latest iOS version for best results.

Very useful. I’ll definitely be using this for character work. Recording audio will be a great addition to the app.

Why isn’t this section accessible through the “Betas & Experimental Features” section, even though it’s shown as being under it?

I constantly have to find it again through the email I got, or through my browser history :eyes:

This is a great stepping stone for raw data capture.
I hope to see some more smoothing options, as well as things like phoneme support alongside the audio mentioned above. I can tell you’d have base support for this regardless; it’s just a good option for cleaning up live captures (a sketch of the kind of smoothing I mean follows below).
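
By smoothing I mean something like this minimal sketch (the class and method names are mine, not part of the Face Capture package): an exponential moving average over the incoming blendshape weights:

```csharp
using UnityEngine;

// Hypothetical post-processing sketch: an exponential moving average
// over incoming ARKit blendshape weights. Lower alpha = smoother but
// laggier output. Not part of the Face Capture package API.
public class BlendShapeSmoother
{
    readonly float[] smoothed;
    readonly float alpha; // 0..1 blend factor per frame

    public BlendShapeSmoother(int blendShapeCount, float alpha = 0.3f)
    {
        smoothed = new float[blendShapeCount];
        this.alpha = alpha;
    }

    // Call once per incoming frame of raw weights (0..1 per shape).
    public float[] ApplySmoothed(float[] raw)
    {
        for (int i = 0; i < raw.Length; i++)
            smoothed[i] = Mathf.Lerp(smoothed[i], raw[i], alpha);
        return smoothed;
    }
}
```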

I can also see that round-tripping this into different applications would be great.

A small issue I seem to have is that the tongue always rests at the tip of the top teeth. I know the ARKit tongue is a bit gimmicky, but I couldn’t find a way to make the tongue rest more naturally.

The video seems to be unavailable, though.

My apologies, I got ahead of myself and typed before the video uploaded.

I agree. Recording audio is the next must-have.

I’d also suggest providing a more photorealistic face in the example scene. I know the important thing here is the richness of the blendshapes, and the sample scene already does a great job with that, but a more realistic face would encourage people to share their recordings on social media, letting more people know that Unity is finally making progress in virtual production.

Another thing: if I connect the client to the server while in Edit mode, the connection is lost once I hit Play, and I have to reconnect. Not a bug per se, but it would be nice if the connection stayed active!

Sorry about that. Our thinking is that we want people to explicitly join the Open Beta, which gives us a clear indication of interest and intent. Put another way, we need to know that people want these tools so we can justify the cost of development.

I’ll ask the team if we should revisit this decision.

@Ruchir We just watch the forum section, so we can get back to it that way.

Good call. For the beta, the sample face is really kind of “programmer art”, made by a developer to help people see how things work without having to build a face, and I’m sure we’ll have something higher quality for release.

I agree. It’s just the first iteration of a beta, so this kind of sample is no surprise, and it’s really good enough for testing.

I was actually wondering: what’s Unity’s strategy for face capture? Do you want to reach people making cartoons, realistic humans, or both? The two need different approaches, since cartoon animation is totally different. Many cartoon characters’ faces don’t look like real human faces and need appropriate adjustments when processing the data. Plus, cartoon animation is more dynamic and, in my opinion, needs more work than realistic humans (I may be wrong, since I’m just getting started on this topic). Given Epic’s MetaHumans, my guess was that Unity would rather target a cartoon audience first. If I’m wrong and you don’t really focus on one audience, it would be awesome to eventually have sample scenes for different types of characters.

Also, if you look at software like Reallusion iClone, they take the iPhone blendshapes and retarget them onto their own blendshape system. The iPhone blendshapes are quite limited on their own, but they’re a good start for a studio that has the resources to build around them. That’s actually what we’re currently doing to reach high face capture quality for cartoons; a rough sketch of what I mean by retargeting follows below.
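
Roughly, by retargeting I mean something like this (a hypothetical sketch; none of these names come from the Face Capture package):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of blendshape retargeting: ARKit weights
// (0..1, keyed by name) are remapped onto a character's own blend
// shapes through a per-character response curve, so a cartoon rig
// can exaggerate or damp the raw input.
public class BlendShapeRetargeter : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer target;

    [System.Serializable]
    public struct Mapping
    {
        public string arkitName;        // e.g. "jawOpen"
        public int targetIndex;         // blend shape index on the target mesh
        public AnimationCurve response; // remaps the 0..1 input
    }

    [SerializeField] List<Mapping> mappings = new List<Mapping>();

    // Call with the latest ARKit weights for a frame.
    public void Apply(Dictionary<string, float> arkitWeights)
    {
        foreach (var m in mappings)
        {
            if (!arkitWeights.TryGetValue(m.arkitName, out var weight))
                continue;
            // SetBlendShapeWeight expects a 0..100 range.
            target.SetBlendShapeWeight(m.targetIndex, m.response.Evaluate(weight) * 100f);
        }
    }
}
```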

I think using ARKit for facial capture is a good idea, as I can use existing hardware (an iPhone X) and it’s compatible with other capture systems such as Moves By Maxon for Cinema 4D. I’m working on a project that needs facial capture and had planned to use the Cinema 4D solution, so I have been creating ARKit-compatible blend shapes. Now I can switch to facial capture inside Unity with no changes to my models. This is a big improvement to my workflow.

For me, these tools are very important. In order of importance, I’d put facial capture first, then virtual camera.

Finally, I hope this becomes an official release soon.

I’m really excited about all of these new project ideas (Virtual Camera / Face Capture). It’d be really nice to have raw facial capture in Unity too, both as an early production point and at later stages.
It really shows potential new paths to explore and create, and has opened a lot of doors that my team and I had previously found closed. *Fingers crossed for continued development here.* We would happily help fund this project.

We generally use ARKit in our pipelines, so it’s really good to see it adopted here in a similar fashion.

One thing I’ve recently noticed is that I can’t seem to find the new Sequences package for Timeline anymore.
Is it only available through the Cinematic Studio asset?

It’s still in beta, so it’s not yet visible in most versions of Unity. Here’s how to add it to any project:

  • In Unity 2021.1 and above, click the [+] button at the top left of the Package Manager, select Add package by name… and enter com.unity.sequences in the Name field and 1.0.0-pre.5 in the Version field.

OR

  • In Unity 2020.3 and above, click the [+] button at the top left of the Package Manager, select Add package by git URL… and enter com.unity.sequences@1.0.0-pre.5.

OR

  • In Unity 2019.4 and above, edit Packages/manifest.json and add the following line to the top of the list of dependencies:

"com.unity.sequences": "1.0.0-pre.5",
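
In context, the dependencies block of Packages/manifest.json would then start something like this (the second entry is just an illustrative example of an existing dependency):

```json
{
  "dependencies": {
    "com.unity.sequences": "1.0.0-pre.5",
    "com.unity.timeline": "1.4.8"
  }
}
```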

Thanks @markvi, works perfectly!

Greetings,

Thanks for letting us try this. However, we can’t connect the Face Capture app to the server. The app is running on a current iPhone 12 Pro, starts up fine, and even finds the server and port, yet upon pressing Connect it takes a second or so and then displays “Can not connect to server” at the bottom.
We tried with several recent Unity versions on a MacBook Pro and on Windows. Under Windows, a new project shows the “configure firewall” prompt on first start; this doesn’t appear on the Mac, though. We also checked with several different ports.

Any ideas? Thanks!