I’m trying to see if it’s possible to switch between the front and rear cameras while an ARSession is running. Looking at the ARFoundation samples, I can’t figure out how to explicitly select the front or rear camera. Any help would be greatly appreciated!
ARFoundation does not let you choose a camera explicitly. ARFoundation provides a platform abstraction to ARCore and ARKit. It is up to these SDKs to choose a hardware camera. Currently, ARCore only uses the rear-facing camera. ARKit will use the front-facing camera for face tracking, but otherwise use the rear-facing camera.
What are you trying to do? Why do you need to be able to select the camera?
Thank you for your response! I’m trying to create an AR experience that works with both front and rear cameras (where available). I want to switch between them just like you would with a default camera app.
I can get this to work by having two separate scenes (one for front cam, one for rear cam) and using SceneManager to switch between the two, but if a more elegant solution is possible that’d be awesome.
Any progress on this? Some clients are asking for AR features that work on both cameras. Being able to switch them at runtime would be great, especially with ARKit 3’s new set of features. Of course, which AR features are available depends on the camera and the device (only A12 chips and newer can handle all of ARKit 3’s features).
You can swap by enabling and disabling different trackable managers. For example, if you turn on only the face manager and turn everything else off, ARKit automatically flips to the selfie cam. If you turn on plane tracking, it automatically flips to the “world” camera.
So rather than explicitly saying which camera to use, you implicitly make the decision by telling ARKit what kinds of things you want to track, and it figures out the rest.
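A minimal sketch of that approach, assuming your scene has both an ARFaceManager and an ARPlaneManager attached to the AR Session Origin (the component and field names below are illustrative, not from the posts above):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical helper: switches cameras implicitly by toggling trackable
// managers. Enabling only the face manager makes ARKit choose the front
// camera; enabling plane tracking makes it choose the rear (world) camera.
public class ImplicitCameraSwitcher : MonoBehaviour
{
    [SerializeField] ARFaceManager faceManager;
    [SerializeField] ARPlaneManager planeManager;

    public void UseFrontCamera()
    {
        planeManager.enabled = false; // stop world-tracking features first
        faceManager.enabled = true;   // face tracking forces the selfie cam
    }

    public void UseRearCamera()
    {
        faceManager.enabled = false;
        planeManager.enabled = true;  // plane tracking forces the world cam
    }
}
```

Note that flipping managers like this restarts the underlying session configuration, so tracking state (anchors, detected planes) is generally lost on switch.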
Oh great. Thank you for that explanation. Indeed, it seems that AR Foundation picks the camera based on the feature/manager chosen. One other issue is People Occlusion. My end goal is to have it act as a segmentation filter on the selfie/front camera: cull out the background behind the user’s face and shoulders. So far, switching managers to create such a segmentation filter causes a runtime crash; my guess is that forcing this camera flip is the cause. I was worried that People Occlusion, or even a more basic segmentation-like filter, simply wasn’t doable on the front camera due to hardware limitations. However, I’ve been told that native ARKit builds can indeed handle this on the front camera, so it might currently be a bug in ARFoundation (since People Occlusion and related features are still so new/in beta).
I guess in short… hoping to be able to have the feature work on the front camera.
I would prefer explicit control of which camera is being used, front or back. For example, ARKit 3.0 now supports simultaneous planar tracking with the rear camera while getting data from the front camera at the same time, but AR Foundation keeps automatically switching to showing the front camera whenever the face manager is enabled. That’s not what I want.
@todds_unity please add support for the front camera on older iOS devices. I need to be able to select different cameras for different purposes; one of these purposes might simply be accessing the front camera with no additional requirements. I would like to be able to do this without needing an additional library. Thanks =)
I can’t find the camera-selection function. I’m using Unity 2019.3.8f1 and AR Foundation 4.0 preview. Please explain how to select the device camera.
See the [ARCameraManager.requestedFacingDirection](https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@4.0/api/UnityEngine.XR.ARFoundation.ARCameraManager.html#UnityEngine_XR_ARFoundation_ARCameraManager_requestedFacingDirection) property.
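A short sketch of how that property can be used to toggle cameras at runtime (the component and method names here are illustrative; `requestedFacingDirection` and `currentFacingDirection` are the actual AR Foundation 4.x API):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Toggles the requested camera facing direction at runtime. The request is
// best-effort: the platform only honors it if the currently enabled feature
// set (trackable managers) is supported on that camera.
public class FacingDirectionToggle : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager;

    public void Toggle()
    {
        cameraManager.requestedFacingDirection =
            cameraManager.currentFacingDirection == CameraFacingDirection.User
                ? CameraFacingDirection.World
                : CameraFacingDirection.User;
    }
}
```

Because it is only a request, it’s worth checking `currentFacingDirection` afterwards to see which camera the session actually selected.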
Using ARFoundation 4 preview 3, the camera being used keeps shifting for me between builds, sometimes facing the world, sometimes facing the user. I’m not changing anything; it’s set to “World” in the ARCameraManager.
We need this feature too. Our use case is a social network based on AR, where people can place selfie videos in the world. Switching to the device camera for shooting these videos is too cumbersome and did not work well. So we found a way to record the AR camera instead. But we’d need to switch the camera on the fly. We wouldn’t care if world tracking is restarted in this case. Yet we need an on-the-fly flip.