Simultaneous use of marker type and face recognition type AR

Is it possible to recognize and display face recognition type and marker type AR at the same time with one application?

It is an application that reads a marker and displays an image of a fruit (for example, an apple) in space with AR, and displays an image of an apple on a human face.

This may be supported on some devices. Run your AR project in a Development Build and look in the log for the supported configuration descriptors.
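Besides reading the log, you can enumerate the descriptors at runtime and check whether any single configuration combines the features you need. A minimal sketch, assuming AR Foundation 4.x and its `XRSessionSubsystem.GetConfigurationDescriptors` API (the component name `ConfigurationChecker` and the serialized `session` reference are mine):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Logs every configuration descriptor the platform reports and flags
// any descriptor that offers Face Tracking on the world-facing camera.
public class ConfigurationChecker : MonoBehaviour
{
    [SerializeField] ARSession session;

    void Start()
    {
        var subsystem = session.subsystem;
        if (subsystem == null)
            return;

        using (var descriptors = subsystem.GetConfigurationDescriptors(Allocator.Temp))
        {
            foreach (var descriptor in descriptors)
            {
                // 'capabilities' is a flags enum, so ToString() prints
                // the same feature list you see in the development log.
                Debug.Log($"rank {descriptor.rank}: {descriptor.capabilities}");

                bool faceWithWorldCamera =
                    (descriptor.capabilities & Feature.FaceTracking) != 0 &&
                    (descriptor.capabilities & Feature.WorldFacingCamera) != 0;

                if (faceWithWorldCamera)
                    Debug.Log("Face Tracking + world-facing camera available in one configuration.");
            }
        }
    }
}
```

If none of the descriptors combines `Feature.FaceTracking` with `Feature.WorldFacingCamera` (or with `Feature.ImageTracking`), the device cannot run those features in a single session.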

For example, Apple's documentation states that simultaneous use of the front and back cameras should be supported on A12/A12X devices and newer.

But my iPad Pro 2020 with A12Z CPU (which is newer than A12) prints these configurations:

Configuration Descriptor 0x1f8783300 (rank 2): Rotation and Orientation, Plane Tracking, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color), Raycast
Configuration Descriptor 0x1f8001878 (rank 0): World Facing Camera, Rotation and Orientation, Plane Tracking, Image Tracking, Object Tracking, Environment Probes, 2D Body Tracking, People Occlusion Stencil, People Occlusion Depth, Collaboration, Auto-Focus, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color), Raycast, Meshing, Mesh Classification
Configuration Descriptor 0x1f8783350 (rank 1): World Facing Camera, Rotation Only, 2D Body Tracking, People Occlusion Stencil, People Occlusion Depth, Auto-Focus, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color)
Configuration Descriptor 0x1f8782ae0 (rank 1): World Facing Camera, Rotation Only, Image Tracking, 2D Body Tracking, People Occlusion Stencil, People Occlusion Depth, Auto-Focus, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color)
Configuration Descriptor 0x1f8784098 (rank -1): World Facing Camera, Rotation and Orientation, Plane Tracking, Image Tracking, Environment Probes, 2D Body Tracking, 3D Body Tracking, 3D Body Scale Estimation, Auto-Focus, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color), Raycast
Configuration Descriptor 0x1f8782928 (rank 2): User Facing Camera, Rotation Only, Rotation and Orientation, Face Tracking, People Occlusion Stencil, Light Estimation (Ambient Intensity), Light Estimation (Ambient Color), Light Estimation (Spherical Harmonics), Light Estimation (Main Light Direction), Light Estimation (Main Light Intensity)

As you can see, Face Tracking appears only in the last descriptor, which uses the user-facing camera and does not include Image Tracking, so on this device it’s not possible to run Face Tracking and Image Tracking simultaneously.

The AR Foundation Samples repo has this example scene, which is a good place to start. My iPad Pro 2020 should be able to run this scene, but I still can’t figure out why there is no Configuration Descriptor that supports both cameras simultaneously.