Hi all! I’m using AR Foundation on an Android device, and I want to take the camera output and put it onto a render texture live. Whenever I try to do this, I just get a black image. What I have is as follows:
On a canvas I have a small Raw Image with a render texture applied to it, and in Update() I’m calling the line from the documentation. Nothing works the way it should. Am I doing something wrong?
I want this render texture to update live with the motion of the camera. The plan is to use FMStream (from the Asset Store) to live-stream it to a PC for further processing.
It’s possible, but there are some caveats. While Android uses one camera texture, iOS uses two. So for iOS you’ll have to blit twice and combine the two textures with a custom shader.
Subscribe to the ARCameraManager.frameReceived event.
For each texture in ARCameraFrameEventArgs.textures, do a Graphics.Blit():
For Android: Graphics.Blit(texture, targetRenderTexture, arCameraBackground.material). Passing arCameraBackground.material is required on Android.
For iOS: Graphics.Blit(texture, targetRenderTexture). Don’t pass arCameraBackground.material here; on iOS it produces a green texture. There’s a minimal sketch of the Android path below.
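To make the steps concrete, here is a minimal sketch of the Android path, assuming this component sits in the scene and the three references are assigned in the Inspector (cameraManager, cameraBackground and targetRenderTexture are placeholder names, not from any package):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class CameraToRenderTexture : MonoBehaviour {
    // Placeholder references, assigned in the Inspector.
    [SerializeField] ARCameraManager cameraManager = null;
    [SerializeField] ARCameraBackground cameraBackground = null;
    [SerializeField] RenderTexture targetRenderTexture = null;

    void OnEnable() {
        cameraManager.frameReceived += OnFrameReceived;
    }

    void OnDisable() {
        cameraManager.frameReceived -= OnFrameReceived;
    }

    void OnFrameReceived(ARCameraFrameEventArgs args) {
        if (args.textures.Count == 0)
            return;

        // Android exposes a single camera texture; blitting it through the
        // ARCameraBackground material is what makes it show up in the RenderTexture.
        Graphics.Blit(args.textures[0], targetRenderTexture, cameraBackground.material);
    }
}

On iOS, args.textures contains the two planes instead, so you’d handle them as described in the iOS note above.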
Works perfectly, thank you! I’m on Android though. Will just that code (without passing in the CameraBackground) work for iOS, or is there anything else I need to do?
@KyryloKuzyk: Thank you so much for the insight, it really helps point me in the right direction with a project. I wonder, though, do you have a link to documentation regarding the two-texture output on iOS? I’m not sure how it is output. Is it by index (i.e. [0] and [1] for the two textures needed)?
If you subscribe to the ARCameraManager.frameReceived event, you can access these textures via ARCameraFrameEventArgs.textures.
To be sure which texture is the Y texture (_textureY) and which one is the CbCr texture (_textureCbCr), I would not recommend accessing them by a hard-coded index. Instead, find the index of the shader property id in ARCameraFrameEventArgs.propertyNameIds and use that same index into ARCameraFrameEventArgs.textures. I think a code sample is better than a thousand words.
Not optimized for performance, just to show the idea:
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ARKitCameraTextures : MonoBehaviour {
    [SerializeField] ARCameraManager cameraManager = null;

    void Awake() {
        cameraManager.frameReceived += args => {
            // Find the Y plane by matching its shader property id
            // against args.propertyNameIds.
            var textureYPropId = Shader.PropertyToID("_textureY");
            var textureYPropIdIndex = args.propertyNameIds.IndexOf(textureYPropId);
            var textureY = args.textures[textureYPropIdIndex];

            // Same lookup for the CbCr plane.
            var textureCbCrPropId = Shader.PropertyToID("_textureCbCr");
            var textureCbCrPropIdIndex = args.propertyNameIds.IndexOf(textureCbCrPropId);
            var textureCbCr = args.textures[textureCbCrPropIdIndex];
        };
    }
}
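Once you have textureY and textureCbCr, one way to end up with a single RGB render texture is to set both planes on a YCbCr-to-RGB conversion material and blit once. This is just a sketch of that idea; the shader behind yCbCrToRgbMaterial and the combinedRenderTexture are hypothetical and would need to be created by you:

// A helper you could add to the class above. yCbCrToRgbMaterial uses a custom
// (hypothetical) shader that samples _textureY and _textureCbCr and converts
// YCbCr to RGB; combinedRenderTexture is created elsewhere.
void CombinePlanes(Texture2D textureY, Texture2D textureCbCr,
                   Material yCbCrToRgbMaterial, RenderTexture combinedRenderTexture) {
    yCbCrToRgbMaterial.SetTexture("_textureY", textureY);
    yCbCrToRgbMaterial.SetTexture("_textureCbCr", textureCbCr);

    // A null source is fine here because the conversion shader reads the two
    // textures set above instead of _MainTex.
    Graphics.Blit(null, combinedRenderTexture, yCbCrToRgbMaterial);
}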
Turns out there is no need for Blit or inspecting the raw textures. All I had to do to get the AR camera image onto a custom object was:
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;

[RequireComponent(typeof(RawImage))]
[RequireComponent(typeof(ARCameraBackground))]
public class ARTextureResponder : MonoBehaviour
{
    void Start()
    {
        // Reuse the ARCameraBackground material so the RawImage renders the live camera feed.
        var rawImage = GetComponent<RawImage>();
        var camBack = GetComponent<ARCameraBackground>().material;
        rawImage.material = camBack;
    }
}
In this case I used a RawImage; I guess any other object that has a material would also work.
Hmm, this is strange. I can confirm that using the texture display names (TextureY, TextureCbCr) doesn’t work, but using the actual property names (_textureY, _textureCbCr) does. I updated the original response, huge thanks for pointing that out!