So, as for the original issue of adding a Surface as an additional display on Android: I now have it working, though not without a lot of frustration and cursing along the way. And since it seems this will be the only source of documentation for this feature in the near future, I’ll try to make it decent.
Firstly, you need to be using Unity 2020.2.0f1 or later; it simply does not work in earlier versions. If you are upgrading from an earlier version and have custom Gradle files, make sure you apply the fix listed above.
I’m using C++ for my MediaCodec implementation, but you should be able to do the equivalent from a Java plugin.
I originally implemented this with a custom Activity derived from UnityPlayerActivity, solely to get access to the protected mUnityPlayer field. That is ugly and messy, so I have since worked out how to use JNI to avoid any custom Activity class. This code currently assumes the additional display will be at index 1 (the second display), which could be a problem on an Android device that already has more than one display. Who knows what an Android device will have these days. It would make more sense to pass in the current length of the Display.displays array and use that as the index of the new display.
static void displayChanged(jobject surface)
{
    // Get the current activity from the static field in the UnityPlayer class.
    const jclass playerClass = jniEnv->FindClass("com/unity3d/player/UnityPlayer");
    const jfieldID currentActivityID = jniEnv->GetStaticFieldID(playerClass, "currentActivity", "Landroid/app/Activity;");
    jobject currentActivity = jniEnv->GetStaticObjectField(playerClass, currentActivityID);

    // Get the current UnityPlayer instance from the current activity in the mUnityPlayer field.
    // This field is protected in Java, but apparently everything is accessible via JNI.
    const jclass activityClass = jniEnv->GetObjectClass(currentActivity);
    const jfieldID unityPlayerID = jniEnv->GetFieldID(activityClass, "mUnityPlayer", "Lcom/unity3d/player/UnityPlayer;");
    jobject unityPlayer = jniEnv->GetObjectField(currentActivity, unityPlayerID);

    // Call the displayChanged method on the UnityPlayer instance.
    const jmethodID displayChangedID = jniEnv->GetMethodID(playerClass, "displayChanged", "(ILandroid/view/Surface;)Z");
    jboolean result = jniEnv->CallBooleanMethod(unityPlayer, displayChangedID, 1, surface);
}
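A note on jniEnv: that is just the JNIEnv pointer for the calling thread. In case it helps anyone, here is a minimal sketch of how it can be obtained in a native plugin; the caching scheme and the ensureJniEnv helper are my own, so adapt as needed (a JNIEnv is only valid on the thread it was obtained for).

#include <jni.h>

static JavaVM* javaVM = nullptr;
static JNIEnv* jniEnv = nullptr;

// Called by the Java VM when the plugin library is loaded; cache the VM pointer.
extern "C" JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* /*reserved*/)
{
    javaVM = vm;
    return JNI_VERSION_1_6;
}

// Ensure jniEnv is valid for the calling thread before making any JNI calls.
static bool ensureJniEnv()
{
    if (javaVM->GetEnv(reinterpret_cast<void**>(&jniEnv), JNI_VERSION_1_6) == JNI_OK)
        return true;
    // This thread (e.g. a native worker) is not attached to the VM yet.
    return javaVM->AttachCurrentThread(&jniEnv, nullptr) == JNI_OK;
}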
I call this after I create the encoder input surface.
// Create the surface that will feed the encoder.
error = AMediaCodec_createInputSurface(encoder_, &inputSurface_);
if (error != AMEDIA_OK)
{
    stopEncoder();
    return -1;
}

// Get a Java Surface object for the native window.
// ANativeWindow_toSurface is declared in <android/native_window_jni.h>.
jobjectSurface_ = ANativeWindow_toSurface(jniEnv, inputSurface_);

// Add the encoder input Surface as an additional display for Unity.
displayChanged(jobjectSurface_);
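For context, encoder_ itself was created and configured before this point, roughly as follows. Treat this as a sketch: the MIME type and the COLOR_FormatSurface constant are standard, but the resolution, bitrate and framerate are just example values that match what I describe below, and the createEncoder helper is my own naming.

#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

// 0x7F000789 = MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface,
// i.e. the encoder input comes from a Surface rather than from buffers.
static const int32_t kColorFormatSurface = 0x7F000789;

media_status_t createEncoder(AMediaCodec** encoder)
{
    AMediaFormat* format = AMediaFormat_new();
    AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_WIDTH, 720);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_HEIGHT, 1280);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_COLOR_FORMAT, kColorFormatSurface);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_BIT_RATE, 2000000);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_FRAME_RATE, 30);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_I_FRAME_INTERVAL, 1);

    // The input surface can only be created after configure() and before start().
    *encoder = AMediaCodec_createEncoderByType("video/avc");
    media_status_t error = AMediaCodec_configure(*encoder, format, nullptr, nullptr,
                                                 AMEDIACODEC_CONFIGURE_FLAG_ENCODE);
    AMediaFormat_delete(format);
    return error;
}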
The displayChanged call starts the asynchronous process of adding the Surface as an additional display. Now, you would expect the Display.onDisplaysUpdated event to be triggered when the additional display is added or removed. It is not. You must regularly poll the length of the Display.displays array and respond when it changes from the previous check.
// Number of displays seen on the last check; only the main display exists at startup.
int _displayCount = 1;

void Update()
{
    // Detect when the encoder input display is added or removed.
    if (_displayCount != Display.displays.Length)
    {
        _displayCount = Display.displays.Length;
        Display_onDisplaysUpdated();
    }
}
Now we need to tell the relevant camera which display to render to. The call to display.SetRenderingResolution(w, h) may or may not be required; I was seeing some odd resolutions reported by Unity for this additional display, even though the Surface it was given had the correct dimensions.
void Display_onDisplaysUpdated()
{
    if (_displayCount > 1)
    {
        int index = _displayCount - 1;
        Debug.Log("Activating additional display");
        var display = Display.displays[index];
        // We are rendering to a portrait screen, hence taller than wide.
        display.SetRenderingResolution(720, 1280);
        // The display is apparently already activated, but we call this anyway.
        display.Activate();
        _encoderInputCamera.targetDisplay = index;
        _encoderInputCamera.gameObject.SetActive(true);
    }
    else
    {
        _encoderInputCamera.gameObject.SetActive(false);
    }
}
The MediaCodec encoder will now start seeing input and producing H.264 packets of the rendered output. Yay!
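For anyone following along, the packets come out of the codec in the usual NDK way. A rough sketch of the drain loop, with error handling and the various INFO_* return codes glossed over:

#include <media/NdkMediaCodec.h>

// Drain any encoded H.264 packets the codec has ready. Call this
// periodically, e.g. from a dedicated encoder thread.
void drainEncoder(AMediaCodec* encoder)
{
    AMediaCodecBufferInfo info;
    for (;;)
    {
        // Timeout of 0: return immediately if no output is ready.
        const ssize_t index = AMediaCodec_dequeueOutputBuffer(encoder, &info, 0);
        if (index < 0)
            break; // TRY_AGAIN_LATER, FORMAT_CHANGED, etc.: nothing to consume yet.
        size_t size = 0;
        uint8_t* data = AMediaCodec_getOutputBuffer(encoder, index, &size);
        // One H.264 packet lives at data + info.offset, length info.size.
        // Hand it to your muxer or network sender here.
        AMediaCodec_releaseOutputBuffer(encoder, index, false /* don't render */);
    }
}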
But the fun is not over yet; there are other gotchas in store when encoding video. Despite what the documentation states, setting Application.targetFrameRate had no effect on my Google Pixel 4a. The app always tries to render at the refresh rate of the screen (Screen.currentResolution.refreshRate, though what do Screen and currentResolution even mean when you have multiple displays?). In my case it always renders at 60fps, regardless of what I set Application.targetFrameRate to. When creating a MediaCodec encoder, you tell it the framerate you want the video stream to play at, and this has to match the framerate of the Unity app, because the rendering of the additional display is what feeds the encoder with video frames.

To get different framerates, I needed to set QualitySettings.vSyncCount. By default this is 0, which means “no sync”, but on mobile devices rendering is always synced to the refresh rate, so 0 is effectively the same as 1. Set QualitySettings.vSyncCount to 2 to render at half the refresh rate (30fps in my case), 3 to render at one third (20fps), 4 for a quarter (15fps), and so on. It is not the cleanest solution (it also blocks updates), so I’m still working on that.
I think that is everything. It has been incredibly frustrating over the past several weeks, but it is finally working. @florianpenzkofer does this look like how it should be working?