Control Unity Recorder from Script

Unfortunately no, this is not exposed yet. The reason is that this is not a fully future-proof approach to supporting other encoders. We created internal properties that work for ProRes but didn’t want to make them public, because there is no way back once we do. This is part of the Recorder API improvements in our pipeline.

The Media File Format corresponds to the internal property int MovieRecorderSettings.encoderSelected, while the Codec Format corresponds to int MovieRecorderSettings.encoderPresetSelected.
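Until that API lands, the only way to touch these internal members from script is reflection, which is unsupported and can break with any Recorder update. A rough sketch (member names taken from the post above; whether they are properties or fields may vary by version, so both are tried):

```csharp
using System.Reflection;
using UnityEditor.Recorder;

static class EncoderHack
{
    // Unsupported: pokes the internal encoder/preset indices on MovieRecorderSettings.
    // Which index values map to which encoder are assumptions; inspect the package
    // source for your installed version before relying on this.
    public static void SetEncoder(MovieRecorderSettings settings, int encoderIndex, int presetIndex)
    {
        SetMember(settings, "encoderSelected", encoderIndex);
        SetMember(settings, "encoderPresetSelected", presetIndex);
    }

    static void SetMember(object target, string name, object value)
    {
        const BindingFlags flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        var type = target.GetType();
        var prop = type.GetProperty(name, flags);
        if (prop != null) { prop.SetValue(target, value); return; }
        var field = type.GetField(name, flags);
        if (field != null) field.SetValue(target, value);
    }
}
```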


Hello everyone, I want to record depth textures with the Recorder. How can I do this?

You need an HDRP 7.3+ project and then use the AOV Recorder.
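If you want to drive it from script instead of the Recorder window, the AOV Recorder follows the same controller pattern as the other recorders. A sketch only: the AOVRecorderSettings class name and its depth-selection property are assumptions here, and they have moved between package versions, so verify against the package you actually have installed:

```csharp
using UnityEditor.Recorder;
using UnityEngine;

static class DepthCaptureSketch
{
    public static void StartDepthCapture()
    {
        var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
        var controller = new RecorderController(controllerSettings);

        // AOVRecorderSettings and its selection member are assumptions;
        // check the AOV Recorder package you have installed.
        var aov = ScriptableObject.CreateInstance<AOVRecorderSettings>();
        aov.name = "DepthRecorder";
        aov.Enabled = true;
        // aov.AOVSelection = AOVType.Depth; // property/enum names assumed, verify them
        aov.OutputFile = "D:/Recordings/depth_<Frame>";

        controllerSettings.AddRecorderSettings(aov);
        controllerSettings.FrameRate = 30;
        controller.PrepareRecording();
        controller.StartRecording();
    }
}
```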

I want to record alpha and exclude the background. For this to work, I think I need to use a camera with a special tag, because grabbing the screen doesn’t preserve alpha as intended. How can I change the source of the Recorder via script and assign a camera there?

private void SetRecWindowParameters(RecorderWindow _window)
{
    var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
    var TestRecorderController = new RecorderController(controllerSettings);

    var imageRecorder = ScriptableObject.CreateInstance<ImageRecorderSettings>();
    imageRecorder.name = "SingleFrameRecorder";
    imageRecorder.Enabled = true;
    imageRecorder.CaptureAlpha = false;
    imageRecorder.RecordMode = RecordMode.SingleFrame;
    imageRecorder.OutputFormat = ImageRecorderSettings.ImageRecorderOutputFormat.PNG;
    imageRecorder.OutputFile = "D:\\Recordings\\image_<Take>_<Frame>";

    imageRecorder.imageInputSettings = new GameViewInputSettings
    {
        OutputWidth = 512,
        OutputHeight = 512
    };

    controllerSettings.SetRecordModeToFrameInterval(1, 3);
    controllerSettings.AddRecorderSettings(imageRecorder);
    controllerSettings.FrameRate = 30;

    RecorderOptions.VerboseMode = false;
    TestRecorderController.PrepareRecording();
    TestRecorderController.StartRecording();
}

thank you

You need several things to record transparency:

  • an output format that supports transparency (PNG is fine; you can’t control ProRes via script yet)
  • a camera that sees transparency (NOT the Game View, but a Target Camera will work if properly configured)
  • properly configured Recorder Settings

Here is how to configure a camera that records the alpha channel (the tag is what you’ll use in your Target Camera source):
[screenshot of the camera configuration]
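In script form, the camera setup amounts to giving the camera a dedicated tag and a fully transparent background. A minimal sketch, assuming a "Recorder" tag already exists in the Tag Manager (tags must be created in Project Settings, not at runtime); note that in HDRP the background color lives on HDAdditionalCameraData, so adjust accordingly:

```csharp
using UnityEngine;

public class AlphaCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = new Color(0f, 0f, 0f, 0f); // alpha 0: transparent background
        gameObject.tag = "Recorder"; // the tag you select as the Tagged Camera source
    }
}
```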

Then, to record transparency with this camera, do this:

imageRecorder.imageInputSettings = new CameraInputSettings
{
    Source = ImageSource.MainCamera,
    OutputWidth = 512,
    OutputHeight = 512,
    RecordTransparency = true,
    CaptureUI = false,
    FlipFinalOutput = false
};

(the code can be modified to record a tagged camera with another tag)


Thank you very much for the help,
these changes work pretty well, in case anyone needs them later on:
Use this function and it will save a series of screenshots with alpha from the camera tagged ‘Recorder’.

private void SetRecWindowParameters()
{
    var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
    var TestRecorderController = new RecorderController(controllerSettings);

    var imageRecorder = ScriptableObject.CreateInstance<ImageRecorderSettings>();
    imageRecorder.name = "SingleFrameRecorder";
    imageRecorder.Enabled = true;
    imageRecorder.CaptureAlpha = true;
    imageRecorder.RecordMode = RecordMode.SingleFrame;
    imageRecorder.OutputFormat = ImageRecorderSettings.ImageRecorderOutputFormat.PNG;
    imageRecorder.OutputFile = "D:\\Recordings\\image_<Take>_<Frame>";

    imageRecorder.imageInputSettings = new CameraInputSettings
    {
        Source = ImageSource.TaggedCamera,
        RecordTransparency = true,
        CaptureUI = false,
        FlipFinalOutput = false,
        OutputWidth = (int)_outputDimensions.x,
        OutputHeight = (int)_outputDimensions.y,
        CameraTag = "Recorder"
    };

    controllerSettings.SetRecordModeToFrameInterval(1, 3);
    controllerSettings.AddRecorderSettings(imageRecorder);
    controllerSettings.FrameRate = 30;

    RecorderOptions.VerboseMode = false;
    TestRecorderController.PrepareRecording();
    TestRecorderController.StartRecording();
}

Hi,

I’m looking at the new (and more accessible) AOV API in Unity 2020.3, and I’m wondering how it works. For instance, it was mentioned that I can now do things like render/capture ObjectIDs, but the code sample that was provided is a bit lacking. It also mentioned that I can pass in Custom Passes, but when I looked at that part of the code, it was nothing but an array of (injection-point, render-target) pairs. Is this saying: run the Custom Passes whose injection-point and render-target combinations are in the array?

I also noticed that in the sample code, the after-render callback/event-handler checks the count of textures. Are we able to capture multiple renders of the same camera within a single frame? There is documentation, but it gives very terse explanations along the lines of “The AOVRequest is an AOVRequest”, which is better than nothing, but not by much.

Where can I get more information on how to actually use this thing?

So, another question.

Is there a way to use the Recorder’s simulation management (in Unity 2019.4 LTS) without having the Recorder record anything?

One of the things I noticed about the Unity Recorder (when combined with AOV) is that it doesn’t capture negative values, even when the output is EXR images. Yet the Recorder is excellent at getting the camera and game simulation to run the same way across multiple runs. I guess what I’m asking is: how do we replace the render capture with our own capture?

Hi,

you’re asking questions about HDRP in a thread about the Recorder package.
I’ll try to help, but you will get better answers by posting in the right forum.

Here is a script that assigns colors to all objects in the scene:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

public class ObjectIDScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
        var RendererList = Resources.FindObjectsOfTypeAll(typeof(Renderer));

        System.Random rand = new System.Random(3);
        float stratumSize = 1.0f / RendererList.Length;

        int index = 0;
        foreach (Renderer renderer in RendererList)
        {
            MaterialPropertyBlock propertyBlock = new MaterialPropertyBlock();
            float hue = (float)rand.NextDouble(); // index * stratumSize + (float)rand.NextDouble() * stratumSize;
            propertyBlock.SetColor("ObjectColor",  Color.HSVToRGB(hue, 0.7f, 1.0f));
            renderer.SetPropertyBlock(propertyBlock);
            index++;
        }
    }
  
    // Update is called once per frame
    void Update()
    {
       
    }
}

Here is a script that writes the AOV data to PNG files:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;
using UnityEngine.Rendering.HighDefinition.Attributes;

public class AOVOutputCustomPass : MonoBehaviour
{
    public RenderTexture _outputTexture = null;
    public bool _writeToDisk = false;
    public CustomPassInjectionPoint _injectionPoint = CustomPassInjectionPoint.BeforePostProcess;

    RTHandle _rt;
    Texture2D m_ReadBackTexture;
    int m_Frames = 0;

    RTHandle RTAllocator(AOVBuffers bufferID)
    {
        return _rt ?? (_rt = RTHandles.Alloc(_outputTexture.width, _outputTexture.height));
    }
    void AovCallbackEx(
        CommandBuffer cmd,
        List<RTHandle> buffers,
        List<RTHandle> customBuffers,
        RenderOutputProperties outProps
        )
    {
        if (_outputTexture == null)
        {
            return;
        }

        if (buffers.Count > 0)
        {
            cmd.Blit(buffers[0], _outputTexture);
        }

        if (customBuffers.Count > 0)
        {
            cmd.Blit(customBuffers[0], _outputTexture);
            if (_writeToDisk)
            {
                m_ReadBackTexture = m_ReadBackTexture ?? new Texture2D(_outputTexture.width, _outputTexture.height, TextureFormat.RGBAFloat, false);
                RenderTexture.active = customBuffers[0].rt;
                m_ReadBackTexture.ReadPixels(new Rect(0, 0, _outputTexture.width, _outputTexture.height), 0, 0, false);
                m_ReadBackTexture.Apply();
                RenderTexture.active = null;
                byte[] bytes = m_ReadBackTexture.EncodeToPNG();
                System.IO.File.WriteAllBytes($"output_{m_Frames++}.png", bytes);
            }
        }
    }

    AOVRequestDataCollection BuildAovRequest()
    {
        var aovRequest = AOVRequest.NewDefault();
        CustomPassAOVBuffers[] customPassAovBuffers = null;
        customPassAovBuffers = new[] { new CustomPassAOVBuffers(CustomPassInjectionPoint.BeforePostProcess, CustomPassAOVBuffers.OutputType.CustomPassBuffer) };

        var bufAlloc = _rt ?? (_rt = RTHandles.Alloc(_outputTexture.width, _outputTexture.height));

        return new AOVRequestBuilder().Add(
            aovRequest,
            RTAllocator,
            null, // lightFilter
            null,
            customPassAovBuffers,
            bufferId => bufAlloc,
            AovCallbackEx
        ).Build();
    }

    // Start is called before the first frame update
    void Start()
    {
        GetComponent<HDAdditionalCameraData>().SetAOVRequests(BuildAovRequest());
    }

    // Update is called once per frame
    void Update()
    {
       
    }
}

Assign them both to your HDRP project camera and create a RenderTexture for this to work.
You’ll get numbered images in your project root. Hope this helps.

You can’t replace the Render Capture of the Recorder. What is your ultimate objective? To store negative AOVs?

Well, the ultimate objective is to extract any data the client wants and capture it, while repeating the simulation in a deterministic fashion. That way we can extract different data and augment the previous data-set while keeping everything in sync, because the simulation runs deterministically. And if the client wants to extract a different kind of data, we can just drop that bit of code in without having to rewrite any of the previous code.

When I implemented a Custom Pass that displayed the Motion Vectors after post-processing, all the negative values were clamped to zero by the AOV Recorder. It is possible that post-processing did this, but how it did this to something that ran after post is beyond me. The first attempt to get around that was to compress the motion-vector values into the [0, 1] range like we do for normals, but 16-bit floating-point numbers only hold 11 bits of precision, so “(value / 2.0) + 0.5” causes most values to drop to zero at 120 frames per second.

A quick example: let’s say our register can only hold 4 digits.
0.5 becomes 5.000 * 10^-1
0.00001 becomes 1.000 * 10^-5
0.5 + 0.00001 becomes 5.000 * 10^-1 + 0.0001 * 10^-1, but we can only hold 4 digits, so the trailing 1 gets dropped:
0.5 + 0.00001 becomes 5.000 * 10^-1 + 0.000 * 10^-1
Instead of getting 0.50001, we got 0.5.

That +0.5 was wiping out most of our motion-vector data.
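The precision loss is easy to reproduce with Unity’s half-float helpers. A small sketch: packing a motion-vector component into [0, 1] at half precision puts it next to 0.5, where a half’s spacing is 2^-11 (about 0.000488), so components much smaller than that round away entirely:

```csharp
using UnityEngine;

static class HalfPackingLoss
{
    // Round-trips a [-1,1] motion-vector component through the [0,1] packing
    // at half precision, the same packing described above.
    public static float PackUnpack(float v)
    {
        ushort half = Mathf.FloatToHalf(v * 0.5f + 0.5f); // pack [-1,1] -> [0,1]
        return (Mathf.HalfToFloat(half) - 0.5f) * 2.0f;   // unpack
    }

    public static void Demo()
    {
        Debug.Log(PackUnpack(0.001f));  // survives, but with reduced precision
        Debug.Log(PackUnpack(0.0001f)); // rounds to 0: the data is gone
    }
}
```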

The next attempt around this was to just re-render (to a RenderTexture) at the points where the Recorder stops the camera to render, and it sort of worked. But when we pulled the frame number from Time, it didn’t match the frame number the Recorder was stamping into the images. Also, we didn’t have the time to check whether the difference in frame number would be consistent across multiple machines. On the machine where we tested this workaround, the frame number from Time was 4 frames larger than the one in the Recorder’s output images from the same location.

We (well, I) built my code on top of the AOV Extension Recorder because the base Recorder didn’t have EXR output, and it wouldn’t capture all of the lights and colors when set to capture a series of images (it worked great for video capture). The AOV Recorder (Extension) plug-in had EXR output and was able to capture the entire output image when set to capture a series of images. I still need to re-check this now that there are newer versions of the base Recorder plug-in. I know the base Recorder can now capture EXR in 2020.3, but I haven’t checked 2019.4.

Yet, the reason we’re using the Recorder is that one of our projects pre-dates the Timeline tool. It also pre-dates the Standard Surface Shader used in the Standard Render Pipeline, though it can’t pre-date it by much, because it was using prototype Standard Surface Shaders. Despite getting the project to work in Unity 2019.4 and HDRP 7.5, getting the scene to run under the Timeline tool is more work than I can do within the time constraints.

Yet, one of the things the Unity Recorder excels at is getting the simulation to run the same way every time we run it (as long as the scene doesn’t contain non-deterministic random particles floating around, like Morgan in the Heretic VFX scene).

So, one of the things I noticed about the new AOV API is that it executes a delegate. This made me think: wouldn’t it be nice if we could just tell the Recorder to execute this bit of code when it tries to capture an image? So I guess my question should have been: how does one create an extension plug-in for the Unity Recorder plug-in? I know it’s possible (or used to be possible), because the now-defunct AOV Recorder is an extension to the Unity Recorder.

The old documentation (actual documentation that tells me how the code interacts with the rest of the code) that I found for the Unity Recorder was for the Recorder used in Unity 2017.4. But the Recorder gained a major performance boost in Unity 2018.4, so I don’t think I can trust that old documentation. Also, I lost the link.

You cannot yet build custom Recorders with support for all features (e.g. async GPU callbacks, adding a new codec to the Movie Recorder) because there are several things that need to be improved in our public API before exposing this.

Note that all the releases so far are compatible with 2019.4.

We are aware of the need for supporting custom passes, and I’ll make a note about your issues with negative values so that we look into this when we work on adding custom passes.

Here is some sample code that shows you the basics of adding a new Recorder. This allows you to grab the camera input, but I have not integrated the AOV API with this. You need 3 classes: the actual Recorder, its settings, and its Editor for the Recorder Window/Timeline clips.

Recorder logic:

using System.IO;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Input;
using UnityEngine;
using UnityEngine.Experimental.Rendering;

class TestRecorder : GenericRecorder<TestRecorderSettings>
{
    protected internal override void RecordFrame(RecordingSession session)
    {
        // Do something with the incoming frame
        if (m_Inputs.Count == 0)
            return;

        var input = (CameraInput)m_Inputs[0];
        var renderTex = input.OutputRenderTexture;
        // RT to Texture2D
        Texture2D tex = new Texture2D(renderTex.width, renderTex.height, renderTex.graphicsFormat, TextureCreationFlags.None);
        var torestore = RenderTexture.active;
        RenderTexture.active = renderTex;
        tex.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
        tex.Apply();
        RenderTexture.active = torestore; // restore previous RT

        // Encode to PNG and save
        var png = tex.EncodeToPNG();
        var path = Settings.fileNameGenerator.BuildAbsolutePath(session); // uses settings
        File.WriteAllBytes(path, png);
    }
    protected internal override bool BeginRecording(RecordingSession session)
    {
        if (!base.BeginRecording(session))
            return false;

        // Do your setup here, open a file handle?
        Debug.LogWarning($"Recording started");
        return true;
    }

    protected internal override void EndRecording(RecordingSession session)
    {
        base.EndRecording(session);

        // Close the file handle?
        Debug.LogWarning($"Recording finished");
    }
}

Recorder settings:

using System;
using System.Collections.Generic;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Input;
using UnityEngine;

/// <summary>
/// Class describing the settings for Test Recorder.
/// </summary>
[Serializable]
[RecorderSettings(typeof(TestRecorder), "Test Recorder")]
public class TestRecorderSettings : RecorderSettings
{
    [SerializeField] CameraInputSettings m_InputSettings = new CameraInputSettings();

    /// <summary>
    /// Default constructor.
    /// </summary>
    public TestRecorderSettings()
    {
    }

    protected internal override string Extension => "png";

    public override IEnumerable<RecorderInputSettings> InputsSettings
    {
        get { yield return m_InputSettings; }
    }
}

The Editor for the Recorder Window/Timeline clips

using System.Collections.Generic;
using UnityEngine;

namespace UnityEditor.Recorder
{
    [CustomEditor(typeof(TestRecorderSettings))]
    class TestRecorderEditor : RecorderEditor
    {
        internal static readonly GUIContent DummyLabel = new GUIContent("Dummy", "A dummy field");
        protected override void FileTypeAndFormatGUI()
        {
            var dummyOptions = new List<string>();
            dummyOptions.Add("Option 1");
            dummyOptions.Add("Option 2");
            EditorGUILayout.Popup(DummyLabel, 0, dummyOptions.ToArray());
        }
    }
}


Sorry about the late reply, I was a bit preoccupied last week. Just wanted to say thank you.

I’m on Unity 2019.4.15f1 and Unity Recorder 2.5.5, and I’m not finding an OutputRenderTexture of any kind inside the CameraInput.

Please double check your logic and cast your objects properly. I just looked at the source code, and the member OutputRenderTexture does appear in class CameraInput because it’s part of its parent class, BaseRenderTextureInput.

So, I took a closer look. It turns out that this is a protected internal property. I can access it if I create my own Input class that inherits from CameraInput; from there, I create a method to grab the result for me.

Of course, that raises the question: how do I get TestRecorder to use TestCameraInput : CameraInput instead of CameraInput?


Edit:
I figured it out.

First, I need to create a classed named TestCameraInput:

public class TestCameraInput
    : CameraInput
{
    public RenderTexture OutputRT
    {
        get { return OutputRenderTexture; }
        set { OutputRenderTexture = value; }
    }

    public Texture2D ReadbackTex2D
    {
        get { return ReadbackTexture; }
        set { ReadbackTexture = value; }
    }

    public Camera TargetCam
    {
        get { return TargetCamera; }
        set { TargetCamera = value; }
    }

}

Next, I need to create a class named TestInputSettings:

public class TestInputSettings
    : CameraInputSettings
{
    protected override Type InputType => typeof(TestCameraInput);
}

By overriding the InputType getter, I can tell base Recorder class to use my TestCameraInput instead of the CameraInput that’s specified by CameraInputSettings.

To finish it off, I tell TestRecorderSettings to use an instance of TestInputSettings instead of an instance of CameraInputSettings. This way, when the base Recorder enumerates the items inside the InputsSettings property that we overrode in the TestRecorderSettings class, it will see that it needs to create an instance of TestCameraInput. And because TestCameraInput is something we created, we can add code to read and/or use the protected members of its parent classes.

By doing all of this, m_Inputs[0] will contain an instance of TestCameraInput.

Update:
So, there might be a bug in the Editor rendering code, or (more likely) there’s another component that needs to be defined. When you declare the field holding your TestInputSettings, do this: [SerializeField] CameraInputSettings m_InputSettings = new TestInputSettings();

When I changed the field type to TestInputSettings, the Editor became unable to draw the input fields for selecting the camera and resolution, due to a null reference error.

So, [SerializeField] TestInputSettings m_InputSettings = new TestInputSettings(); is bad, and [SerializeField] CameraInputSettings m_InputSettings = new TestInputSettings(); is good when creating your own TestRecorderSettings class.


Thanks for taking the time to share your findings in this thread!

@unitybru is there a way to access recorder settings that have already been set? I just want to be able to edit the filename via scripting, keeping the takes and other settings intact.

For the recorders in your Recorder window?
If you need to do something like this, you should instead script everything rather than rely on the Recorder window. You’ll have more control over, and a better understanding of, the settings.
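For example, once you own the settings objects, changing only the file name is trivial. A minimal sketch, assuming you kept a reference to the RecorderControllerSettings you created:

```csharp
using UnityEditor.Recorder;

static class RenameOutputs
{
    // Changes only the output path on every recorder in the given settings;
    // takes, wildcards (<Take>, <Frame>, ...) and all other settings are untouched.
    public static void SetOutputFile(RecorderControllerSettings controllerSettings, string newPath)
    {
        foreach (RecorderSettings s in controllerSettings.RecorderSettings)
            s.OutputFile = newPath;
    }
}
```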

Hi Bruno, so I used this script/method to start and stop the recorder; however, I get an “opening file” error which cuts the recording short.

Opening file failed

Opening file
C:…Start of path…/Library/TempArtifacts/Primary/randomlettertempfilename.resource:
The system cannot find the specified file.

Attached is my code, which primarily uses what you posted above.

using UnityEngine;
using UnityEditor;
using UnityEditor.Recorder;
using UnityEditor.Recorder.Input;
public class RotateMug : MonoBehaviour
{
    public float timePeriod = 2;
    public float height = 30f;
    public float startAngle;
    private float timeSinceStart;
    private Vector3 pivot;
    private string funword = "silly";
    private RecorderController TestRecorderController;

    private void Start()
    {
        pivot = transform.localEulerAngles;
        pivot.y = startAngle;
        height /= 2;
        StartRecorder();
        timeSinceStart = 0;
    }

    void Update()
    {
        if (timeSinceStart < timePeriod)
        {
            Vector3 nextPos = transform.localEulerAngles;
            nextPos.y = pivot.y + height * Mathf.Sin(((Mathf.PI * 2) / timePeriod) * timeSinceStart);
            timeSinceStart += Time.deltaTime;
            transform.localEulerAngles = nextPos;
        }
        else
        {
            StopRecorder();
        }
    }

    private void StartRecorder()
    {
        var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
        TestRecorderController = new RecorderController(controllerSettings);
        var videoRecorder = ScriptableObject.CreateInstance<MovieRecorderSettings>();
        videoRecorder.name = "My Video Recorder";
        videoRecorder.Enabled = true;
        videoRecorder.VideoBitRateMode = VideoBitrateMode.High;
        videoRecorder.ImageInputSettings = new GameViewInputSettings
        {
            OutputWidth = 640,
            OutputHeight = 480
        };
        videoRecorder.AudioInputSettings.PreserveAudio = true;
        string fileName = RecordingName();
        videoRecorder.OutputFile = fileName;
        controllerSettings.AddRecorderSettings(videoRecorder);
        controllerSettings.SetRecordModeToFrameInterval(0, 59); // 2s @ 30 FPS
        controllerSettings.FrameRate = 30;
        RecorderOptions.VerboseMode = false;
        TestRecorderController.PrepareRecording();
        TestRecorderController.StartRecording();
    }
  
    private void StopRecorder()
    {
        // Guard: Update() keeps calling this after the rotation ends,
        // so only stop a recording that is actually running.
        if (TestRecorderController != null && TestRecorderController.IsRecording())
            TestRecorderController.StopRecording();
    }
  
    private string RecordingName()
    {
        return string.Format("{0}/MP4Videos/vid_{1}",
            Application.dataPath,
            funword
        );
    }
}