Hi. I have a camera rotating around a dynamic (moving) object for a full cycle; it then moves a fixed distance closer to the object and keeps rotating. Every 0.5 s, I capture the camera view, assign it to a RenderTexture, and write it to a PNG file. My goal is to run this multiple times with different texture widths and heights and obtain sets of images at different resolutions, from different angles and distances.
The relevant part of my code looks like this:
using System.IO;
using UnityEngine;

public class CaptureCamera : MonoBehaviour
{
    private RenderTexture rt;
    private float nextActionTime = 0.0f; // start capturing at 0 s
    public float period = 0.5f;          // capture interval in seconds
    public int texWidth = 1920;
    public int texHeight = 1080;
    public string capturePath;           // output folder, set in the Inspector
    private int fileCounter = 0;

    private void LateUpdate()
    {
        // Capture the object periodically
        if (Time.time > nextActionTime)
        {
            nextActionTime += period;
            Capture();
        }
    }

    public void Capture()
    {
        Camera cam = GetComponent<Camera>();
        rt = new RenderTexture(texWidth, texHeight, 24, RenderTextureFormat.ARGB32);
        rt.Create();
        cam.targetTexture = rt;

        // Render into the RenderTexture and read the pixels back on the CPU
        RenderTexture currentActiveRT = RenderTexture.active;
        RenderTexture.active = rt;
        cam.Render();
        Texture2D tex = new Texture2D(rt.width, rt.height, TextureFormat.ARGB32, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = currentActiveRT;

        // Encode to PNG, then release the GPU resources before writing to disk
        byte[] bytes = tex.EncodeToPNG();
        Destroy(tex);
        cam.targetTexture = null;
        rt.Release();
        Destroy(rt);

        File.WriteAllBytes(capturePath + fileCounter + ".png", bytes);
        fileCounter++;
    }
}
For each “experiment”, I set texWidth and texHeight to new values and enter play mode. Another script attached to the camera rotates it and moves it closer to the object until a minimum distance is reached, while the script above captures the camera view every 0.5 s in the meantime.
My issue is that when I inspect the captured images after the experiments, they do not match exactly, even though, according to my setup, they should have been taken at the same sampling times. The timing of the snapshots seems to vary between runs (i.e. between restarts of play mode). My best guess is that this is caused by a slow GPU-to-CPU copy from the use of Texture2D.ReadPixels; however, I'm not entirely sure whether using AsyncGPUReadback to copy the data from the GPU to the CPU would mitigate it. I would quickly try it out, but it seems that AsyncGPUReadback is not supported on OpenGL and might only work if a 3rd-party plugin (such as this one) is used.
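For reference, this is roughly what I had in mind for the async variant (an untested sketch, since my build target uses OpenGL; `AsyncCapture`, `capturePath`, and `fileCounter` are my own names, and `ImageConversion.EncodeArrayToPNG` assumes a Unity version recent enough to have it):

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.Rendering;

public class AsyncCapture : MonoBehaviour
{
    public RenderTexture rt;   // target the camera renders into
    public string capturePath; // output folder, set in the Inspector
    private int fileCounter = 0;

    public void CaptureAsync(Camera cam)
    {
        // Async readback is not available on all graphics APIs (e.g. OpenGL)
        if (!SystemInfo.supportsAsyncGPUReadback)
            return;

        cam.targetTexture = rt;
        cam.Render();

        // Queue the GPU-to-CPU copy; the callback runs a few frames later,
        // so the main thread never blocks on the transfer
        AsyncGPUReadback.Request(rt, 0, TextureFormat.ARGB32, request =>
        {
            if (request.hasError)
                return;

            var data = request.GetData<byte>();
            byte[] png = ImageConversion.EncodeArrayToPNG(
                data.ToArray(), rt.graphicsFormat, (uint)rt.width, (uint)rt.height);
            File.WriteAllBytes(capturePath + fileCounter + ".png", png);
            fileCounter++;
        });
    }
}
```

Note that because the callback is deferred, the PNG would reflect the frame at request time, but the file write itself lands a few frames later.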
Another possibility is that such deterministic sampling is simply not achievable across different runs. In that case I would probably have to add multiple cameras and save multiple textures of different sizes during a single run. I'm not sure, though, how many textures I could write to files simultaneously with this approach.
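The multi-camera fallback I'm considering would look roughly like this (a hypothetical sketch: `MultiResCapture`, `resolutions`, and `capturePath` are names I made up, and all cameras are assumed to share the same rig transform so each capture sees the same view at different resolutions):

```csharp
using System.IO;
using UnityEngine;

public class MultiResCapture : MonoBehaviour
{
    public Camera[] cameras;          // one camera per target resolution
    public Vector2Int[] resolutions;  // matching widths/heights, set in the Inspector
    public string capturePath;        // output folder, set in the Inspector
    private int fileCounter = 0;

    // Render every camera at its own resolution in the same frame,
    // so all resolutions of one sample share the same pose and timestamp
    public void CaptureAll()
    {
        RenderTexture previous = RenderTexture.active;
        for (int i = 0; i < cameras.Length; i++)
        {
            var rt = RenderTexture.GetTemporary(resolutions[i].x, resolutions[i].y, 24);
            cameras[i].targetTexture = rt;
            cameras[i].Render();

            RenderTexture.active = rt;
            var tex = new Texture2D(rt.width, rt.height, TextureFormat.ARGB32, false);
            tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
            tex.Apply();

            File.WriteAllBytes(
                capturePath + resolutions[i].x + "x" + resolutions[i].y
                    + "_" + fileCounter + ".png",
                tex.EncodeToPNG());

            Destroy(tex);
            cameras[i].targetTexture = null;
            RenderTexture.ReleaseTemporary(rt);
        }
        RenderTexture.active = previous;
        fileCounter++;
    }
}
```

Even if the sampling times still drift between frames, all resolutions within one sample would at least be guaranteed to match each other.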