Efficiently capturing rendered images?

How should I go about capturing rendered images from Unity efficiently? This may seem subjective, but the question is really about whether there is a way to do it, and how, rather than personal sentiments about best practices.

Preface

The idea for the project is to develop a Unity-based render farm of sorts. I am using Unity Pro 2.6.1 and excitedly anticipating 3.0 (maybe it will answer this question, though that seems unlikely).

I am well aware of code (usually on the camera) which does things like:

  • `someTex2D.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false); File.WriteAllBytes(filename, someTex2D.EncodeToPNG());` à la the SaveScreenshot script (`ReadPixels` returns void, so the read and the encode are separate statements),
  • and `Application.CaptureScreenshot(filename)` à la the ScreenShotMovie script.

to write PNGs, and

  • `someTex2D.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false);` followed by `someTex2D.GetPixels()` to get a `Color[]`, which can then be encoded to most anything (see JPEGEncoder).

Note that I have threaded all encoding to avoid holding up Unity; a sketch of that pattern follows below.
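To make the pattern concrete, here is a minimal sketch of what I'm doing. `EncodeToJpeg` is a hypothetical placeholder for a managed `Color[]`-based encoder (such as the JPEGEncoder mentioned above), not a Unity API; the `ReadPixels`/`GetPixels` calls stay on the main thread, and only the encode and disk write are handed to a worker:

```csharp
using System.Collections;
using System.IO;
using System.Threading;
using UnityEngine;

// Sketch: grab the back buffer on the main thread, hand the managed
// pixel array to a worker thread for encoding and writing.
public class ThreadedCapture : MonoBehaviour
{
    private int frameIndex;

    void LateUpdate()
    {
        StartCoroutine(CaptureFrame());
    }

    private IEnumerator CaptureFrame()
    {
        // Wait until rendering has finished so the back buffer is complete.
        yield return new WaitForEndOfFrame();

        int width = Screen.width;    // cache: Unity API is main-thread only
        int height = Screen.height;

        Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0, false);

        Color[] pixels = tex.GetPixels();   // managed copy we can hand off
        Destroy(tex);

        string path = "frame_" + (frameIndex++) + ".jpg";
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            // Pure managed work from here on: safe off the main thread.
            byte[] jpeg = EncodeToJpeg(pixels, width, height);
            File.WriteAllBytes(path, jpeg);
        });
    }

    // Hypothetical placeholder for a managed JPEG encoder.
    private static byte[] EncodeToJpeg(Color[] pixels, int width, int height)
    {
        throw new System.NotImplementedException("plug in your encoder here");
    }
}
```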

My scene normally renders at 150~180 FPS at the moment. Writing each frame out:

  • With threaded `Application.CaptureScreenshot`, it's about 32~36 FPS.
  • With `ReadPixels`, threaded `File.WriteAllBytes` and `EncodeToPNG`, it's about 28~31 FPS.
  • With `ReadPixels`, threaded `GetPixels` and some JPEG encoding, it's about 22~27 FPS.

If I could get it to even 60 FPS (preferably more) and get my image data out, either as images or as some form of compressed data stream (bandwidth is a concern), that would be a vast improvement.

Considerations

When monitoring system performance, I'm only hitting at most 19% CPU usage and my RAM usage doesn't seem to change, so does this mean my bottleneck is on the GPU? If so, is there a way to offload any of this work onto the underused CPU?

Redirecting the rendering pipeline to render directly to a file format or data stream instead of to the screen would be great, but my research has only turned up posts indicating that it is not doable. Unity's browser plugin doesn't meet our project's needs, and from what I've read, a source license seems like too much for the simple thing we would need it for. Is there some way to remove this essentially unused on-screen render and get the image data out?

From my research, I found a post stating that RenderTextures are essentially Frame Buffer Objects, which as I understand it represent logical destinations to write to, not the actual pixel content. This seems to mean that I must call `ReadPixels` to get the image data into my scripts. `ReadPixels` appears to be slow/expensive (180 FPS -> 30 FPS: ouch), and it is a bottleneck because the next frame can't be rendered until the previous frame's pixels have been read back. Am I correct in this understanding of RenderTextures, or is there some way to use just the RenderTexture to get the rendered image out of Unity?
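For reference, this is the round trip I'm describing, sketched at a reduced resolution (the camera and dimensions here are illustrative; RenderTextures require Unity Pro). The RenderTexture's contents stay on the GPU, and `ReadPixels` is the step that pulls them back to script-accessible memory:

```csharp
using UnityEngine;

// Sketch of the RenderTexture -> ReadPixels round trip. ReadPixels stalls
// until the GPU has finished the frame; a lower capture resolution at
// least shrinks the transfer.
public class RenderTextureGrab : MonoBehaviour
{
    public Camera captureCamera;     // illustrative: any camera to capture from
    public int captureWidth = 640;   // lower than the screen resolution
    public int captureHeight = 360;

    public Color[] Grab()
    {
        RenderTexture rt = new RenderTexture(captureWidth, captureHeight, 24);
        captureCamera.targetTexture = rt;
        captureCamera.Render();       // renders off-screen into the FBO

        RenderTexture.active = rt;    // bind it as the source for ReadPixels
        Texture2D tex = new Texture2D(captureWidth, captureHeight, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, captureWidth, captureHeight), 0, 0, false); // GPU -> CPU stall here
        RenderTexture.active = null;
        captureCamera.targetTexture = null;

        Color[] pixels = tex.GetPixels();
        Destroy(tex);
        Destroy(rt);
        return pixels;
    }
}
```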

I even thought about streaming the Texture2D objects themselves to remove the re-encoding step from the render servers, for what little gain that is, in the hope that the encoding is efficient enough to meet bandwidth needs. But what does the internal structure of these Texture2D objects look like, and how would I get them out of Unity?

Any help at all is appreciated.

Please note that shader/lighting/occlusion adjustments, while effective at speeding up the renderer, are not the question here. The question is about reducing the cost of getting image data out of Unity, not about speeding up the renderer.

Fantastic question

My take on it is this: streaming the Texture2D objects wouldn't help one bit - they're pretty much just references to the internal native texture. You're seeing a higher FPS with `Application.CaptureScreenshot` because it doesn't have to marshal the huge colour array from native code to your script.

One approach I've played with is to make a native plugin which grabs the screen data at X FPS, completely bypassing Unity. It's a pretty fast solution, but one which takes a lot more work.
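To give an idea of the shape of that approach, here's the managed half - `ScreenGrabber` and both entry points are made-up names for illustration, not a real library. The native side would read the back buffer itself (`glReadPixels` or the D3D equivalent), so the pixel data never crosses into the scripting layer:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// C# side of a hypothetical native capture plugin. The native library
// hooks the graphics context and copies the back buffer on its own
// schedule, so Unity scripts never marshal the pixel array.
public class NativeGrabber : MonoBehaviour
{
    [DllImport("ScreenGrabber")]
    private static extern void StartCapture(int targetFps, string outputDir);

    [DllImport("ScreenGrabber")]
    private static extern void StopCapture();

    void OnEnable()  { StartCapture(60, "captures"); }
    void OnDisable() { StopCapture(); }
}
```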

Besides that, there's not a great deal you can do - if you're going the Unity-only route, then you have to grab via one of the three methods you described (though it could be faster if you use a lower-resolution render texture).

I'd be interested to see more feedback on this as well. Can anyone who's been through the paces comment on, or provide a resource link for, any of the following?

  1. Recording the event stream based on a motion trigger event.
  2. Dumping the DX buffer to the native iOS framework in a form that we can assemble on the Obj-C side? (We’re thinking this might defer processing to the iOS media library which can do this much more efficiently)
  3. Any method or plugin (regardless of cost) that might do this at a frame rate close to the poster's desired 60 FPS?

Many thanks,
Tyler

Isn't there any simple way to do screen recording as a function from within the app in Unity?
There is `Application.CaptureScreenshot`.
Where is `Application.CaptureScreenRecording`?