How should I go about capturing rendered images from Unity efficiently? This may seem subjective, but the question is really about whether there is a way to do it, and how, rather than about personal sentiments on best practices.
Preface
The idea for the project is to develop a Unity-based render farm of sorts. I am using Unity Pro 2.6.1 and eagerly anticipating 3.0 (maybe it will answer this question, though that seems unlikely).
I am well aware of code (usually on the camera) which does things like the following (pulled together in a sketch after this list):
- `someTex2D.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false); File.WriteAllBytes(filename, someTex2D.EncodeToPNG())` à la the SaveScreenshot script,
- and `Application.CaptureScreenshot(filename)` à la the ScreenShotMovie script
to write PNGs, and
- `someTex2D.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false); someTex2D.GetPixels()` to get a `Color[]`, which can then be encoded to almost anything (see JPEGEncoder).
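Pulled together, that path looks roughly like this (a minimal sketch of the standard pattern; the class name and output filename are mine):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

public class FrameCapture : MonoBehaviour
{
    private Texture2D tex;
    private int frame;

    void Start()
    {
        // RGB24 keeps the readback smaller than ARGB32 when alpha isn't needed.
        tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        StartCoroutine(Capture());
    }

    IEnumerator Capture()
    {
        while (true)
        {
            // ReadPixels must run after rendering has finished for the frame.
            yield return new WaitForEndOfFrame();
            tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false);
            // EncodeToPNG works off the CPU-side copy that ReadPixels just filled.
            File.WriteAllBytes("frame" + frame + ".png", tex.EncodeToPNG());
            frame++;
        }
    }
}
```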
Note that I have moved any encoding or writing I can onto a background thread to avoid holding up Unity's main loop.
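Conceptually the threading is just a queue plus one background worker, something like the sketch below. Unity API calls (`ReadPixels`, `GetPixels`, `EncodeToPNG`) have to stay on the main thread, so only already-extracted bytes get handed off (class and method names are illustrative):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading;

// Hands finished frames to a background thread so the main loop only pays
// for the GPU readback and encoding, not the disk write.
public class AsyncFrameWriter
{
    private readonly Queue<KeyValuePair<string, byte[]>> queue =
        new Queue<KeyValuePair<string, byte[]>>();
    private readonly Thread worker;
    private bool running = true;

    public AsyncFrameWriter()
    {
        worker = new Thread(Run);
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from the main thread once the frame's bytes are ready.
    public void Enqueue(string path, byte[] data)
    {
        lock (queue)
        {
            queue.Enqueue(new KeyValuePair<string, byte[]>(path, data));
            Monitor.Pulse(queue);
        }
    }

    public void Stop()
    {
        running = false;
        lock (queue) Monitor.Pulse(queue);
    }

    private void Run()
    {
        while (true)
        {
            KeyValuePair<string, byte[]> job;
            lock (queue)
            {
                while (queue.Count == 0)
                {
                    if (!running) return;
                    Monitor.Wait(queue);
                }
                job = queue.Dequeue();
            }
            File.WriteAllBytes(job.Key, job.Value);
        }
    }
}
```

On the main thread I do the `ReadPixels`/`EncodeToPNG` part, then just `writer.Enqueue(path, bytes)` and move on.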
My scene currently renders at 150~180 FPS. Writing each frame out:
- With threaded `Application.CaptureScreenshot`, it's about 32~36 FPS.
- With `ReadPixels` and `EncodeToPNG`, plus threaded `File.WriteAllBytes`, it's about 28~31 FPS.
- With `ReadPixels` and `GetPixels`, plus threaded JPEG encoding, it's about 22~27 FPS.
If I could get it to even 60 FPS (preferably more) and get my image data out, either as images or as some form of compressed data stream (bandwidth is a concern), that would be a vast improvement.
Considerations
When monitoring system performance, I'm only hitting at most 19% CPU usage, and my RAM usage doesn't seem to change, so does this mean my bottleneck is on the GPU? If so, is there a way to offload any of this work onto the underused CPU?
Redirecting the rendering pipeline so it renders directly to a file or data stream instead of to the screen would be great, but my research has only turned up posts indicating that it is not doable. Unity's browser plugin doesn't meet our project's needs, and from what I've read, a source license seems like too much for the simple thing we need it for. Is there some way to eliminate this essentially unused on-screen render and get the image data out?
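The closest I've gotten to removing the on-screen render is pointing a dedicated camera at a RenderTexture so nothing touches the backbuffer, then reading from that; it still funnels through `ReadPixels`, though. A rough sketch (resolution and camera wiring are placeholders; RenderTextures require Pro, which I have):

```csharp
using UnityEngine;

public class OffscreenCapture : MonoBehaviour
{
    public Camera renderCam;      // a camera dedicated to offscreen rendering
    private RenderTexture rt;
    private Texture2D tex;

    void Start()
    {
        renderCam.enabled = false;              // render only on demand, below
        rt = new RenderTexture(1024, 768, 24);  // width, height, depth bits
        renderCam.targetTexture = rt;           // camera now renders offscreen only
        tex = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
    }

    byte[] CaptureFrame()
    {
        renderCam.Render();                     // draw one frame into the RT
        RenderTexture.active = rt;              // ReadPixels reads the active RT
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0, false);
        RenderTexture.active = null;
        return tex.EncodeToPNG();
    }
}
```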
From my research, I found a post stating that RenderTextures are essentially Frame Buffer Objects (which, as I understand it, represent logical buffers describing destinations to write to, rather than the actual content). This seems to mean I must call `ReadPixels` to get the image data into my scripts, and `ReadPixels` appears to be slow/expensive (180 FPS -> 30 FPS = ouch). It is also a synchronization bottleneck, since you can't render the next frame until you've read the pixels from the previous one. Am I correct in this understanding of RenderTextures, or is there some way to use just the RenderTexture to get the rendered image out of Unity?
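One mitigation I've been considering: alternate between two RenderTextures and always read back the one rendered on the previous frame, on the theory that the GPU has already finished it and `ReadPixels` stalls less. A sketch of the idea; I have no hard numbers on how much it actually helps:

```csharp
using UnityEngine;

public class PipelinedReadback : MonoBehaviour
{
    public Camera renderCam;
    private RenderTexture[] rts = new RenderTexture[2];
    private Texture2D tex;
    private int index;

    void Start()
    {
        renderCam.enabled = false; // we drive the camera manually below
        for (int i = 0; i < 2; i++)
            rts[i] = new RenderTexture(1024, 768, 24);
        tex = new Texture2D(1024, 768, TextureFormat.RGB24, false);
    }

    void Update()
    {
        // Queue this frame's render into one buffer...
        renderCam.targetTexture = rts[index];
        renderCam.Render();

        // ...and read back the other buffer, rendered a frame ago, which the
        // GPU should have finished by now. (The very first read is garbage.)
        RenderTexture.active = rts[1 - index];
        tex.ReadPixels(new Rect(0, 0, 1024, 768), 0, 0, false);
        RenderTexture.active = null;

        index = 1 - index;
        // tex now holds last frame's image; hand it off for encoding/writing.
    }
}
```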
I also thought about streaming the `Texture2D` data directly, removing the re-encoding step from the render servers (for what little gain that is), in the hope that the raw data is compact enough to meet bandwidth needs. But what does the internal structure of a `Texture2D` look like, and how would I get it out of Unity?
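From script, the rawest thing I can reach appears to be the pixel array itself, so "streaming the Texture2D" would really mean shipping a flat byte dump like the sketch below; uncompressed RGB at 1024x768 is already ~2.25 MB per frame, which is exactly why bandwidth worries me (the helper name is mine):

```csharp
using UnityEngine;

public static class RawFrameDump
{
    // Flattens a Texture2D's pixels into raw RGB bytes that can be pushed
    // into any stream (socket, file, compressor) from a background thread.
    public static byte[] ToRawBytes(Texture2D tex)
    {
        Color[] pixels = tex.GetPixels(); // Unity call: main thread only
        byte[] raw = new byte[pixels.Length * 3];
        for (int i = 0; i < pixels.Length; i++)
        {
            raw[i * 3]     = (byte)(pixels[i].r * 255f);
            raw[i * 3 + 1] = (byte)(pixels[i].g * 255f);
            raw[i * 3 + 2] = (byte)(pixels[i].b * 255f);
        }
        return raw;
    }
}
```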
Any help at all is appreciated.
Please note that shader/lighting/occlusion adjustments, while effective at speeding up the renderer, are not the question here. The question is about reducing the cost of getting image data out of Unity, not about speeding up the renderer.