Like seemingly everyone else, I'm doing some CPU-side computer vision on the video feed from the iPad's front camera. Unfortunately, passing the pixel data from a Unity 3.5 WebCamTexture to a C++ plugin is unusably slow. On my iPad 2, a simple main loop that calls this:
unityWebcamImageData = webcamTexture.GetPixels32();
where unityWebcamImageData is a pre-allocated array of the correct size, and then passes a pointer to that array to a C++ function that does nothing, gets only around 6 fps (webcamTexture is 640x480 in this test).
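Roughly, the test harness looks like this; ProcessFrame is a stand-in name for my actual native entry point, which is an empty stub on the C++ side:

    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class WebcamBenchmark : MonoBehaviour
    {
        // Stand-in for the real plugin entry point; the C++ side does nothing.
        [DllImport("__Internal")]
        private static extern void ProcessFrame(IntPtr pixels, int width, int height);

        private WebCamTexture webcamTexture;
        private Color32[] unityWebcamImageData;

        void Start()
        {
            webcamTexture = new WebCamTexture(640, 480);
            webcamTexture.Play();
            unityWebcamImageData = new Color32[webcamTexture.width * webcamTexture.height];
        }

        void Update()
        {
            // This call appears to dominate the frame time.
            unityWebcamImageData = webcamTexture.GetPixels32();

            // Pin the managed array and hand the raw pointer to the empty native function.
            GCHandle handle = GCHandle.Alloc(unityWebcamImageData, GCHandleType.Pinned);
            ProcessFrame(handle.AddrOfPinnedObject(), webcamTexture.width, webcamTexture.height);
            handle.Free();
        }
    }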
Now, getting CPU-side access to the video feed via AVFoundation (and indeed, also giving that data back to Unity via a GL texture) isn't that much code, so my question is simple: is Unity's WebCamTexture internally structured such that what I'm doing should work more efficiently? Is there an alternative API I can use other than GetPixels32()? Or is WebCamTexture only useful for pure GPU-side processing (grab the image, use it as a texture during the GPU render; see the sketch below), meaning I should use AVFoundation whenever I need CPU-side access?
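By "pure GPU-side processing" I mean the usual pattern where the frame never leaves the GPU, something like:

    using UnityEngine;

    public class WebcamDisplay : MonoBehaviour
    {
        void Start()
        {
            // The frame is uploaded straight to a texture and never read back
            // to the CPU, so there's no GetPixels32() in the loop at all.
            WebCamTexture webcamTexture = new WebCamTexture(640, 480);
            renderer.material.mainTexture = webcamTexture;
            webcamTexture.Play();
        }
    }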
As a side note, I'm also getting some bizarre image sizes back from various Mac webcams. For example, requesting 640x480 video from my built-in iSight returns a 640x480 texture, but requesting the same thing from my Logitech C910 returns a 640x640 texture, even though AVFoundation reports that camera as capable of 640x480. Padding the texture out to a square that isn't even a power of two seems strange.
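For what it's worth, I'm reading the sizes back roughly like this, waiting for the first frame since WebCamTexture seems to report a small placeholder size until the feed actually starts:

    using System.Collections;
    using UnityEngine;

    public class WebcamSizeCheck : MonoBehaviour
    {
        IEnumerator Start()
        {
            WebCamTexture webcamTexture = new WebCamTexture(640, 480);
            webcamTexture.Play();

            // WebCamTexture reports a placeholder size (16x16) until the
            // first real frame arrives, so wait before reading it.
            while (webcamTexture.width <= 16)
                yield return null;

            Debug.Log("Requested 640x480, got " +
                      webcamTexture.width + "x" + webcamTexture.height);
        }
    }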