Calling C++ OpenGL rendering at the end of the frame on Unity iPhone

I want to perform rendering with native OpenGL after the Main camera has finished rendering, and I can’t work out how I’m meant to do it.

On Windows/Mac, there is the native rendering plugin example. I’m familiar with general Unity plugins, having written several, so I ported something like that to iPhone. As a test I used the same C# as the rendering plugin example (with the DllImport attribute changed for iPhone) and I see the SetTimeFromUnity() function being called as expected. But no code ever gets executed in my UnityPluginEvent() or UnitySetGraphicsDevice(). I’ve confirmed this by setting a breakpoint at the top of UnityPluginEvent(), sticking a printf() there, etc.
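For reference, the native side of my port looks roughly like this (a trimmed sketch; the entry-point names are the ones I used above, following the desktop rendering-plugin example, and since iOS plugins are statically linked the C# side declares them with [DllImport("__Internal")]):

```cpp
#include <cstdio>

static float g_Time = 0.0f;

// Plain P/Invoke from C# -- this one IS reached on iPhone.
extern "C" void SetTimeFromUnity(float t)
{
    g_Time = t;
}

// Never reached on iPhone in my tests.
extern "C" void UnitySetGraphicsDevice(void* device, int deviceType, int eventType)
{
    printf("UnitySetGraphicsDevice: device type %d, event %d\n", deviceType, eventType);
}

// Never reached either; on desktop this is what GL.IssuePluginEvent()
// ends up invoking on the render thread.
extern "C" void UnityPluginEvent(int eventID)
{
    printf("UnityPluginEvent: %d\n", eventID);
}
```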

Am I correct in assuming that IssuePluginEvent() is not supported on iPhone? If so, this page here:

says that GL.* commands are executed immediately in OnPostRender, so in that case, can I just thunk through to my plugin’s C++ OpenGL rendering from an OnPostRender() added to my Main camera component? (E.g., taking the example on that page: if I wrote the GL.* bits in C++, called that, and then called GL.InvalidateState() afterwards, would that work?)

Basically, I want to draw a full-screen quad with a video texture at max camera depth, from C++, AFTER Unity has finished rendering the 3D frame.
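To make that concrete, here is a minimal sketch of the native function I have in mind, called from OnPostRender() on the Main camera. The name RenderVideoQuad() is just a placeholder, it assumes Unity is running an OpenGL ES 2.0 context, and it skips shader error checking and the depth handling needed to actually sit at max camera depth. The C# side would P/Invoke it via [DllImport("__Internal")] and call GL.InvalidateState() straight afterwards, since this changes GL state behind Unity’s back:

```cpp
#include <OpenGLES/ES2/gl.h>

static GLuint s_Program = 0;

static GLuint CompileShader(GLenum type, const char* src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);
    return sh;   // error checking omitted for brevity
}

static void EnsureProgram()
{
    if (s_Program)
        return;
    const char* vs =
        "attribute vec2 pos;\n"
        "varying mediump vec2 uv;\n"
        "void main() { uv = pos * 0.5 + 0.5; gl_Position = vec4(pos, 0.0, 1.0); }\n";
    const char* fs =
        "varying mediump vec2 uv;\n"
        "uniform sampler2D tex;\n"
        "void main() { gl_FragColor = texture2D(tex, uv); }\n";
    s_Program = glCreateProgram();
    glAttachShader(s_Program, CompileShader(GL_VERTEX_SHADER, vs));
    glAttachShader(s_Program, CompileShader(GL_FRAGMENT_SHADER, fs));
    glBindAttribLocation(s_Program, 0, "pos");
    glLinkProgram(s_Program);
}

// Called from C# OnPostRender(); videoTex is the GL texture name holding the
// current camera frame. GL.InvalidateState() must be called on the C# side
// afterwards, because nothing here restores Unity's cached GL state.
extern "C" void RenderVideoQuad(int videoTex)
{
    EnsureProgram();

    // Full-screen quad in clip space, drawn as a triangle strip.
    static const GLfloat verts[] = { -1,-1,  1,-1,  -1,1,  1,1 };

    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glUseProgram(s_Program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, (GLuint)videoTex);
    glUniform1i(glGetUniformLocation(s_Program, "tex"), 0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, verts);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableVertexAttribArray(0);
}
```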

Alex.

  • Footnote: Rendering directly via GL isn’t what I want to do; it’s just a way to solve the problem. What I really want is to render my video camera texture using textures created via CVOpenGLESTextureCache, rather than using glTex(Sub)Image2D to upload the data. This would save a lot of CPU per frame, and my app is heavily CPU-bound. The problem is that with these fast APIs CoreVideo must allocate the GL texture ID itself, so you can’t use GetNativeTextureID(), because that API assumes that Unity picks the ID and your GL code uses it, not the other way round. If there’s a way to say to Unity “hey, just use this texture handle” (and I really wouldn’t mind if that were Aras saying “for now, hammer these 4 bytes into this struct with the texture handle you want, but you’re on your own”), then I’d love to NOT do my own rendering. But right now I desperately, desperately need some milliseconds, and there’s an awful lot of them available here.
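For anyone who hasn’t used it, this is roughly the Objective-C++ (.mm) I’d like to be able to feed into Unity: a minimal sketch of the CoreVideo texture-cache path, assuming a 32BGRA CVPixelBufferRef coming from an AVCaptureVideoDataOutput callback and Unity’s EAGLContext being current when the cache is created. The function names are just placeholders. Note that the GL texture name comes out of CVOpenGLESTextureGetName(), i.e. CoreVideo picks it, which is exactly the part Unity has no way to adopt:

```cpp
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

static CVOpenGLESTextureCacheRef s_TexCache = NULL;
static CVOpenGLESTextureRef      s_Texture  = NULL;   // keeps the current frame's texture alive

// Call once, on the render thread, while Unity's EAGLContext is current.
extern "C" void CreateVideoTextureCache()
{
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                 [EAGLContext currentContext], NULL, &s_TexCache);
}

// Call per camera frame; returns the GL texture name CoreVideo allocated,
// which aliases the pixel buffer -- no glTex(Sub)Image2D upload needed.
extern "C" unsigned int VideoTextureFromPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    // Drop last frame's texture and flush the cache before reusing it.
    if (s_Texture)
    {
        CFRelease(s_Texture);
        s_Texture = NULL;
        CVOpenGLESTextureCacheFlush(s_TexCache, 0);
    }

    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, s_TexCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
        (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
        GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0, &s_Texture);
    if (err != kCVReturnSuccess)
        return 0;

    GLuint name = CVOpenGLESTextureGetName(s_Texture);
    glBindTexture(CVOpenGLESTextureGetTarget(s_Texture), name);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return name;
}
```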

Have you had any luck getting this to work? I have been fighting with it for a few days. It sounds like I am in a similar situation, trying to get CVOpenGLESTextureCache working to save some time over pushing the video through glTexImage2D. Any advice would be appreciated.

This would be perfect, I mean the possibility of using CVOpenGLESTextureCache. For now I’m unfortunately using glTexImage2D too; ferretnt nicely summarized the situation in the footnote above.

This probably deserves a separate entry at http://feedback.unity3d.com/

iOS has no multithreaded (MT) renderer, so indeed the new plugin event mechanism does not apply for the time being.

As such, you can simply use the camera’s OnPostRender function to do it, the way it was done in Unity on desktops before Unity 3.5 and the MT renderer.

But the additional part (pushing in your own texture) is impossible.
Unity owns the texture: it creates and controls it. You can only access and modify it; you can’t push native ones in.
If you want to go down that route right now, you will need either a Unity source license or a different engine, I fear …

I have set up an entry for this on the Feedback site; please go vote for it if you think it would be useful: http://feedback.unity3d.com/unity/all-categories/1/hot/active/2317--ability-to-use-cvopenglestexture

Since this seems to be the place where most of the discussion about this topic is happening, I thought I would share what I have tried so far to get this working, to save others some time.

- Attempting to bind the texture with OpenGL in a native plugin during all of the render callbacks (OnPreRender, OnPostRender, etc.). This had no effect (see the sketch after this list).
- Attempting to subclass Texture and create a custom texture class that overrides GetNativeTextureID() and GetInstanceID(), allowing me to set them. This failed because Texture is a subclass of Object, which is a sealed class.
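For clarity, the first attempt boiled down to something like this sketch (not working code; the function and parameter names are made up). The reason it has no effect is that glBindTexture() only changes which texture subsequent GL calls in the plugin operate on; it doesn’t alias or replace the texture object behind Unity’s ID, and Unity rebinds its own textures when it draws the material:

```cpp
#include <OpenGLES/ES2/gl.h>

// unityTexID comes from Texture2D.GetNativeTextureID() on the C# side,
// coreVideoTexID from CVOpenGLESTextureGetName() on the native side.
extern "C" void TryOverrideUnityTexture(int unityTexID, int coreVideoTexID)
{
    // Hoped-for behaviour: make Unity sample the CoreVideo texture wherever
    // it would have sampled its own. Actual behaviour: nothing, for the
    // reasons above.
    glBindTexture(GL_TEXTURE_2D, (GLuint)coreVideoTexID);
    (void)unityTexID;
}
```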

At this point my only option seems to be drawing the geometry manually with OpenGL calls, but that is not at all what I want. I would like to keep the texture in the Unity pipeline if possible, and I am still searching for other solutions.