Performance issues sending byte[] from C# to Java - All scenarios tested, are there others?

Hi,

I know this is a classic issue but I can’t find a viable solution.

I need to send a Texture2D’s content from C# to Java to get access to libraries like ffmpeg or MediaCodec. On the C# side I can get the raw texture content easily and quickly transcode it to YUV420P (using unsafe code). I then need to pass the byte[] to Java. If I call into JNI passing the byte[] directly (Unity’s automatic solution), converting it to a Java array takes a lot of time. Another solution is to encode the data in base64 and send a String to Java, then decode the base64 on the Java side and use the resulting byte[]. This is much faster than Unity’s byte[] conversion, but still not fast enough; the overhead compromises the solution. The last solution would be to create a JNI array from C# and populate it, which is also too slow. So, among these 3 solutions, the best one is base64, but it is not close to good enough. To give an idea of the performance:

For one frame (an image of 912x576), the byte[] from the Texture is 912x576x3 bytes of RGB. I then transcode it to YUV420P, encode base64, call Java, decode base64, and append to the video file (with MediaCodec or ffmpeg). The performance is:

— Unity C#
GetTexture+Convert to YUV: ~30ms
Encode base64: ~30ms
— Java
Decode base64: ~30ms
Media encoding: ~30ms

That’s a total of around ~120ms per frame, but half of this time is spent on base64 encoding/decoding.

Is there another possible solution? A lower-level solution? I could try using storage to pass the data, but I would like to avoid unnecessary overhead. Having the byte[] in memory and being able to access it on both sides is, I believe, the only viable approach for a high-performance need.

I am using IL2CPP, so in the end we are talking about C++; data shouldn’t be that separated between Java and C++, am I right?

Unity version is 2018.4 LTS.

Could anyone give me more ideas to check? Someone from Unity?

Thanks!
Fernando

I would skip Java completely if speed is something that you must have.

There are IL2CPP compiler options to remove certain checks.

But most likely it will be faster to use C++:
https://stackoverflow.com/a/58527828

Thanks.

  • The first solution, disabling the array checks, I don’t believe will make the C# (compiled to C++) pass the array to Java without the JNI conversion, which is insanely slow.

  • The second one looks more interesting. Basically I have a named pipe (fifo) that I need to write into, so I guess I could access the pipe in C++ and call it from Unity without any problem passing the array (it may be necessary to turn off the checks, but maybe not). See the sketch after this list.

  • A third solution, which I will try soon, is to read the byte[] from the Texture2D on the Java side, passing the texture IntPtr and reading it using GLES.
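
For the named-pipe route, something like the following C# sketch could sit on the Unity side. The plugin name (“videopipe”) and the functions pipe_open/pipe_write are hypothetical placeholders for plain C functions that open the fifo and write raw bytes into it:

    using System.Runtime.InteropServices;

    public static class VideoPipe
    {
        // Hypothetical entry points exported by libvideopipe.so.
        [DllImport("videopipe")]
        public static extern int pipe_open(string path);

        // IL2CPP pins the managed byte[] for the duration of the call, so the
        // native side receives a direct pointer - no JNI array conversion.
        [DllImport("videopipe")]
        public static extern int pipe_write(byte[] data, int size);
    }

Per frame the call would then just be VideoPipe.pipe_write(yuvFrame, yuvFrame.Length), keeping Java out of the hot path entirely.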

Thanks again.

Reading the texture from OpenGL on the Java side worked really fast. But I have a problem: it is reading 1/4 of the texture content and scaling it to the expected size. I guess it is related to retina display scaling. I’m not sure exactly how to handle it.

    public boolean appendVideoFromTexturePtr(int ptr, int width, int height, int size)
    {
        Log.i(TAG_FFMPEG, "Reading from OpenGL - " + size);

        ByteBuffer inputBuffer = ByteBuffer.allocate(size);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ptr);
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, inputBuffer);

        return appendVideo(inputBuffer.array());
    }

The size is width * height * 3, as it is an RGB24 image.

The result: [screenshot]

The expected result: [screenshot]

If you are encoding the bytes yourself, try using sbyte[] instead of byte[] (it stores the same values; for example, cast sbyte* to byte* in an unsafe block).
The performance problem comes from the fact that byte in C# is unsigned, but in Java it is signed.
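
As an illustration, a minimal sketch of filling a frame through an sbyte[] this way (yuvSize and the fill step are placeholders for the actual transcode):

    sbyte[] frame = new sbyte[yuvSize];
    unsafe
    {
        fixed (sbyte* p = frame)
        {
            // Reinterpret the signed array as unsigned; the bit patterns are
            // identical, only the C#-side element type differs.
            byte* dst = (byte*)p;
            // ... write the YUV420P samples into dst[0 .. yuvSize-1] ...
        }
    }

An sbyte[] maps directly onto a Java byte[], so no per-element sign conversion is needed when the array crosses into Java.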

Hi,

I’ve tried using sbyte[] and calling the Java method, but all I get is AndroidJavaException: java.lang.NoSuchMethodError: no non-static method with name='videoAppendMediaCode' signature='([IIIZ)Z' in class Ljava.lang.Object;

Here is the C# side (to test it I was sending an empty sbyte[]):

    sbyte[] toSend = new sbyte[a_ImageBuffer.Length];

    .Log("Calling videoAppendMediaCode", true);

    result = m_Activity.Call<bool>("videoAppendMediaCode", toSend, width, height, frameIndex, encoded);

Here is the Java side:

public boolean videoAppendMediaCode(byte[] imageBytes, int width, int height, int frameIndex, boolean encoded)
    {..}

But it fails, as the sbyte[] seems to fail the check against the Java signature for byte[].

You should be able to call that method using the low-level methods of the AndroidJNI class.
Can you report the bug for sbyte[]?

I will report the bug.

What exactly do you suggest with AndroidJNI? Do you think that if I try calling it via AndroidJNI, passing the sbyte[], it wouldn’t fail the same way? As far as I can see on Unity’s GitHub, what I could do is the same as what is already done in Unity’s AndroidJavaObject.

I mean this class: Unity - Scripting API: AndroidJNI
It will be much more code than AndroidJavaObject, but it’s much faster and the most direct way to call Java code.
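
For reference, a minimal sketch of such a low-level call, assuming activityPtr is the raw jobject of the activity (e.g. obtained via AndroidJavaObject.GetRawObject()):

    IntPtr cls = AndroidJNI.GetObjectClass(activityPtr);
    IntPtr method = AndroidJNI.GetMethodID(cls, "videoAppendMediaCode", "([BIIIZ)Z");

    // The managed-to-Java array copy still happens here, but without
    // AndroidJavaObject's reflection-based method lookup on every call.
    IntPtr javaArray = AndroidJNI.ToByteArray(imageBytes);

    jvalue[] args = new jvalue[5];
    args[0].l = javaArray;
    args[1].i = width;
    args[2].i = height;
    args[3].i = frameIndex;
    args[4].z = encoded;

    bool ok = AndroidJNI.CallBooleanMethod(activityPtr, method, args);

    AndroidJNI.DeleteLocalRef(javaArray);
    AndroidJNI.DeleteLocalRef(cls);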

I know you meant this class. The question is: is there a way (100% sure) to use it to pass the byte[] or sbyte[] to Java without the huge overhead that exists in AndroidJavaObject?

I was optimizing that part. So if you are on a version that has the optimizations, the difference between byte[] and sbyte[] is huge (if the array is large). It applies in both directions.
The issue in AndroidJavaObject is probably related to the method search and sbyte not being treated correctly, even though it is a direct counterpart to byte in Java.

I am using Unity 2018.4.24f1. Do you know if it is optimized?

It looks like it is not available on 2018.4, as AndroidJNI.ToSByteArray does not exist =Z

Introduced in 2019.1, it seems ☹

=(. I plan to move to newer versions of Unity, but I wanted to first release a patch for my app with better video export performance on Android.

Any chance I could get a little help with the other possible solution: getting the raw data from the texture on the Java side?

I can pass the IntPtr of the texture to Java, but getting the texture’s raw pixels is not that simple. I’ve read tons of things, tutorials, etc., and tried a thousand possible solutions, but the data comes back empty. The only case where it doesn’t return empty is when I don’t set a framebuffer, but then it returns an image with the wrong size, probably because it is reading another buffer that has a different size than the Texture2D.

The C# side is simple:

    IntPtr texPtr = tex.GetNativeTexturePtr();

    success = m_Activity.Call<bool>("appendVideoFromTexturePtr", texPtr.ToInt32(), tex.width, tex.height, tex.width * tex.height * 3);

Java side:

    public boolean appendVideoFromTexturePtr(int ptr, int width, int height, int size)
    {
        Log.i(TAG_FFMPEG, "Reading from OpenGL - " + size);

        ByteBuffer inputBuffer = ByteBuffer.allocate(size);

        int[] fboIds = new int[1];

        // Create an offscreen framebuffer
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);
        GLES20.glGenFramebuffers(1, fboIds, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboIds[0]);

        // Attach the texture to the framebuffer
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ptr);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, ptr, 0);

        // Read the attached texture's pixels
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, inputBuffer);

        // Restore the default framebuffer
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

        GLES20.glDeleteFramebuffers(1, fboIds, 0);

        return appendVideo(inputBuffer.array());
    }

I need to know the secret of Unity’s implementation of Texture2D.GetRawTextureData.

Thanks for the support and patience.

Better to use long for the texture pointer, since int32 is only valid on 32-bit systems. The most straightforward way to get a Java byte array containing those bytes would be to write C code and call it from Java; then you’d have direct JNI access and could do what ToSByteArray does in newer Unity versions.
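
As an illustration, a minimal sketch of that C side; the class name (com.example.VideoBridge) and method name are hypothetical:

    #include <jni.h>
    #include <stdint.h>

    // Copies "size" bytes from a native buffer into a freshly allocated Java
    // byte[] with one bulk JNI call, instead of a per-element conversion.
    JNIEXPORT jbyteArray JNICALL
    Java_com_example_VideoBridge_wrapNativeBuffer(JNIEnv *env, jobject thiz,
                                                  jlong nativePtr, jint size)
    {
        jbyteArray result = (*env)->NewByteArray(env, size);
        if (result == NULL)
            return NULL; // out of memory

        (*env)->SetByteArrayRegion(env, result, 0, size,
                                   (const jbyte *)(intptr_t)nativePtr);
        return result;
    }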

Thanks for the int32/long tip, but the problem is not the texture pointer/ID, it is retrieving the texture content via OpenGL. Actually, in OpenGL it is not a pointer, it is an int holding the ID/handle of the texture, so no need to worry about that.

In this solution I do nothing on Unity’s side besides passing the texture pointer to Java. The rest of the solution is in Java, and it has good performance, with no transport of byte arrays, so it’s perfect. My problem here is that I just can’t get the raw data from the texture. The pointer is correct, I can find the texture, but I can’t get the pixel data.

If I get that working, the problem is solved. But at the moment I could only get it on the C# side with Texture2D.GetRawTextureData. Basically I need to implement the same thing in Java, but I couldn’t so far. The code I posted in the previous reply returns a blank/empty image.

The OpenGL texture (ptr) is only valid in Unity’s OpenGL ES context. It’s not valid in the context you have in Java.

You could try to create an EGL image from the Unity texture and share that with the Java OpenGL ES context.
You might need a native plugin to do that.

Another option could be to create a SurfaceTexture in Java and attach it to Unity as a secondary display (UnityPlayer.displayChanged on UnityPlayerActivity.mUnityPlayer) and render to it (Unity - Scripting API: Display.displays).
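
A rough sketch of that second option on the Java side, assuming displayChanged takes the display index and the Surface (the texture name, buffer size, and display index below are placeholders):

    // Create a SurfaceTexture and wrap it in a Surface.
    SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.setDefaultBufferSize(912, 576);
    Surface surface = new Surface(surfaceTexture);

    // Hand the Surface to Unity as a secondary display; it should then show
    // up in Display.displays on the C# side and can be rendered to from Unity.
    mUnityPlayer.displayChanged(1, surface);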

Thanks for the info. I would never have imagined that the OpenGL functions on the Java side would not be in the same context as Unity’s. So it doesn’t sound like the best way to go, as we are talking about performance. It looks like I should persist with Texture2D.GetRawTextureData and implement C++ functions to write this data to the ffmpeg named pipe without going through any Java.

Thanks.

I’ve reported the issue on 2018.4 LTS. The case # is 1287007. Maybe the fix could include the optimization for sbyte[] =)
