Hello, I want to process multiple videos in Unity using OpenCV with multithreading. Specifically, I want to multithread high-cost operations like Utils.matToTexture() and Utils.textureToMat(), which are particularly taxing on the CPU. I’ve tried using basic C# Threads and UniTask, but Unity keeps crashing. Is multithreading not possible with OpenCV for Unity?
Of course, I understand Unity’s unique condition that UI elements and textures must be accessed on the main thread.
Using coroutines ultimately delays texture updates, resulting in a decrease in FPS.
Is there a good way to process multiple videos with this asset without a significant drop in frame rate?
Basically, as long as you follow the rule of never operating on a single Mat from multiple threads at the same time, the problem should not occur. AsynchronousFaceDetectionWebCamExample is an example that runs the face-detection processing in a separate thread.
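The pattern that example uses can be reduced to: clone the frame Mat, hand the clone to a worker thread for the heavy OpenCV work, and keep only the final `Utils.matToTexture2D()` call on the main thread. A minimal sketch of that idea, assuming the OpenCVForUnity asset (`Process` is a hypothetical placeholder for your own per-frame work):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UnityUtils;

public class ThreadedMatExample : MonoBehaviour
{
    Texture2D texture;
    readonly ConcurrentQueue<Mat> results = new ConcurrentQueue<Mat>();
    volatile bool busy;

    // Called with each new camera frame (e.g. from a WebCamTexture helper).
    public void OnFrame(Mat frame)
    {
        if (busy) return;             // drop frames while the worker is running
        busy = true;
        Mat copy = frame.clone();     // never share one Mat across threads
        Task.Run(() =>
        {
            Process(copy);            // heavy OpenCV work off the main thread
            results.Enqueue(copy);
            busy = false;
        });
    }

    void Update()
    {
        // Texture access must stay on the main thread.
        if (results.TryDequeue(out Mat done))
        {
            if (texture == null)
                texture = new Texture2D(done.cols(), done.rows(), TextureFormat.RGBA32, false);
            Utils.matToTexture2D(done, texture);
            done.Dispose();
        }
    }

    void Process(Mat mat) { /* hypothetical placeholder: detection, filtering, etc. */ }
}
```

The `clone()` is what satisfies the "one Mat, one thread" rule: the worker owns the copy outright, so no locking is needed.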
I would also be heavily interested in an arm64 build. The only library needed is “libopencvforunity.so”, as I have the OpenCV libraries themselves anyway.
I reviewed the AsynchronousFaceDetectionWebCamExample. However, that example still calls OpenCV functions like Utils.matToTexture() on the main thread inside Update(). I want to perform the most resource-intensive tasks on a different thread. (Unity does not support multi-processing.)
Unfortunately, I recently realized that this is not possible in Unity because of the rule that accessing and modifying textures must be done on the main thread. It seems that processing many real-time video streams with OpenCV in Unity will be difficult. I want to control about 20 real-time video streams with OpenCV.
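One partial workaround for the texture-to-Mat cost is Unity's `AsyncGPUReadback`, which pulls pixel data off the GPU without stalling the main thread; the raw bytes can then be copied into a Mat and processed on a worker thread. A hedged sketch (assuming RGBA32 streams and the OpenCVForUnity asset; readback row padding is ignored here, so verify the data layout for your texture sizes):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using Unity.Collections;
using OpenCVForUnity.CoreModule;

public class ReadbackToMat : MonoBehaviour
{
    public RenderTexture source;   // one of the video streams

    void Update()
    {
        // Asynchronous readback avoids the main-thread stall of textureToMat.
        AsyncGPUReadback.Request(source, 0, TextureFormat.RGBA32, OnReadback);
    }

    void OnReadback(AsyncGPUReadbackRequest req)
    {
        if (req.hasError) return;
        NativeArray<byte> data = req.GetData<byte>();
        // Copy the raw pixels into a Mat (Java-style put() is part of the asset's API).
        Mat rgba = new Mat(source.height, source.width, CvType.CV_8UC4);
        rgba.put(0, 0, data.ToArray());
        // ...hand 'rgba' to a worker thread for processing...
    }
}
```

The readback completes a few frames later, so this trades latency for throughput, which is usually acceptable for detection pipelines.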
To handle more than 10 video streams, it seems that the following conditions need to be met:
Must be able to use GPU resources (possible, but it takes more effort and time than on other development platforms or programming languages).
@jylee9048 It sounds like you need batch face detection, and the best way to do this isn’t using multi-threading; it’s using batch detection (send all streams to the GPU at once) using an ML model. I’m building a platform for this at Function, and have a few face detection models to try out (BlazeFace, YuNet, DBFace, CenterFace). PM me if you’d be interested in trying Function.
Hi guys, hope everyone is doing great. I am pretty new to OpenCV and image processing. I’ve been assigned a task in which I have to get a user’s selfie and apply that user’s face to the 3D model in our game to make the game more immersive. Avaturn already does that, but the drawback is that it requires login, and that’s not good for our users.
How can I achieve that using OpenCV for Unity? Or anything similar readily available? Thank you.
Not sure if you need to go all the way to OpenCV; you just need to recognize the face area in the selfie (or ask the user to place their face inside an ellipse while taking the selfie), cut it out, and map it onto the texture for the avatar.
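If you do use the asset, that face-crop step can be sketched with a CascadeClassifier. The following is a minimal sketch, assuming a haarcascade XML file is available on disk (e.g. one of the cascades shipped with the asset's StreamingAssets); `cascadePath` and the class name are illustrative:

```csharp
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ObjdetectModule;
using OpenCVForUnity.UnityUtils;

public static class SelfieFaceCrop
{
    // Returns the first detected face as a Texture2D, or null if none was found.
    public static Texture2D CropFace(Texture2D selfie, string cascadePath)
    {
        Mat rgba = new Mat(selfie.height, selfie.width, CvType.CV_8UC4);
        Utils.texture2DToMat(selfie, rgba);

        Mat gray = new Mat();
        Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);

        CascadeClassifier cascade = new CascadeClassifier(cascadePath);
        MatOfRect faces = new MatOfRect();
        cascade.detectMultiScale(gray, faces);

        OpenCVForUnity.CoreModule.Rect[] rects = faces.toArray();
        if (rects.Length == 0) return null;

        // clone() makes the ROI continuous in memory before conversion.
        Mat faceRegion = new Mat(rgba, rects[0]).clone();
        Texture2D result = new Texture2D(faceRegion.cols(), faceRegion.rows(),
                                         TextureFormat.RGBA32, false);
        Utils.matToTexture2D(faceRegion, result);  // then assign to the avatar material
        return result;
    }
}
```

For cleaner avatar mapping you would likely follow this with landmark detection and alignment, but the crop alone covers the "ellipse" suggestion above.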
I tried to run object detection (using a model like YOLO) on more than 10 real-time camera streams solely within Unity.
I am not currently interested in face detection. I want to handle multiple camera streams with OpenCV’s YOLO library using only Unity, without relying on other platforms or SDKs.
However, I encountered very low FPS and freezing issues, and the attempt failed. I currently think this is either impossible or very challenging.
@jylee656 We’ve done what you’re trying to do already. OpenCV’s YOLO module is calling the underlying YOLO ML model. Our SDK does the same but with GPU acceleration (allowing you to make multiple simultaneous predictions) and with a very small footprint (OpenCV adds hundreds of megabytes to your app; ours is less than 10 MB).
I will need to think about how to send all the camera stream textures to the GPU in a single frame, as you advised. Thank you for your advice.
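One way to batch the streams on the GPU is to pack each frame into a slice of a `Texture2DArray` with `Graphics.CopyTexture`, which stays entirely on the GPU (no per-stream CPU readback), and then feed the array to a compute shader or batched inference model. A hedged sketch (assumption: every stream already matches the slice size and format, since `CopyTexture` does not scale; otherwise `Graphics.Blit` each stream into an intermediate RenderTexture first):

```csharp
using UnityEngine;

public class StreamBatcher : MonoBehaviour
{
    public Texture[] streams;   // the per-camera video textures
    Texture2DArray batch;

    void Start()
    {
        // All slices of a Texture2DArray share one size and format.
        batch = new Texture2DArray(640, 640, streams.Length,
                                   TextureFormat.RGBA32, false);
    }

    void Update()
    {
        for (int i = 0; i < streams.Length; i++)
        {
            // GPU-to-GPU copy into slice i; no CPU involvement per stream.
            Graphics.CopyTexture(streams[i], 0, 0, batch, i, 0);
        }
        // ...bind 'batch' to a compute shader or batched detection model...
    }
}
```

The 640x640 slice size is only an example; it would normally be chosen to match the input resolution of the detection model.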
This is not as easy to solve with “just a script” (not to mention that you should try to keep each script as simple as possible). You need to design a proper solution.