In Utils.matToTexture2D(), the input Mat must be of type CV_8UC4 (RGBA), CV_8UC3 (RGB), or CV_8UC1 (GRAY). The Texture2D object must have the TextureFormat RGBA32, ARGB32, RGB24, or Alpha8.
However, in the case of CvType.CV_32F, it is possible to convert to a Texture2D as described in the following post: https://discussions.unity.com/t/555254 page-34#post-3525849
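As a rough sketch of that idea (assuming the Mat is single-channel with values normalized to [0, 1]; the scale factor would change otherwise), a CV_32F Mat can first be converted to an 8-bit Mat and then passed to Utils.matToTexture2D():

```csharp
// Sketch: convert a CV_32F Mat (values assumed in [0, 1]) to a Texture2D.
Texture2D FloatMatToTexture2D(Mat floatMat)
{
    // Scale to 0-255 and convert to 8-bit; adjust the scale factor
    // if your data is not normalized to [0, 1].
    Mat mat8U = new Mat();
    floatMat.convertTo(mat8U, CvType.CV_8U, 255.0);

    // Alpha8 matches a single-channel CV_8UC1 Mat; use RGBA32 for CV_8UC4.
    Texture2D texture = new Texture2D(mat8U.cols(), mat8U.rows(), TextureFormat.Alpha8, false);
    Utils.matToTexture2D(mat8U, texture);

    mat8U.Dispose();
    return texture;
}
```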
I created a new example based on your code.
Could you try the attached example package? (It depends on the Example folder of OpenCVForUnity)
Texture2D CropImg()
{
    double fx = meshGO.GetComponent<Renderer>().bounds.size.x / 10;
    double fy = meshGO.GetComponent<Renderer>().bounds.size.y / 10;
    Debug.Log(fx);
    Debug.Log(fy);

    int sourceWidth = imgTexture2D.width;
    int sourceHeight = imgTexture2D.height;
    float sourceAspect = (float)sourceWidth / sourceHeight;
    float targetAspect = (float)(fx / fy);

    // In an OpenCV Mat, rows are the height and columns are the width.
    Mat originalMat = new Mat(sourceHeight, sourceWidth, CvType.CV_8UC4);
    Utils.texture2DToMat(imgTexture2D, originalMat);

    Mat cropMat;

    // Crop from the center.
    if (sourceAspect > targetAspect)
    {
        // Source is wider than the target: keep the full height and
        // derive the width from it (w = h * targetAspect).
        int w = (int)(sourceHeight * targetAspect);
        int h = sourceHeight;
        int x = (sourceWidth - w) / 2;
        int y = (sourceHeight - h) / 2;
        var cropRect = new OpenCVForUnity.CoreModule.Rect(x, y, w, h);
        cropMat = new Mat(originalMat, cropRect);
    }
    else
    {
        // Source is taller than the target: keep the full width and
        // derive the height from it (h = w / targetAspect).
        int w = sourceWidth;
        int h = (int)(sourceWidth / targetAspect);
        int x = (sourceWidth - w) / 2;
        int y = (sourceHeight - h) / 2;
        var cropRect = new OpenCVForUnity.CoreModule.Rect(x, y, w, h);
        cropMat = new Mat(originalMat, cropRect);
    }

    Texture2D textureFinal = new Texture2D(cropMat.cols(), cropMat.rows(), TextureFormat.RGBA32, false);
    Utils.matToTexture2D(cropMat, textureFinal);
    Debug.Log("NEW SIZE IS " + textureFinal.width + " AND " + textureFinal.height);

    // cropMat is a view into originalMat, so dispose both only after
    // the pixels have been copied into the texture.
    cropMat.Dispose();
    originalMat.Dispose();

    return textureFinal;
}
I tried calibrating with a chessboard a few times, and while the results were somewhat better than with the ChArUco board, both the position and rotation of the AR objects are still not accurate enough.
I noticed that, in the ArUco Calibration Example script, there is no way to specify the square length when calibrating with a chessboard (such a parameter exists for the ChArUco board, but that value is private for some reason). A video I watched on chessboard calibration with vanilla OpenCV claimed that the square length is important.
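For what it's worth, the square length matters because it sets the real-world scale of the object points fed into the calibration call. A sketch of what that looks like (the method and parameter names here are my own, not from the example script):

```csharp
// Sketch: build chessboard object points scaled by the physical square
// size (e.g. in meters). The intrinsics are largely scale-invariant,
// but without the correct squareSize the estimated translations of
// camera/marker poses will be off by exactly that scale factor.
MatOfPoint3f CreateChessboardObjectPoints(int cornersX, int cornersY, float squareSize)
{
    var points = new System.Collections.Generic.List<Point3>();
    for (int y = 0; y < cornersY; y++)
        for (int x = 0; x < cornersX; x++)
            points.Add(new Point3(x * squareSize, y * squareSize, 0));
    return new MatOfPoint3f(points.ToArray());
}
```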
I want to detect custom images placed inside a black rectangle, so I looked at the ArUco implementation (opencv_contrib/modules/aruco/src/aruco.cpp at master · opencv/opencv_contrib · GitHub) and rewrote some parts in C# (FormDetection - Pastebin.com). In my FindContours function I can switch between Canny, threshold, and adaptiveThreshold to detect edges. The ArUco implementation uses adaptiveThreshold. If I use Canny, the blinking problem is much more pronounced than with the threshold variants.
That is why I wanted to switch in the first place. But now I get a massive fps drop when using adaptiveThreshold compared to your Aruco.detectMarkers(rgbMat, dictionary, corners, ids), which I assume simply calls the native C++ detectMarkers function, which also uses adaptiveThreshold. The fps with my code is around 85 and can drop to around 50 on a complex image, whereas the fps with the Aruco.detectMarkers call stays constantly above 90.
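One way to claw back some of that fps (my own suggestion, not how the native detectMarkers works internally) is to run adaptiveThreshold on a downscaled copy of the frame, since its cost grows with the pixel count:

```csharp
// Sketch: threshold a half-resolution copy, then scale the resulting
// contour coordinates back up. Halving each dimension roughly
// quarters the thresholding work; blockSize and C here are examples.
Mat small = new Mat();
Imgproc.resize(grayMat, small, new Size(grayMat.cols() / 2, grayMat.rows() / 2));

Mat thresh = new Mat();
Imgproc.adaptiveThreshold(small, thresh, 255,
    Imgproc.ADAPTIVE_THRESH_MEAN_C,   // MEAN_C is cheaper than GAUSSIAN_C
    Imgproc.THRESH_BINARY_INV, 23, 7);

// ... run findContours on 'thresh', then multiply each contour
// point by 2 to map it back onto the full-resolution image.
```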
I’m now trying to work with YOLO object detection.
The dnn object detection example script says that it can take other models/config/classes files. I’ve been playing around trying to get it to accept other models, but it seems to crash every time. Any idea where I can look for different models that would work with this? The example model from GitHub seems to have a pretty limited range.
EDIT: I’m also looking to get color descriptions for the detected objects, something like DenseCap. I’m getting a bit lost in the API, but my plan is to: make a submat for each detected object’s Rect region; average the colors of all its pixels; and measure the distance of that average from a set of reference colors. Please let me know if you have any advice for this one.
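That plan seems workable. As a sketch (the palette and its color names here are placeholders), Core.mean() gives the average color of a submat in one call, and a simple squared-Euclidean distance picks the nearest reference color:

```csharp
// Sketch: average color of a detected region, mapped to the nearest
// color in a placeholder reference palette.
string DescribeRegionColor(Mat rgbaMat, OpenCVForUnity.CoreModule.Rect region)
{
    using (Mat roi = new Mat(rgbaMat, region))
    {
        Scalar mean = Core.mean(roi); // average R, G, B (and A) of the ROI

        // Placeholder palette; extend with whatever colors you need.
        var palette = new (string name, double r, double g, double b)[] {
            ("red", 255, 0, 0), ("green", 0, 255, 0), ("blue", 0, 0, 255),
            ("white", 255, 255, 255), ("black", 0, 0, 0)
        };

        string best = palette[0].name;
        double bestDist = double.MaxValue;
        foreach (var c in palette)
        {
            double dr = mean.val[0] - c.r, dg = mean.val[1] - c.g, db = mean.val[2] - c.b;
            double dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) { bestDist = dist; best = c.name; }
        }
        return best;
    }
}
```

Note that averaging in RGB space is crude; converting the ROI to a perceptual space such as Lab before measuring distances would likely give better matches.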
Hello, I’m trying to build for the Android platform with OpenCV for Unity 2.3.8 and Unity 2019.3.3f1 on macOS.
The console throws this error: Could not compile build file '/Users/zedaidai/Documents/DEV/git/mila/opencvTest/Temp/gradleOut/launcher/build.gradle'.
Unfortunately, I do not yet fully understand the model requirements of the OpenCV dnn module. The OpenCV wiki has detailed information about the dnn module.
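In case it helps while experimenting, loading a Darknet-format YOLO model in OpenCVForUnity looks roughly like this (the file names are placeholders; the .cfg and .weights files must match each other, and the classes file must match the dataset the model was trained on, otherwise the script will typically crash or return garbage):

```csharp
// Sketch: load a YOLO model with the dnn module. A mismatch between
// cfg/weights/classes is a common cause of crashes when swapping models.
Net net = Dnn.readNetFromDarknet(
    Utils.getFilePath("dnn/yolov3-tiny.cfg"),       // placeholder paths
    Utils.getFilePath("dnn/yolov3-tiny.weights"));

// The blob size must match the width/height declared in the .cfg file.
Mat blob = Dnn.blobFromImage(bgrMat, 1.0 / 255.0, new Size(416, 416),
    new Scalar(0, 0, 0), /*swapRB*/ false, /*crop*/ false);
net.setInput(blob);
```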
I created a sample project to improve calibration performance using the findChessboardCornersSB method and the calibrateCameraRO method of the Calib3D class. https://github.com/EnoxSoftware/OpenCVCameraCalibrationTest
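The gist of those two calls, simplified here as a sketch (flag choices are examples; see the sample project for the full flow):

```csharp
// Sketch: detect corners with the more robust "SB" chessboard detector...
MatOfPoint2f corners = new MatOfPoint2f();
bool found = Calib3d.findChessboardCornersSB(grayMat, patternSize, corners,
    Calib3d.CALIB_CB_EXHAUSTIVE | Calib3d.CALIB_CB_ACCURACY);

// ...and calibrate with the "release object" method, which refines the
// object points themselves. iFixedPoint is typically the index of a
// fixed corner of the board; the refined points come back in newObjPoints.
double rms = Calib3d.calibrateCameraRO(objectPoints, imagePoints,
    grayMat.size(), iFixedPoint, cameraMatrix, distCoeffs,
    rvecs, tvecs, newObjPoints);
```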
This sample project slightly improves the re-projection error of calibration with a chessboard.
With slight differences, the same problem occurs on other phones.
I set the frame rate to 15 (avoidAndroidFrontCameraLowLightIssue = true) and that makes the lighting a little better, but it is still different from the native Android camera.
I actually tried simple code (without OpenCV) using WebCamTexture and found that some models (Google Pixel) had the same problem.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class WebcamTextureLowLightIssueOnAndroid : MonoBehaviour
{
    WebCamTexture webCamTexture;

    // Use this for initialization
    void Start()
    {
        var devices = WebCamTexture.devices;
        for (int cameraIndex = 0; cameraIndex < devices.Length; cameraIndex++)
        {
            // Get the front camera.
            if (devices[cameraIndex].isFrontFacing)
            {
                var webCamDevice = devices[cameraIndex];
                webCamTexture = new WebCamTexture(webCamDevice.name, 640, 480, 30);
                break;
            }
        }

        // Guard against devices without a front-facing camera.
        if (webCamTexture == null)
        {
            Debug.LogError("No front-facing camera found.");
            return;
        }

        webCamTexture.Play();
        gameObject.GetComponent<Renderer>().material.mainTexture = webCamTexture;
    }

    void OnDisable()
    {
        if (webCamTexture != null)
        {
            webCamTexture.Stop();
            Destroy(webCamTexture);
            webCamTexture = null;
        }
    }
}
I reported the bug a few months ago through the Unity Editor’s Bug Reporter and exchanged several messages with Unity developers, but unfortunately the report now appears to have been closed because the problem could not be reproduced in Unity’s verification environment.
To my knowledge, this problem remains unfixed.
To get the problem fixed, more people need to file bug reports.
Could you report the bug to Unity as well?
I wanted to merge the OpenPose and HandPoseEstimation scripts: instead of using an image file, I wanted to apply the OpenPose dnn model to every frame coming from the camera, but I’m getting the error “CvException: Native object address is NULL”, as shown in the attached screenshot.
Thanks a lot! It worked. I have another question, please.
Mat output = net.forward();
I need to apply the dnn model to detect the hand joints on every frame, but the editor crashes when that line of code executes, since a lot of computation happens every frame. Is there any way to reduce it?
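One common mitigation (my suggestion, not something the example script does) is to run net.forward() only every few frames and reuse the last result in between; the joints then lag by a few frames, but the per-frame cost drops sharply:

```csharp
// Sketch: run inference every 'inferenceInterval' frames instead of
// every frame, keeping the last output for the frames in between.
// 'net' and the 368x368 input size follow the OpenPose example;
// reducing the input size is another effective lever.
int frameCount = 0;
const int inferenceInterval = 5;
Mat lastOutput;

void OnFrame(Mat rgbMat)
{
    if (frameCount++ % inferenceInterval == 0)
    {
        Mat blob = Dnn.blobFromImage(rgbMat, 1.0 / 255.0, new Size(368, 368),
            new Scalar(0, 0, 0), false, false);
        net.setInput(blob);
        lastOutput = net.forward();   // the expensive call
        blob.Dispose();
    }
    // Draw the joints from 'lastOutput' every frame (stale by at most
    // inferenceInterval - 1 frames).
}
```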
I always use this asset. Thank you.
Now I am in trouble.
“YoloObjectDetectionExample” does not work.
When run in the Editor, it freezes.
Please help me.
My development environment is below.
macOS Catalina Version 10.15.3
Unity 2018.4.19f1
OpenCV for Unity 2.3.8