[RELEASED] OpenCV ObjectDetector

OpenCV ObjectDetector

Requires Unity 4.5.5 or higher.
[NEW] Support for Unity5

Works with Unity Free & Pro
iOS & Android support
Win & Mac Standalone support
Support for preview in the Editor

A more advanced OpenCV asset has been published. Please check OpenCV for Unity.

OpenCV ObjectDetector can detect objects (synchronously or asynchronously) in a Texture2D using OpenCV.
Sample Code | Demo Application (FaceDetection) | Tutorial & Demo Video (Unity4, Unity5) | Forum

e-mail
enox.software@gmail.com

Features:

  • You can get the result of OpenCV's detectMultiScale() using a cascade file that you specify.
  • Object detection parameters (the same parameters as detectMultiScale()) can be set in JSON format, and the detection result is returned in JSON format.

Overview of the method call steps (a minimal usage sketch follows the list):

  • LoadCascade (string filename)
  • AddObjectDetectorParam (string param)
  • Detect (Texture2D texture, string callbackGameObjectName, string callbackMethodName)
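
A minimal sketch of that sequence, assuming the methods are static members of an OpenCVObjectDetector class (as the FlipRects/DrawRects calls in the sample code further down this thread suggest); the cascade file name and the JSON parameter keys are placeholders, not the asset's documented schema:

using UnityEngine;

public class SimpleFaceDetectExample : MonoBehaviour
{
    void Start ()
    {
        // 1. Load a cascade file (the file name here is a placeholder).
        OpenCVObjectDetector.LoadCascade ("haarcascade_frontalface_alt.xml");

        // 2. Set detectMultiScale() parameters as a JSON string (the key names are assumed).
        OpenCVObjectDetector.AddObjectDetectorParam ("{\"scaleFactor\":1.1,\"minNeighbors\":2}");

        // 3. Run detection on a Texture2D; the result is sent back to the named GameObject/method.
        Texture2D texture = (Texture2D)GetComponent<Renderer> ().material.mainTexture;
        OpenCVObjectDetector.Detect (texture, gameObject.name, "OnDetect");
    }

    // The callback receives the detection result as a JSON string.
    void OnDetect (string result)
    {
        Debug.Log ("detect result: " + result);
    }
}

See the SimpleFacesDetectCallback sample near the end of this thread for one way to parse the result JSON.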

System Requirements:
Win Standalone build & Editor preview: Windows 7 or later
Mac Standalone build & Editor preview: OS X 10.8 or later

Release Notes:
1.1.9
[iOS] Fixed the Bitcode setting of libopencvobjectdetector.a.
1.1.8
[iOS] Enabled Bitcode.
1.1.7
[Common] Fixed a bug that occurred in the Editor.
1.1.6
[iOS] Moved “OpenCVObjectDetector/iOSforXcode/opencv2.framework” to the “OpenCVObjectDetector/Plugins/iOS/” folder.
1.1.5
[Common] Added a SampleScene setup tutorial video for Unity5.
1.1.4
[Common] Added OpenCVObjectDetectorMenuItem.cs. (This script sets the plugin import settings automatically from a MenuItem.)
[iOS] Moved “OpenCVObjectDetector/iOSforXcode/iOS_BuildPostprocessor.cs” to the “OpenCVObjectDetector/Editor” folder.
1.1.3
[Common] Updated to OpenCV 2.4.11.
1.1.2
[Common] Divided the asset for Unity4 and Unity5.
1.1.1
[Common] Support for Unity5.
1.1.0
[Common] Updated to OpenCV 2.4.10.
1.0.9
[iOS] Support for the arm64 build target. (Unity 4.6.1p3 or higher)
1.0.8
[Android] Support for the x86 build target. (Unity 4.6 or higher)
1.0.7
[Common] Updated the SampleScene (conversion of object detection results to 3D positions).
1.0.6
[Common] Support for preview in the Editor. (Pro only)
[Common] Support for Win & Mac Standalone. (Pro only)
[Android] Changed the location of the cascade file; it now uses the “Assets/StreamingAssets/” folder.
[iOS] Adding the cascade file to the Xcode project is no longer required; it now uses the “Assets/StreamingAssets/” folder (a small path sketch follows these notes).
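
To illustrate the 1.0.6 cascade-file change, here is a minimal sketch; the cascade file name is a placeholder, and whether LoadCascade() expects a bare file name or a full path is an assumption here, so check the asset's sample scene for the exact call.

using System.IO;
using UnityEngine;

public class CascadePathExample : MonoBehaviour
{
    void Start ()
    {
        // The cascade file now lives under "Assets/StreamingAssets/" (see the 1.0.6 notes above).
        // At runtime that folder is exposed as Application.streamingAssetsPath
        // (on Android it points inside the .apk rather than to a plain directory).
        string cascadeName = "haarcascade_frontalface_alt.xml"; // placeholder file name
        Debug.Log (Path.Combine (Application.streamingAssetsPath, cascadeName));

        // Assumption: the plugin resolves the bare file name against StreamingAssets itself.
        OpenCVObjectDetector.LoadCascade (cascadeName);
    }
}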

Is this compatible with a Web Player build?

This asset is not compatible with the Web Player.

Note: On the desktop platforms, plugins are a pro-only feature. For security reasons, plugins are not usable with webplayers.

ok, thanks for the answer :slight_smile:

By the way, is there any way to do face tracking in the Web Player?
I've been stuck on this for weeks.

I use Unity 4.5.5p1 and Xcode 5.1.1 or 6.1.
I export the Xcode project, then add opencv2.framework, but the build fails with a linker error:

ld: warning: directory not found for option ‘-F/Users/me/Fukui/Assets/OpenCVObjectDetector/iOS’
ld: warning: directory not found for option ‘-Ffor’
ld: warning: directory not found for option ‘-FXcode’
Undefined symbols for architecture armv7:
“std::__1::__vector_base_common::__throw_length_error() const”, referenced from:
      std::__1::vector<int, std::__1::allocator >::__append(unsigned long) in opencv2(histogram.o)
      std::__1::vector<cv::Vec<int, 128>, std::__1::allocator<cv::Vec<int, 128> > >::__append(unsigned long) in opencv2(matrix.o)
      std::__1::vector<cv::Vec<int, 64>, std::__1::allocator<cv::Vec<int, 64> > >::__append(unsigned long) in opencv2(matrix.o)
      std::__1::vector<cv::Vec<int, 32>, std::__1::allocator<cv::Vec<int, 32> > >::__append(unsigned long) in opencv2(matrix.o)
      std::__1::vector<cv::Vec<int, 16>, std::__1::allocator<cv::Vec<int, 16> > >::__append(unsigned long) in opencv2(matrix.o)
      std::__1::vector<cv::Vec<int, 12>, std::__1::allocator<cv::Vec<int, 12> > >::__append(unsigned long) in opencv2(matrix.o)
      std::__1::vector<cv::Vec<int, 9>, std::__1::allocator<cv::Vec<int, 9> > >::__append(unsigned long) in opencv2(matrix.o)
“cv::_InputArray::_InputArray(cv::Mat const&)”, referenced from:
      _OpenCVObjectDetector_Detect in libopencvobjectdetector.a(opencvobjectdetector.o)
      _DetectThread in libopencvobjectdetector.a(opencvobjectdetector.o)
“cv::_OutputArray::_OutputArray(cv::Mat&)”, referenced from:
      _OpenCVObjectDetector_Detect in libopencvobjectdetector.a(opencvobjectdetector.o)
      _DetectThread in libopencvobjectdetector.a(opencvobjectdetector.o)
“cv::CascadeClassifier::CascadeClassifier(std::string const&)”, referenced from:
      _OpenCVObjectDetector_LoadCascade in libopencvobjectdetector.a(opencvobjectdetector.o)
ld: symbol(s) not found for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Sorry.
I was able to fix it myself.

Hello,
is it possible to do mouth and eyes detection on a playing WebCamTexture?

This asset can only detect objects in a Texture2D.
With “OpenCV for Unity”, a more advanced asset, you can detect objects from a WebCamTexture in real time.
https://www.assetstore.unity3d.com/en/#!/content/21088
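
If you still want to try this asset on live camera input, one workaround (a rough sketch, not a feature of this asset) is to copy the current WebCamTexture frame into a Texture2D before each Detect() call; the per-frame CPU copy is one reason real-time use is better served by OpenCV for Unity.

using UnityEngine;

public class WebCamFrameGrabber : MonoBehaviour
{
    WebCamTexture webCamTexture;
    Texture2D frameTexture;

    void Start ()
    {
        webCamTexture = new WebCamTexture ();
        webCamTexture.Play ();
    }

    // Copies the current camera frame into a Texture2D that can be passed to Detect().
    // Note: WebCamTexture width/height may not be final until the first frame has arrived.
    public Texture2D GrabFrame ()
    {
        if (frameTexture == null || frameTexture.width != webCamTexture.width || frameTexture.height != webCamTexture.height) {
            frameTexture = new Texture2D (webCamTexture.width, webCamTexture.height, TextureFormat.RGBA32, false);
        }
        frameTexture.SetPixels32 (webCamTexture.GetPixels32 ()); // CPU copy of the frame
        frameTexture.Apply ();
        return frameTexture;
    }
}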

Released Version 1.0.7

Version changes
1.0.7
[Common] Updated the SampleScene (conversion of object detection results to 3D positions).

Help me!
I use Unity 4.5.5p1, Xcode 6.1.1, and OS X Yosemite 10.10.1.
When I export the Xcode project, I add the library opencv2.framework, but the build fails with a linker error:
Undefined symbols for architecture i386:
“cv::_InputArray::_InputArray(cv::Mat const&)”, referenced from:
      _OpenCVObjectDetector_Detect in libopencvobjectdetector.a(opencvobjectdetector.o)
      _DetectThread in libopencvobjectdetector.a(opencvobjectdetector.o)
“cv::_OutputArray::_OutputArray(cv::Mat&)”, referenced from:
      _OpenCVObjectDetector_Detect in libopencvobjectdetector.a(opencvobjectdetector.o)
      _DetectThread in libopencvobjectdetector.a(opencvobjectdetector.o)
“cv::CascadeClassifier::CascadeClassifier(std::string const&)”, referenced from:
      _OpenCVObjectDetector_LoadCascade in libopencvobjectdetector.a(opencvobjectdetector.o)
“std::__1::__vector_base_common::__throw_length_error() const”, referenced from:
      _cvCalcArrHist in opencv2(histogram.o)
      _cvCalcArrBackProject in opencv2(histogram.o)
      std::__1::vector<unsigned long, std::__1::allocator >::__append(unsigned long) in opencv2(histogram.o)
      std::__1::vector<double, std::__1::allocator >::__append(unsigned long) in opencv2(histogram.o)
      std::__1::vector<int, std::__1::allocator >::__append(unsigned long) in opencv2(histogram.o)
      std::__1::vector<unsigned char*, std::__1::allocator<unsigned char*> >::__append(unsigned long) in opencv2(histogram.o)
      cv::SparseMat::resizeHashTab(unsigned long) in opencv2(matrix.o)
“std::__1::__basic_string_common::__throw_length_error() const”, referenced from:
      std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator >::str() const in opencv2(ocl.o)
“std::__1::locale::use_facet(std::__1::locale::id&) const”, referenced from:
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char const*) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, unsigned char) in opencv2(ocl.o)
“std::__1::ios_base::getloc() const”, referenced from:
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char const*) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, unsigned char) in opencv2(ocl.o)
“std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >::resize(unsigned long, char)”, referenced from:
      std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator >::overflow(int) in opencv2(ocl.o)
      std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator >::str(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&) in opencv2(ocl.o)
“std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >::push_back(char)”, referenced from:
      std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator >::overflow(int) in opencv2(ocl.o)
“std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >::~basic_string()”, referenced from:
      cv::ocl::kernelToStr(cv::_InputArray const&, int, char const*) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >::operator=(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)”, referenced from:
      std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator >::str(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::sentry::sentry(std::__1::basic_ostream<char, std::__1::char_traits >&)”, referenced from:
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char const*) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, unsigned char) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::sentry::~sentry()”, referenced from:
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char const*) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, char) in opencv2(ocl.o)
      std::__1::basic_ostream<char, std::__1::char_traits >& std::__1::operator<<<std::__1::char_traits >(std::__1::basic_ostream<char, std::__1::char_traits >&, unsigned char) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::~basic_ostream()”, referenced from:
      construction vtable for std::__1::basic_ostream<char, std::__1::char_traits >-in-std::__1::basic_ostringstream<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::~basic_ostream()”, referenced from:
      construction vtable for std::__1::basic_ostream<char, std::__1::char_traits >-in-std::__1::basic_ostringstream<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::~basic_ostream()”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::operator<<(double)”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::operator<<(float)”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::operator<<(int)”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::operator<<(short)”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_ostream<char, std::__1::char_traits >::operator<<(unsigned short)”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::sync()”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::imbue(std::__1::locale const&)”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::uflow()”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::setbuf(char*, int)”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::xsgetn(char*, int)”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::xsputn(char const*, int)”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::showmanyc()”, referenced from:
      vtable for std::__1::basic_stringbuf<char, std::__1::char_traits, std::__1::allocator > in opencv2(ocl.o)
“std::__1::basic_streambuf<char, std::__1::char_traits >::basic_streambuf()”, referenced from:
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)
      std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > cv::ocl::kerToStr(cv::Mat const&) in opencv2(ocl.o)

I also tested in the same environment (Unity 4.5.5p1, Xcode 6.1.1, OS X Yosemite 10.10.1), but the build problem did not occur.
opencv2.framework might not have been added correctly.

Released Version 1.0.8

Version changes
1.0.8
[Android] Support for the x86 build target. (Unity 4.6 or higher)

iOS 64-bit build succeeded.

I have confirmed that a project using OpenCV ObjectDetector version 1.0.8 builds successfully for iOS 64-bit in Unity 4.6.1p3.

Released Version 1.0.9

Version changes
1.0.9
[iOS] Support for the arm64 build target. (Unity 4.6.1p3 or higher)

Released Version 1.1.0

Version changes
1.1.0
[Common] Updated to OpenCV 2.4.10.

“OpenCV ObjectDetector 1.1.1 (Support for Unity5)” is currently pending review in the Asset Store.
I expect it to be approved in a few days.

As for the current version 1.1.0, Android, Win, and Mac support Unity5, but iOS does not.
If you do not use iOS, there is no problem with upgrading.

Released Version 1.1.1

Version changes
1.1.1
[Common] Support for Unity5.

Released Version 1.1.2

Version changes
1.1.2
[Common] Divided the asset for Unity4 and Unity5.

Hey,
when using the Object Detector, is it possible to get the detected part of the texture into a new texture?
For example, look at this screen:

So that the detected face, which is marked in blue, gets read into a new texture that contains just the part marked by your plugin?

Thanks in advance

Please use Texture2D.GetPixels() and Texture2D.SetPixels(). For example:

/// <summary>
/// Simple faces detect callback. Receives the detection result as a JSON string.
/// </summary>
/// <param name="result">Detection result in JSON format.</param>
void SimpleFacesDetectCallback (string result)
{
        Debug.Log ("SimpleFacesDetectCallback result " + result);

        string json = result;

        IDictionary detects = (IDictionary)Json.Deserialize (json);

        foreach (DictionaryEntry detect in detects) {
                Debug.Log ("detects key " + detect.Key);

                string key = (string)detect.Key;

                if (key.Equals ("error")) {
                        Debug.Log ((string)detects [detect.Key]);
                } else {
                        IList<object> rects = (IList<object>)detects [detect.Key];

                        Texture2D baseTexture = (Texture2D)GetComponent<Renderer> ().material.mainTexture;

                        // Flip the rects into texture coordinates with the asset's convenience method.
                        IList<object> flipRects = OpenCVObjectDetector.FlipRects (rects, baseTexture.width, baseTexture.height, 0);

                        foreach (IDictionary rect in flipRects) {
                                // Copy the detected region of the base texture into a new texture.
                                Texture2D newTexture = new Texture2D ((int)((long)rect ["width"]), (int)((long)rect ["height"]), TextureFormat.RGBA32, false);
                                newTexture.SetPixels (baseTexture.GetPixels ((int)((long)rect ["x"]), (int)((long)rect ["y"]), (int)((long)rect ["width"]), (int)((long)rect ["height"])));
                                newTexture.Apply ();

                                // Show the cropped region by assigning it as the material's main texture.
                                gameObject.GetComponent<Renderer> ().material.mainTexture = newTexture;
                        }

//                        #if UNITY_PRO_LICENSE || ((UNITY_ANDROID || UNITY_IPHONE) && !UNITY_EDITOR) || !(UNITY_4_5 || UNITY_4_6)
//                        OpenCVObjectDetector.DrawRects ((Texture2D)GetComponent<Renderer> ().material.mainTexture, Json.Serialize (flipRects), 0, 0, 255, 2);
//                        #endif
//
//                        ResultRectsToResultGameObjects (flipRects, new Color (0.0f, 0.0f, 1.0f, 0.3f), -40);
                }
        }
}