I have made a suggestion on Unity Feedback to get simple CoreML integration, similar to the recent ARKit plugin. If you’ve used either, you’ll know how easy it is to get going.
These libraries allow many more people to access these technologies.
Core ML enables things like natural language processing, image recognition, facial tracking, and more. These could all be amazing tools for games (and non-game applications) built in Unity. For example, what if you could issue commands just by speaking in VR? Or what if your game or app could react dynamically to the facial expressions of your user? The possibilities abound.
The most impressive opportunity is in combining CoreML with ARKit itself: just imagine being able to place all-important game elements based on the actual physical setting of the user’s environment. You could even build the game around mechanics of finding objects to progress, e.g. find a book to read the next clue → find a door → find the keys, and so on.
I managed to implement CoreML in Unity. The only problem I have right now is that you can’t use it alongside Unity ARKit, because both use an “AVCaptureSession” (an Objective-C / Swift class), and I can’t find a way to access ARKit’s AVCaptureSession so I can add my CoreML output to it. I tried accessing it through the CaptureCameraController implementation from Unity, but for some reason it won’t link: “Symbols not found for architecture”. (This is a very common error with a lot of different suggested solutions; I tried them all, and none seem to work.)
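One way around the capture-session conflict (a sketch, not the tutorial’s actual code) is to not open a second AVCaptureSession at all: ARKit already hands you each camera frame’s pixel buffer via its session delegate, and that buffer can be fed straight into a Vision/CoreML request. The class name `FrameClassifier` and the injected `VNCoreMLModel` here are illustrative assumptions:

```swift
import ARKit
import Vision

// Sketch: instead of fighting ARKit for the AVCaptureSession, read the
// camera image from the ARFrame that ARKit already delivers each frame.
final class FrameClassifier: NSObject, ARSessionDelegate {
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        request = VNCoreMLRequest(model: model) { req, _ in
            if let top = (req.results as? [VNClassificationObservation])?.first {
                print("\(top.identifier): \(top.confidence)")
            }
        }
        super.init()
    }

    // ARKit calls this for every tracked frame; capturedImage is the raw
    // CVPixelBuffer from the camera, so no second capture session is needed.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            options: [:])
        try? handler.perform([request])
    }
}
```

You would assign an instance of this as the `delegate` of the `ARSession` that the Unity ARKit plugin drives; the key point is that the pixel buffer comes from `ARFrame` rather than from your own capture pipeline.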
UPDATE: The tutorial now includes a Unity package that lets you use Unity ARKit and CoreML together. I suggest using a MobileNet model for the smoothest experience (still not very smooth on my iPhone 6S tho, getting around 11 fps when analyzing the pixel buffer with CoreML). ResNet50 only gives 3 fps on my iPhone 6S. So now I’m looking for a way to multithread the analysis. I’ve never used multithreading before, so it should be fun!
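Since I haven’t settled on an approach yet, here’s one common pattern as a sketch (not the tutorial’s code): run the analysis on a serial background queue and drop any frame that arrives while a previous one is still being processed, so a slow model can never back up the camera feed. The class and queue label are hypothetical names:

```swift
import Dispatch
import Foundation

/// Sketch of moving inference off the render thread: analyze on a serial
/// background queue and skip frames that arrive mid-analysis, so a slow
/// model (e.g. ~3 fps with ResNet50) can't stall the camera feed.
final class FrameAnalyzer {
    private let queue = DispatchQueue(label: "frame-analysis") // hypothetical label
    private let lock = NSLock()
    private var busy = false
    private(set) var processed = 0
    private(set) var dropped = 0

    /// `work` stands in for the CoreML pixel-buffer analysis.
    func submit(_ work: @escaping () -> Void) {
        lock.lock()
        if busy {                 // a frame is still being analyzed: drop this one
            dropped += 1
            lock.unlock()
            return
        }
        busy = true
        lock.unlock()
        queue.async {
            work()                // runs off the main/render thread
            self.lock.lock()
            self.processed += 1
            self.busy = false
            self.lock.unlock()
        }
    }
}
```

The dropping happens synchronously on the calling thread, so the per-frame cost on the render thread is just a lock check, regardless of how slow the model is.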
I suggest keeping an eye on it, since I’m still updating it whenever I get some spare time. I’m also planning on adding a GitHub project. But I can’t just upload the project I’m working on right now because it’s for a company. I didn’t sign an NDA, since all I’m doing is research, but I’m hoping to get a job there, so just sharing their projects wouldn’t be smart I guess … :')
I hope it’s of some help to you, and if you have any suggestions, feel free to send them my way. I’m trying (still very far from it tho) to make the tutorial comprehensible for everyone, and I’m always looking to improve it.
For image classification, the plugin uses the InceptionV3 machine learning model, provided in this repository inside the MLModel folder. Add this model to the Xcode project (generated by Unity after building) by dragging it into the project navigator. Make sure the model is added to the Unity-iPhone build target.
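Once the model is in the build target, Xcode auto-generates a Swift class named after it. A minimal sketch of wrapping it for Vision, assuming the generated class is called `Inceptionv3` (check the “Model Class” field in the model’s Xcode inspector if the name differs):

```swift
import CoreML
import Vision

// Builds a Vision request around the class Xcode generated from the
// .mlmodel file. The Inceptionv3 type name is an assumption based on
// how Xcode names generated model classes.
func makeClassificationRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try VNCoreMLModel(for: Inceptionv3().model)
    return VNCoreMLRequest(model: coreMLModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("Top label: \(best.identifier) (\(best.confidence))")
    }
}
```

If the model isn’t in the Unity-iPhone target, the generated class simply won’t exist at compile time, which is the usual symptom of the drag-and-drop step going wrong.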