OAK For Unity - Spatial AI meets the power of Unity

Welcome to the Spatial CV and Edge AI revolution!

Documentation | Video Tutorials | Github | Discord

What is OAK For Unity?
OAK For Unity is a native plugin for Windows, Linux and MacOS (Android support ongoing) that enables OAK devices and their capabilities inside Unity.
Our main goal is to bring the power of OAK devices, CV and Edge AI to the Unity community, to build the next generation of Spatial AI / Edge AI applications.

  • OAK For Unity Creators: Build interactive videogames and experiences with pretrained models and high-level API

  • OAK For Unity Developers: Productivity tools for Unity developers: VR workflows, MoCap tools (body pose and face mesh)

  • OAK For CV/AI/Robotics Developers:

      • Unity Virtual OAK camera

      • Define custom pipelines (as much as possible inside Unity with visual tools)

      • Robotics simulation and digital twins

      • Synthetic dataset generation with automatic labelling

      • Deep RL with ML-Agents

What is OAK?
OAK is a real Swiss Army knife for CV/AI.
OAK devices are Depth CV (Computer Vision) + AI (Deep Learning) cameras from Luxonis, powered by the Intel Myriad X VPU (Vision Processing Unit), at an unbeatable price tag (<$200).
Check out OAK devices, documentation and specs on Luxonis website.

Application examples:
Are you building interactive experiences/installations/games where users can interact with their face, head, body, eye gaze and/or hands?
Are you building health/sports applications that monitor users' body pose?
Are you building multiplayer VR experiences? Are you looking to speed up VR development?
Are you looking for, or building, an affordable body / face MoCap solution?
Are you building the next generation of streaming applications using advanced background segmentation? Or depth/3D streaming to a LookingGlass holographic display?
Are you building AR filters powered by face detection and face mesh?
Are you building monitoring applications that need object detection?
Are you building applications that need OCR (Optical Character Recognition)?
Are you building a robotic digital twin using CV/AI?
… endless list of applications here …

Features and Roadmap:
v1.0.0 Initial release on the Asset Store (free):

  • No-code approach. Just drag and drop some prefabs.

  • Access to OAK device:

      • RGB, mono images and depth (point cloud visualization currently in development).

      • About point cloud: add support for external libraries: PCL, …

  • Access to IMU if available, and retrieve system information

  • Record and replay capability

  • OAK Device Manager and multiple OAK devices support

  • OAK For Unity Creators: High-level API - Unlock “Myriad” applications with predefined and ready-to-use pretrained models (with depth support if you run the OAK-D family)

      • Face detector

      • 3D head pose estimation

      • Face emotion

      • Body pose (MoveNet)

      • Hand tracking and gesture recognition (Blaze Hands)

      • Object detector (Tiny YOLO v3/4/5 and MobileNet SSD)

      • DeepLabv3 segmentation. Background removal.

      • Depth (OAK-D family only) - point cloud visualization

  • Example of how to integrate your own pipeline

  • Integration with some paid assets (demos)

  • OAK For CV/AI/Robotics: Unity Virtual OAK Camera
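
For context on the background removal item above: DeepLabv3 gives you a per-pixel class mask, and removing the background is then just masking the color frame with it. Here is a minimal numpy sketch of that idea (the function name and toy data are illustrative, not part of the plugin API):

```python
import numpy as np

def remove_background(frame, mask, bg_value=0):
    """Keep foreground pixels where the segmentation mask is 1,
    replace background pixels with bg_value."""
    # frame: H x W x 3 uint8 image, mask: H x W array of 0/1 labels
    mask3 = np.repeat(mask[:, :, None], 3, axis=2)
    return np.where(mask3 == 1, frame, bg_value).astype(frame.dtype)

# Toy 2x2 frame: "person" detected only in the top-left pixel
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]])
out = remove_background(frame, mask)
# Only the masked pixel keeps its color; the rest goes black
```

In practice you would blend the masked frame over a virtual background texture instead of flat black, but the masking step is the same.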

Next versions:

  • Improved hand tracking
  • Face mesh and face animation tool
  • Humanoid support with body pose
  • Eye-gaze
  • OCR
  • More integration with paid assets
  • OpenCV and PCL support inside Unity
  • More advanced demos: combining different pretrained models

Are you missing some models / use cases / integration with a specific paid asset? Please let us know.

  • Android support. Integration with AR.
  • End-to-end workflows for synthetic dataset generation, training and deployment
  • Integration with Unity Robotics Hub (ROS) / SystemGraph / SensorSDK
  • Integration with Unity Simulation and Unity Simulation Pro
  • Integration with Unity ML-Agents
  • Define custom pipelines inside Unity (visual tool)

2022-01-22: New demo menu scene added. It's now easier to navigate through the demos.

Some examples:
Old video showing some possibilities with ML pretrained models:

Body Pose:

Hand Tracking:

Synthetic Datasets:

Documentation | Video Tutorials | Github | Discord


Some updates about ongoing dev

Example of face detector and device manager:

One of the last experiments with hand tracking:

Some WIP on point cloud visualization
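
For anyone curious how a depth frame becomes the point cloud shown here: each pixel is back-projected with the pinhole camera model. A small numpy sketch with made-up intrinsics (the real values come from your device's calibration, not these constants):

```python
import numpy as np

# Hypothetical intrinsics - placeholders, read yours from calibration
FX, FY = 450.0, 450.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def depth_to_points(depth):
    """Back-project a depth map (meters, H x W) to an N x 3 point cloud
    using the pinhole model: X=(u-cx)*Z/fx, Y=(v-cy)*Z/fy, Z=depth."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row/col grids
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

depth = np.zeros((480, 640))
depth[240, 320] = 2.0                  # one pixel at the principal point, 2 m away
pts = depth_to_points(depth)
# that pixel back-projects to (0, 0, 2)
```

The VFX Graph side then just feeds these positions (and per-pixel colors) into a particle system.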

Time for weekly update on OAK For Unity:

:heavy_check_mark: Record and replay feature for demo scenes (replay works even without an OAK device - automatic fallback when there is no device)
:heavy_check_mark: Xcode project to build the .bundle for MacOS (in case you want to build the Unity lib yourself)
:heavy_check_mark: New “photobooth” demo using face emotion + depth to show potential uses of OAK inside Unity. Small improvement pending: load the pipeline outside the main UI thread, and combine background removal and blending with the photobooth

Hi! Any news on the release in the store? Or is it possible to join as a beta tester? I already have an OAK-D.

Hi! We're working on it, but it's still a bit early to confirm dates for the Asset Store. Before that, we plan to open a beta on the GitHub repo (we're targeting this month, so it should start happening soon).

To join as a beta tester, we'd love to get some insights from you (platform, use case, …) to make beta testing the best for everyone. Would you mind answering the following questions?

  • Which platform are you mainly using? (MacOS, Win, Linux)
  • Could you explain a bit more about your use case? What are you looking to build with the Unity plugin?
  • Which part of the Unity plugin do you find most interesting? (OAK for Creators, OAK for Developers, OAK for CV/AI)
  • Do you have experience with the OAK API/SDK? Which platform are you using the most? (C/C++, Python)

Thanks in advance


Mainly looking to detect people and report their positions back, to parent stuff on top of them in the editor; later it would be great to differentiate recognized subjects so I can parent their respective predetermined “thing”.

Platform Windows

I think that could fit very well in the beta.

Are you using any specific model right now?

As explained in the first post, the beta and the first version released on the Asset Store will focus on some pretrained models. This one could be interesting for you:

  • Object detector (Tiny YOLO v3/4/5 and MobileNet SSD)

I will post here as soon as beta testing is available, but for more agile discussion please also join our Discord https://discord.gg/4hGT3AFPMZ (#unity and @Luxonis-Gerard)

That's perfect! Btw, I meant: are you using any specific ML model for people detection?

New update. This week we're working on basic streams, with support for point cloud and VFX thanks to the amazing work of Keijiro Takahashi.

And here some updates:
:heavy_check_mark: Device Manager
:heavy_check_mark: Basic streams demo scene (color, mono, depth and disparity)
:heavy_check_mark: Point Cloud VFX demo scenes (point cloud, “matrix” effects)
:heavy_check_mark: All demo scenes support record and replay of data, in addition to live mode
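
Since the basic streams scene exposes both depth and disparity, here is the standard relation between the two, sketched in numpy (the focal length and baseline below are placeholders; use your device's calibration values):

```python
import numpy as np

# Hypothetical stereo parameters - replace with your device's calibration
FOCAL_PX = 450.0      # focal length in pixels
BASELINE_M = 0.075    # stereo baseline in meters (7.5 cm is typical for OAK-D)

def disparity_to_depth(disparity):
    """Convert a disparity map (pixels) to metric depth:
    depth = focal * baseline / disparity. Zero disparity stays 0 (invalid)."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth

d = disparity_to_depth([[45.0, 0.0]])
# 450 * 0.075 / 45 = 0.75 m for the first pixel; the invalid pixel stays 0
```

This is why the disparity stream looks "inverted" relative to depth: near objects have large disparity and small depth.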
Next:
Working on a “Start here” scene: a main demo menu to navigate through all the demos
Rest of the pretrained demos: object detector scene in prep

So stay tuned, please follow / star the repository: GitHub - luxonis/depthai-unity: DepthAI Unity Library, Unity projects and examples (OAK For Unity), and join the discussion about the roadmap


And some images of OAK-D Lite device :slight_smile:


Looks great! I’m very keen to try out the basic streams demo scene :slight_smile:

Hi Head-Trip! Stay tuned to the repository. Which platform are you using? (MacOS, Win, Linux)

Will do! I’m on Windows 10.


Time for a quick update. Basic streams and point cloud VFX are now available in the repository: GitHub - luxonis/depthai-unity: DepthAI Unity Library, Unity projects and examples (OAK For Unity)
for Windows. Working on MacOS and Linux support.

Hey! New demo menu scene available.


Hello,

So great! Can't wait to test the body pose with my OAK-D Lite!

Hi! Happy to hear that! Please follow/star the repo GitHub - luxonis/depthai-unity: DepthAI Unity Library, Unity projects and examples (OAK For Unity) and stay tuned!