[RELEASED] MiDaS - Monocular Depth Estimation

Unlock real-time monocular depth estimation with MiDaS models running in Unity Sentis.

Asset Store · Documentation · GitHub · OpenUPM · Hugging Face

This project is dual-licensed. You can either get it on GitHub under the MIT License, or — if you find it useful and want to support the development — for a small price on the Asset Store.


That’s amazing!

Maybe you could add a Ko-fi support link on GitHub (since GitHub removed PayPal support for Sponsors…).


Thanks, I’ll add that. I’m still wondering what the best approach for donations is (GH Sponsors, Patreon, Ko-fi, a custom page with Stripe links, …), but I guess it can’t hurt to provide multiple options.

GitHub added some Patreon support: Sponsoring an open source contributor through Patreon - GitHub Docs

I think Patreon takes a bigger cut(?), but it seems to be the most popular…

Ko-fi is free for some donations.


A new version has been released.

v1.0.1 (February 09, 2024)

  • Added: Sample scene for depth estimation on videos.
  • Fixed: Scale/shift calculation from min/max distance in sample scenes.

How would you add URP support for this?

It depends on what you want to do with the estimated depth and how you want to render it. If we’re talking specifically about the naive point rendering implemented in the sample scenes: I’m using Graphics.DrawMeshInstancedIndirect there, which AFAIK should be compatible with URP. The shader most likely needs to be updated, though.
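To illustrate, a stripped-down DrawMeshInstancedIndirect setup looks roughly like the snippet below. This is only a sketch, not the sample scene’s actual code; the material/shader has to support instancing, and names like _DepthTexture are placeholders.

```csharp
using UnityEngine;

public class DepthPointRenderer : MonoBehaviour
{
    public Mesh pointMesh;          // e.g. a small quad, one instance per depth pixel
    public Material pointMaterial;  // instanced shader that reads the depth texture
    public RenderTexture depthTexture;

    ComputeBuffer argsBuffer;

    void OnEnable()
    {
        // Indirect args: index count, instance count, start index, base vertex, start instance.
        uint[] args =
        {
            pointMesh.GetIndexCount(0),
            (uint)(depthTexture.width * depthTexture.height),
            pointMesh.GetIndexStart(0),
            pointMesh.GetBaseVertex(0),
            0
        };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint), ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
        pointMaterial.SetTexture("_DepthTexture", depthTexture);
    }

    void Update()
    {
        // Large bounds so the instances are never frustum-culled away.
        var bounds = new Bounds(Vector3.zero, Vector3.one * 100f);
        Graphics.DrawMeshInstancedIndirect(pointMesh, 0, pointMaterial, bounds, argsBuffer);
    }

    void OnDisable()
    {
        argsBuffer?.Release();
        argsBuffer = null;
    }
}
```

The C# side should behave the same under URP; it’s really only the instanced shader that has to be adapted.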

I haven’t actually used URP much so take this with a grain of salt:
I think for simple shaders it should be possible to make them work with both Built-in and URP (stuff like adding Tags { "RenderPipeline" = "UniversalPipeline" }, but I’m not sure about all the necessary steps).

If that doesn’t work, you’d have to create a URP-compatible shader with Shader Graph instead.

From what I read, the camera render callback is BiRP only, and for URP you need to make a custom blit render feature.
I’ll give it a go from scratch another day.

The end result would be using the depth values in a full-screen Shader Graph in VR.
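For anyone trying this: a bare-bones URP render feature that draws a full-screen material over the camera could look something like the sketch below. It’s untested, the class and field names are just placeholders, and the exact API (including the newer RenderGraph path) differs between URP versions.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Bare-bones URP render feature that draws a full-screen material over the camera target.
public class FullScreenDepthFeature : ScriptableRendererFeature
{
    class FullScreenDepthPass : ScriptableRenderPass
    {
        readonly Material material;

        public FullScreenDepthPass(Material material)
        {
            this.material = material;
            renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("FullScreenDepth");
            // Draws a full-screen triangle with the material into the current camera target.
            CoreUtils.DrawFullScreen(cmd, material);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public Material fullScreenMaterial; // e.g. a full-screen Shader Graph material reading the estimated depth
    FullScreenDepthPass pass;

    public override void Create()
    {
        pass = new FullScreenDepthPass(fullScreenMaterial);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (fullScreenMaterial != null)
            renderer.EnqueuePass(pass);
    }
}
```

IIRC, newer URP versions also ship a built-in Full Screen Pass Renderer Feature that can run a full-screen Shader Graph directly, which might save you the custom feature entirely.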

Some recent updates:

v1.2.0 (September 03, 2024)

  • Changed: Updated Sentis dependency to 2.0.0

v1.1.0 (August 17, 2024)

  • Changed: Updated Sentis dependency to 1.6.0-pre.1
  • Changed: Normalization is now handled by a separate class using the Functional API

v1.3.0 (September 23, 2024)

  • Added: Method overloads to allow spreading the depth estimation over multiple frames (rough sketch below).
  • Changed: Updated Sentis dependency to 2.1.0
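If you want to do the same thing with Sentis directly, the basic idea is to schedule the model layer by layer and yield every few layers. The snippet below is only a sketch of that pattern (the package’s actual overloads may differ), and names like layersPerFrame are placeholders.

```csharp
using System.Collections;
using Unity.Sentis;
using UnityEngine;

public class FrameSpreadDepth : MonoBehaviour
{
    public ModelAsset modelAsset;     // depth model imported as a Sentis asset
    public Texture inputTexture;      // camera frame or any texture
    public int layersPerFrame = 20;   // how many layers to run before yielding

    Worker worker;

    void Start()
    {
        worker = new Worker(ModelLoader.Load(modelAsset), BackendType.GPUCompute);
        StartCoroutine(EstimateDepth());
    }

    IEnumerator EstimateDepth()
    {
        using (var input = TextureConverter.ToTensor(inputTexture))
        {
            // ScheduleIterable runs one layer per MoveNext(), so we can yield
            // every few layers to keep the frame rate stable.
            var schedule = worker.ScheduleIterable(input);
            int count = 0;
            while (schedule.MoveNext())
            {
                if (++count % layersPerFrame == 0)
                    yield return null;
            }
        }

        var depth = worker.PeekOutput() as Tensor<float>;
        // ... read back / render the depth tensor here ...
    }

    void OnDestroy() => worker?.Dispose();
}
```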

Any suggestions on doing metric/absolute depth estimation?

I’m currently using the default model from the depth estimation example in Unity Sentis, but that model only gives relative depth. The program is also targeted at mobile phones, so processing power needs to be considered.

I personally haven’t tried any metric models yet.

The ones that seem most interesting to me are DepthPro and DepthAnything.

I’ve converted the DepthPro models for Sentis, but they’re quite large:

DepthAnything v2 models are available here:


Thank you for sharing. I would also like to learn more about converting models for Sentis. As I’m new to machine learning, could you share more about the steps you take to convert a model to Sentis? I usually see models being built in Python.

Sometimes, I find .onnx model files, like the Metric3D model. However, after I import them into Unity, they contain errors, and I don’t know where or how I should debug them. Do you have any advice?

Thank you so much again for your help!


I answered over here: Converting models to Sentis 🙂