Hi,
I want to run this inference script
https://github.com/NVlabs/Deep_Object_Pose/blob/master/src/dope/inference/detector.py
in Unity Barracuda.
I have already converted the .pth weights to ONNX, but now I need to know whether it is possible to run inference following the logic of the Python script linked above (assuming, of course, that it were ported to C#).
I know this is not an easy task, but I would appreciate any advice on whether it is even feasible, so I can start researching how to run it in C# (perhaps using OpenCV and/or other libraries).
The idea of this network is to predict, from an image, the pose of a pre-defined object with 6 degrees of freedom (i.e. the location and rotation of the object relative to the camera).
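For context, here is a minimal NumPy sketch of what a 6-DoF pose means: a rotation R and translation t that map object-frame points into the camera frame, which the intrinsics K then project to pixels. The PnP step in the script inverts exactly this relation from 2D-3D keypoint correspondences. All values below (intrinsics, pose, points) are made up for illustration.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project object-frame 3D points into pixel coordinates."""
    cam = points_3d @ R.T + t        # object frame -> camera frame
    uvw = cam @ K.T                  # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Illustrative intrinsics: fx = fy = 500, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # identity rotation
t = np.array([0.0, 0.0, 1.0])  # object 1 m in front of the camera

pts = np.array([[0.0, 0.0, 0.0],    # object origin
                [0.1, 0.0, 0.0]])   # 10 cm along the object x-axis
print(project(pts, R, t, K))  # -> [[320. 240.] [370. 240.]]
```

A PnP solver (e.g. OpenCV's `solvePnP`, which OpenCvSharp also exposes in C#) takes the projected 2D points plus the known 3D model points and recovers (R, t).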
Any help would be appreciated.
Thanks!
Below is a portion of the code from the GitHub project linked above.
class ObjectDetector(object):
    '''This class contains methods for object detection'''

    @staticmethod
    def detect_object_in_image(net_model, pnp_solver, in_img, config):
        '''Detect objects in an image using a specific trained network model'''
        if in_img is None:
            return []

        # Run network inference
        image_tensor = transform(in_img)
        image_torch = Variable(image_tensor).cuda().unsqueeze(0)
        out, seg = net_model(image_torch)
        vertex2 = out[-1][0]
        aff = seg[-1][0]

        # Find objects from network output
        detected_objects = ObjectDetector.find_object_poses(vertex2, aff, pnp_solver, config)
        return detected_objects
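One porting pitfall worth flagging: `transform(in_img)` and `.unsqueeze(0)` do preprocessing that Barracuda will not do for you, so the same steps have to be re-implemented in C# before feeding the tensor to the ONNX model. A NumPy sketch of the usual torchvision pipeline (ToTensor + Normalize) is below; the mean/std of 0.5 are placeholders, so check the actual `transform` definition in the DOPE repo before porting.

```python
import numpy as np

def preprocess(img_hwc_uint8, mean=0.5, std=0.5):
    """HWC uint8 image -> normalized NCHW float32 tensor."""
    x = img_hwc_uint8.astype(np.float32) / 255.0  # ToTensor: scale to [0, 1]
    x = (x - mean) / std                          # Normalize
    x = np.transpose(x, (2, 0, 1))                # HWC -> CHW
    return x[np.newaxis, ...]                     # add batch dim (unsqueeze(0))

img = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy black frame
tensor = preprocess(img)
print(tensor.shape)  # -> (1, 3, 480, 640)
```

On the Barracuda side, the equivalent would be filling a `Tensor` in NCHW order with these normalized floats and calling `worker.Execute(...)`; also note the two CPU post-processing steps (peak extraction from the belief maps and the PnP solve) are plain array math, so they can be ported with OpenCvSharp or hand-rolled C# independently of Barracuda.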