I’m trying to use the ROS 2 SLAM Toolbox with Unity. This is similar to the Nav2 example that Unity provides, however I’m trying to use a depth camera instead of a lidar, since lidars aren’t currently available in my country.
To my understanding, the depth data gets converted into a LaserScan that SLAM Toolbox then consumes, but I can’t find any good documentation on how to send the depth data from Unity in the first place.
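From the Nav2 example I can at least see how to set up the connection and register a publisher with the ROS-TCP-Connector package from the Unity Robotics Hub, so I’m assuming the depth image would be published the same way. Something like this stub (the topic name is just a placeholder I made up):

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Sensor;

public class DepthImagePublisher : MonoBehaviour
{
    // Placeholder topic name -- whatever converts the depth image to a
    // LaserScan on the ROS 2 side would subscribe to this same topic.
    const string DepthTopic = "/camera/depth/image_raw";

    ROSConnection ros;

    void Start()
    {
        // ROS-TCP-Connector singleton, registered the same way as in the Nav2 example
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<ImageMsg>(DepthTopic);
    }
}
```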
I’ve also written the script for the depth shader and can export the depth image as a PNG. Below is the message description I found on the ROS side (sensor_msgs/Image):
```
# This message contains an uncompressed image
# (0, 0) is at top-left corner of image

std_msgs/Header header # Header timestamp should be acquisition time of image
                       # Header frame_id should be optical frame of camera
                       # origin of frame should be optical center of camera
                       # +x should point to the right in the image
                       # +y should point down in the image
                       # +z should point into the plane of the image
                       # If the frame_id here and the frame_id of the CameraInfo
                       # message associated with the image conflict
                       # the behavior is undefined

uint32 height          # image height, that is, number of rows
uint32 width           # image width, that is, number of columns

# The legal values for encoding are in file src/image_encodings.cpp
# If you want to standardize a new string format, join
# ros-users@lists.ros.org and send an email proposing a new encoding.

string encoding        # Encoding of pixels -- channel meaning, ordering, size
                       # taken from the list of strings in include/sensor_msgs/image_encodings.hpp

uint8 is_bigendian     # is this data bigendian?
uint32 step            # Full row length in bytes
uint8[] data           # actual matrix data, size is (step * rows)
```
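Based on those fields, here’s a rough sketch of how I imagine the rest of that script would fill in and publish the message from my depth RenderTexture. The "32FC1" encoding (32-bit float, one channel, depth in metres), the ReadPixels read-back, and the frame id are all my own assumptions rather than anything taken from the Nav2 example:

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Sensor;
using RosMessageTypes.Std;
using RosMessageTypes.BuiltinInterfaces;

public class DepthImagePublisher : MonoBehaviour
{
    // Placeholders -- adjust to your own topic and TF frame names.
    const string DepthTopic = "/camera/depth/image_raw";
    const string OpticalFrameId = "camera_depth_optical_frame";

    // Output of the depth shader; assumed to be an RFloat texture with depth in metres.
    public RenderTexture depthRT;

    ROSConnection ros;
    Texture2D readback;

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<ImageMsg>(DepthTopic);
        // CPU-side texture used to read the render texture back each frame.
        readback = new Texture2D(depthRT.width, depthRT.height, TextureFormat.RFloat, false);
    }

    void LateUpdate()
    {
        // Read the depth render texture back to the CPU.
        RenderTexture.active = depthRT;
        readback.ReadPixels(new Rect(0, 0, depthRT.width, depthRT.height), 0, 0);
        readback.Apply();
        RenderTexture.active = null;

        // Raw bytes of the RFloat texture: 4 bytes per pixel, row-major.
        // Unity textures start at the bottom-left while sensor_msgs/Image is
        // top-left, so the rows probably need flipping here or in the shader.
        byte[] depthBytes = readback.GetRawTextureData();

        uint width = (uint)depthRT.width;
        uint height = (uint)depthRT.height;

        // Ideally the stamp would come from the ROS clock; Unity's own time is
        // used here just to have something monotonic.
        float t = Time.time;

        var msg = new ImageMsg
        {
            header = new HeaderMsg
            {
                frame_id = OpticalFrameId,
                stamp = new TimeMsg
                {
                    sec = (int)t,
                    nanosec = (uint)((t - Mathf.Floor(t)) * 1e9f)
                }
            },
            height = height,
            width = width,
            encoding = "32FC1",   // 32-bit float, single channel, values in metres
            is_bigendian = 0,
            step = width * 4,     // bytes per row = width * sizeof(float)
            data = depthBytes
        };

        ros.Publish(DepthTopic, msg);
    }
}
```

I’m also not sure whether the timestamp should come from a simulated /clock instead (I think the Nav2 example drives everything off sim time), or whether I need to publish a matching sensor_msgs/CameraInfo alongside the image, since the comments above mention the CameraInfo frame_id having to agree with the image.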