Hi,
I’m currently working on a multi-robot simulation in Unity (RTX 4090, Intel i7-13700F, 64 GB RAM).
The goal is a simulation with at least 3 mobile robots, each equipped with 2 LiDARs (published via ROS 2) and 1 camera.
Since I want to use the camera images later for deep learning, I installed the Perception package.
The problem is that the frame rate drops dramatically.
With 1 robot and 2 LiDARs, the frame rate is around 60 fps.
Adding the Perception camera with 2D bounding box label generation costs around 5 fps.
Adding semantic segmentation drops it to about 30 fps, and adding depth images drops it further to about 15 fps.
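For reference, this is roughly how the labelers are set up. In the actual project they are configured on the PerceptionCamera component in the Inspector; the code below is just an equivalent sketch, so the exact constructor signatures may differ from what I have:

```csharp
// Sketch of my labeler setup (normally done in the Inspector, shown in code
// here for clarity; class names are from the Unity Perception package).
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

public class LabelerSetup : MonoBehaviour
{
    public IdLabelConfig idLabelConfig;                    // assigned in the Inspector
    public SemanticSegmentationLabelConfig semanticConfig; // assigned in the Inspector

    void Start()
    {
        var perceptionCamera = GetComponent<PerceptionCamera>();

        // 2D bounding boxes: costs ~5 fps
        perceptionCamera.AddLabeler(new BoundingBox2DLabeler(idLabelConfig));

        // Semantic segmentation: down to ~30 fps
        perceptionCamera.AddLabeler(new SemanticSegmentationLabeler(semanticConfig));

        // The depth labeler is added the same way: down to ~15 fps.
    }
}
```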
That wouldn’t be a problem by itself, since I want to record all of this data (LiDAR via ROS) and real-time performance isn’t needed.
Unfortunately, the sim time in Unity seems to be decoupled from the frame rate.
I tried publishing sim time to ROS, but even after recording it into a ROS bag, the data is not published at a steady rate.
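This is roughly how I publish sim time at the moment (simplified; the ROS-TCP-Connector message and class names are written from memory, so treat them as an approximation of my actual script):

```csharp
// Simplified version of my /clock publisher via ROS-TCP-Connector.
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Rosgraph;
using RosMessageTypes.BuiltinInterfaces;
using UnityEngine;

public class SimClockPublisher : MonoBehaviour
{
    ROSConnection ros;

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<ClockMsg>("/clock");
    }

    void Update()
    {
        // By default Time.timeAsDouble advances with real elapsed time between
        // frames, not by a fixed step per frame, which is (I assume) why the
        // published clock becomes unsteady when the FPS drops.
        double t = Time.timeAsDouble;
        var msg = new ClockMsg
        {
            clock = new TimeMsg
            {
                sec = (int)t,
                nanosec = (uint)((t - (int)t) * 1e9)
            }
        };
        ros.Publish("/clock", msg);
    }
}
```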
Is there a way to couple sim time and FPS?
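The closest thing I have found so far is Time.captureFramerate, but I’m not sure whether it interacts correctly with the Perception package and the ROS publishers. Something along these lines is what I have in mind (untested guess on my part):

```csharp
// Idea: force sim time to advance by a fixed step per rendered frame, so a
// slow frame just slows everything down together instead of letting sim time
// run ahead of the rendered/recorded sensor data.
using UnityEngine;

public class FixedStepSimTime : MonoBehaviour
{
    [SerializeField] int simFramerate = 30; // target sim steps per second

    void Awake()
    {
        // With captureFramerate set, Time.deltaTime is always 1/simFramerate,
        // regardless of how long the frame actually took to render.
        Time.captureFramerate = simFramerate;
    }
}
```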
Thanks in advance!