In the evaluation, we compared ObVi-SLAM against ORB-SLAM3, OA-SLAM, and DROID-SLAM. For each baseline, we used the ObViSLAMEvaluation branch from our research dev fork (ORB-SLAM3, OA-SLAM, and DROID-SLAM). You can follow the installation and running instructions given by each project. DROID-SLAM uses conda to set up its environment (see the ObViSLAMEvaluation branch of our DROID-SLAM fork). For ORB-SLAM3 and OA-SLAM, we provide a docker setup inside ros-noetic-docker, which contains separate docker files for each; it requires the Docker Engine, Docker Compose, and the NVIDIA Container Toolkit. If you'd like to use our evaluation scripts to run the experiments, use the Dockerfile under obvislam-eval, which handles environment setup for ORB-SLAM3, OA-SLAM, and DROID-SLAM.
To build the docker image:
./build.py obvislam-eval
Next, mount the necessary files. Our convenience scripts assume the project is mounted under /root/ObVi-SLAM-Evaluation and the data are mounted under /root/ObVi-SLAM-Evaluation/data. To mount files, specify the following inside the volumes section of ros-noetic-docker/noetic/obvislam-eval/compose.yaml:
volumes:
- <path_to_ObVi-SLAM-Evaluation_root_dir>:/root/ObVi-SLAM-Evaluation
- <path_to_data_root_dir>:/root/ObVi-SLAM-Evaluation/data
This mounts the entire folders under <path_to_ObVi-SLAM-Evaluation_root_dir> and <path_to_data_root_dir> into the container. To be safer, you can specify read/write permissions for different subfolders. For example:
volumes:
- <path_to_ObVi-SLAM-Evaluation_root_dir>:/root/ObVi-SLAM-Evaluation
- <path_to_data_root_dir>:/root/ObVi-SLAM-Evaluation/data/original_data:ro
- <path_to_data_root_dir>:/root/ObVi-SLAM-Evaluation/data/oa_slam_in:ro
- <path_to_data_root_dir>:/root/ObVi-SLAM-Evaluation/data/oa_slam_out
- ...
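For context, the volumes list sits under a service definition in compose.yaml. This is a minimal sketch, not the full file; the service name app is an assumption (suggested by the -app-1 suffix in the container name below), and the other service settings come from the provided compose.yaml:

```yaml
services:
  app:
    # ... image, runtime, and other settings from the provided compose.yaml ...
    volumes:
      - <path_to_ObVi-SLAM-Evaluation_root_dir>:/root/ObVi-SLAM-Evaluation
      - <path_to_data_root_dir>:/root/ObVi-SLAM-Evaluation/data
```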
To launch the docker container:
./launch.py obvislam-eval
You can verify the container is running with docker ps. You should see a running container named ${YOUR_USERNAME}-noetic-obvislam-eval-app-1.
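As a quick scripted check, you can build the expected name and filter docker ps by it. This is a sketch: container_name is a hypothetical helper, and the name pattern is taken from the convention above.

```shell
# Hypothetical helper: builds the expected container name from your username,
# following the ${YOUR_USERNAME}-noetic-obvislam-eval-app-1 pattern above.
container_name() {
  printf '%s-noetic-obvislam-eval-app-1\n' "${USER}"
}

# Example (requires Docker): list only this container and its status.
#   docker ps --filter "name=$(container_name)" --format '{{.Names}}: {{.Status}}'
```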
To start a shell session as the root user:
docker exec -it -u root $USER-noetic-obvislam-eval-app-1 $SHELL
Inside the container, run source /.dockerenv to set up the environment variables properly. Alternatively, you can refer to the solution here.
You can exit the session with Ctrl-D. To stop the container:
docker stop $USER-noetic-obvislam-eval-app-1
See the DROID-SLAM page.
The ObViSLAMEvaluation branch relies on the amrl_msgs package to save trajectories. To install this package:
git clone https://github.com/ut-amrl/amrl_msgs.git
cd amrl_msgs
git checkout objectDetectionMsgs # Or the orbSlamSwitchTraj branch
export ROS_PACKAGE_PATH=`pwd`:$ROS_PACKAGE_PATH
make
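After the export, you can sanity-check that the package is visible on the path. This is a sketch: has_amrl_msgs is a hypothetical helper, not part of amrl_msgs.

```shell
# Hypothetical check: succeeds if an amrl_msgs entry appears on ROS_PACKAGE_PATH.
has_amrl_msgs() {
  case ":${ROS_PACKAGE_PATH}:" in
    *amrl_msgs*) return 0 ;;
    *)           return 1 ;;
  esac
}

if has_amrl_msgs; then
  echo "amrl_msgs is on ROS_PACKAGE_PATH"
else
  echo "amrl_msgs is NOT on ROS_PACKAGE_PATH; re-run the export above" >&2
fi
```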
See the ORB-SLAM3 page for further instructions.
An object detector is necessary to run any object-SLAM algorithm, including ObVi-SLAM and OA-SLAM. We relied on YOLOv5 and fine-tuned a model for object detection. You can refer to the YOLOv5 README to set up YOLOv5, and to our YOLOv5 page for the ROS object detector setup. You can download a fine-tuned model here that detects the following four classes: treetrunks, trashcans, lampposts, and benches. To run the detector for the evaluation:
python detect_ros.py --weights <path_to_weight_file> --img 960 --conf 0.2
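A small wrapper can guard against a missing weights file before launching the detector. This is a sketch: run_detector is a hypothetical helper, and it assumes detect_ros.py is invoked from the YOLOv5 checkout as in the command above.

```shell
# Hypothetical wrapper: refuses to start the detector if the weights file is missing.
run_detector() {
  local weights="$1"
  if [ ! -f "$weights" ]; then
    echo "weights file not found: $weights" >&2
    return 1
  fi
  python detect_ros.py --weights "$weights" --img 960 --conf 0.2
}

# Usage: run_detector <path_to_weight_file>
```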
See the OA-SLAM page.
See the ObVi-SLAM page.