This GitHub repository contains the code for the paper *Addressing the generalization of 3D registration methods with a featureless baseline and an unbiased benchmark*, published in the Machine Vision and Applications (MVA) journal.
We provide the following:
- a simple 3D registration baseline model that outperforms most state-of-the-art learning-based methods
- a methodology for creating a 3D registration benchmark from a point cloud dataset that provides a more informative evaluation than previous benchmarks
- the FAUST-partial benchmark, created using this methodology
- an evaluation of a large number of state-of-the-art methods on 4 benchmarks: 3DMatch, KITTI, ETH and FAUST-partial
- a new rotation space subsampling
We provide a Dockerfile to facilitate running the code. In a terminal, run:

```bash
cd docker
sh docker_build.sh
sh docker_run.sh CODE_PATH DATA_PATH
```

adjusting `CODE_PATH` and (optionally) `DATA_PATH`, which are mounted into the Docker container as volumes. You can attach to the container using:

```bash
docker exec -it egs-container /bin/bash
```
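For example (placeholder paths; substitute your own clone and data locations):

```bash
sh docker_run.sh /path/to/this/repo /path/to/your/datasets
docker exec -it egs-container /bin/bash
```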
If you do not want to use Docker, you can install the Python packages listed in `docker/requirements.txt` into your own environment. We tested the code under Ubuntu 20.04, Python 3.8, CUDA 11.2.2 and cuDNN 8.
To register a whole benchmark, you can run:

```bash
python register.py --dataset_name <name>
```

where `dataset_name` can be `3DMATCH`, `KITTI`, `ETH` or any of the `FP-{R,T,O}-{E,M,H}` benchmarks (see paper for more details). The script saves the registration results in `results/<timestamp>`, where `<timestamp>` changes according to the time of script execution.
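For example, to register the whole 3DMatch benchmark:

```bash
python register.py --dataset_name 3DMATCH
# results are then saved under results/<timestamp>/
```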
To configure the baseline parameters for the dataset you want to register, adjust the `REGISTER-{dataset-name}` option in `config.yaml`. The parameters are:

- `DATASET-NAME`: (str) name of the dataset in uppercase
- `DATASET-PATH`: (str) path to the dataset
- `OVERLAP-CSV-PATH`: (str) path to the already processed overlap statistics from `data/overlaps`
- `METHOD-NAME`: (str) name of your experiment, for easier experiment tracking
- `SET-SEED`: (bool) remove any randomness from the code (just in case)
- `GPU-INDEX`: (int) the GPU used to run the code, set as `cuda:{GPU-INDEX}` if you have multiple GPUs
- `DTYPE-REGISTRATION`: (str) torch dtype of the data used
- `VOXEL-SIZE`: (float) size of the voxel cube, usually in meters (depends on the data used!)
- `CONTINUE-RUN`: (str) if registering a whole dataset breaks, the path to the `results/<timestamp>` run you want to continue
- `ROTATION-OPTION`: (str) the precomputed rotations (see paper, Sec. 5.3). See Notes for more details.
- `PADDING`: (str) padding of the voxelized source point cloud prior to registration (see paper, Sec. 3). Only `same` padding is supported.
- `PV`: (int) value used to fill the voxels that have points inside them
- `NV`: (int) value used to fill the empty voxels that have no points inside them
- `PPV`: (int) value used to fill the padding voxels
- `NUM-WORKERS`: (int) number of PyTorch dataloader workers that prepare the data
- `SUBSAMPLING-INDS-PATH`: (str) path to the subsampling indices. See Notes for more details.
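Putting these together, a minimal illustrative `REGISTER` block might look like this (the key names follow the list above; the values and file names are assumptions, not the repo's tuned settings):

```yaml
# Illustrative only -- adjust paths and values to your setup.
REGISTER-3DMATCH:
  DATASET-NAME: 3DMATCH
  DATASET-PATH: /path/to/3DMatch
  OVERLAP-CSV-PATH: data/overlaps/<overlap-file>.csv
  METHOD-NAME: baseline-run
  SET-SEED: True
  GPU-INDEX: 0
  DTYPE-REGISTRATION: float32   # assumption on the value format
  VOXEL-SIZE: 0.06              # assumption; depends on the data
  CONTINUE-RUN: ""
  ROTATION-OPTION: AA_ICO162_S10
  PADDING: same
  PV: 1                         # assumption
  NV: 0                         # assumption
  PPV: 0                        # assumption
  NUM-WORKERS: 4
  SUBSAMPLING-INDS-PATH: ""
```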
To refine the results from the registration above, run:

```bash
python refine.py -R <results/timestamp>
```

where `timestamp` should be changed to the results path from the previous step that you want to refine.

To configure the refinement parameters, adjust the `REFINE` option in `config.yaml`. The parameters are:

- `ICP-VERSION`: (str) ICP version to run. Choices are `generalized`, `p2point` and `p2plane`.
- `MAX-ITERATION`: (int) maximum number of refinement iterations
- `MAX-CORRESPONDENCE-DISTANCE-QUANTILE`: (float) the inlier distance threshold, chosen as this quantile of all the point distances
- `SUBSAMPLING-INDS-PATH`: (str) path to the subsampling indices. See Notes for more details.
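For reference, the three choices correspond to standard ICP variants; below is a minimal sketch of such a refinement step using Open3D (an independent illustration, not the repo's exact implementation):

```python
import copy
import numpy as np
import open3d as o3d

reg = o3d.pipelines.registration

def refine(source, target, init_T, icp_version="p2point",
           max_iteration=50, distance_quantile=0.9):
    # Inlier threshold chosen as a quantile of the current point distances,
    # analogous to MAX-CORRESPONDENCE-DISTANCE-QUANTILE above.
    src = copy.deepcopy(source)
    src.transform(init_T)
    dists = np.asarray(src.compute_point_cloud_distance(target))
    max_dist = float(np.quantile(dists, distance_quantile))
    criteria = reg.ICPConvergenceCriteria(max_iteration=max_iteration)

    if icp_version == "generalized":
        result = reg.registration_generalized_icp(
            source, target, max_dist, init_T,
            reg.TransformationEstimationForGeneralizedICP(), criteria)
    elif icp_version == "p2plane":
        target.estimate_normals()  # point-to-plane ICP needs target normals
        result = reg.registration_icp(
            source, target, max_dist, init_T,
            reg.TransformationEstimationPointToPlane(), criteria)
    else:  # "p2point"
        result = reg.registration_icp(
            source, target, max_dist, init_T,
            reg.TransformationEstimationPointToPoint(), criteria)
    return result.transformation
```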
To evaluate the registration, you can run:

```bash
python evaluate.py -R <results/timestamp>
```

where `timestamp` should be changed accordingly to indicate your results.
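Rigid registration is commonly scored with the relative rotation error (RRE) and relative translation error (RTE) between the estimated and ground-truth transformations; here is a self-contained sketch of these standard metrics (the script's exact protocol may differ):

```python
import numpy as np

def rre_rte(T_est: np.ndarray, T_gt: np.ndarray):
    """Relative rotation error (degrees) and relative translation error
    between estimated and ground-truth 4x4 transformations."""
    R_est, t_est = T_est[:3, :3], T_est[:3, 3]
    R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
    # Geodesic distance between the two rotations.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rre = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Euclidean distance between the two translations.
    rte = np.linalg.norm(t_est - t_gt)
    return rre, rte
```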
If you want to register and refine a single pair of scans (and not a whole dataset), you can run:

```bash
python demo.py --pc_target_path <path/to/target/pc.ply> --pc_source_path <path/to/source/pc.ply>
```

where:

- `pc_target_path`: (str) path to the target point cloud in `.ply` format
- `pc_source_path`: (str) path to the source point cloud in `.ply` format

To configure the baseline and refinement parameters, adjust the `DEMO` option in `config.yaml`. The parameters are:

- `METHOD-NAME`: (str) name of your experiment, for easier experiment tracking
- `SET-SEED`: (bool) remove any randomness from the code (just in case)
- `GPU-INDEX`: (int) the GPU used to run the code, set as `cuda:{GPU-INDEX}` if you have multiple GPUs
- `VOXEL-SIZE`: (float) size of the voxel cube, usually in meters (depends on the data used!)
- `ROTATION-OPTION`: (str) the precomputed rotations (see paper, Sec. 5.3). See Notes for more details.
- `PADDING`: (str) padding of the voxelized source point cloud prior to registration (see paper, Sec. 3). Only `same` padding is supported.
- `PV`: (int) value used to fill the voxels that have points inside them
- `NV`: (int) value used to fill the empty voxels that have no points inside them
- `PPV`: (int) value used to fill the padding voxels
- `NUM-WORKERS`: (int) number of PyTorch dataloader workers that prepare the data
- `REFINE-NAME`: (str) optional name of the refinement algorithm: `p2point`, `p2plane` or `generalized` ICP
- `REFINE-MAX-ITER`: (int) maximum number of refinement iterations
- `REFINE-MAX-CORRESPONDENCE-DISTANCE-QUANTILE`: (float) the inlier distance threshold, chosen as this quantile of all the point distances
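Analogously to the `REGISTER` blocks, an illustrative `DEMO` block (values are assumptions, not the repo's defaults):

```yaml
# Illustrative only -- same baseline keys as REGISTER, plus the REFINE-* keys.
DEMO:
  METHOD-NAME: demo-run
  SET-SEED: True
  GPU-INDEX: 0
  VOXEL-SIZE: 0.06          # assumption; depends on the data
  ROTATION-OPTION: AA_ICO162_S10
  PADDING: same
  PV: 1
  NV: 0
  PPV: 0
  NUM-WORKERS: 4
  REFINE-NAME: p2point
  REFINE-MAX-ITER: 50       # assumption
  REFINE-MAX-CORRESPONDENCE-DISTANCE-QUANTILE: 0.9   # assumption
```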
To facilitate reproducibility and comparison, we provide the Python code to create Figures 3, 4 and 5, and the LaTeX code to create Tables 3, 4, 5, 6 and 7.

To create Figure 3, 4 or 5, run the following:

```bash
cd analysis
python create_benchmark_comparison_figures.py --param <rotation-translation-or-overlap> --path_3DMatch <path/to/3DMatch> --path_ETH <path/to/ETH> --path_KITTI <path/to/KITTI> --path_FP <path/to/FAUST-partial> --path_FAUST_scans <path/to/FAUST/scans>
```

where `param` can be either `rotation` (Figure 3), `translation` (Figure 4) or `overlap` (Figure 5). You can alter the code to add your own benchmark to the figures and compare the rotation, translation and overlap parameter distributions.

We provide Tables 3, 4, 5, 6 and 7 in the folder `latex_tables` so you can more easily use them in your work.
For the 3DMatch benchmark, download the testing examples from here, under the title Geometric Registration Benchmark → Downloads. There are 8 scenes used for testing. In total, there are 16 folders, two for each scene, named `{folder_name}` and `{folder_name}-evaluation`:
```
7-scenes-redkitchen
7-scenes-redkitchen-evaluation
sun3d-home_at-home_at_scan1_2013_jan_1
sun3d-home_at-home_at_scan1_2013_jan_1-evaluation
sun3d-home_md-home_md_scan9_2012_sep_30
sun3d-home_md-home_md_scan9_2012_sep_30-evaluation
sun3d-hotel_uc-scan3
sun3d-hotel_uc-scan3-evaluation
sun3d-hotel_umd-maryland_hotel1
sun3d-hotel_umd-maryland_hotel1-evaluation
sun3d-hotel_umd-maryland_hotel3
sun3d-hotel_umd-maryland_hotel3-evaluation
sun3d-mit_76_studyroom-76-1studyroom2
sun3d-mit_76_studyroom-76-1studyroom2-evaluation
sun3d-mit_lab_hj-lab_hj_tea_nov_2_2012_scan1_erika
sun3d-mit_lab_hj-lab_hj_tea_nov_2_2012_scan1_erika-evaluation
```
We use the overlaps from PREDATOR [1] (found in `data/overlaps`) to filter the data and keep only pairs with overlap > 30%.
For the KITTI benchmark, download the testing data from here, under Download odometry data set (velodyne laser data, 80 GB). 3 scenes are used for testing:

```
08
09
10
```

Download `test.pkl` from GeoTransformer [3] here and put it in the same directory where the scenes are located.
For the ETH benchmark, download the testing data from here. There are 4 scenes used for testing:

```
gazeebo_summer
gazeebo_winter
wood_autumn
wood_summer
```

We use the overlaps from Perfect Match [2] (found in `data/overlaps`) to filter the data and keep only pairs with overlap > 30%. The overlaps are obtained from the `overlapMatrix.csv` in each scene.
For the FAUST-partial benchmark, download the FAUST scans from here. The 100 scans in the training dataset, named `tr_scan_xxx.ply`, are used for the registration benchmark. Download the FAUST-partial benchmark from here.

To use the benchmark with your own method, we provide a script that facilitates loading the benchmark:

```bash
python load_faust_partial.py --faust_scans_path <path/to/FAUST/scans> --benchmark_name <FP-{R,T,O}-{E,M,H}> --benchmark_root_path <path/to/FAUST-partial/benchmarks>
```

where:

- `faust_scans_path`: (str) path to the FAUST training scans
- `benchmark_name`: (str) name of the FAUST-partial benchmark. Choices are `FP-{R,T,O}-{E,M,H}`.
- `benchmark_root_path`: (str) path to the FAUST-partial benchmarks

The script loads the data, iterates over the registration pairs, and provides the source point cloud, target point cloud and ground-truth 4x4 transformation. Complete the rest of the script with your own method in order to use it; a possible skeleton is sketched below.
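As a sketch of how the script might be completed (the names `benchmark_pairs` and `my_registration_method` are illustrative placeholders, not the script's actual identifiers):

```python
import numpy as np

def my_registration_method(source_pc, target_pc) -> np.ndarray:
    """Placeholder: replace with your own registration method."""
    return np.eye(4)  # 4x4 homogeneous transformation

# Placeholder: in practice, the (source, target, ground-truth transform)
# registration pairs are provided by load_faust_partial.py.
benchmark_pairs = []

for source_pc, target_pc, T_gt in benchmark_pairs:
    T_est = my_registration_method(source_pc, target_pc)
    # Score against the ground truth, e.g. with the RRE/RTE sketch above.
    print(np.linalg.inv(T_gt) @ T_est)  # residual transform; ~identity is good
```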
Note: we refer to the previous version of the FAUST-partial benchmark (from our earlier work) as `FPv1`, and to the new version as `ICO-12-FIXED-E`. We make both benchmarks available under the same download link. Read the appropriate papers for more details about the benchmark differences.
In the paper, we present a methodology for creating a 3D registration benchmark that provides better evaluations with more insights. We provide the code in `data/FAUST-partial` with which we create the FAUST-partial benchmark, but we note that it can be used on any point cloud dataset.

First, we create the viewpoints using the points of an icosahedron (see paper for more details). To do so, set up the variables in `data/FAUST-partial/config.yaml`, under `CREATE-INDICES-VIEWPOINTS-OVERLAP`:

- `FAUST-DATA-PATH`: path to the training scans of the FAUST dataset
- `SAVE-TO`: path to save the newly created dataset
- `VISUALIZE`: visualize the creation of the partial benchmark (see Fig. 2 from the paper)
- `ICOSAHEDRON-SCALE`: the icosahedron points (that act as viewpoints) lie on a sphere of radius `ICOSAHEDRON-SCALE`
- `ICOSAHAEDRON-NR-DIVISIONS`: number of splits of the icosahedron edges; the icosahedron starts with 12 points, and splitting the edges yields 42, 162, 642, ... points

and run:

```bash
python create_indices_viewpoints_overlap.py
```

to get the partial scan viewpoints, indices and overlaps.
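Each split pass subdivides every edge, so the number of viewpoints follows the closed form V(n) = 10 · 4^n + 2, a standard property of subdivided icosahedra:

```python
def icosphere_vertex_count(nr_divisions: int) -> int:
    """Icosahedron vertices after `nr_divisions` edge-split passes:
    V(n) = 10 * 4**n + 2."""
    return 10 * 4**nr_divisions + 2

# 0 -> 12, 1 -> 42, 2 -> 162, 3 -> 642 viewpoints
print([icosphere_vertex_count(n) for n in range(4)])
```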
Next, we create the benchmarks by sampling random rotations and translations depending on the benchmark difficulty. To do so, set up the variables in `data/FAUST-partial/config.yaml`, under `CREATE-BENCHMARK`:

- `DATASET-NAME`: which benchmark to create (`FP-R-E`, `FP-O-M`, ... or `All` to create all of them)
- `SAVE-TO`: path to save the newly created benchmark
- `ROTATION-EASY-XZ`: rotation parameter range for the easy difficulty (in degrees) for the x and z axes, given as a list of bounds
- `ROTATION-MEDIUM-XZ`: rotation parameter range for the medium difficulty (in degrees) for the x and z axes, given as a list of two bounds
- `ROTATION-HARD-XZ`: rotation parameter range for the hard difficulty (in degrees) for the x and z axes, given as a list of bounds. Lower and upper bounds are limited to -180 and 180.
- `ROTATION-EASY-Y`: rotation parameter range for the easy difficulty (in degrees) for the y axis, given as a list of bounds
- `ROTATION-MEDIUM-Y`: rotation parameter range for the medium difficulty (in degrees) for the y axis, given as a list of bounds
- `ROTATION-HARD-Y`: rotation parameter range for the hard difficulty (in degrees) for the y axis, given as a list of bounds. Lower and upper bounds are limited to -90 and 90.
- `TRANSLATION-EASY`: translation parameter range for the easy difficulty (in meters), given as a list of bounds
- `TRANSLATION-MEDIUM`: translation parameter range for the medium difficulty (in meters), given as a list of bounds
- `TRANSLATION-HARD`: translation parameter range for the hard difficulty (in meters), given as a list of bounds. Lower bound is limited to 0.
- `OVERLAP-EASY`: overlap parameter range for the easy difficulty (in percent), given as a list of bounds
- `OVERLAP-MEDIUM`: overlap parameter range for the medium difficulty (in percent), given as a list of bounds
- `OVERLAP-HARD`: overlap parameter range for the hard difficulty (in percent), given as a list of bounds. Lower and upper bounds are limited to 0 and 100.

and run:

```bash
python create_benchmarks.py
```

This creates 9 benchmarks: 3 difficulty levels for each parameter (rotation, translation and overlap), saved in `{SAVE-TO}/FP-{R,T,O}-{E,M,H}`.
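For intuition, drawing one random benchmark transformation from such ranges could look like the following sketch (an independent illustration using scipy, with made-up bounds; the repo's actual sampling code lives in `data/FAUST-partial`):

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def sample_transform(rot_xz_bounds, rot_y_bounds, trans_bounds):
    """Draw a random 4x4 transform from per-axis rotation ranges (degrees)
    and a translation magnitude range (meters), mirroring CREATE-BENCHMARK."""
    ax = rng.uniform(*rot_xz_bounds)   # x-axis angle
    ay = rng.uniform(*rot_y_bounds)    # y-axis angle
    az = rng.uniform(*rot_xz_bounds)   # z-axis angle
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [ax, ay, az], degrees=True).as_matrix()
    direction = rng.normal(size=3)     # random translation direction
    T[:3, 3] = direction / np.linalg.norm(direction) * rng.uniform(*trans_bounds)
    return T

# e.g. an "easy"-style draw with illustrative bounds
print(sample_transform((-45, 45), (-30, 30), (0.0, 0.5)))
```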
This work is a continuation of our previous work greedy-grid-search. The main differences are the computation of the rotation subsampling and the translation estimation, which provide much better results, along with some minor adjustments. The main difference from the previous FAUST-partial benchmark is that we now propose a general methodology for creating a new benchmark from a point cloud dataset that provides more insights into the 3D registration evaluation. Please read the accompanying papers for more details.
The different tested rotations from Table 8 (see paper) are listed below with the corresponding row from the table:

| # from Table 8 | Name | N | Description |
|---|---|---|---|
| #Ref | AA_ICO162_S10 | 3536 | Our angle-axis sampling from the icosahedron with 162 vertices and angle step 10 |
| #10 | EULER-S=15-DUPLICATES | 6363 | Euler angles (from -180 to 180) with angle step 15, with duplicate rotations removed |
| #11 | EULER-S=15-LIMITED-DUPLICATES | 1886 | Euler angles (from -90 to 90) with angle step 15, with duplicate rotations removed |
| #12 | EULER-S=10-LIMITED-DUPLICATES | 6177 | Euler angles (from -90 to 90) with angle step 10, with duplicate rotations removed |
| #13 | HEALPIX | 4608 | HEALPix sampling from the Implicit-PDF paper [4] |
| #14 | SUPER-FIBONACCI | 3536 | Super-Fibonacci sampling from [5] |
| #15 | AA_ICO42_S15 | 281 | Our angle-axis sampling from the icosahedron with 42 vertices and angle step 15 |
| #16 | AA_ICO42_S10 | 913 | Our angle-axis sampling from the icosahedron with 42 vertices and angle step 10 |
| #17 | AA_ICO162_S15 | 2289 | Our angle-axis sampling from the icosahedron with 162 vertices and angle step 15 |
| #18 | AA_ICO162_S24_positive | 4531 | Our angle-axis sampling from the icosahedron with 162 vertices (only from one hemisphere) and angle step 24 |
| #19 | AA_ICO642_S30 | 4368 | Our angle-axis sampling from the icosahedron with 642 vertices and angle step 30 |
We provide the subsampling indices here.
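To illustrate the angle-axis constructions named above (rotation axes taken from the vertices of a subdivided icosahedron, angles swept in fixed steps), here is a hedged sketch; the repo additionally removes near-duplicate rotations, so the exact counts differ:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def angle_axis_rotations(axes: np.ndarray, step_deg: float) -> Rotation:
    """Rotations around each unit axis (e.g. the 162 vertices of a subdivided
    icosahedron) with angles swept in `step_deg` increments."""
    angles = np.deg2rad(np.arange(step_deg, 360.0, step_deg))
    rotvecs = [angle * axis for axis in axes for angle in angles]
    rotvecs.append(np.zeros(3))  # include the identity rotation once
    return Rotation.from_rotvec(np.array(rotvecs))
```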
If you use our work, please reference our paper:

```bibtex
@article{Bojanic24,
  title = {Addressing the generalization of 3D registration methods with a featureless baseline and an unbiased benchmark},
  volume = {35},
  ISSN = {1432-1769},
  url = {http://dx.doi.org/10.1007/s00138-024-01510-w},
  DOI = {10.1007/s00138-024-01510-w},
  number = {3},
  journal = {Machine Vision and Applications},
  publisher = {Springer Science and Business Media LLC},
  author = {Bojanić, David and Bartol, Kristijan and Forest, Josep and Petković, Tomislav and Pribanić, Tomislav},
  year = {2024},
  month = mar
}
```
[1] PREDATOR
[2] Perfect Match
[3] GeoTransformer
[4] Implicit-PDF
[5] Super-fibonacci
We reuse parts of the fft-conv-pytorch repository for computing the cross-correlation on the GPU.