Implementation for "Cross-view Geo-localization via Learning Disentangled Geometric Layout Correspondence"
- numpy
- PyTorch >= 1.11
- torchvision >= 0.12
- tqdm
- scipy
- Pillow (PIL)
- We obtained permission to use the CVUSA dataset from its owner by submitting the MVRL Dataset Request Form.
- Please refer to the repo: https://github.com/viibridges/crossnet
- We obtained permission to use the CVACT dataset by contacting the authors directly.
- Please refer to the repo: https://github.com/Liumouliu/OriCNN
To prepare the data, we follow the pre-processing method of SAFA. Before running the code, pre-process the dataset with the provided data_preparation.py script.
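For context, SAFA-style pre-processing applies a polar transform to the square aerial images so that their content is roughly aligned with the ground panoramas before training. The snippet below is only a minimal sketch of that idea, assuming a square aerial input and an illustrative output size; the exact sizes, orientation convention, and interpolation used by the provided data_preparation.py may differ, so use that script for the actual datasets.

import numpy as np
from PIL import Image

def polar_transform(aerial_img, out_h=128, out_w=512):
    """Unwrap a square aerial image around its centre into a panorama-like strip.

    out_h / out_w are illustrative; pick sizes matching your ground panoramas.
    """
    img = np.asarray(aerial_img, dtype=np.float32)
    size = img.shape[0]                                  # assumes a square S x S (x3) input
    rows, cols = np.meshgrid(np.arange(out_h), np.arange(out_w), indexing="ij")
    radius = (size / 2.0) * (out_h - 1 - rows) / out_h   # top row maps to the outer edge
    theta = 2.0 * np.pi * cols / out_w                   # columns sweep a full circle
    x = size / 2.0 + radius * np.sin(theta)              # horizontal source coordinate
    y = size / 2.0 - radius * np.cos(theta)              # vertical source coordinate
    # nearest-neighbour sampling keeps the sketch short; real pipelines interpolate
    x = np.clip(np.rint(x).astype(int), 0, size - 1)
    y = np.clip(np.rint(y).astype(int), 0, size - 1)
    return Image.fromarray(img[y, x].astype(np.uint8))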
To find the duplicate images in the CVUSA dataset reported in the main paper, change the directory on line 7 of check_cvusa_duplicate.py and run it. A JSON file listing all duplicate pairs will be generated, which can be used to remove those files.
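Treat the provided script as the reference. Purely as an illustration of the idea, the sketch below flags byte-identical files by content hash and writes the duplicate groups to JSON; check_cvusa_duplicate.py may use a different criterion, and the function and file names here are hypothetical.

import hashlib
import json
import os
from collections import defaultdict

def find_exact_duplicates(image_dir, out_json="cvusa_duplicates.json"):
    """Group files with identical bytes and dump the groups to a JSON file."""
    by_hash = defaultdict(list)
    for root, _, files in os.walk(image_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                by_hash[hashlib.md5(f.read()).hexdigest()].append(path)
    groups = [paths for paths in by_hash.values() if len(paths) > 1]
    with open(out_json, "w") as f:
        json.dump(groups, f, indent=2)
    return groups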
Alternatively, you can directly download the cleaned CVUSA training and validation files here. Copy the unzipped files into the YOUR_PATH_TO_CVUSA/dataset/splits/ folder to replace the original files.
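If you instead clean the original split files yourself, the duplicate JSON can be used to drop the affected rows. The sketch below assumes the splits are CSV files whose cells contain image paths comparable to those stored in the JSON; the column layout and path convention are assumptions, so adapt it to your local files.

import csv
import json

def clean_split(split_csv, duplicates_json, out_csv):
    """Drop rows that reference any duplicate image except the first in each group."""
    with open(duplicates_json) as f:
        groups = json.load(f)
    # keep the first path of every duplicate group, remove the rest
    to_remove = {path for group in groups for path in group[1:]}
    with open(split_csv, newline="") as fin, open(out_csv, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            # assumes JSON paths and CSV cells use the same path convention
            if not any(cell in to_remove for cell in row):
                writer.writerow(row)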
python train.py \
--dataset CVUSA \
--data_dir path-to-your-data/ \
--n_des 8 \
--TR_heads 4 \
--TR_layers 2 \
--layout_sim strong \
--sem_aug strong \
--pt \
--cf
python test.py \
--dataset CVUSA \
--data_dir path-to-your-data/ \
--model_path path-to-your-pretrained-weight
@inproceedings{zhang2023cross,
title={Cross-view geo-localization via learning disentangled geometric layout correspondence},
author={Zhang, Xiaohan and Li, Xingyu and Sultani, Waqas and Zhou, Yi and Wshah, Safwan},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={37},
number={3},
pages={3480--3488},
year={2023}
}