# Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers
This repo is a PyTorch implementation of the paper *Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers*.
## Requirements

The code has been tested with:
- Python >= 3.7
- PyTorch == 1.8.0+cu111
- torch-scatter == 2.0.7
- torchsampler == 0.1.2
- torchvision == 0.9.0+cu111
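The pins above can be sanity-checked at runtime before installing the rest of the stack. A minimal sketch (nothing repo-specific is assumed; the commented lines only apply once the pinned packages are installed):

```python
import sys

# Check the interpreter against the pin above (Python >= 3.7).
assert sys.version_info >= (3, 7), "Python >= 3.7 is required"

# The remaining pins can be checked the same way once installed, e.g.:
#   import torch; assert torch.__version__ == "1.8.0+cu111"
#   import torch_scatter; assert torch_scatter.__version__ == "2.0.7"
print("Python version OK:", sys.version.split()[0])
```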
Some dependent packages need to be compiled and installed manually; please refer to issue #6 before installing. For example, to build the EMD extension:

```shell
cd PyTorchEMD
python setup.py install
```
## Dataset

Download the official PointDA-10 dataset and put the folder under `[your_dataroot]/data/`.
After downloading, the directory structure should be:

```
${ROOT}
|--PointDA_data
|  |--modelnet
|  |--scannet
|  |--shapenet
```
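To catch path mistakes before a long training run, the expected layout can be verified with a small helper. This is a hypothetical convenience function, not part of the repo; `dataroot` is whatever you pass via `--dataroot`:

```python
import os

def missing_pointda_dirs(dataroot):
    """Return the expected PointDA-10 subfolders that are absent under dataroot."""
    root = os.path.join(dataroot, "PointDA_data")
    expected = ("modelnet", "scannet", "shapenet")
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

# Example: fail fast if the dataset folder is incomplete.
# missing = missing_pointda_dirs("/path/to/dataroot")
# if missing:
#     raise FileNotFoundError(f"PointDA_data is incomplete: {missing}")
```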
Download the MAE pre-trained ViT model and put the folder under `pretrained/`.
## Train

Training on both source and target domains:

```shell
python main.py --src_dataset modelnet --trgt_dataset scannet --dataroot [your_dataroot] --batch_size 16
```

Self-paced self-training (SPST) on the target domain:

```shell
python main_spst.py --exp_name 'spst' --trgt_dataset scannet --dataroot [your_dataroot] --batch_size 16 --lr 5e-5
```
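The commands above show a single modelnet→scannet run; PointDA-10 defines six source→target transfer tasks in total. A sketch of enumerating all stage-1 command lines, using only the flags shown above (the helper itself is hypothetical, not part of the repo):

```python
from itertools import permutations

def stage1_commands(dataroot, batch_size=16):
    """Enumerate the six PointDA-10 transfer tasks as stage-1 command lines."""
    domains = ("modelnet", "scannet", "shapenet")
    return [
        f"python main.py --src_dataset {src} --trgt_dataset {trgt} "
        f"--dataroot {dataroot} --batch_size {batch_size}"
        for src, trgt in permutations(domains, 2)  # ordered pairs, src != trgt
    ]

for cmd in stage1_commands("/path/to/dataroot"):
    print(cmd)
```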
If you want to test with a pre-trained model, download it from here and place it under `experiments/`.
## Citation

Please cite this paper if you use it in your work:

```bibtex
@article{zou2024boosting,
  title={Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers},
  author={Zou, Longkun and Zhu, Wanru and Chen, Ke and Guo, Lihua and Guo, Kailing and Jia, Kui and Wang, Yaowei},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
```
## Acknowledgement

This repo benefits from PointCLIP_V2, MAE, and GAST. Thanks for their wonderful works.