
PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval

CVPR 2019

Wenxiao Zhang and Chunxia Xiao

Wuhan University

Introduction

PCAN is an attention module for point cloud based retrieval that predicts the significance of each local point feature based on its point context. This work builds on PointNetVLAD and PointNet++.

[Figure: PCAN network architecture]
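
Conceptually, the predicted per-point attention weights re-scale the local features before global aggregation. Below is a minimal sketch of that weighting step (not the authors' implementation; the array names and shapes are illustrative):

import numpy as np

N, D = 4096, 1024                        # points per cloud, local feature dimension (illustrative)
local_features = np.random.randn(N, D)   # hypothetical per-point features from the local feature extractor
attention = np.random.rand(N, 1)         # hypothetical per-point significance scores in [0, 1]

weighted_features = local_features * attention   # attended features, then aggregated into a global descriptor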

PyTorch Version

We provide a PyTorch implementation in another project, in models/PCAN.py. You can check it there if needed.

Pre-requisites

  • Python 3
  • CUDA
  • TensorFlow
  • SciPy
  • Pandas
  • scikit-learn

For attention map visualization, MATLAB is also needed.

Compile Customized TF Operators

The TF operators are included under tf_ops; you need to compile them first (see the tf_xxx_compile.sh script under each ops subfolder). Refer to PointNet++ for more details.
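
After compiling, you can optionally verify that a compiled library loads in TensorFlow. A minimal check, assuming the .so name and path follow the PointNet++ layout (adjust to your tree):

import tensorflow as tf

# Load one of the compiled custom-op libraries; the path below follows the
# PointNet++ convention and may differ in your checkout.
grouping_module = tf.load_op_library('tf_ops/grouping/tf_grouping_so.so')
print(grouping_module)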

Generate pickle files

Please refer to PointNetVLAD.

Training

To train our network, run the following command:

python train.py

To evaluate the model, run the following command:

python evaluate.py

Pre-trained Models

The pre-trained models for both the baseline and refined networks can be downloaded here.

Attention Map Visualization

For visualization, you can run visualization/show_attention_map.m in MATLAB to visualize the attention map. We provide a weight file for an example point cloud in the oxford_weights folder.

To produce the weights for all point clouds, run the following command:

python evaluate_save_weights.py

The weights will be saved as .bin files in the datasetname_weights folder.

You can also use the Python library mpl_toolkits.mplot3d for visualization, as in the sketch below.
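
A minimal mpl_toolkits.mplot3d sketch, assuming the point cloud is stored as an N x 3 float64 .bin file (as in the PointNetVLAD Oxford benchmark) and the saved weights are one float32 value per point; the file names below are placeholders:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

# Placeholder paths; replace with your own point cloud and weight files.
points = np.fromfile('example_cloud.bin', dtype=np.float64).reshape(-1, 3)
weights = np.fromfile('oxford_weights/example_cloud.bin', dtype=np.float32)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(points[:, 0], points[:, 1], points[:, 2], c=weights, cmap='jet', s=3)
fig.colorbar(sc, label='attention weight')
plt.show()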

If you want to reproduce the visualization results shown in the paper, please use this model, which is an earlier refined model trained at the time we submitted the paper.

Citation

If you find our code useful, please cite our paper:

@inproceedings{zhang2019pcan,
  title={PCAN: 3D attention map learning using contextual information for point cloud based retrieval},
  author={Zhang, Wenxiao and Xiao, Chunxia},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12436--12445},
  year={2019}
}

Contact

Feel free to contact me if you have any questions: wenxxiao.zhang@gmail.com
