
Code of our paper "SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation" in 3DV 2022.


SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation

This repository contains the PyTorch implementation for "SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation" by Zhuo Su, Max Welling, Matti Pietikäinen and Li Liu* (* corresponding author). [arXiv]

The code structure follows that of Pixel Difference Convolution.

If you find something useful here, please consider citing this paper.

Introduction

Efficiency and robustness are increasingly needed for applications on 3D point clouds, given the ubiquitous use of edge devices in scenarios like autonomous driving and robotics, which often demand real-time and reliable responses. The paper tackles this challenge by designing a general framework to construct 3D learning architectures with SO(3) equivariance and network binarization. However, a naive combination of equivariant networks and binarization causes either sub-optimal computational efficiency or geometric ambiguity. We propose to place both scalar and vector features in our networks to avoid both cases. Precisely, the presence of scalar features makes the major part of the network binarizable, while vector features serve to retain rich structural information and ensure SO(3) equivariance. The proposed approach can be applied to general backbones like PointNet and DGCNN. Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a great trade-off between efficiency, rotation robustness, and accuracy.
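The scalar/vector split can be illustrated with a toy numpy sketch. These are not the actual SVNet layers; the point is only that scalar features built from inner products are rotation-invariant, while vector features built as linear combinations of the points rotate together with the input, which is what makes the network output SO(3) equivariant:

```python
import numpy as np

def random_rotation(seed=0):
    """Sample a random proper 3x3 rotation matrix via QR decomposition."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # fix the sign convention of the factorization
    if np.linalg.det(q) < 0:      # ensure a proper rotation, det = +1
        q[:, 0] *= -1
    return q

def scalar_features(points):
    """Rotation-invariant scalars: pairwise inner products of point vectors."""
    return points @ points.T      # unchanged when points -> points @ R.T

def vector_features(points, weights):
    """Rotation-equivariant vectors: linear combinations over the point axis."""
    return weights @ points       # rotates together with the input

rng = np.random.default_rng(1)
pts = rng.normal(size=(8, 3))     # a tiny point cloud
W = rng.normal(size=(4, 8))       # a hypothetical learned mixing matrix
R = random_rotation()
rotated = pts @ R.T

# Scalars are invariant, vectors are equivariant:
assert np.allclose(scalar_features(rotated), scalar_features(pts))
assert np.allclose(vector_features(rotated, W), vector_features(pts, W) @ R.T)
```

In SVNet, the invariant scalar branch is what gets binarized, while the equivariant vector branch stays full-precision to preserve geometry.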


Running environment

Training: PyTorch 1.9 with CUDA 10.1 and cuDNN 7.5, Python 3.6, on an Ubuntu 18.04 system

Earlier versions may also work~ :)

Dataset

To download ModelNet40 and ShapeNet:

  bash download_datasets.sh

This script creates a folder data and downloads the two datasets into it.

To download ScanObjectNN, please visit the official website and make a download request.

Training (without any rotation)

For each dataset, we provide scripts to train both the full-precision and the binary version of SVNet. The full-precision one achieves state-of-the-art accuracy, while the binary version gives a better accuracy-efficiency balance. In both cases, the model can be trained without any rotation and tested with random rotations.
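For intuition about what the --binary flag implies, here is a minimal sketch of sign-based weight binarization with a per-layer scaling factor (in the style of BNN/XNOR-Net methods). This is an illustrative convention, not necessarily the exact scheme SVNet uses; see the paper for details:

```python
import numpy as np

def binarize(w):
    """Approximate w by alpha * sign(w), with alpha = mean(|w|).

    The binary weights can be stored as 1 bit each, and the dense
    multiply-accumulate reduces to cheap bitwise operations plus one
    scalar multiplication by alpha.
    """
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))      # a hypothetical full-precision weight matrix
w_bin, alpha = binarize(w)

# Every binarized weight is either -alpha or +alpha:
assert set(np.unique(w_bin)) <= {-alpha, alpha}
```

Note that the training scripts pass --wd 0 for the binary models: weight decay pulls latent weights toward zero, which destabilizes the sign, so it is commonly disabled when training binarized networks.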

On ModelNet40

SVNet based on PointNet

python main_cls_pointnet.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3

python main_cls_pointnet.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3 --binary --wd 0

SVNet based on DGCNN

python main_cls_dgcnn.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3

python main_cls_dgcnn.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3 --binary --wd 0

On ShapeNet

SVNet based on PointNet

python main_partseg_pointnet.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3

python main_partseg_pointnet.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3 --binary --wd 0

SVNet based on DGCNN

python main_partseg_dgcnn.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3

python main_partseg_dgcnn.py --model=svnet --data-dir data --save-dir result/train --rot aligned --rot-test so3 --binary --wd 0

On ScanObjectNN

SVNet based on DGCNN

python main_cls_dgcnn.py --dataset scanobjectnn --model=svnet --data-dir /data/scanobjectnn --save-dir result/train --rot aligned --rot-test so3

python main_cls_dgcnn.py --dataset scanobjectnn --model=svnet --data-dir /data/scanobjectnn --save-dir result/train --rot aligned --rot-test so3 --binary --wd 0

Evaluation (with random rotation in 3D space)

Based on the above scripts, simply add --test <checkpoint path> (together with --rot-test) to evaluate a trained model. For example:

# To evaluate the DGCNN based binary model on ModelNet40 (assuming the model is saved on checkpoints/sv_dgcnn_binary_modelnet40.pth)
python main_cls_dgcnn.py --model=svnet --data-dir data --save-dir result/test --rot-test so3 --test checkpoints/sv_dgcnn_binary_modelnet40.pth --binary

Please see scripts.sh for more details.

Network complexity

For example, if you want to check model size and FLOPs/ADDs/BOPs of DGCNN based binary SVNet:

python params_macs/sv_dgcnn.py

Please see scripts.sh for more details.
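As a back-of-the-envelope guide to the numbers such a script reports, binary layers have the same operation count as their full-precision counterparts, but each operation is a 1-bit one (a BOP). A common convention, used here only for illustration (counting conventions vary between papers), folds BOPs into an equivalent-FLOPs budget at a 1/64 ratio:

```python
def linear_flops(in_features, out_features):
    """Operation count for one dense layer: 2 ops per multiply-accumulate."""
    return 2 * in_features * out_features

in_f, out_f = 64, 128
flops = linear_flops(in_f, out_f)   # full-precision cost
bops = flops                        # same count, but 1-bit operations
equiv_flops = bops / 64             # common 1/64 folding convention

print(f"full-precision FLOPs: {flops}")
print(f"binary ops (BOPs):    {bops}")
print(f"equivalent FLOPs:     {equiv_flops:.0f}")
```

Under this convention, binarizing a layer cuts its equivalent compute by roughly 64x, which is why keeping the scalar branch binarizable matters so much for efficiency.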

The performance of some of the models is listed below (click the items to download the checkpoints and training logs). For ModelNet40 and ScanObjectNN, we report accuracy (%) in the z/so(3) column. For ShapeNet, we report IoU (%). KD means using knowledge distillation. It should be noted that i/so(3) gives similar results:

| Dataset      | Backbone | Binary | KD  | z/so(3) | Training logs | Checkpoint |
|--------------|----------|--------|-----|---------|---------------|------------|
| ModelNet40   | PointNet |        |     | 86.3    | log           | link       |
| ModelNet40   | PointNet | Yes    |     | 76.3    | log           | link       |
| ModelNet40   | DGCNN    |        |     | 90.3    | log           | link       |
| ModelNet40   | DGCNN    | Yes    |     | 83.8    | log           | link       |
| ModelNet40   | DGCNN    | Yes    | Yes | 86.8    | log           | link       |
| ShapeNet     | PointNet |        |     | 78.2    |               | link       |
| ShapeNet     | PointNet | Yes    |     | 67.3    |               | link       |
| ShapeNet     | DGCNN    |        |     | 81.4    | log           | link       |
| ShapeNet     | DGCNN    | Yes    |     | 68.4    | log           | link       |
| ShapeNet     | DGCNN    | Yes    | Yes | 71.5    | log           | link       |
| ScanObjectNN | DGCNN    |        |     | 76.2    | log           | link       |
| ScanObjectNN | DGCNN    | Yes    |     | 52.9    | log           | link       |
| ScanObjectNN | DGCNN    | Yes    | Yes | 60.9    | log           | link       |

Citation

If you find our project useful in your research, please consider citing:

@inproceedings{su2022svnet,
  title={SVNet: Where SO (3) Equivariance Meets Binarization on Point Cloud Representation},
  author={Su, Zhuo and Welling, Max and Pietik{\"a}inen, Matti and Liu, Li},
  booktitle={International Conference on 3D Vision},
  year={2022}
}

Acknowledgement

We gratefully acknowledge the following repositories:

License

MIT License
