This is the official repository for the PyTorch implementation of the paper "NeurMiPs: Neural Mixture of Planar Experts for View Synthesis", CVPR 2022.
Paper | Project page | Video
- OS: Ubuntu 20.04.4 LTS
- GPU: NVIDIA TITAN RTX
- Python package manager: conda
Download the datasets and place them under the `data/` folder by running:
```bash
bash run/dataset.sh
```
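Once the script finishes, a quick listing can confirm the scenes were downloaded. A minimal sketch (the exact per-scene layout is described in the Dataset documentation):

```python
from pathlib import Path

# List the downloaded scene folders under data/; see the Dataset
# documentation for the expected per-scene layout.
data_root = Path("data")
assert data_root.is_dir(), "run `bash run/dataset.sh` first"
for scene_dir in sorted(data_root.iterdir()):
    print(scene_dir)
```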
For more details on the file structure and camera conventions, please refer to Dataset.
Install all Python packages required for training and evaluation from the conda environment file:
```bash
conda env create -f environment.yml
conda activate neurmips
```
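To verify the environment is working, you can check that PyTorch was installed with CUDA support, e.g.:

```python
# Sanity check: confirm PyTorch is installed and can see the GPU.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```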
Compile the CUDA extension by running:
```bash
cd cuda/
python setup.py develop
```
Note that if you need to modify the CUDA code, simply recompile after making your changes.
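Once compiled, the extension is importable from Python. A minimal sketch, assuming the extension module is named `neurmips_cuda` (a hypothetical name; check the `name` argument in `cuda/setup.py` for the actual one):

```python
# `neurmips_cuda` is a hypothetical module name; substitute the actual
# name declared in cuda/setup.py.
import neurmips_cuda

# Listing the exposed symbols confirms the extension compiled and loaded.
print([s for s in dir(neurmips_cuda) if not s.startswith("_")])
```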
Download the pretrained model weights to run evaluation without training from scratch:
```bash
bash run/checkpoints.sh
```
We provide hyperparameters for each experiment in the config files `configs/*.yaml`, which are used for both training and evaluation. For example, `replica-kitchen.yaml` corresponds to the Kitchen scene of the Replica dataset, and `tat-barn.yaml` corresponds to the Barn scene of the Tanks and Temples dataset.
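To inspect an experiment's hyperparameters programmatically, the config files can be loaded with PyYAML. A minimal sketch that makes no assumptions about the config keys:

```python
import yaml

# Load one experiment config and list its top-level hyperparameter groups.
with open("configs/replica-kitchen.yaml") as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg.keys()))
```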
Train the teacher and expert models by running:
```bash
bash run/train.sh [config]
# example: bash run/train.sh replica-kitchen
```
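To queue several experiments back to back, the script can be wrapped in a small driver. A minimal sketch using only the two configs named above:

```python
import subprocess

# Train each scene sequentially; check=True aborts the loop on failure.
for config in ["replica-kitchen", "tat-barn"]:
    subprocess.run(["bash", "run/train.sh", config], check=True)
```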
Render the testing images and evaluate the metrics (PSNR, SSIM, and LPIPS) by running:
```bash
bash run/eval.sh [config]
# example: bash run/eval.sh replica-kitchen
```
The rendered images are saved under `output_images/[config]/experts/color/valid/`.
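If you want to recompute the metrics on the rendered images yourself, the standard implementations can be used. A minimal sketch with `scikit-image` and the `lpips` package (independent of the repo's own evaluation code; the ground-truth path and frame name are placeholders):

```python
import torch
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
import lpips

# Placeholder paths: one rendered frame and its ground-truth counterpart.
pred = io.imread("output_images/replica-kitchen/experts/color/valid/0000.png")[..., :3] / 255.0
gt = io.imread("path/to/ground_truth/0000.png")[..., :3] / 255.0

print("PSNR:", peak_signal_noise_ratio(gt, pred, data_range=1.0))
print("SSIM:", structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))

# LPIPS expects NCHW tensors scaled to [-1, 1].
to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
loss_fn = lpips.LPIPS(net="alex")
print("LPIPS:", loss_fn(to_tensor(pred), to_tensor(gt)).item())
```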
To render the testing images with the optimized CUDA implementation, run:
```bash
bash run/eval_fast.sh [config]
# example: bash run/eval_fast.sh replica-kitchen
```
The rendered images are saved under `output_images/[config]/experts_cuda/color/valid/`.
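Because the two evaluation paths write to parallel folders, a quick pairwise comparison can confirm that the CUDA renderer matches the reference one. A minimal sketch, assuming both eval scripts were run for the same config and the frames are PNG files with matching names:

```python
from pathlib import Path

from skimage import io
from skimage.metrics import peak_signal_noise_ratio

config = "replica-kitchen"
ref_dir = Path(f"output_images/{config}/experts/color/valid")
fast_dir = Path(f"output_images/{config}/experts_cuda/color/valid")

# Compare matching frames rendered by the reference and CUDA paths.
for ref_path in sorted(ref_dir.glob("*.png")):
    ref = io.imread(ref_path)[..., :3] / 255.0
    fast = io.imread(fast_dir / ref_path.name)[..., :3] / 255.0
    print(ref_path.name, "PSNR:", peak_signal_noise_ratio(ref, fast, data_range=1.0))
```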
BibTeX
```bibtex
@inproceedings{lin2022neurmips,
  title     = {NeurMiPs: Neural Mixture of Planar Experts for View Synthesis},
  author    = {Lin, Zhi-Hao and Ma, Wei-Chiu and Hsu, Hao-Yu and Wang, Yu-Chiang Frank and Wang, Shenlong},
  booktitle = {CVPR},
  year      = {2022},
}
```