UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video

Zhi-Hao Lin¹, Bohan Liu¹, Yi-Ting Chen², Kuan-Sheng Chen¹, David Forsyth¹, Jia-Bin Huang², Anand Bhattad¹, Shenlong Wang¹

¹University of Illinois at Urbana-Champaign, ²University of Maryland, College Park


🔦 Prerequisites

The code has been tested on:

  • OS: Ubuntu 22.04.4 LTS
  • GPU: NVIDIA GeForce RTX 4090, NVIDIA RTX A6000
  • Driver Version: 535, 545
  • CUDA Version: 12.2, 12.3
  • nvcc: 11.7
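
To confirm your machine matches one of the tested configurations, the standard NVIDIA tools report the relevant versions:

nvidia-smi       # driver version and the CUDA version the driver exposes
nvcc --version   # CUDA toolkit (nvcc) version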

🔦 Installation

  • Create a Conda environment:
conda create -n urbanir -y python=3.9
conda activate urbanir
  • Install Python packages:
pip install -r requirements.txt
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.1+cu117.html
  • Install tiny-cuda-nn:
git clone --recursive https://github.com/NVlabs/tiny-cuda-nn.git

Then use your favorite editor to edit tiny-cuda-nn/include/tiny-cuda-nn/common.h and set TCNN_HALF_PRECISION to 0 (see NVlabs/tiny-cuda-nn#51 for details)
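
If you prefer a non-interactive edit, here is a one-line sed sketch (it assumes GNU sed and that the macro is defined on a single #define line; inspect the file afterwards to confirm the change):

sed -i 's/#define TCNN_HALF_PRECISION.*/#define TCNN_HALF_PRECISION 0/' \
    tiny-cuda-nn/include/tiny-cuda-nn/common.h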

cd tiny-cuda-nn/bindings/torch
python setup.py install
  • Compile the CUDA extension of this project:
pip install models/csrc/

🔦 Dataset and Checkpoints

🔦 Training

  • The training process is tracked and visualized with wandb; set it up by running wandb login
  • Example training commands are in scripts/train.sh and follow this format (an end-to-end sketch follows this list):
python train.py --config [path_to_config]
  • The checkpoint is saved to ckpts/[dataset]/[exp_name]/*.ckpt
  • The validation images are saved to results/[dataset]/[exp_name]/val
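
A minimal end-to-end run, using the same placeholders as above (substitute a real file under configs/):

wandb login
python train.py --config configs/[dataset]/[scene].txt
# checkpoints are written to ckpts/[dataset]/[exp_name]/*.ckpt
# validation images are written to results/[dataset]/[exp_name]/val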

🔦 Rendering

  • Example rendering commands are in scripts/render.sh and follow this format:
python render.py --config [path_to_config]
  • The rendered images are saved to results/[dataset]/[exp_name]/frames, and videos are saved to results/[dataset]/[exp_name]/*.mp4
  • If training is not complete and .../last_slim.ckpt is not available, you can either specify a checkpoint path with --ckpt_load [path_to_ckpt] or convert an existing checkpoint with utility/slim_ckpt.py (see the sketch after this list).
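
For example, to render from an explicit checkpoint when last_slim.ckpt does not exist yet (the epoch-numbered filename is hypothetical; point --ckpt_load at whatever file exists under ckpts/):

python render.py --config configs/[dataset]/[scene].txt \
    --ckpt_load ckpts/[dataset]/[exp_name]/epoch=29.ckpt  # hypothetical filename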

🔦 Relighting

  • Example relighting commands are in scripts/relight.sh and follow this format:
python render.py --config [path_to_config] \
     --light [path_to_light_config] --relight [effect_name]
  • The rendered images and videos are saved to results/[dataset]/[exp_name]/[effect_name]
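
To produce several effects in one go, a small shell loop over effect names works (the effect names below are placeholders; valid names depend on your light configs):

for effect in effect_a effect_b; do
    python render.py --config configs/[dataset]/[scene].txt \
        --light configs/light/[scene].txt --relight "$effect"
done
# each run writes to results/[dataset]/[exp_name]/<effect>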

🔦 Configuration

  • All parameters are listed in opt.py and can be appended to any training/rendering/relighting command as --param_name value (see the example after this list).
  • The scene-specific parameters are listed in configs/[dataset]/[scene].txt.
  • The lighting parameters are listed in configs/light/[scene].txt, and different relighting effects can be produced by changing the configuration.
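
For instance, any option defined in opt.py can be appended to a command. The flag below, --exp_name, is an assumption based on the output paths above; check opt.py for the options that actually exist:

python train.py --config configs/[dataset]/[scene].txt --exp_name my_run  # --exp_name assumed from opt.py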

🔦 Customized Data

🔦 Citation

If you find this paper and repository useful for your research, please consider citing:

@article{lin2023urbanir,
  title={{UrbanIR}: Large-Scale Urban Scene Inverse Rendering from a Single Video},
  author={Lin, Zhi-Hao and Liu, Bohan and Chen, Yi-Ting and Forsyth, David and Huang, Jia-Bin and Bhattad, Anand and Wang, Shenlong},
  journal={arXiv preprint arXiv:2306.09349},
  year={2023}
}
