Project Page | Paper | Data | Checkpoints
Zhi-Hao Lin¹, Bohan Liu¹, Yi-Ting Chen², Kuan-Sheng Chen¹, David Forsyth¹, Jia-Bin Huang², Anand Bhattad¹, Shenlong Wang¹
¹University of Illinois at Urbana-Champaign, ²University of Maryland, College Park
The code has been tested on:
- OS: Ubuntu 22.04.4 LTS
- GPU: NVIDIA GeForce RTX 4090, NVIDIA RTX A6000
- Driver Version: 535, 545
- CUDA Version: 12.2, 12.3
- nvcc: 11.7
- Create the Conda environment:
  ```bash
  conda create -n urbanir -y python=3.9
  conda activate urbanir
  ```
- Install Python packages:
  ```bash
  pip install -r requirements.txt
  ```
- Install `pytorch_scatter`:
  ```bash
  pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.1+cu117.html
  ```
- Install `tiny-cuda-nn`:
  ```bash
  git clone --recursive https://github.com/NVlabs/tiny-cuda-nn.git
  ```
  Then use your favorite editor to edit `tiny-cuda-nn/include/tiny-cuda-nn/common.h` and set `TCNN_HALF_PRECISION` to `0` (see NVlabs/tiny-cuda-nn#51 for details).
  ```bash
  cd tiny-cuda-nn/bindings/torch
  python setup.py install
  ```
- Compile the CUDA extension of this project:
  ```bash
  pip install models/csrc/
  ```
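A quick sanity check (optional) confirms that PyTorch, `torch-scatter`, and the `tiny-cuda-nn` bindings import cleanly and that the GPU is visible; the one-liners below assume only the packages installed above:

```bash
# Optional sanity check: PyTorch sees the GPU and the compiled extensions import.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torch_scatter; print('torch-scatter OK')"
python -c "import tinycudann; print('tiny-cuda-nn OK')"
```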
- Please download the datasets and put them under the `data/` folder.
- Please download the checkpoints and put them under the `ckpts/` folder.
- Currently, the data and checkpoints for KITTI-360 and the Waymo Open Dataset are available; a hypothetical layout is sketched below.
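For reference, a hypothetical layout after downloading (the subfolder names are illustrative and depend on the released archives; the `[dataset]/[exp_name]` pattern follows the checkpoint paths used later in this README):

```
data/
  [dataset]/...                      # downloaded scene data
ckpts/
  [dataset]/[exp_name]/*.ckpt        # downloaded / trained checkpoints
```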
- The training process is tracked and visualized with wandb; you can set it up with:
  ```bash
  wandb login
  ```
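If you prefer to keep logs local rather than syncing to the wandb cloud, wandb's standard offline mode (not specific to this repository) can be enabled before training:

```bash
# Optional: keep wandb logs on disk instead of syncing to the server.
export WANDB_MODE=offline
```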
- The training script examples are in `scripts/train.sh` and have the following format:
  ```bash
  python train.py --config [path_to_config]
  ```
- The checkpoints are saved to `ckpts/[dataset]/[exp_name]/*.ckpt`.
- The validation images are saved to `results/[dataset]/[exp_name]/val`.
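Putting the config and output conventions together, a minimal training run looks like this (the placeholders follow the path conventions used throughout this README):

```bash
# Train a scene; scene configs live in configs/[dataset]/[scene].txt
# (see the configuration notes below).
python train.py --config configs/[dataset]/[scene].txt
# After training:
#   checkpoints       -> ckpts/[dataset]/[exp_name]/*.ckpt
#   validation images -> results/[dataset]/[exp_name]/val
```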
- The rendering script examples are in `scripts/render.sh` and have the following format:
  ```bash
  python render.py --config [path_to_config]
  ```
- The rendered images are saved to `results/[dataset]/[exp_name]/frames`, and videos are saved to `results/[dataset]/[exp_name]/*.mp4`.
- If training did not run to completion and `.../last_slim.ckpt` is not available, you can either specify a checkpoint path with `--ckpt_load [path_to_ckpt]` or convert one with `utility/slim_ckpt.py`, as in the sketch below.
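For instance, rendering from an intermediate checkpoint uses only the flags mentioned above (`[path_to_ckpt]` is whatever `.ckpt` file training actually produced):

```bash
# Render with an explicit checkpoint when .../last_slim.ckpt does not exist yet.
python render.py --config configs/[dataset]/[scene].txt --ckpt_load [path_to_ckpt]
```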
- The relighting script examples are in `scripts/relight.sh` and have the following format:
  ```bash
  python render.py --config [path_to_config] \
      --light [path_to_light_config] --relight [effect_name]
  ```
- The rendered images and videos are saved to `results/[dataset]/[exp_name]/[effect_name]`.
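Concretely, combining a scene config with a light config (all placeholders follow the conventions above; `[effect_name]` is just the label used for the output folder):

```bash
# Relight a trained scene with an alternative lighting configuration.
python render.py --config configs/[dataset]/[scene].txt \
    --light configs/light/[scene].txt --relight [effect_name]
# Outputs: results/[dataset]/[exp_name]/[effect_name]
```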
- All the parameters are listed in `opt.py` and can be appended to the training/rendering/relighting commands as `--param_name value` (see the sketch after this list).
- The scene-specific parameters are listed in `configs/[dataset]/[scene].txt`.
- The lighting parameters are listed in `configs/light/[scene].txt`; different relighting effects can be produced by changing this configuration.
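As a sketch, a single option can be overridden on the command line; the flag shown here is hypothetical, so consult `opt.py` for the actual option names:

```bash
# Hypothetical override: any option defined in opt.py can be appended as --param_name value.
python train.py --config configs/[dataset]/[scene].txt --exp_name my_run
```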
- The camera poses are estimated with the NeRFstudio pipeline (`transforms.json`).
- The depth is estimated with MiDaS.
- The normals are estimated with OmniData; the authors' fork handles image resolution in a more flexible manner.
- The shadow masks are estimated with MTMT; the authors' fork provides an example script.
- The semantic maps are estimated with mmsegmentation; you can put `mmseg/run.py` in the root folder of mmsegmentation and run it there.
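For custom videos, the steps above can be strung together roughly as follows. This is a sketch, not the authors' exact pipeline: the only concrete command is NeRFstudio's standard pose-estimation entry point, and the remaining tools should be run according to their own READMEs (or the forks mentioned above):

```bash
# 1. Camera poses (NeRFstudio) -> writes transforms.json into the output folder.
ns-process-data images --data [image_folder] --output-dir [scene_folder]

# 2. Monocular depth (MiDaS), surface normals (OmniData fork), and shadow masks
#    (MTMT fork): follow each repository's instructions to produce per-frame maps.

# 3. Semantic maps: copy mmseg/run.py into the mmsegmentation root and run it there.
```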
If you find this paper and repository useful for your research, please consider citing:
```bibtex
@article{lin2023urbanir,
  title={Urbanir: Large-scale urban scene inverse rendering from a single video},
  author={Lin, Zhi-Hao and Liu, Bohan and Chen, Yi-Ting and Forsyth, David and Huang, Jia-Bin and Bhattad, Anand and Wang, Shenlong},
  journal={arXiv preprint arXiv:2306.09349},
  year={2023}
}
```