CanonicalFusion: Generating Drivable 3D Human Avatars from Multiple Images (ECCV 2024 ACCEPTED 🎉)
Jisu Shin, Junmyeong Lee, Seongmin Lee, Min-Gyu Park, Ju-Mi Kang, Ju Hong Yoon, and Hae-Gon Jeon
- [2024.07] Release the arXiv paper and GitHub page!
- [2024.11] Release the code and pretrained weights for single-image-based 3D human reconstruction! (First phase)
- [TBD] Release the canonical mesh reconstruction code from multiple images via differentiable rendering. (Second phase)
Our current release contains the inference code and pretrained weights for 3D human mesh reconstruction, which take an input image and a fitted SMPL-X depth map. You can use them to evaluate single-image-based 3D human reconstruction. We also plan to release the canonical mesh reconstruction part.
- Python 3.8
- PyTorch 2.1.0
- CUDA 12.1
- Linux / Ubuntu Environment
git clone https://github.com/jsshin98/CanonicalFusion.git
cd CanonicalFusion
You first need to build a Docker image with nvdiffrast. (This is not required for the current evaluation code, but it is required for the differentiable rendering step, which will also be released soon.)
Then, we suggest installing the Conda environment inside the Docker container as detailed below:
# Create conda environment inside docker
conda create -n canonicalfusion python=3.10
conda activate canonicalfusion
# Install Pytorch and other dependencies
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
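# Install system libraries used by OpenCV and OpenGL-based rendering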
apt-get install -y libsm6 libxext6 libxrender-dev libglib2.0-0 libgl1-mesa-glx
pip install -r requirements.txt
pip install git+https://github.com/tatsy/torchmcubes.git
conda install -c conda-forge pyembree
Currently, we provide a set of pretrained models for inference. The models and a sample dataset for inference can be downloaded here. Please check the dataset tree below.
Our model requires a fitted SMPL-X model for each image as input. We provide some examples that are compatible with our model. Note that the pelvis of the SMPL-X model should be nearly centered at the origin and the body height should be 180 to produce plausible reconstruction results, since we train our model under this setting. We follow the rendering process of PIFu to render the depth map of the fitted SMPL-X model.
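To illustrate this convention, here is a minimal sketch of how one might normalize a fitted SMPL-X mesh before rendering its depth map. It assumes a y-up mesh loaded with trimesh and a known pelvis joint position; `normalize_smplx_mesh` is an illustrative helper, not part of the released code:

```python
import numpy as np
import trimesh

def normalize_smplx_mesh(mesh: trimesh.Trimesh, pelvis: np.ndarray) -> trimesh.Trimesh:
    """Center the pelvis at the origin and rescale the body height to 180,
    matching the training-time convention described above (illustrative only)."""
    mesh = mesh.copy()
    mesh.apply_translation(-pelvis)                  # move the pelvis joint to the origin
    height = mesh.bounds[1, 1] - mesh.bounds[0, 1]   # vertical extent, assuming y-up
    mesh.apply_scale(180.0 / height)                 # rescale so the body height is 180
    return mesh

# Usage (paths and the pelvis value are placeholders):
# mesh = trimesh.load('SMPLX/0001/0001.obj', process=False)
# mesh = normalize_smplx_mesh(mesh, pelvis=np.array([x, y, z]))
```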
Set the dataset path via the `data_dir` argument in `apps/canonfusion_eval.yaml`.
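For example, a minimal sketch of the relevant entry (the actual file contains additional fields, which you can leave at their defaults):

```yaml
# apps/canonfusion_eval.yaml (only the data_dir entry shown)
data_dir: /path/to/YOUR_DATASET_PATH
```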
You need to organize the dataset as follows:
YOUR_DATASET_PATH
├── dataset_name
│   ├── IMG
│   │   ├── 0001
│   │   │   ├── 000_front.png
│   │   │   └── 030_front.png
│   │   ├── 0002
│   │   │   └── 000_front.png
│   │   └── ...
│   ├── MASK (optional; if you don't have masks, you can use rembg to generate them, see the sketch below this tree)
│   │   ├── 0001
│   │   │   ├── 000_front.png
│   │   │   └── 030_front.png
│   │   ├── 0002
│   │   │   └── 000_front.png
│   │   └── ...
│   ├── DEPTH
│   │   ├── 0001
│   │   │   ├── 000_front.png
│   │   │   ├── 000_back.png
│   │   │   ├── 030_front.png
│   │   │   └── 030_back.png
│   │   ├── 0002
│   │   │   ├── 000_front.png
│   │   │   ├── 000_back.png
│   │   │   └── ...
│   └── SMPLX
│       ├── 0001
│       │   ├── 0001.json
│       │   └── 0001.obj
│       ├── 0002
│       │   ├── 0002.json
│       │   └── 0002.obj
│       └── ...
└── resource
    ├── smpl_models
    │   └── smplx
    └── pretrained_models
        ├── lbs_ckpt
        │   └── best.tar
        ├── main_ckpt
        │   ├── COLOR
        │   └── DEPTH_LBS
        └── Real-ESRGAN
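If you do not have masks, a minimal sketch for generating them with rembg (mentioned above) might look like the following; the `IMG`/`MASK` paths follow the tree above, and any other matting tool would work equally well:

```python
from pathlib import Path
from PIL import Image
from rembg import remove

# Generate MASK/ images from IMG/ (directory layout follows the tree above).
img_root, mask_root = Path('IMG'), Path('MASK')
for img_path in sorted(img_root.rglob('*.png')):
    out_path = mask_root / img_path.relative_to(img_root)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    rgba = remove(Image.open(img_path))  # background removal, returns an RGBA image
    rgba.split()[-1].save(out_path)      # keep the alpha channel as the foreground mask
```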
cd ./apps
python 01_human_recon_eval.py
If you find our work helpful, please consider citing it:
@inproceedings{shin2025canonicalfusion,
  title={CanonicalFusion: Generating Drivable 3D Human Avatars from Multiple Images},
  author={Shin, Jisu and Lee, Junmyeong and Lee, Seongmin and Park, Min-Gyu and Kang, Ju-Mi and Yoon, Ju Hong and Jeon, Hae-Gon},
  booktitle={European Conference on Computer Vision},
  pages={38--56},
  year={2025},
  organization={Springer}
}
If you have any questions, please feel free to contact us at jsshin98@gm.gist.ac.kr.