This repository is a reproduction of Lifting 2D StyleGAN for 3D-Aware Face Generation by Yichun Shi, Divyansh Aggarwal, and Anil K. Jain, carried out as part of the ML Reproducibility Challenge 2021.
You can create the conda environment by using:
conda env create -f environment.yml
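After creating the environment, activate it before running any of the commands below. The environment name is defined in environment.yml; the name used here is only a placeholder:
conda activate lifted-gan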
Download the pre-trained StyleGAN and face embedding network (needed for training) from here and unzip them into the pretrained/ folder. Then you can start training by:
python tools/train.py config/ffhq_256.py
Similarly, you can start training for AFHQ Cat by:
python tools/train.py config/cats_256.py
For CelebA, in addition to the instructions above, download the checkpoint_stylegan_celeba folder and place it under pretrained/. Then you can start training by:
python tools/train.py config/celeba_256.py
As in the original repository, we use a re-cropped version of FFHQ to match the style of our face embedding network. You can find this dataset here. The cats dataset can be found here.
To train a StyleGAN2 on your own dataset, check the contents of the stylegan2-pytorch folder. After training a StyleGAN2, you can lift it using the training code above; a rough sketch of the StyleGAN2 training commands is shown below.
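As a rough guide, that folder follows rosinality's stylegan2-pytorch workflow: images are first packed into an LMDB dataset and training is then launched on it. The exact flags in the bundled copy may differ, so treat the following commands (run from inside the stylegan2-pytorch folder) only as an illustration:
python prepare_data.py --out /path/to/your/lmdb --size 256 /path/to/your/images
python train.py --size 256 /path/to/your/lmdb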
You can generate random samples from a LiftedGAN by running:
python tools/generate_images.py /path/to/the/checkpoint --output_dir your/output/dir
You can generate random samples from two different LiftedGANs (with the same latent vector) by running:
python tools/generate_images_re.py --model_original /path/to/the/checkpoint --model_reproduced /path/to/the/checkpoint --output_dir your/output/dir
You can run viewpoint manipulation for a single LiftedGAN by:
python tools/generate_poses.py /path/to/the/checkpoint --output_dir your/output/dir --type yaw
You can run viewpoint manipulation using two different LiftedGANs (with the same latent vector) by:
python tools/generate_poses_re.py --model_original /path/to/the/checkpoint --model_reproduced /path/to/the/checkpoint --output_dir your/output/dir --type yaw
You can run light direction manipulation for a single LiftedGAN by:
python tools/generate_lighting.py /path/to/the/checkpoint --output_dir your/output/dir
You can run light direction manipulation using two different LiftedGANs (with the same latent vector) by:
python tools/generate_lighting_re.py --model_original /path/to/the/checkpoint --model_reproduced /path/to/the/checkpoint --output_dir your/output/dir
You can run the command below to interpolate between two face poses:
python tools/generate_poses_interpolate.py /path/to/the/checkpoint --output_dir your/output/dir
You can run the command below to interpolate between two face poses using two different LiftedGANs (with the same latent vector):
python tools/generate_poses_interpolate_re.py --model_original /path/to/the/checkpoint --model_reproduced /path/to/the/checkpoint --output_dir your/output/dir --type yaw
For all experiments, make sure the checkpoint file and its config.py file are in the same folder. For the viewpoint manipulation experiments, you can change the --type parameter to toggle between yaw and pitch manipulation. An illustrative checkpoint folder layout is shown below.
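For example, a checkpoint folder passed to the scripts above might look like the following; the file names are only placeholders, what matters is that config.py sits next to the checkpoint:
your/checkpoint/dir/
    checkpoint.pt
    config.py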
We use the code from rosinality's stylegan2-pytorch to compute FID. To compute the FID, you first need to compute the statistics of real images:
python utils/calc_inception.py /path/to/the/dataset/lmdb
You can skip this step if you use our pre-calculated statistics file (link). Then, to compute the FID, run:
python tools/test_fid.py /path/to/the/checkpoint --inception /path/to/the/inception/file
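For reference, the reported FID is the standard Fréchet distance between Gaussian fits of the Inception features of real and generated images:
FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}),
where (μ_r, Σ_r) and (μ_g, Σ_g) are the mean and covariance of the Inception activations for real and generated images, respectively. The statistics file above stores the real-image statistics so they do not have to be recomputed for every evaluation.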
Results: side-by-side comparisons of the original and reproduced models for face generation, viewpoint manipulation (yaw), viewpoint manipulation (pitch), and re-lighting.