
Facial Diffusion

Welcome to the Conditional Diffusion Model Training Repository!

Explore the world of diffusion models with conditional input. Several trained models are available on the Diffusers Hub. For detailed guidance, refer to the "Run-a-Model" notebook provided. Additionally, you can easily access and download the Facial 40 Attributes model from this repository. Here, "landmarks" refers to the 468 XYZ facial points extracted by Google's MediaPipe Face Mesh model.
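As a minimal sketch of the landmark condition described above: MediaPipe Face Mesh produces 468 landmarks, each with x, y, z coordinates. The helper name and the flattened-vector layout below are assumptions for illustration, not the repository's actual preprocessing code.

```python
import numpy as np

NUM_LANDMARKS = 468  # MediaPipe Face Mesh landmark count

def landmarks_to_condition(landmarks):
    """Flatten a (468, 3) array of XYZ points into a single conditioning vector.

    Hypothetical helper: the exact tensor layout the training code expects
    is an assumption here.
    """
    pts = np.asarray(landmarks, dtype=np.float32)
    assert pts.shape == (NUM_LANDMARKS, 3), f"unexpected shape {pts.shape}"
    return pts.reshape(-1)  # shape: (1404,) = 468 * 3

# Dummy points stand in for real MediaPipe output:
dummy = np.zeros((NUM_LANDMARKS, 3), dtype=np.float32)
cond = landmarks_to_condition(dummy)
print(cond.shape)  # (1404,)
```

In practice the real XYZ points would come from running MediaPipe Face Mesh on a face image before being passed to the conditional model.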

Input condition (clip-landmark-arcface): [screenshot]

Sample: [screenshot]

Installing:

conda env create -f environment.yaml
conda activate diffusers

Optional:

pip install -U xformers

Training:

accelerate launch --config_file configs/accelerate.yaml train_conditioned.py --config ./configs/ffhq-vqvae-clip.yaml