TimoBolkart/FLAME-Universe

🔥 FLAME Universe 🔥

This repository presents a list of publicly available resources such as code, datasets, and scientific papers for the 🔥 FLAME 🔥 3D head model. We aim to keep the list up to date. You are invited to add missing FLAME-based resources (publications, code repositories, datasets) either in the discussions or via a pull request.

🔥 FLAME 🔥

Never heard of FLAME?

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. It combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication. FLAME is publicly available under a Creative Commons Attribution license.
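The blendshape part of this formulation can be illustrated with a toy NumPy sketch. All arrays below are random placeholders standing in for the learned model components (the real bases ship with the model download); only the shapes and the additive structure reflect FLAME:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5023                 # FLAME's vertex count
S, E, P = 300, 100, 36   # identity, expression, and pose-corrective basis sizes

# Toy stand-ins for the learned components (random, for illustration only)
template = rng.normal(size=(N, 3))       # mean head mesh
shape_dirs = rng.normal(size=(N, 3, S))  # linear identity shape basis
expr_dirs = rng.normal(size=(N, 3, E))   # global expression blendshape basis
pose_dirs = rng.normal(size=(N, 3, P))   # pose-dependent corrective basis

def flame_vertices(betas, psi, pose_feature):
    """Sum the template with identity, expression, and pose-corrective offsets.

    The full model additionally applies linear blend skinning around the
    articulated neck, jaw, and eyeball joints; that step is omitted here.
    """
    v = template.copy()
    v += shape_dirs @ betas        # identity offsets
    v += expr_dirs @ psi           # expression offsets
    v += pose_dirs @ pose_feature  # pose-dependent corrective offsets
    return v

verts = flame_vertices(rng.normal(size=S) * 0.03,
                       rng.normal(size=E) * 0.03,
                       np.zeros(P))
print(verts.shape)  # (5023, 3)
```

With all coefficients zero, the function returns the template mesh unchanged, which is the sanity check typically used when wiring up a FLAME layer.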

To download the FLAME model, sign up under MPI-IS/FLAME and agree to the model license. You can then download FLAME and other FLAME-related resources such as landmark embeddings, segmentation masks, and the quad template mesh from MPI-IS/FLAME/download. You can also download the model with a bash script such as fetch_FLAME.

Code

List of public repositories that use FLAME (alphabetical order).
  • BFM_to_FLAME: Conversion from Basel Face Model (BFM) to FLAME.
  • CVTHead: Controllable head avatar generation from a single image.
  • DECA: Reconstruction of 3D faces with animatable facial expression detail from a single image.
  • DiffPoseTalk: Speech-driven stylistic 3D facial animation.
  • diffusion-rig: Personalized model to edit facial expressions, head pose, and lighting in portrait images.
  • EMOCA: Reconstruction of emotional 3D faces from a single image.
  • EMOTE: Emotional speech-driven 3D face animation.
  • expgan: Face image generation with expression control.
  • FaceFormer: Speech-driven facial animation of meshes in FLAME mesh topology.
  • FLAME-Blender-Add-on: FLAME Blender Add-on.
  • flame-fitting: Fitting of FLAME to scans.
  • flame-head-tracker: FLAME-based monocular video tracking.
  • FLAME_PyTorch: FLAME PyTorch layer.
  • GANHead: Animatable neural head avatar.
  • GaussianAvatars: Photorealistic head avatars with FLAME-rigged 3D Gaussians.
  • GIF: Generating face images with FLAME parameter control.
  • INSTA: Volumetric head avatars from videos in less than 10 minutes.
  • INSTA-pytorch: Volumetric head avatars from videos in less than 10 minutes (PyTorch).
  • learning2listen: Modeling interactional communication in dyadic conversations.
  • LightAvatar-TensorFlow: Use of neural light field (NeLF) to build photorealistic 3D head avatars.
  • MICA: Reconstruction of metrically accurate 3D faces from a single image.
  • MeGA: Reconstruction of an editable hybrid mesh-Gaussian head avatar.
  • metrical-tracker: Metrical face tracker for monocular videos.
  • MultiTalk: Speech-driven facial animation of meshes in FLAME mesh topology.
  • NED: Facial expression of emotion manipulation in videos.
  • Next3D: 3D generative model with FLAME parameter control.
  • NeRSemble: Building a neural head avatar from multi-view video data.
  • neural-head-avatars: Building a neural head avatar from video sequences.
  • NeuralHaircut: Creation of strand-based hairstyles from single-view or multi-view videos.
  • photometric_optimization: Fitting of FLAME to images using differentiable rendering.
  • RingNet: Reconstruction of 3D faces from a single image.
  • ROME: Creation of personalized avatar from a single image.
  • SAFA: Animation of face images.
  • Semantify: Semantic control over 3DMM parameters.
  • SMIRK: Reconstruction of emotional 3D faces from a single image.
  • SPECTRE: Speech-aware 3D face reconstruction from images.
  • SplattingAvatar: Real-time human avatars with mesh-embedded Gaussian splatting.
  • TF_FLAME: Fit FLAME to 2D/3D landmarks, FLAME meshes, or sample textured meshes.
  • TRUST: Racially unbiased skin tone estimation from images.
  • video-head-tracker: Track 3D heads in video sequences.
  • VOCA: Speech-driven facial animation of meshes in FLAME mesh topology.

Datasets

List of datasets with meshes in FLAME topology.
  • BP4D+: 127 subjects, one neutral expression mesh each.
  • CoMA dataset: 12 subjects, 12 extreme dynamic expressions each.
  • D3DFACS: 10 subjects, 519 dynamic expressions in total.
  • Decaf dataset: Deformation capture for face and hand interactions.
  • FaceWarehouse: 150 subjects, one neutral expression mesh each.
  • FaMoS: 95 subjects, 28 dynamic expressions and head poses each, about 600K frames in total.
  • Florence 2D/3D: 53 subjects, one neutral expression mesh each.
  • FRGC: 531 subjects, one neutral expression mesh each.
  • LYHM: 1216 subjects, one neutral expression mesh each.
  • MEAD reconstructions: 3D face reconstructions for MEAD (emotional talking-face dataset).
  • NeRSemble dataset: 10 sequences of multi-view images and 3D faces in FLAME mesh topology.
  • Stirling: 133 subjects, one neutral expression mesh each.
  • VOCASET: 12 subjects, 40 speech sequences each with synchronized audio.

Publications

List of FLAME-based scientific publications.

2025

2024

2023

2022

2021

2020

2019