Can you virtually remove a face mask to see what a person looks like underneath? Our Machine Learning team shows it's possible with an ML solution based on image inpainting.
Figure: Model Architecture (U-Net based GAN with skip connections)
The pervasive adoption of face masks during the COVID-19 pandemic has introduced significant challenges to facial recognition systems, particularly in security and authentication. This project addresses the issue of incomplete facial data caused by mask occlusion by employing Generative Adversarial Networks (GANs) to perform high-resolution face inpainting.
Our approach involves:
- U-Net Architecture with skip connections to retain spatial details.
- A total generator loss function combining Mean Absolute Error (MAE) with a regularization term to prevent overfitting (a sketch of the generator and this loss follows the list).
- Training on the CelebA-HQ dataset, generating synthetic masked faces for evaluation.
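A minimal sketch of what such a generator and loss might look like in TensorFlow/Keras. The layer widths, kernel sizes, input resolution, and `l2_weight` below are illustrative assumptions, not the published architecture or hyperparameters:

```python
import tensorflow as tf
from tensorflow.keras import Model, layers

def build_generator(input_shape=(256, 256, 3)):
    """U-Net-style generator: an encoder-decoder with skip connections."""
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []

    # Encoder: each block halves the spatial resolution and stores its
    # feature map for the matching decoder stage (the skip connection).
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
        skips.append(x)

    # Bottleneck at the lowest resolution.
    x = layers.Conv2D(512, 4, strides=2, padding="same", activation="relu")(x)

    # Decoder: upsample and concatenate the encoder features of the same
    # size, so fine spatial details bypass the bottleneck.
    for filters, skip in zip((256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, skip])

    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return Model(inputs, outputs, name="unet_generator")

def total_generator_loss(model, y_true, y_pred, l2_weight=1e-4):
    """MAE reconstruction term plus an L2 weight-regularization term."""
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    l2 = tf.add_n([tf.nn.l2_loss(w) for w in model.trainable_weights])
    return mae + l2_weight * l2
```

In the full GAN setup, this reconstruction loss would additionally be combined with the adversarial term supplied by the discriminator.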
Key Metrics:
- PSNR: 22.25
- SSIM: 0.874
- Performance surpasses traditional GANs, non-learning-based methods, and certain diffusion-based techniques.
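For context, PSNR and SSIM can be computed with TensorFlow's built-in image metrics. The snippet below is a generic evaluation sketch (images assumed to be scaled to [0, 1]), not the project's evaluation script:

```python
import tensorflow as tf

def evaluate_batch(ground_truth, reconstructed):
    """Return mean PSNR and SSIM for a batch of images in [0, 1]."""
    psnr = tf.image.psnr(ground_truth, reconstructed, max_val=1.0)
    ssim = tf.image.ssim(ground_truth, reconstructed, max_val=1.0)
    return float(tf.reduce_mean(psnr)), float(tf.reduce_mean(ssim))
```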
Figure: Examples of Results (input | expected output | actual output)
🔗 Watch the Project Walkthrough on YouTube
- Clone the repository and navigate to the project directory.
- Ensure you have Conda installed.
- If your system does not have an Nvidia CUDA device, comment out `tensorflow-gpu==2.2.0` in the `environment.yml` file.
- For macOS users, replace `tensorflow==2.2.0` with `tensorflow==2.0.0` in the `environment.yml` file.
- Create and activate the environment:

```bash
conda env create -f environment.yml
conda activate mask2face
```
Download the Labeled Faces in the Wild (LFW) dataset and extract it to the `data/` folder. Alternatively, use `mask2face.ipynb` to automate this step.
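If you prefer to script the download, a minimal sketch along these lines should work. The archive URL is an assumption based on the standard LFW distribution, so verify it against the official LFW page:

```python
import tarfile
import urllib.request
from pathlib import Path

# Assumed archive URL for the standard LFW distribution.
LFW_URL = "http://vis-www.cs.umass.edu/lfw/lfw.tgz"
DATA_DIR = Path("data")

DATA_DIR.mkdir(exist_ok=True)
archive = DATA_DIR / "lfw.tgz"
if not archive.exists():
    urllib.request.urlretrieve(LFW_URL, archive)  # download the archive
with tarfile.open(archive) as tar:
    tar.extractall(DATA_DIR)  # extract the images into data/
```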
For better results, use the CelebA-HQ dataset.
Modify `configuration.json` to set:

- `input_images_path`: Path to the input dataset.
- `train_data_path` & `test_data_path`: Paths for storing training and testing data.
- `train_image_count` & `test_image_count`: Number of image pairs generated for training and testing.
- `train_data_limit` & `test_data_limit`: Limits on the number of training and testing pairs used.
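For illustration, such a `configuration.json` could be generated as below; the paths and counts are placeholder assumptions, not recommended values:

```python
import json

# Hypothetical configuration — adjust every value to your own setup.
config = {
    "input_images_path": "data/celeba-hq",  # path to the input dataset
    "train_data_path": "data/train",        # where generated training pairs are stored
    "test_data_path": "data/test",          # where generated testing pairs are stored
    "train_image_count": 20000,             # image pairs generated for training
    "test_image_count": 1000,               # image pairs generated for testing
    "train_data_limit": 20000,              # training pairs actually used
    "test_data_limit": 1000,                # testing pairs actually used
}

with open("configuration.json", "w") as f:
    json.dump(config, f, indent=2)
```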
Run a Jupyter notebook server and open `mask2face.ipynb`.
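Assuming the `mask2face` environment from the setup steps above is activated, the standard launch command is:

```bash
jupyter notebook
```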
📜 Citation
If you use this work, please cite:
A. Yeole, “Image Inpainting for Missing Facial Data Recovery in Security Settings,” Journal of Electrical Systems, vol. 20, no. 3, pp. 3165–3171, 2024. Available: https://journal.esrgroups.org/jes/article/view/4841