- This repository contains code to fine-tune, run inference with, and export your face detection model 😃.
- In this example, we use the pre-trained model ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.
- Here are some inference examples from the model fine-tuned on the WIDER FACE dataset.
- Several fine-tuned models were produced using different pre-trained models, datasets, and data augmentation strategies.
- It also contains a script for converting the trained model to TensorFlow Lite. The TFLite model V7 was deployed on Android in this repository 📱.
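The TensorFlow Lite conversion mentioned above can be sketched with the standard `tf.lite.TFLiteConverter` API. This is a minimal sketch, not the repository's actual conversion script; the SavedModel path in the comment is illustrative:

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

def convert_to_tflite(saved_model_dir: str, out_path: str) -> int:
    """Convert an exported SavedModel to a .tflite flatbuffer.

    Returns the size of the written model in bytes.
    """
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Optional: shrink the model with default (dynamic-range) quantization.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
    return len(tflite_model)

# Illustrative path; adjust to wherever your model was exported:
# convert_to_tflite("Tensorflow/workspace/models/my_ssd_mobnet/export/saved_model",
#                   "model.tflite")
```

Note that SSD detection models typically need a TFLite-friendly export first (the Object Detection API provides `export_tflite_graph_tf2.py` for this) before the converter is run on the resulting SavedModel.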
Step 1. Clone this repository: https://github.com/allankouidri/Face_detection_MobiNetV2_TF
Step 2. Create a new virtual environment
python -m venv tfod
Step 3. Activate your virtual environment (avoid using conda)
source tfod/bin/activate # Linux
.\tfod\Scripts\activate # Windows
Step 4. Install dependencies and add virtual environment to the Python Kernel
python -m pip install --upgrade pip
pip install ipykernel
python -m ipykernel install --user --name=tfodj
pip install -r requirements.txt
Step 5. Collect images using the notebook 1. Image Collection.ipynb, making sure you change the kernel to the virtual environment (tfodj).
Step 6. Manually divide the collected images into two folders, train and test. All images and their annotation files should now be split between the following two folders:
\TFODCourse\Tensorflow\workspace\images\train
\TFODCourse\Tensorflow\workspace\images\test
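The manual split in Step 6 can also be scripted. The sketch below moves a random 80/20 split of images, together with their matching Pascal VOC .xml annotations, into train and test folders; the folder paths and the 0.8 ratio are assumptions to adapt to your workspace:

```python
import os
import random
import shutil

def split_dataset(images_dir: str, train_dir: str, test_dir: str,
                  train_ratio: float = 0.8, seed: int = 42):
    """Move images (and matching .xml annotations) into train/test folders."""
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    images = [f for f in os.listdir(images_dir)
              if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_ratio)
    for i, name in enumerate(images):
        dest = train_dir if i < cut else test_dir
        shutil.move(os.path.join(images_dir, name), os.path.join(dest, name))
        # Move the annotation alongside its image, if present.
        xml = os.path.splitext(name)[0] + ".xml"
        if os.path.exists(os.path.join(images_dir, xml)):
            shutil.move(os.path.join(images_dir, xml), os.path.join(dest, xml))
    return cut, len(images) - cut

# Illustrative usage (paths assumed, not verified against the repo):
# split_dataset("Tensorflow/workspace/images/collected",
#               "Tensorflow/workspace/images/train",
#               "Tensorflow/workspace/images/test")
```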
Step 7. Begin the training process by opening 2. Fine-tuning_detection_export.ipynb. This notebook will walk you through installing the TensorFlow Object Detection API, making detections, and saving and exporting your model.
Step 8. During this process the notebook will install the TensorFlow Object Detection API. You should receive a notification that the API has installed successfully, with the last line of the verification output stating OK. If not, resolve the installation errors by referring to the Error Guide.md in this folder.
Step 9. Once you get to step 6. Train the model inside the notebook, you may choose to train the model from within the notebook. However, I have noticed that training from a separate terminal on a Windows machine lets you display live loss metrics.
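Before training starts, the notebook updates the pipeline.config copied from the pre-trained checkpoint. A typical fragment looks like the following; all paths and values here are illustrative and depend on your own setup:

```
model {
  ssd {
    num_classes: 1  # a single "face" class
    ...
  }
}
train_config {
  batch_size: 4  # lower this if you hit out-of-memory errors
  fine_tune_checkpoint: "Tensorflow/workspace/pre-trained-models/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}
```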
Step 10. You can optionally evaluate your model inside TensorBoard. Once the model has been trained and you have run the evaluation command under Step 7, navigate to the evaluation folder for your trained model, e.g.
cd Tensorflow/workspace/models/my_ssd_mobnet/eval
and open TensorBoard with the following command:
tensorboard --logdir=.
TensorBoard will be accessible through your browser, and you will be able to see metrics including mAP (mean Average Precision) and Recall.