This project is part of the TabularConf2021 conference; a video of the talk (in Spanish) can be seen here.
This project aims to show the full pipeline of how to develop an object detector focused on animals. The different stages are:
- Setting up the development environment.
- Gathering data to train on.
- Labeling the images for object detection with labelImg.
- Training the model with YOLOv3 and ImageAI.
- Evaluating the model and understanding what mAP is.
- Using the model in real time with a camera.
To simplify the whole pipeline we use the ImageAI library.
If you've never used Pyenv, I strongly recommend it as your local Python environment manager; it supports virtualenv and Anaconda. 👍
Once you have installed Pyenv:
- Download this repo:
git clone https://github.com/Matesanz/pet-detector.git
cd pet-detector
- Install anaconda3-2020.11, create a conda environment, set it as your local Python version and install the dependencies:
pyenv install anaconda3-2020.11
pyenv virtualenv anaconda3-2020.11 tf
pyenv local tf
conda env update -f environment.yml
If you run conda list you should see all the dependencies installed (such as opencv==4.5, cuda-toolkit==11, cudnn==8.0.5 and imageai==2.1.6).
- Download the pretrained YOLOv3 weights:
wget https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/pretrained-yolov3.h5
- Register the conda environment as a Jupyter kernel and run the project:
python -m ipykernel install --user --name=pet-detection
jupyter notebook
Done 😄👍, now you can start messing around!
This is a manual step: search for images that match the classes you want to train on. Places such as Kaggle and Google Open Images should cover most common datasets. Inside the Jupyter notebook there is a script that helps you download images using the google-images-download library.
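For reference, a minimal download sketch with the google-images-download package might look like the snippet below; the keywords, limit and output folder are placeholders, adapt them to your own classes:

```python
# Minimal sketch using the google-images-download package (already available in the
# tf environment if listed in environment.yml). Keywords, limit and output folder
# are placeholders: adapt them to the classes you want to detect.
from google_images_download import google_images_download

downloader = google_images_download.googleimagesdownload()
downloader.download({
    "keywords": "dog,cat",            # classes to search for
    "limit": 100,                     # images per keyword
    "format": "jpg",
    "output_directory": "data/raw_images",
})
```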
Simply run in your terminal (inside the tf environment):
labelImg
Start labeling; more info on how to use labelImg can be found here.
Remember to store your images as shown here. There is also a script inside the Jupyter notebook showing how to do it. You can check how the model training is going by running this command in the terminal:
tensorboard --logdir <path_to_your_dataset_folder>/logs
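As a rough sketch of what the training script in the notebook does with ImageAI's custom YOLOv3 trainer (the dataset folder name, class names and hyperparameters below are assumptions, tune them to your own data):

```python
# Rough training sketch with ImageAI's custom YOLOv3 trainer.
# "pets" is a placeholder dataset folder containing train/ and validation/
# subfolders, each with images/ and annotations/ (the Pascal VOC XMLs from labelImg).
from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="pets")
trainer.setTrainConfig(
    object_names_array=["dog", "cat"],             # your labeled classes
    batch_size=4,
    num_experiments=100,                           # number of training epochs
    train_from_pretrained_model="pretrained-yolov3.h5",
)
trainer.trainModel()
```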
In object detection, mAP (mean Average Precision) is used as a benchmark to measure model performance. There is also a script inside the Jupyter notebook showing how to get the model's mAP.
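A minimal sketch of that evaluation with ImageAI is shown below; the paths and thresholds are assumptions (ImageAI saves checkpoints under the dataset's models/ folder and the detection config under its json/ folder):

```python
# Sketch: compute the mAP of the trained checkpoints with ImageAI.
# Paths are placeholders matching the "pets" dataset folder assumed above.
from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="pets")
trainer.evaluateModel(
    model_path="pets/models",                        # folder with saved .h5 checkpoints
    json_path="pets/json/detection_config.json",
    iou_threshold=0.5,                               # IoU needed to count a detection as correct
    object_threshold=0.3,                            # minimum confidence
    nms_threshold=0.5,
)
```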
There is a script inside the Jupyter notebook showing how to use your trained neural network to detect objects in real time.
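Roughly, it does something like the sketch below with ImageAI's video detector; the checkpoint file name is an assumption, use the model from your dataset's models/ folder that scored the best mAP:

```python
# Minimal sketch: real-time detection from a webcam with ImageAI.
# The .h5 file name is a placeholder; pick the checkpoint with the best mAP
# and the detection_config.json generated during training.
import cv2
from imageai.Detection.Custom import CustomVideoObjectDetection

detector = CustomVideoObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("pets/models/detection_model-ex-100--loss-5.00.h5")
detector.setJsonPath("pets/json/detection_config.json")
detector.loadModel()

camera = cv2.VideoCapture(0)  # default webcam
detector.detectObjectsFromVideo(
    camera_input=camera,
    output_file_path="camera_detections",   # ImageAI appends the video extension
    frames_per_second=20,
    minimum_percentage_probability=40,
    log_progress=True,
)
```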