
Quickstart

Takuya Takeuchi edited this page Dec 24, 2022 · 19 revisions

Import

Import FaceRecognitionDotNet into your .NET project. Choose one of the following packages.
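For example, the CPU package can be added from NuGet like this (the package ids for the MKL and CUDA variants differ; check NuGet for the exact id that matches your target):

```shell
# CPU-only build (assumed package id; verify on nuget.org)
dotnet add package FaceRecognitionDotNet
```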

Common Prerequisite

  • CPU supports AVX instructions
    • Older CPUs may not support AVX, and the program will not run on them

CPU

Intel CPU

GPU

Install dependencies

Windows

⚠️ WARNING for Visual Studio users

You must build projects that use FaceRecognitionDotNet as x64 rather than AnyCPU.

for MKL

Deploy the following libraries from Intel MKL 2019 Initial Release

  • libiomp5md.dll
  • mkl_core.dll
  • mkl_def.dll
  • mkl_intel_thread.dll

for CUDA

Copy the CUDA and cuDNN libraries into your application directory. These libraries should be in %CUDA_PATH%\bin, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin.

Since CUDA 11

  • cublas64_11.dll
  • cublasLt64_11.dll
  • cudnn_adv_infer64_8.dll
  • cudnn_adv_train64_8.dll
  • cudnn_cnn_infer64_8.dll
  • cudnn_cnn_train64_8.dll
  • cudnn_ops_infer64_8.dll
  • cudnn_ops_train64_8.dll
  • cudnn64_8.dll
  • curand64_10.dll
  • cusolver64_11.dll

Before CUDA 11

  • cudnn64_7.dll

Linux

RHEL/Fedora/CentOS

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install libX11-devel openblas-devel libgdiplus

Ubuntu/Debian

sudo apt-get install libx11-6 libopenblas-dev libgdiplus

OSX

Install X11
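On macOS, X11 is commonly provided by XQuartz. One way to install it, assuming Homebrew is available:

```shell
# XQuartz supplies the X11 libraries on macOS
brew install --cask xquartz
```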

Install libgdiplus

brew install mono-libgdiplus

Deploy model files

Download the 4 model files from face_recognition_models. Then copy the files to the directory from which you want to deploy them.

⚠️ WARNING

The training programs and functions behind these models are experimental. The deployment steps and model file names may change.
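One way to fetch the model files is to clone the upstream repository and copy the .dat files out of it (the models/ subdirectory path below is an assumption; verify it against the repository layout):

```shell
# Clone the upstream model repository and copy the .dat files
# into the directory your application will load models from.
git clone https://github.com/ageitgey/face_recognition_models
cp face_recognition_models/face_recognition_models/models/*.dat /path/to/your/models/
```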

Usage

1. Initialize

Create a FaceRecognition instance. You must specify either the path of the model file directory or the model binary data. The internal library used by FaceRecognition cannot handle some non-alphanumeric characters, so you should specify an encoding so the library can interpret the path string.

FaceRecognition.InternalEncoding = System.Text.Encoding.GetEncoding("shift_jis");

string directory = @"C:\models";
using (FaceRecognition fr = FaceRecognition.Create(directory))

or

var modelParameter = new ModelParameter
{
    PosePredictor68FaceLandmarksModel = File.ReadAllBytes("shape_predictor_68_face_landmarks.dat"),
    PosePredictor5FaceLandmarksModel = File.ReadAllBytes("shape_predictor_5_face_landmarks.dat"),
    FaceRecognitionModel = File.ReadAllBytes("dlib_face_recognition_resnet_model_v1.dat"),
    CnnFaceDetectorModel = File.ReadAllBytes("mmod_human_face_detector.dat")
};
using (FaceRecognition fr = FaceRecognition.Create(modelParameter))

2. Load Image

Load images from file. Here, each file contains one human face; of course, a picture can contain multiple faces!

using (Image imageA = FaceRecognition.LoadImageFile(imagePathA))
using (Image imageB = FaceRecognition.LoadImageFile(imagePathB))

3. Find Face

Detect faces in imageA and imageB. The library may detect multiple faces in each image.

IEnumerable<Location> locationsA = fr.FaceLocations(imageA);
IEnumerable<Location> locationsB = fr.FaceLocations(imageB);

4. Get Face Encoding

Compute face encodings to compare faces. You must dispose of FaceEncoding objects after you are finished using them.

IEnumerable<FaceEncoding> encodingA = fr.FaceEncodings(imageA, locationsA);
IEnumerable<FaceEncoding> encodingB = fr.FaceEncodings(imageB, locationsB);

5. Compare face

Compare FaceEncoding objects to determine whether the detected faces belong to the same person. tolerance is the threshold: if the distance between the compared faces is lower than tolerance, the faces match; otherwise they do not.

const double tolerance = 0.6d;
bool match = FaceRecognition.CompareFace(encodingA.First(), encodingB.First(), tolerance);
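If you need the raw distance rather than a boolean, the library also exposes a FaceRecognition.FaceDistance method; the call below is a sketch, so verify the exact signature against the version of the API you are using:

```csharp
// Lower distance means more similar faces; CompareFace is
// equivalent to checking distance < tolerance.
double distance = FaceRecognition.FaceDistance(encodingA.First(), encodingB.First());
bool match = distance < tolerance;
```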

6. Age/Gender estimation

using (var ageEstimator = new SimpleAgeEstimator(Path.Combine("models", "adience-age-network.dat")))
using (var genderEstimator = new SimpleGenderEstimator(Path.Combine("models", "utkface-gender-network.dat")))
{
    fr.CustomAgeEstimator = ageEstimator;
    fr.CustomGenderEstimator = genderEstimator;

    // image is a loaded Image; box is a Location obtained from fr.FaceLocations(image)
    var ageRange = ageEstimator.Groups.Select(range => $"({range.Start}, {range.End})").ToArray();
    var age = ageRange[fr.PredictAge(image, box)];
    var gender = fr.PredictGender(image, box);
}


7. Helen face landmark detection

using (var detector = new HelenFaceLandmarkDetector(Path.Combine("models", "helen-dataset.dat")))
{
    fr.CustomFaceLandmarkDetector = detector;

    var locations = fr.FaceLocations(image);
    var landmarks = fr.FaceLandmark(image, locations, PredictorModel.Custom);
}


8. Head Pose estimation

using (var estimator = new SimpleHeadPoseEstimator(Path.Combine("models", "300w-lp-roll-krls_0.001_0.1.dat"),
                                                   Path.Combine("models", "300w-lp-pitch-krls_0.001_0.1.dat"),
                                                   Path.Combine("models", "300w-lp-yaw-krls_0.001_0.1.dat")))
{
    fr.CustomHeadPoseEstimator = estimator;

    var landmark = fr.FaceLandmark(image, null, PredictorModel.Large).First();
    var headPose = fr.PredictHeadPose(landmark);
}

9. Emotion estimation

using (var estimator = new SimpleEmotionEstimator(Path.Combine("models", "corrective-reannotation-of-fer-ck-kdef-emotion-network_test_best.dat")))
{
    fr.CustomEmotionEstimator = estimator;

    var emotion = fr.PredictEmotion(image);
}

💡 NOTE

  • EmotionTrainingV2
    • corrective-reannotation-of-fer-ck-kdef-emotion-network_test_best.dat

Build and Run

Build your .NET project and run the program!