Quickstart
Import FaceRecognitionDotNet into your .NET project. You can choose from the following packages.
- CPU: requires a CPU that supports the AVX instruction set
- Legacy CPUs may not support AVX, and the program will not work on them
You must build source code that uses FaceRecognitionDotNet as x64 rather than AnyCPU.
Deploy the following libraries from the Intel MKL 2019 Initial Release:
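In an SDK-style project file, one way to enforce this is to set the platform target explicitly (a sketch; `PlatformTarget` is a standard MSBuild property, but your project layout may differ):

```xml
<!-- Force a 64-bit build; AnyCPU builds will fail to load the native dlib binaries. -->
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```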
- libiomp5md.dll
- mkl_core.dll
- mkl_def.dll
- mkl_intel_thread.dll
Copy the cuDNN libraries into your application directory.
These libraries should be in %CUDA_PATH%\bin, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin.
- cublas64_11.dll
- cublasLt64_11.dll
- cudnn_adv_infer64_8.dll
- cudnn_adv_train64_8.dll
- cudnn_cnn_infer64_8.dll
- cudnn_cnn_train64_8.dll
- cudnn_ops_infer64_8.dll
- cudnn_ops_train64_8.dll
- cudnn64_8.dll
- curand64_10.dll
- cusolver64_11.dll
- cudnn64_7.dll
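One way to deploy the native DLLs above is to let MSBuild copy them next to your executable on every build (a sketch; the `native` folder name is hypothetical — place the MKL and CUDA/cuDNN DLLs wherever suits your layout):

```xml
<!-- Copy native MKL / CUDA / cuDNN DLLs to the output directory on build. -->
<ItemGroup>
  <None Include="native\*.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```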
On CentOS:
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install libX11-devel openblas-devel libgdiplus
On Ubuntu:
sudo apt-get install libx11-6 libopenblas-dev libgdiplus
On macOS:
- Please refer to About X11 for Mac
brew install mono-libgdiplus
Download the 4 model files from face_recognition_models. Then, copy the files to the directory you want to deploy them in.
The above training program and functions are experimental. Deployment steps and model file names may change.
Create a FaceRecognition instance. You must specify the model files' directory path or the model binary data. The internal library of FaceRecognition cannot handle some non-alphanumeric characters, so you should specify the encoding so that the library can understand the path string.
FaceRecognition.InternalEncoding = System.Text.Encoding.GetEncoding("shift_jis");
string directory = @"C:\models";
using (FaceRecognition fr = FaceRecognition.Create(directory))
or
var modelParameter = new ModelParameter
{
PosePredictor68FaceLandmarksModel = File.ReadAllBytes("shape_predictor_68_face_landmarks.dat"),
PosePredictor5FaceLandmarksModel = File.ReadAllBytes("shape_predictor_5_face_landmarks.dat"),
FaceRecognitionModel = File.ReadAllBytes("dlib_face_recognition_resnet_model_v1.dat"),
CnnFaceDetectorModel = File.ReadAllBytes("mmod_human_face_detector.dat")
};
using (FaceRecognition fr = FaceRecognition.Create(modelParameter))
Load images from files. Here, each of these files contains one human face. Of course, a picture can contain multiple faces!
using (Image imageA = FaceRecognition.LoadImageFile(imagePathA))
using (Image imageB = FaceRecognition.LoadImageFile(imagePathB))
Detect faces in imageA and imageB. In practice, the library may detect multiple faces in each image.
IEnumerable<Location> locationsA = fr.FaceLocations(imageA);
IEnumerable<Location> locationsB = fr.FaceLocations(imageB);
Compute face encodings to compare faces. You must dispose FaceEncoding objects after you are finished using them.
IEnumerable<FaceEncoding> encodingA = fr.FaceEncodings(imageA, locationsA);
IEnumerable<FaceEncoding> encodingB = fr.FaceEncodings(imageB, locationsB);
Compare FaceEncoding objects to determine whether the detected faces are the same. tolerance is the threshold: if the distance between the compared faces is lower than tolerance, the faces are considered the same; otherwise they are not.
const double tolerance = 0.6d;
bool match = FaceRecognition.CompareFace(encodingA.First(), encodingB.First(), tolerance);
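If you need the raw distance rather than a boolean, the comparison above can be extended with FaceDistance (a sketch, assuming encodingA and encodingB from the previous snippets are non-empty; it also shows disposing the encodings as required):

```csharp
// Euclidean distance between the two encodings; lower means more similar.
double distance = FaceRecognition.FaceDistance(encodingA.First(), encodingB.First());
bool sameFace = distance < tolerance;

// FaceEncoding wraps native resources and must be disposed explicitly.
foreach (FaceEncoding encoding in encodingA) encoding.Dispose();
foreach (FaceEncoding encoding in encodingB) encoding.Dispose();
```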
using (var ageEstimator = new SimpleAgeEstimator(Path.Combine("models", "adience-age-network.dat")))
using (var genderEstimator = new SimpleGenderEstimator(Path.Combine("models", "utkface-gender-network.dat")))
{
fr.CustomAgeEstimator = ageEstimator;
fr.CustomGenderEstimator = genderEstimator;
var ageRange = ageEstimator.Groups.Select(range => $"({range.Start}, {range.End})").ToArray();
var age = ageRange[fr.PredictAge(image, box)];
var gender = fr.PredictGender(image, box);
}
- AgeTraining: adience-age-network.dat
- GenderTraining: adience-gender-network.dat
using (var detector = new HelenFaceLandmarkDetector(Path.Combine("models", "helen-dataset.dat")))
{
fr.CustomFaceLandmarkDetector = detector;
var locations = fr.FaceLocations(image);
var landmarks = fr.FaceLandmark(image, locations, PredictorModel.Custom);
}
- HelenTraining: helen-dataset.dat
using (var estimator = new SimpleHeadPoseEstimator(Path.Combine("models", "300w-lp-roll-krls_0.001_0.1.dat"),
Path.Combine("models", "300w-lp-pitch-krls_0.001_0.1.dat"),
Path.Combine("models", "300w-lp-yaw-krls_0.001_0.1.dat")))
{
fr.CustomHeadPoseEstimator = estimator;
var landmark = fr.FaceLandmark(image, null, PredictorModel.Large).First();
var headPose = fr.PredictHeadPose(landmark);
}
using (var estimator = new SimpleEmotionEstimator(Path.Combine("models", "corrective-reannotation-of-fer-ck-kdef-emotion-network_test_best.dat")))
{
fr.CustomEmotionEstimator = estimator;
var emotion = fr.PredictEmotion(image);
}
- EmotionTrainingV2: corrective-reannotation-of-fer-ck-kdef-emotion-network_test_best.dat
Build your .NET project and run the program!