
Unknown people detection #144

Closed
524c opened this issue May 31, 2016 · 138 comments

@524c

524c commented May 31, 2016

Hello @bamos, how are you?

Congratulations on the project.

I wonder if you have plans to implement detection of people the classifier hasn't been trained on, displaying them as unknown.
I would like to collaborate with the project in this area, but I don't know where to start.

Thank you.

@musicfish1973

I'm also interested in this work, and I've tried using the decision function values of the SVM in scikit-learn. But I found this SVM implementation can't output (n_samples, n_classes) values in one-vs-rest mode. The original LIBLINEAR implementation does work.

@bamos
Collaborator

bamos commented Jun 13, 2016

Hi, sorry for the delayed response. I'm not actively working on unknown people detection, but I'd be excited to look over and give some feedback on code and results so we can add this to the classification and web demos. I think the best starting point would be to create an "unknown" classification benchmark on the LFW that we can use to compare techniques. Let me know if you're still interested and I can provide more details about the baseline and benchmark.

Also @mbektimirov and @dista have asked about classifying unknown users in the past. Do you all have any results on this?

This is technically a duplicate of #67, but let's keep the discussion here and close that one when we find a good technique and add it to the web demo.

-Brandon.

@musicfish1973

@bamos, I agree with you. I would like to take part in the exciting OpenFace project and begin with this work.

@FRCassarino

@bamos, this functionality is vital for the project I'm working on. As of today I've been working around this problem in various ways, but it's increasingly clear that I'm going to need at least a relatively reliable way to determine if a face is unknown.

So I welcome any suggestion you have as to how to approach this. If I find a decent solution I'll be sure to share it here afterwards.

@Vijayenthiran
Contributor

I was playing around with the classifier demo and I am interested in unknown people detection. I have trained the classifier with 25 different people with 13 images of each person. The problem statement is to detect the person who is not among the 25 people (sort of intruder recognition).

When there is a known person standing in front of the camera, the prediction is correct with a confidence level of around 0.25 - 0.50. So I was expecting that if an unknown person were standing there, the confidence level would drop below 0.25. But it didn't happen that way. The classifier tries to predict the unknown person as one of the 25 people, with a confidence level always greater than 0.35.

I tried different classifiers (Linear SVM, GMM, Radial SVM, Decision Tree; GMM and Decision Tree performed worst) with no fruitful result. I am a beginner in machine learning, so any advice would be highly appreciated.

@FedericoRuiz1

@Vijayenthiran the confidence score is simply not very reliable. The workaround I've been using is to take the last 15 frames and do some math with all the predictions in them. It works relatively well, but not nearly as well as I need.

So I'm now looking for any alternative approaches.

@bamos
Collaborator

bamos commented Jun 27, 2016

Hi all, apologies for the delays, I'm traveling now and have been for a few weeks. I think a great starting point is to create a benchmark based on the LFW dataset since the LFW contains realistic images that the neural network isn't trained with. This should include the accuracies on known and unknown people.

  1. Sort the LFW people by the number of images per person.
  2. Set knownAccuracies = [] (empty list), unknownAccuracies = []
  3. for exp in 1, ... , 5
    • Training Set: Randomly (but deterministically) sample N images from the first M people
    • Testing Set (of known people): Remaining images from the first M people.
    • Testing Set (of unknown people): Sample P images from the remaining people that aren't in the training set
    • Append accuracies from training and testing on the sets.
  4. Output: mean and stdev of the accuracies.
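The steps above can be sketched in code. This is a minimal illustration, assuming an LFW-style `{person: [images]}` mapping; the function and parameter names are hypothetical, not part of OpenFace:

```python
import random

def make_benchmark_split(people, n_train, m_known, p_unknown, seed):
    """Split a {name: [images]} dict into known-train, known-test, and
    unknown-test sets, deterministically for a given seed."""
    rng = random.Random(seed)  # deterministic but "random" per experiment
    # 1. Sort the LFW people by the number of images per person.
    ranked = sorted(people, key=lambda p: len(people[p]), reverse=True)
    known, unknown = ranked[:m_known], ranked[m_known:]

    train, known_test = [], []
    for name in known:
        imgs = list(people[name])
        rng.shuffle(imgs)
        # Training set: N images per known person; the rest go to testing.
        train += [(name, i) for i in imgs[:n_train]]
        known_test += [(name, i) for i in imgs[n_train:]]

    # Testing set of unknown people: P images from everyone else.
    pool = [(name, i) for name in unknown for i in people[name]]
    unknown_test = rng.sample(pool, min(p_unknown, len(pool)))
    return train, known_test, unknown_test
```

Running this inside the `for exp in 1..5` loop with different seeds, then averaging the accuracies, would produce the mean/stdev output described in step 4.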

What do you all think of this?



Hi @musicfish1973 - would you still like to do the initial implementation of this? You can use my LFW classification experiment as a starting point. See our tech report for a description of the classification experiment.


On unknown detection techniques (/cc @Vijayenthiran), my intuition is that a probabilistic technique like GMMs should work well, but I haven't had much success. I've tried most of the techniques @Vijayenthiran mentioned as well as using PCA/LDA before classification to change the feature space. There are some interesting ideas in open set recognition that might work better here. Is anybody interested in further exploring open set recognition?

Also, another idea I've tried that didn't work well but might be worth further exploring is explicitly adding an "unknown" class.

-Brandon.

@musicfish1973

I benefit a lot from the discussions above. I'm still focusing on statistical analysis of the output values of LIBLINEAR in one-vs-the-rest mode. I believe softmax may be a better solution, although it has a much more expensive training procedure.
In particular, @bamos, I'll begin learning the material you provided.

@Vijayenthiran
Contributor

@bamos Thanks for the update. Your message was helpful in understanding the basics of open set recognition.

I tried creating a benchmark with LFW. Please let me know whether the procedure I followed is correct:

I took the following people from the LFW dataset ['Alejandro_Toledo', 'Ariel_Sharon', 'Arnold_Schwarzenegger', 'Colin_Powell', 'Donald_Rumsfeld', 'George_W_Bush', 'Gerhard_Schroeder', 'Gloria_Macapagal_Arroyo', 'Hugo_Chavez', 'Jacques_Chirac', 'Jean_Chretien', 'Jennifer_Capriati', 'John_Ashcroft', 'Junichiro_Koizumi', 'Laura_Bush', 'Lleyton_Hewitt', 'Luiz_Inacio_Lula_da_Silva'], who had a large number of images. For these people I split the first 90% of the images off as known-people train data and the remaining 10% as known-people test data.

I trained the different classifiers with the train dataset, then tested them with the known-people test data. For a single iteration, these were the accuracies for the different classifiers:

Linear SVM:
knownAccuracies = [0.975757575758]

GMM:
knownAccuracies = [0.127272727273]

Radial SVM:
knownAccuracies = [0.975757575758]

Decision Tree:
knownAccuracies = [0.818181818182]

Then I took the people whose names start with 'A' in LFW (except Alejandro_Toledo, Ariel_Sharon, and Arnold_Schwarzenegger, since they were already in the train set) and used them as unknown-people test data.

For a single iteration, here are the accuracies for the different classifiers:

Linear SVM:
unknownAccuracies = [0.0]

GMM:
unknownAccuracies = [0.0]

Radial SVM:
unknownAccuracies = [0.0]

Decision Tree:
unknownAccuracies = [0.0]

unknownAccuracies were zero since the unknown-people test images are not in the train images and there is no "unknown" class.

Accuracy was calculated with the following logic:
no_of_correct_predictions/total_no_of_predicitions

Is the procedure I am following correct? If so, I will continue doing it for 5 different sets of known people and share the results here. Also, I tried creating an unknown class by making an unknown folder with a random set of actor/actress images (about 2000 images). If I run the classifier (on my test data mentioned in my previous comment, not the LFW data) with the unknown class, for any input image the classifier predicts it as unknown. I guess this may be due to the large number of images in the unknown class?

@luinstra

@bamos I was looking over the open set project you linked, and it does sound interesting, but I think the license agreement of their project would be prohibitive for most use cases that are not strictly research.

I have been considering the 'unknown' class as a possibility as well. In your implementation did you randomly pick an image from many different people to create a sort of average? Or did you have a different approach?

@luinstra

@Vijayenthiran I have found that GaussianNB also works well as a classifier. Its biggest downside (for the idea I have been pursuing, anyway) is that the prediction probabilities are garbage, according to their docs.

@bamos
Collaborator

bamos commented Jun 28, 2016

Hi @Vijayenthiran - great! This is a good starting point. A few comments:

I took the following people from the lfw dataset

This works for now, but I prefer sorting the people by the number of images so it's easy to change the number of known people to use.

90% of the image as known people train data and remaining 10% of the image as known people test data

Using a percentage instead of a fixed amount here (as I previously said) makes sense.

GMM: knownAccuracies = [0.127272727273]

I'm surprised this is so low; I think it can be improved.

Then I took person names starting with alphabet 'A' in the lfw (except Alejandro_Toledo, Ariel_Sharon, Arnold_Schwarzenegger - since they were already in the train set) and used it as unknown people test data.

I think it's cleaner if you sort the LFW identities and then sample uniformly from the remaining images.

unknownAccuracies were zero since the unknown people test image is not in the train image and there is not class as "unknown".

If you use probabilistic variants (of the SVM and GMM), we could threshold the highest probability to identify unknown users. I think this is a reasonable approach, and I wonder if we should also include some "unknown" images as part of the training set.

Accuracy was calculated with the following logic:
no_of_correct_predictions/total_no_of_predicitions

This is correct.

Also, I tried creating a unknown class by creating a unknown folder with random set of actor/actress images (of about 2000 images)

So it's easier to modify and share experiments, I don't think we should modify the LFW directory structure and add an "unknown" folder. Instead we should be able to do it easily in code.

If I run the classifier (on my test data mentioned in my previous comment, not the LFW data) with the unknown class, for any input image the classifier predicts it as unknown. I guess this may be due to the large number of images in the unknown class?

Yes, this happened to me too, but I think some classifiers should be able to overcome this problem. What classifier are you using? I'd try a neural network with 2-3 (small) fully-connected hidden layers first, then potentially a RBF SVM.

@bamos
Collaborator

bamos commented Jun 28, 2016

Hi @luinstra -

I was looking over the open set project you linked, and it does sound interesting, but I think the license agreement of their project would be prohibitive for most use cases that are not strictly research.

We can use their code as a quick prototype to see if the technique works well. If it works well, we can implement our own version based on the paper and release it under the Apache 2 license.

I have been considering the 'unknown' class as a possibility as well. In your implementation did you randomly pick an image from many different people to create a sort of average? Or did you have a different approach?

Yes, I randomly sampled from a lot of unknown people. Classifiers will quickly collapse to labeling everything as unknown, but I think we can overcome this with a little more analysis.

@Vijayenthiran
Contributor

Thanks for the feedback @bamos. I will work on it this weekend.

Meanwhile I worked on the following comment:

If you use a probabilistic variants (of SVM and GMM), we could threshold the highest probability to identify unknown users.

Instead of thresholding at the highest probability (since in the LFW dataset the highest probability was 1.0), I took the average of all the confidences, which came to around 0.9594 with the nolearn-DBN classifier. Then I kept this threshold for the unknown-people dataset, and the unknownAccuracies was 0.929372197309.
So I applied this technique to my 25 people dataset (With DBN classifier) and it worked well (reasonable accuracy) in predicting the unknown people.

@FedericoRuiz1

FedericoRuiz1 commented Jun 30, 2016

@Vijayenthiran I'm not sure if I understand. You switched the GMM classifier to the nolearn-DBN classifier, and it's able to detect unknown people with ~0.93 accuracy?

How did you do this exactly?

clf = DBN( learn_rates = 0.3, learn_rate_decays = 0.9, epochs = 10, verbose = 1)

Is that what you wrote on the classifier.py file?

@Vijayenthiran
Contributor

@FedericoRuiz1 Initially I calculated the average confidence from the known-people dataset. The average confidence came to around 0.95, so I kept it as the threshold for the unknown-people dataset. That means if the confidence is less than 0.95, the person is considered unknown (a correct prediction). If the confidence is greater than 0.95, then check whether the person's name matches the predicted name. If so, it is a correct prediction (which would not happen), and if the names don't match, it is an incorrect prediction.
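That decision rule can be sketched as follows. This is a minimal illustration with hypothetical helper names; producing the per-image confidences from the DBN classifier is out of scope here:

```python
def calibrate_threshold(known_confidences):
    """Use the mean top-class confidence on the known-people test set
    as the unknown/known cutoff, as described above."""
    return sum(known_confidences) / len(known_confidences)

def classify(pred_name, confidence, threshold, true_name):
    """Return (label, correct) under the rule described above:
    below the threshold -> 'unknown'; otherwise trust the prediction."""
    if confidence < threshold:
        return "unknown", true_name == "unknown"
    return pred_name, pred_name == true_name
```

For example, with a threshold calibrated near 0.95, a prediction of "alice" at confidence 0.50 would be relabeled "unknown", while the same prediction at 0.99 would be kept.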

@luinstra

@Vijayenthiran Are you testing this with the same size data set each time? From what I have seen the confidence value will be reduced as the number of classes grows, because each class gets a small percentage of the total probability. When I run tests on a data set with ~100 classes the highly confident predictions rarely get over the 0.5-0.6 range, IIRC.

@Vijayenthiran
Contributor

@luinstra For LFW I used 17 classes of the same size for the known train and test sets. For the LFW unknown test dataset I used 429 classes. In my own dataset there were 25 classes, in which case the average confidence came to around 0.55. I haven't tested with 100 classes. If I try, I will let you know my results.

@musicfish1973

musicfish1973 commented Jul 4, 2016

@bamos Through learning about open set recognition, I found that rejecting uncertain inputs is what we should care about first. So I ran the Meta-Recognition code. The example data works, but my SVM decision values do not. See below:
(1) example data in wbl-test.c:
double tail[] = {0.74580, 0.67048, 0.65088, 0.64507, 0.64500, 0.64402, 0.64295, 0.64279, 0.64244, 0.64079};
double tail_fit[] = {0.67048, 0.65088, 0.64507, 0.64500, 0.64402, 0.64295, 0.64279, 0.64244, 0.64079};
=> the first item, 0.74580, is correctly regarded as an outlier by the Weibull distribution.
(2) my data:
// FR SVM: -0.9134 0.3909 -0.7918 -0.8268 -0.7458 -0.7586 --> original values
// add 0.92 and sort; values in [0,1]
double tail[] = {1.0, 0.1742, 0.1614, 0.1282, 0.0932, 0.0066};
double tail_fit[] = {0.1742, 0.1614, 0.1282, 0.0932, 0.0066};
=> the outlier 0.3909 can't be identified!! What mistakes have I made? Can you or anybody else give me some suggestions?

@FedericoRuiz1

FedericoRuiz1 commented Jul 4, 2016

@Vijayenthiran Why did having more people in the database increase the confidence rather than decrease it? Am I getting this right? When you had 25 people in the database, the avg confidence was around 0.55, but when you had around 500 people the avg confidence was 0.95?

With the default classifiers confidence goes down as you add people to the database.

@Vijayenthiran
Contributor

@FedericoRuiz1 I didn't test it with 500 people. I tested with 25 people (each with around 100-200 photos) in LFW. In that case the average confidence was around 0.95 (since there were a lot of photos per person). When I tested with my own dataset of 25 people (with around 10 photos per person), the average confidence was close to 0.55. So I guess increasing the number of photos per person will increase the avg confidence. I am yet to clean up the code for benchmarking. Will send a PR once it is clean.

@luinstra

luinstra commented Jul 6, 2016

I actually have a method for determining unknown people that is showing some promise. It's not quite where I want it to be yet, but it's a start. The approach I took to get a measure of the uncertainty was to create an ensemble of classifiers, each of which is trained on a random subset of the data for each person, effectively combining the random forest concept with classifiers that are not trees. I settled on the GaussianNB classifier as my base type after testing several of the others because it is very fast and accurate. Using the predict_proba method is normally not useful for this classifier, but when you average these values across the ensemble you effectively get a voting system that gives you a decent measure of uncertainty. So when the confidence for the top prediction is below 0.95, I give the data some extra scrutiny, and if it's below 0.6 I flag it as unknown.

Using an ensemble size of 15 and a sample ratio of 0.7 I was able to get some decent results. I used the LFW data methods baked into sklearn and set min_faces_per_person=15. This ends up giving me 96 classes in my data set, and I randomly pick 10 to use as my unknown set.

The table below shows my results of running tests on the test data set (all were in the training data) and the unknown data set (none were in the training data). I ran several iterations of this test and the numbers all come out in a similar range to the ones below. The numbers shown are all percentages of the respective set size. The 'baseline' method represents using highest confidence match from the classifier with no extra verification. The 'filtered' method uses the confidence thresholds I mentioned above and an attempt at some extra verification.

| data set | method | Unknown | False Positives | True Positives |
|----------|----------|---------|-------|-------|
| test | baseline | 0.0 | 7.42 | 92.58 |
| test | filtered | 8.83 | 3.18 | 87.99 |
| unknown | baseline | 0.0 | 100.0 | 0.0 |
| unknown | filtered | 73.83 | 26.17 | 0.0 |
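The averaging/voting step of this approach can be sketched as follows. Here `prob_rows` stands in for the per-member `predict_proba` outputs of the GaussianNB ensemble (one row per member), the 0.95/0.6 cutoffs are the ones quoted above, and the function name is hypothetical:

```python
def ensemble_predict(prob_rows, high=0.95, low=0.6):
    """prob_rows: one {class_name: probability} dict per ensemble member.
    Averaging across members acts like soft voting and yields a usable
    uncertainty measure even when individual probabilities are poor."""
    classes = prob_rows[0].keys()
    avg = {c: sum(r[c] for r in prob_rows) / len(prob_rows) for c in classes}
    best = max(avg, key=avg.get)
    if avg[best] < low:
        return "unknown", avg[best]   # flag as unknown
    if avg[best] < high:
        return best + "?", avg[best]  # accept, but give extra scrutiny
    return best, avg[best]
```

Training each member on a random ~0.7 subset of every person's images (e.g. with sklearn's GaussianNB) would produce the `prob_rows` input; that part is omitted here.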

@FedericoRuiz1

@luinstra So have you got any news on this front? I'd like @bamos or anyone else to pitch in. What does everyone think of this method? Anybody up for implementing it in a more general manner?

@bamos
Collaborator

bamos commented Jul 19, 2016

I'm still very interested to see how well the open set classification techniques work. My intuition is that they model unknown people detection very well.

@Vijayenthiran can you send your code implementing the unknown people detection benchmark with your baselines in a PR?

-Brandon.

@luinstra

I have continued to work at this quite a bit and have thoroughly convinced myself that the numbers above are as good as it will get with the current type of classifier. I agree with @bamos that an open set classifier is ultimately the way to go here. I will put some work into getting that set up and tested.

@luinstra

I wanted to throw a link to this paper on here for reference. The results seem promising for an open set classifier implementation.

@bamos
Collaborator

bamos commented Jul 28, 2016

Hi all, I've merged @Vijayenthiran's PR implementing an unknown user detection benchmark into the master branch. We should start using and building on it so that we have a reliable way to compare results and evaluate techniques. @Vijayenthiran sketched the usage information in the PR, #171.

@Vijayenthiran - thanks for implementing this and making some modifications for the PR! I noticed that your example uses the LFW funneled images. I think we should instead use the raw LFW images as input so the accuracies are comparable to normal images that aren't funneled. Since your code does an alignment, I don't think any changes are necessary here other than using a different input directory.

Also, I've added a section to the OpenFace FAQ on unknown person detection. For now I've linked to this page.

-Brandon.

@Digvijay111

No Problem @AdamMiltonBarker. And Thanks for your response.

@AdamMiltonBarker

No problem mate.

@Digvijay111

I need more clarification on your flow @AdamMiltonBarker. Suppose I collect pictures of two persons and save them in two different folders, named Adam and John respectively. Then I take pictures of unknown persons, save them in a folder named unknown, and train on all of it.
Am I doing it the right way, as you are?
sorry for my English.

@AdamMiltonBarker

Yes, I have roughly ten-plus images for each class and then 500 in the unknown class.

@Digvijay111

Digvijay111 commented Jul 29, 2017

@AdamMiltonBarker Thanks for your quick response. Did you check with a smaller number of images for the unknown class?

@AdamMiltonBarker

Really can't say much more than I have said, mate. I have roughly ten-plus images for each class and 500 in the unknown class. I linked to the dataset I used for the unknown class above.

@Digvijay111

@AdamMiltonBarker Thanks for your precious time. You helped me a lot.

@Digvijay111

Digvijay111 commented Jul 29, 2017

@bamos is there a way to train new faces into the existing known dataset? Please check #287

@AdamMiltonBarker

@Digvijay111 there is an example in the docs:

https://cmusatyalab.github.io/openface/demo-3-classifier/

For those who are interested, I have now opened up one of my versions of this project. It is designed for the Intel NUC and RealSense cameras, but it can be used on any Linux box and can connect to multiple IP cameras:

https://github.com/TechBubbleTechnologies/IoT-JumpWay-Intel-Examples/tree/master/Intel-Nuc/DE3815TYKE/Computer-Vision/Python/OpenFace

@Digvijay111

thanks, @AdamMiltonBarker.

@prateekmehta59

prateekmehta59 commented Aug 7, 2017

Hey, so I did a similar thing but on a much larger scale. I took around 700 pics of each celebrity and made a database of around 1100 celebrities, which is around 700,000 images, and added another class, Unknown, with 70,000 random people of all kinds of ethnicities in it.
I trained it all together. Now the problem is that sometimes, while predicting an image, it classifies it as Unknown rather than the correct class, and checking the range of probabilities for that image, the second-highest probability is often for the right class. This happens 4/10 times I have tried.
Now what should be the solution for this? Would decreasing the number of images in the Unknown class (finding a sweet-spot ratio) help? Another idea that came to my mind is to first classify into two categories, Celeb and Non-Celeb, and then go inside the Celeb folder for further feature analysis.
What would be the appropriate solution? Please do tell if anything else could be tried.
@bamos @Vijayenthiran @qacollective @AdamMiltonBarker Any suggestions?

@stale

stale bot commented Nov 18, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Nov 18, 2017
@stale stale bot closed this as completed Nov 25, 2017
@DeadZen

DeadZen commented Jan 31, 2018

Has this issue ever been resolved?

@eleehan

eleehan commented Feb 13, 2018

Does anyone know if there is a term for this: an established method for a classifier trained to recognize unknowns (not necessarily specific to facial recognition)?

@luinstra

@eleehan open set recognition/classification is how I've seen it discussed. From what I could find a year or two ago it is still an open area of research.

@eleehan

eleehan commented Feb 13, 2018

@luinstra thank you!

@Shrek53

Shrek53 commented May 26, 2018

@bamos is this issue really closed?
What is our current situation with open set recognition (identifying an unknown face as unknown)?
I am getting an unknown face identified as a known face with a fairly high confidence level (like 0.68).
Please point me toward some resources on this situation.

@xugaoxiang

@Shrek53, yes, I have the same problem too. Adding an unknown class with assorted face images indeed cannot solve the problem @AdamMiltonBarker.

@prateekmehta59

No concrete solution yet.
I was able to increase the accuracy of detecting unknown faces to above 80% just by finding a sweet spot between the number of known images and the number of unknown images.

@prateekmehta59

@rexlow
Well, the Google Vision API does much more than just facial recognition: web search, similar-photo search, partial-photo search, and pages with matching images are some of those.
Try reading the JSON result from the image search; you'll see how much related info you can get just from an image.
As far as Face++ is concerned, I just tried it on a few famous celebrity images from the internet, and it gave pretty poor results.
I am pretty sure getting a robust solution with facial recognition alone will be really hard, as lighting and the angle of the face can change the result drastically.

@lautjy

lautjy commented Oct 1, 2018

This may not be the answer you are looking for, @rexlow, but using dlib was the answer for me.
The pretrained models are of quite good quality; they even surpassed some commercial products in my tests. dlib gives a large distance score for faces that are not enrolled: its metric embedding maps faces to vectors in an N-dimensional ball. A cosine distance of 0.0 is a perfect match, and 1.0 a total mismatch. Thus it is easy to threshold unknown faces away.

(Starting point: http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html)
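The thresholding idea above can be sketched as follows, assuming embeddings have already been computed. The cutoff value and helper names are illustrative, not dlib's published defaults:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def identify(probe, enrolled, threshold=0.45):
    """Return the closest enrolled name, or 'unknown' if even the best
    match is farther than the threshold (0.45 here is arbitrary)."""
    name, dist = min(((n, cosine_distance(probe, e)) for n, e in enrolled.items()),
                     key=lambda t: t[1])
    return ("unknown", dist) if dist > threshold else (name, dist)
```

An un-enrolled face lands far from every gallery vector, so its best distance exceeds the threshold and it is rejected instead of being forced onto the nearest known identity.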

@AdamMiltonBarker

AdamMiltonBarker commented Oct 1, 2018

I have also moved away from OpenFace. I have created a program using FaceNet and the Intel Movidius: https://github.com/TASS-AI/TASS-Facenet. It is also part of my AI network GeniSys (a couple more versions will be added to the vision repo, including Foscam integration and Intel RealSense: https://github.com/GeniSysAI). My OpenFace project is still online, but I haven't updated it for ages.

Having said that, for those who still want a fix for this issue: use FaceNet! Run your classification as normal and then simply double-check classifications using FaceNet. You can use the class name to index into your known dataset, meaning you no longer need to loop through and compare every known image, only the one you know you want. Another useful thing would be to prepopulate a list of all of the embeddings at the beginning of your program instead of computing them every time it sees an image.
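That flow can be sketched with hypothetical helper names: build the embedding gallery once at startup, then verify a classifier's prediction against only the predicted class's embeddings. The `embed` and `distance` callables stand in for a real FaceNet model and its distance metric:

```python
def build_gallery(embed, known_images):
    """Precompute embeddings once at startup: {name: [embedding, ...]}."""
    return {name: [embed(img) for img in imgs]
            for name, imgs in known_images.items()}

def verify(embed, distance, gallery, image, predicted, max_dist=1.0):
    """Double-check a classifier prediction by comparing the probe
    embedding only against the predicted class's stored embeddings,
    instead of looping over the whole known dataset."""
    if predicted not in gallery:
        return "unknown"
    probe = embed(image)
    best = min(distance(probe, e) for e in gallery[predicted])
    return predicted if best <= max_dist else "unknown"
```

The `max_dist` value would need to be tuned for the embedding model in use; the point is the indexing trick, not the specific threshold.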

@AdamMiltonBarker

AdamMiltonBarker commented Oct 2, 2018

Hey, so I did a similar thing but on a much larger scale. I took around 700 pics of each celebrity and made a database of around 1100 celebrities, which is around 700,000 images, and added another class, Unknown, with 70,000 random people of all kinds of ethnicities in it.
I trained it all together. Now the problem is that sometimes, while predicting an image, it classifies it as Unknown rather than the correct class, and checking the range of probabilities for that image, the second-highest probability is often for the right class. This happens 4/10 times I have tried.
Now what should be the solution for this? Would decreasing the number of images in the Unknown class (finding a sweet-spot ratio) help? Another idea that came to my mind is to first classify into two categories, Celeb and Non-Celeb, and then go inside the Celeb folder for further feature analysis.
What would be the appropriate solution? Please do tell if anything else could be tried.
@bamos @Vijayenthiran @qacollective @AdamMiltonBarker Any suggestions?

Yes, 70,000 images in the unknown class would probably be the issue, as there are around 700,000 images in all of the other classes combined, and it seems like your model is leaning toward the unknown classification because it thinks everyone is unknown. It sounds like you have around 700-ish images per class? You could try with, say, 1000-2000 unknown images first and work your way up until you are happy.

@AdamMiltonBarker

@rexlow
Well, the Google Vision API does much more than just facial recognition: web search, similar-photo search, partial-photo search, and pages with matching images are some of those.
Try reading the JSON result from the image search; you'll see how much related info you can get just from an image.
As far as Face++ is concerned, I just tried it on a few famous celebrity images from the internet, and it gave pretty poor results.
I am pretty sure getting a robust solution with facial recognition alone will be really hard, as lighting and the angle of the face can change the result drastically.

Regarding lighting, using IR massively solved most lighting issues for me, specifically with the RealSense and Foscam cameras. Angle is just a matter of training data; it's more difficult with FaceNet etc., but I have worked out a way of using multiple images (angles) per person instead of a single image to compare against.

@rexlow

rexlow commented Oct 3, 2018

@prateekmehta59 Thanks for your input!
@lautjy By dlib do you mean the Python wrapper from Adam Geitgey? (GitHub repo)

I tried it some time ago, but the performance wasn't great.

@AdamMiltonBarker FaceNet sounds like the best option now. Have you tried other implementations like SphereFace?

@DeadZen

DeadZen commented Oct 3, 2018

@rexlow dlib is a separate library; that one is a wrapper around it.

@AdamMiltonBarker Good info! Any thoughts about 3d morphological models? Have you decided primarily on siamese networks?

@ghost

ghost commented Jan 29, 2021

Did anyone get a proper solution for this problem?
