Hacker News
Show HN: OpenFace – Face recognition with Google's FaceNet deep neural network (github.com)
41 points by bdamos on Oct 14, 2015 | 14 comments

Very interesting. I recently worked with OpenBiometrics [1] and wrote a guide that makes it easy to use [2].

How different is OpenFace vs. OpenBR?

[1] http://openbiometrics.org/

[2] https://github.com/kevinsimper/face-recognition

Summary: OpenFace uses fundamentally different techniques (a deep neural network) for face recognition that OpenBR currently doesn't provide.


As our initial ROC curve on LFW's similarity benchmark in https://github.com/cmusatyalab/openface/blob/master/images/n... shows, this approach results in slightly improved performance. The best possible point is an FPR of 0.0 and a TPR of 1.0 (top left). You can see today's state-of-the-art private systems near the top left, followed by open source systems, then by historical techniques that OpenCV provides, like Eigenfaces. The dashed line in the middle shows what random guessing would yield.
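For concreteness, here is a minimal hypothetical sketch (not OpenFace code) of how one point on such an ROC curve is computed from scored face pairs, as in LFW's verification protocol:

```python
# Hypothetical sketch: compute one ROC point from (similarity, same_person)
# pairs. Pairs scoring at or above the threshold are predicted "same person".
def roc_point(pairs, threshold):
    """Return (fpr, tpr) for the given decision threshold."""
    tp = sum(1 for score, same in pairs if same and score >= threshold)
    fp = sum(1 for score, same in pairs if not same and score >= threshold)
    pos = sum(1 for _, same in pairs if same)
    neg = len(pairs) - pos
    return (fp / neg if neg else 0.0, tp / pos if pos else 0.0)
```

Sweeping the threshold from high to low traces out the whole curve; a perfect separator passes through the top-left corner (FPR 0.0, TPR 1.0), and random guessing traces the diagonal dashed line.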

OpenBR is going in a great direction for reproducible and open face recognition. They provide a pipeline for preprocessing and representing faces, as well as doing similarity and classification tasks on the representations. The techniques from OpenFace could be integrated into OpenBR's pipeline.

That sounds awesome. I don't quite understand the pipeline part yet, but maybe in the future! hehe

Would it help somehow if I provided the authors with some of my private photos for training the network? I have thousands of photos with faces/names tagged in my Photos app (OS X). If that would be of any help, I would happily provide some of them.

Thanks for the offer! Our original model `nn4.v1` should perform OK on your data if you're interested in trying to automatically predict people in new images.

Training new models is currently dominated by huge industry datasets with hundreds of millions of images. My current dataset is assembled from datasets available for research and has ~500k images.

Without access to a crowdsourced database of faces-to-identities this library is next to useless.

Please don't post dismissive comments to HN, especially in response to new work.

You have a good point, but there are better ways to put it—e.g. ask if such data is available, or how the author tests the work (this is a Show HN, the author's here)—than a swipe like "next to useless".

This depends on what you want to use face recognition for. Maybe I should say more clearly in the README who this project is for. I could have released classifiers trained on 10,000 celebrities, but I focused the project on providing an easy way to train new classifiers with small amounts of data. I think this direction allows more people to use and benefit from the library.

For example, check out our YouTube video of a demo training a classifier in real-time with just 10 images per person at https://www.youtube.com/watch?v=LZJOTRkjZA4. This demo is included in the repo and the README has instructions on running it.

Also note that there is a distinction between training the neural network, which extracts the face representations, and using those representations for tasks like clustering and classification.
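To illustrate that split with a toy sketch: the network's embeddings become the input to a separate, much cheaper classifier. A nearest-centroid rule is used below purely for brevity; a real setup might train an SVM on the same embeddings instead.

```python
# Toy sketch (not OpenFace code): classify embeddings, here 2-d instead of
# 128-d for readability, by distance to each person's mean embedding.
def centroid(vectors):
    """Component-wise mean of a list of equal-length embeddings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    """labeled: {person_name: [embedding, ...]} -> {person_name: centroid}"""
    return {name: centroid(vs) for name, vs in labeled.items()}

def classify(model, embedding):
    """Predict the person whose centroid is closest in squared L2 distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda name: dist2(model[name], embedding))
```

Note that "training" here touches only this tiny classifier; the network that produces the embeddings stays fixed, which is why small per-person datasets suffice.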


The problem is that not all photos (profile photos especially) are close-ups of a person's face. Although they could use facial tagging and maybe come up with a composite? I'm probably wrong though.

You don't need all photos; you can run a basic face detection algorithm, like the ones used in cameras, to identify good candidates and filter out all the other photos.

Profile photos are good enough, and with the number of selfies people are taking, there should be more than enough candidates for full facial recognition matching.
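The filtering idea above can be sketched like this (a hypothetical helper, where `boxes` are (x, y, w, h) rectangles from any face detector, e.g. an OpenCV cascade, and the 5% size cutoff is an arbitrary illustration):

```python
# Hypothetical filter: keep a photo only if the detector found exactly one
# face that is large relative to the image, i.e. a usable close-up.
def good_candidate(boxes, img_w, img_h, min_frac=0.05):
    """boxes: detected (x, y, w, h) face rectangles in an img_w x img_h photo."""
    big = [w * h for (_, _, w, h) in boxes
           if w * h >= min_frac * img_w * img_h]
    return len(big) == 1
```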

Also, Facebook isn't the only social network. LinkedIn profile pictures are usually much better for facial recognition, and Google+ profile pictures are also usually quite good because they crop your face into that silly circle.

Oh, I see. Thanks for the explanation. LinkedIn definitely makes more sense too, since more users will have clearer profile shots.

We usually "region propose" and crop to a certain area (in this case the face area, usually at 256x256), then transform to align the eye areas before passing to training. This is to standardize the data beforehand. I'm not sure if this lib does region proposal, but you can easily write a pre-processor with OpenCV's face detectors to identify face regions (if any — maybe your training image is a landscape, not a face!) for cropping.
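A rough pre-processor along those lines might look like this. Assumptions: OpenCV's bundled Haar cascade for detection; the 0.2 margin and 256x256 output size are illustrative; and the eye-alignment step is omitted for brevity.

```python
# Rough pre-processor sketch (not this library's code): detect faces with
# OpenCV's bundled Haar cascade, grow each box by a margin, crop, and resize.
FACE_SIZE = 256  # illustrative output size

def expand_box(x, y, w, h, margin, img_w, img_h):
    """Grow an (x, y, w, h) detection by `margin` of its size, clamped to the image."""
    dx, dy = int(w * margin), int(h * margin)
    return (max(0, x - dx), max(0, y - dy),
            min(img_w, x + w + dx), min(img_h, y + h + dy))

def crop_faces(image_path, margin=0.2):
    """Return cropped, resized face images; empty for e.g. a landscape photo."""
    import cv2  # imported lazily so the geometry above works without OpenCV
    img = cv2.imread(image_path)
    if img is None:
        return []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    crops = []
    img_h, img_w = img.shape[:2]
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        x0, y0, x1, y1 = expand_box(x, y, w, h, margin, img_w, img_h)
        crops.append(cv2.resize(img[y0:y1, x0:x1], (FACE_SIZE, FACE_SIZE)))
    return crops
```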

Yes, the processing pipeline first does face detection and a simple transformation to normalize all faces to 96x96 RGB pixels. Then each face is passed into the neural network to get a 128-dimensional representation on the unit hypersphere.

For a landscape, face detection would probably not find any faces and the neural network wouldn't be called.

And an image with multiple people will have many outputs: the bounding boxes of faces and associated representations.
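For illustration, comparing two such embeddings reduces to a distance computation. A minimal sketch (not the OpenFace API) assuming embeddings arrive as plain lists of floats; the 0.99 threshold below is made up, not a tuned value:

```python
# On the unit hypersphere, squared Euclidean distance and cosine similarity
# carry the same information: ||a - b||^2 = 2 - 2 * (a . b) for unit vectors.
import math

def l2_normalize(v):
    """Scale a vector to unit length, as the network's outputs are."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def same_person(a, b, threshold=0.99):
    """Heuristic verification: small embedding distance => same identity."""
    return squared_distance(a, b) < threshold
```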

