How I implemented iPhone X’s FaceID using Deep Learning in Python (towardsdatascience.com)
198 points by dsr12 6 months ago | 36 comments



For those who are new to machine learning, this is like duct-taping four wheels together and calling it a Lamborghini. It may be a good start if you want to learn the basics of face recognition, but the "iPhone X FaceID" keywords here are clickbait. This is nothing like the technology used in the iPhone X.


It kind of reminds me of when George Hotz declared that he'd created a competitor to Tesla Autopilot by training an end-to-end neural network on dashcam footage and running it on an Android phone. Sure, he had something which technically worked in brief demos but a proof of concept doesn't come close to even an alpha version of a commercial product.


People have actually put this on their vehicles, and it works pretty well from what I have seen.

https://www.youtube.com/watch?v=GzrHNI6eCHo
https://www.youtube.com/watch?v=XYUHrI5-A9A
https://www.youtube.com/watch?v=UkS-iJ5auD4


This is fake news, imo.

You can currently run Hotz's software, Openpilot, on your car, and many do. The general consensus is that it's as good as Tesla's AP2. There are hours of footage of Openpilot driving by actual users (not PR videos) on YouTube.

It most certainly does "come close to even an alpha version of a commercial product".


Please do not normalize that term in contexts to which it does not apply. This is not "news," and can therefore neither be real nor fake.


'Can' and 'do' aren't the same as 'should'. I can't find any recent numbers, but the general trend seems to be that Openpilot averages about one disengagement/intervention per 5 miles, while Autopilot (when not limited by the 'hands on wheel' check) goes significantly further between disengagements.

It's one thing to have a few tech-savvy enthusiasts installing your system and using it a bit with a good understanding of its strengths and weaknesses. It's another thing entirely to let tens of thousands of random members of the general public use your system for months on end with minimal-to-no training and no technical insight into it.


Sorry about the use of the term 'fake news', I guess.

But I was just making the point that it's definitely at least alpha software.


Exactly. Not to mention that Apple most certainly uses conventional, well-established biometrics along with precise 3D mapping. The article isn't even pointed in the right direction.


It's literal clickbait and should be flagged as such.


Can you please explain further? How does it work on the iPhone? How is it different from this method?


For one, it uses a number of sensors: https://support.apple.com/en-us/HT208108 But also, to generalize across makeup, occlusion, and millions of faces, you have to do much more. There are hundreds of papers on the subject.


The Face ID sensor projects 30,000 infrared dots onto the user's face and then reads the projected pattern. The pattern is sent to a deep neural network to confirm a match with the phone owner's face.
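
If that description is right, the final matching step boils down to comparing embeddings produced by the network. A minimal sketch in Python (the function name, inputs, and threshold are my own illustration, not Apple's actual pipeline):

    import numpy as np

    def is_owner(captured_embedding, enrolled_embedding, threshold=0.6):
        # Accept the face if its embedding lies close enough to the one
        # stored at enrollment; both are assumed to come from the same
        # network applied to the sensed pattern. Threshold is illustrative.
        distance = np.linalg.norm(captured_embedding - enrolled_embedding)
        return distance < threshold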


The technique in the article does not seem to use depth data (there is a picture in the article), only RGB image data.

So he could have gotten the same results with a webcam.

My understanding is that these depth cameras create a 3D point cloud and use the RGB image to map/overlay color onto the 3D geometry.
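
As a rough sketch of that idea (the camera intrinsics and array shapes here are assumed, not taken from the article), back-projecting a depth map and attaching RGB colors looks something like:

    import numpy as np

    def depth_to_colored_point_cloud(depth, rgb, fx, fy, cx, cy):
        # depth: (H, W) depth map in meters; rgb: (H, W, 3) aligned image.
        # fx, fy, cx, cy: pinhole-camera focal lengths and principal point.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float32)
        x = (u - cx) * z / fx   # pinhole back-projection
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        colors = rgb.reshape(-1, 3)
        valid = points[:, 2] > 0   # drop pixels with no depth reading
        return points[valid], colors[valid]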


The article does mention experiments using depth information from a Kinect. Also, the code in the GitHub repo does appear to use depth info.


It specifically mentions RGBD (RGB+Depth).


I'd love to see a comparison between this prototype and the real FaceID. How good is each at recognizing the same person with a different appearance (haircut, glasses)? How good is each at rejecting different people with similar looks, like siblings?


Even Apple can't tell the difference between twins.


Crazy to think that a ton of neural networks are running on live sensor data every time I unlock my phone. Truly amazing and seamless technology.


Applying the model is fast; training it is slow(er). Not that crazy.


For something as common as unlocking my phone, the algorithm needs to be extremely fast and power efficient, and this is a PHONE, so yes, I still find this impressive. The fact that sensor data is being processed through a neural network on a dedicated chipset for neural-net operations would have seemed like ridiculous overkill if you had explained it to me ten years ago. Furthermore, the network is actually being re-trained on the fly to accommodate changes to the user's facial hair, etc.


Can someone who has actually worked on, implemented, or published face detection/recognition please point to state-of-the-art results for this task? Are neural networks better than hand-crafted feature extractors?


I based my work on FaceNet, a quite recent paper that achieves state-of-the-art results and incredible robustness using very similar techniques to the ones I implemented here (siamese networks, contrastive/triplet loss).
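
For anyone curious, the triplet loss itself is easy to sketch; here's a minimal NumPy version (the 0.2 margin follows the FaceNet paper, the rest is illustrative):

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Pull the anchor embedding toward a matching face (positive) and
        # push it away from a non-matching face (negative) by at least
        # `margin`; all three are embeddings from the same network.
        d_pos = np.sum((anchor - positive) ** 2)
        d_neg = np.sum((anchor - negative) ** 2)
        return max(d_pos - d_neg + margin, 0.0)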


Yes, and it's not even close.


I would check out the recent ArcFace paper.


Duct-tape PhD notebook. These prime examples of the "Untitled.ipynb mentality" are what make me shake my head. No annotations, no exploration of the dataset, no sanity checks, printf debugging, uncaught error messages. But just because it doesn't collapse under your feet, put your name up front and it's good to go.


Very minor nitpick: Surface devices have combined infrared and regular cameras for face unlocking for years. It's great tech, but it didn't start with the iPhone X.


I don't have the iPhone X, but the Surface Face ID is a piece of shit.


Surface Hello works for me nearly every single time, fast and seamlessly. Love it.


Looks like it isn’t random per day but rather works well on some faces and terribly on others.


I have both: they're both excellent 99% of the time; the rest of the time you're just weirded out as to why you apparently don't look like yourself today.


Correct me if I'm wrong, but no one claimed that the iPhone X came before Windows Hello.


That article implies that the iPhone X debuted the technique.


I believe it debuted real depth sensing, as opposed to image-based unlocking from a standard webcam.


Hello uses an IR emitter (I said cam, but I wasn't thinking).


> combined infrared and regular cameras

"infrared camera" is falling quite a bit short of a description. In fact some have used a single IR emitter to light the face up in the dark and/or as a trick to ignore surroundings but then it's not much better than a picture. Here it's an IR dot projector whose projection gets captured by an IR camera and subsequently extruded as a 3D volume, Kinect-style. I don't know about what Windows Hello mandates for face recognition though (also, it can be iris or fingerprint as well as face, which just makes things more confusing).


You're right, that was a poor description. But yes Hello is apparently based on Surface tech.



