1) The ARKit "Face Mesh" seems to be a standard model that is scaled and skewed to fit your face (for example, it ignores glasses, still works if you put your hand in front of your face, etc.). It is _not_ a 3D scan.
2) The "TrueDepth" data is not really all that granular. It seems similar to the depth map you get from the rear-facing cameras on the "plus" sized models. Here's what the sensor data spits out: https://twitter.com/braddwyer/status/930682879977361408
3) Apple is really good at marketing. It's been shown that, even if you cover the TrueDepth camera, features that "require" it still work fine (including Animoji and the apps that I've been developing using the front-facing ARKit APIs).
3.1) The lack of Animoji and front-facing ARKit on other devices seems to be a software limitation made for business reasons rather than a hardware limitation. See: Google's Pixel 2 portrait-mode photos, done using a single front-facing camera, which have stacked up well against those from the iPhone X.
4) The scary part, which is vast dystopian databases of facial fingerprints, is already being done with normal photographs. The depth data is not needed.
I agree with the author that the privacy implications of all-encompassing databases could be scary. But I disagree that this has anything to do with the iPhone X or its TrueDepth camera.
As @braddwyer himself notes, you can probably get a much better mesh by integrating over time. It depends on how long it takes to capture a single frame, but I imagine that's not long, so getting an order-of-magnitude improvement is probably quite easy.
> 4) The scary part, which is vast dystopian databases of facial fingerprints, is already being done with normal photographs. The depth data is not needed.
And yes ... after all, humans are quite capable of identifying people with high accuracy from 2D photographs. Depth maps are not required for there to be serious privacy issues with such databases.
It gives you about 15 fps of depth data
Given the small size of the laser projector, I imagine natural movement from the phone being hand-held would result in significant displacement of the projected dots over a 1s interval? Have you tried integrating the 15 frames to see what it looks like?
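To make the integration idea concrete, here's a toy Python/NumPy sketch (every number, including the 5 mm noise figure, is invented for illustration). It assumes the 15 frames have already been registered, i.e. the hand-shake displacement has been compensated; under that assumption, simple averaging cuts per-pixel noise by roughly the square root of the frame count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth depth map (in meters) and an assumed per-frame noise level.
true_depth = 0.4 + 0.05 * rng.random((64, 64))
noise_sigma = 0.005  # 5 mm per-pixel noise (made-up figure)

def capture_frame():
    """Simulate one noisy depth frame from the sensor."""
    return true_depth + rng.normal(0.0, noise_sigma, true_depth.shape)

# Integrate 15 frames (one second at ~15 fps) by averaging.
# This assumes the frames are already aligned to each other;
# in reality, registering them is the hard part.
frames = [capture_frame() for _ in range(15)]
fused = np.mean(frames, axis=0)

single_err = np.std(capture_frame() - true_depth)
fused_err = np.std(fused - true_depth)
print(f"single-frame RMS error: {single_err * 1000:.2f} mm")
print(f"15-frame RMS error:     {fused_err * 1000:.2f} mm")  # roughly sqrt(15)x smaller
```

The hand-shake question above is exactly the registration problem: if the dots drift between frames and you average naively, you smear detail instead of recovering it.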
We submitted a game about 3 weeks ago using front-facing ARKit as its core game mechanic and it hasn't been approved by Apple yet.
I'm waiting to see if they're going to allow us to use the new technology in novel ways or not before I invest a lot more time in it.
Yes and no. We're mostly good at that (with exceptions: I have to see someone a lot before I remember their face), but we evolved for small groups, and there are now enough people that doppelgänger is a profession.
On the other hand, databases are still a problem because collections of timestamped photos can reveal far too much about us once an identity is properly confirmed.
Which would be true if they were actually storing this on the cloud in a form they could access. As far as we know, they are not. That's the point of the "Secure Enclave".
This is how most (all?) state of the art acquisition methods for standard objects (faces, hands, etc) work. By warping a high res template to fit the data in some optimal manner, you get guarantees on the output topology without having to do tons of messy cleanup.
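As a toy illustration of that template-fitting idea (Python/NumPy, simulated data, with a rigid scale-plus-shift fit standing in for the real non-rigid deformation): only the template's vertex positions move, so its connectivity, and therefore the output topology, is guaranteed by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "template": 2D points with fixed, known connectivity (the topology we keep).
template = rng.random((100, 2))
edges = [(i, i + 1) for i in range(99)]

# Simulated "scan": the template scaled, translated, and corrupted by sensor noise.
true_scale, true_shift = 1.3, np.array([0.2, -0.1])
scan = true_scale * template + true_shift + rng.normal(0, 0.01, template.shape)

# Least-squares fit of scale s and shift t minimizing ||s*template + t - scan||^2.
# (Real face capture solves a much richer non-rigid warp, but the principle is
# the same: deform a fixed-topology template toward the data.)
centered_t = template - template.mean(axis=0)
centered_s = scan - scan.mean(axis=0)
s = np.sum(centered_t * centered_s) / np.sum(centered_t ** 2)
t = scan.mean(axis=0) - s * template.mean(axis=0)

fitted = s * template + t
residual = np.sqrt(np.mean((fitted - scan) ** 2))
print(f"recovered scale {s:.3f}, shift {t.round(3)}, RMS residual {residual:.4f}")
# The fitted mesh reuses the template's vertices, so `edges` still applies as-is:
# no hole-filling or remeshing cleanup needed.
```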
I'd be curious to know if face unlock had problems with vision impairment, i.e. no gaze vector.
Weren't they specially created to make use of that 3D cam?
It’s possible to generate reasonable 3D models of faces from a single photograph. 
The highest-resolution 3D scans I've seen are produced by aligning data from multiple high-resolution photographs.
The big problem with that approach is that it requires a lot of detail in the source material. Smooth surfaces, blurry images, or noise from poor lighting make it impossible for the algorithm to find features to align.
This is where the dot matrix projector comes in: by projecting a bunch of dots on your face, you get features that the algorithm can align, making the scan faster and more robust in low light.
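A minimal sketch of the underlying geometry, with invented constants: once a projected dot can be matched between the projector and the camera, its horizontal shift (disparity) gives depth by plain triangulation, which is why spraying dots onto a featureless surface makes alignment tractable.

```python
# Toy structured-light triangulation: a projector and camera separated by a
# baseline both "see" the same dot; the dot's disparity in the image encodes
# depth via z = f * b / d. All constants below are made up for illustration.
f = 600.0  # focal length in pixels (assumed)
b = 0.025  # projector-camera baseline in meters (assumed)

def depth_from_disparity(d_pixels: float) -> float:
    """Depth in meters for an observed dot disparity in pixels."""
    return f * b / d_pixels

# Closer surfaces produce larger disparities.
for d in (30.0, 50.0, 75.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.3f} m")
```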
Pretty neat stuff, though I've never found any actual artistic or practical use for it.
Does it also work in the dark without the depth camera?
I was really surprised how well it does even when covering up the IR sensor prior to opening the app.
I don't doubt that they are using the IR data to improve things. But it does "good enough" without.
> The reason for the misconception comes from the implementation: The IR system only (currently) fires periodically to create and update the depth mask. The RGB camera has to capture persistently to track movements and match expressions. In other words, cover the IR system and the depth mask will simply stop updating and likely, over time, degrade. Cover the RGB, and the tracking and matching stops dead.
"...likely, over time, degrade."
1) He doesn't know.
2) It's Animoji, so why would it matter if it did degrade? There is already a stock 3D model of the poop emoji. It simply needs the RGB camera to track where your facial features are.
The A11 chip has dedicated “neural engine” hardware which is used for Animoji and other facial recognition tasks.
How much could be done in the standard CPU on other devices I’m not sure.
Well, let's just say there's the kind of scary fact that no trustworthy party has audited this thing.
Like, should a company that didn't run a "doesn't let root login on first-try" test be allowed to be making such wide-ranging decisions as face-scanning?
What if I don't want to have my face scanned, but nevertheless need to pick up somebody's lost-phone/detonation-device? Shall I just wear a mask?
The point is that we have moved beyond a zone where 'agree/disagree' means anything anymore. Our data is out there.
Not so sure I want my face involved where, preferably, my hands should be.
Well...yeah. If you're out in public and your face is visible, you don't have a reasonable expectation of privacy.
Does that make it even the tiniest bit more acceptable, or does that mean it's really high time to stop it? Being an apologist kind of hinges on that, and no need to put anything in quotes.
Many privacy-conscious settings that I've been in prohibit phones entirely.
Like, it's a cool technology, sure. But has nobody thought of the militarisation of it? Sheesh.
Exactly what I've said from day one - that Apple's "FaceID is 50x more secure than Touch ID" claim, based on the False Acceptance Rate, was total bullshit. That only works if you're going to throw random data at the authentication mechanism.
But someone who's going to target you isn't going to do that. They're going to use a 3D profile of your face built from your online photos or from CCTV cameras (to which not only the government has access, but hackers, too).
In practice, it's much more difficult to obtain a clone of someone's fingerprint than it is to obtain a clone of their 3D face.
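For reference, the false acceptance rates Apple publishes are about 1 in 50,000 for Touch ID and 1 in 1,000,000 for Face ID. A quick Python sketch of what those figures actually measure, and why they say nothing about a targeted attack with a replica of your face:

```python
# Published FARs describe *random* impostors only. The chance that at least
# one of N random attempts succeeds is 1 - (1 - p)^N. A targeted attack that
# presents a reconstruction of your specific face is a different threat model,
# and these numbers make no claim about it.
def random_attack_success(p: float, attempts: int) -> float:
    return 1.0 - (1.0 - p) ** attempts

touch_id_far = 1 / 50_000
face_id_far = 1 / 1_000_000

# iOS falls back to the passcode after 5 failed biometric attempts.
for name, far in (("Touch ID", touch_id_far), ("Face ID", face_id_far)):
    print(f"{name}: P(random unlock within 5 tries) ~= {random_attack_success(far, 5):.2e}")
```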
Good thing I'm not Jason Bourne, and the most likely scenario for someone trying to get into my phone involves my sister's kids.
Now, thinking about companies like Facebook, which not only have access to far more imagery of faces tied to sentiments and moments but have shown time and time again that privacy is a secondary concern, and that they're willing to use any and all of that data to actively pursue vulnerable populations, I get quite nervous.
That's a pretty low bar. Yes, Apple at least gives lip service to security which other companies don't even bother to do, but Apple has had some pretty major security screwups lately (three in the last few weeks). You might be less worried about Apple than the competition, but you'd do well to be somewhat worried about them nonetheless.
What I'm about to write does not invalidate your statement and concern, but to me there's a huge gulf between a bug and a willfully-designed feature that explicitly follows a security anti-pattern.
- MAC address tracking (RTS packets thwart MAC randomization; it's not immediately clear whether WiFi is ever fully off)
- All-or-nothing access to photos on the phone (there is a write-only "add photos" option for apps that request it)
Discord in particular is bad for the photo permission: I don't trust China's Tencent with access to all the photos on my phone (which can include a lot of location information!), but pasting images doesn't work in the app.
I had assumed the "Photos" permission meant "permission to prompt this dialogue" and "permission to save to Camera Roll".
"To protect user privacy, an iOS app linked on or after iOS 10.0, and that accesses the user’s photo library, must statically declare the intent to do so."
It's notable that Discord was developed as an American startup, and it's not clear what Tencent's involvement is. Regardless, for me it's too much access for a chat app to have in exchange for the convenience of sharing a photo from my phone.
Why do I need to give snapchat access to all photos ever just to post from my camera roll?
If you want to try it out install the Wire messenger (if you make the account with a web browser you don't need to provide a phone number), and try to attach a photo but deny Photo library permissions. (Here's the buttons to press: https://imgur.com/a/gc5Iq). Other apps work this way on iOS 11 but this is the one that came to mind.
This is partly why I am not installing Facebook or Messenger on my iPhone X. Additionally, both apps are massive time wasters for me and, last time I checked, collectively take up a nontrivial amount of space on device. While I haven't been able to entirely exterminate Facebook from my life, restricting myself to accessing it from a "real" computer only has been most refreshing.
It's only a secondary concern in the sense that they're trying to find new and innovative ways to violate it. If it weren't for that, it wouldn't be a concern for them at all.
It's probably not abused, but how can we really know in the post Snowden world.
“Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.” (https://support.apple.com/en-us/HT208108)
“[Your fingerprint data] can’t be accessed by the OS on your device or by any applications running on it. It's never stored on Apple servers, it's never backed up to iCloud or anywhere else, and it can't be used to match against other fingerprint databases.” (https://support.apple.com/en-us/HT208108)
These days, I don't trust any tech company.
"Oh, but they won't" is not any kind of safeguard at all. We have no idea who will own Apple in 10 years' time; the entire management team could be changed. Even if you trust Apple now, which is probably naive, you're trusting every possible future Apple. That's just crazy. Insert any company, large or small, in place of Apple and it is precisely and exactly the same. Apple is no different.
As it stands, you can start one of these ads and then turn your phone upside down, or look away. How long until an advertising provider makes use of the attention API and makes it so that you can't look away? Seems bleak.
Is there anything stopping them from doing this with the front-facing camera on existing phones?
Face ID really doesn't change that.
Look at this model, for example; it has a medium-density mesh: http://image.shutterstock.com/display_pic_with_logo/279553/1...
If your face is now a security feature and the data is being shared with apps, that sounds like a security leak.
If an app collects your face data, can that data be subpoenaed by a court to attempt to unlock your phone? If a court could work with an outside party to use subpoenaed face data to 3D print your face, could they try to use that to unlock your phone?
I’m satisfied with the fingerprint scanner on my phone. I don’t feel like I need the change in tech. I understand that if you’re really concerned about security you should use a passcode only, but it’s still true that the new face unlock is “differently secure”.
Animoji uses all of the front sensors - RGB and TrueDepth.
> the TrueDepth camera system captures a crude depth mask with the IR system and then, in part using the Neural Engine Block on the A11 Bionic processor, persistently tracks and matches facial movement and expressions with the RGB camera.
Access to the live face data stream is provided for features like silly Snapchat filters ("turn your face into a lion" type crap) which the user may opt in to.
This has some similarities to the location permission dialog, which was updated in iOS 11 to differentiate between "allow always" and "allow while using the app." Perhaps the camera permission dialog could be updated to "allow continuous access" and "allow when taking photos."
I think a better solution is that when an app specifically requests face data, there is a mandatory 2-3 second decision delay, with the option defaulting to off, and the decision presented in a different UI from the classic permission-request dialogue. That way the user knows the request is different and is given time to actually decide before accidentally tapping agree.
Of course it will, which is why this is another very troubling development in phone software. It's bad enough that most (all?) interactions are logged and data mined in a modern app. Soon emotional data will be too.
But for Android there is at least the f-droid.org repository, made of apps automatically compiled from their source code. I prefer using these apps. And there's the fact that you can install whatever you want. On iOS you are locked into a censored App Store and have to pay for a certificate to run your own app.
There is also a small chance for someone to develop an Android phone with fully open software in the future. There is 0% chance to have an open iPhone.
Google and Facebook and other such companies treat personal data as an asset and try to collect as much as they can.
Apple treats personal data as a liability and wants to have no more than they need to operate.
>I remember Steve talking on stage about how Apple would always make the operating system retain control of sensitive sensors and only give the result to user-approved apps(asking for permission every time). That's all thrown out of the window now?
Why would you think that? Apps do request permission for this, and it can be revoked.
According to the Post "Once you give it permission, an active app keeps on having access to your face until you delete it or dig into advanced settings. There’s no option that says, “Just for the next five minutes.”"
So a random developer can (kind of) look at us when we turn the lights off. Not only that, those cameras also work in sunlight. You can go on YouTube and search "clothes transparent to infrared". It's not much, especially considering that the app is not getting a real image, but somebody will still be uncomfortable with that.
However, there are also legitimate and interesting applications for infrared cameras, and they've been on sale for a long time. What's changing is how many of those cameras are out in the world.
Isn’t the definition of an active app, one that is open and in the foreground?
The minute the phone sleeps, or the app is put into the background, access becomes more at Apple’s discretion.
I’d expect an app running in the foreground to be able to keep using the camera as long as needed.
Or is your point more: allowing once allows it every time the app opens, with no option to repeatedly prompt?
I think the visual-spectrum cameras provide _a lot_ more sensitive data than the rough facial depth data. What does your phone “see” on a daily basis? Have you taken your phone into the bathroom with you lately?
Being able to handle the dark doesn't seem like that big a deal — most of the time people use their phone in well lit areas, and if they aren't, their phone generally lights up their face.
In practice, fingerprint readers should still remain significantly more secure than any facial recognition technology.
It did a pretty good job of scanning my face though: https://twitter.com/braddwyer/status/930594896523567104
Am I being an alarmist or is it reasonable to be concerned about this?