Apple is sharing your facial wireframe with apps (washingtonpost.com)
556 points by lisper on Dec 4, 2017 | 141 comments



I've been playing with the TrueDepth Camera APIs on the iPhone X. Some things I've noticed:

1) The ARKit "Face Mesh" seems to be a standard model that is scaled and skewed to fit your face (for example, it ignores glasses, still works if you put your hand in front of your face, etc). It is _not_ a 3D scan.

2) The "TrueDepth" data is not really all that granular. It seems similar to the depth map you get from the rear-facing cameras on the "plus" sized models. Here's what the sensor data spits out: https://twitter.com/braddwyer/status/930682879977361408

3) Apple is really good at marketing. It's been shown that, even if you cover the TrueDepth camera, features that "require" it still work fine (including Animoji and the apps that I've been developing using the front-facing ARKit APIs).

3.1) The lack of Animoji and front-facing ARKit seems to be a software limitation made for business reasons rather than a hardware limitation. See: Google's Pixel 2 portrait mode photos done using a single front-facing camera that have stacked up well against the ones from the iPhone X.

4) The scary part, the vast dystopian databases of facial fingerprints, is already being built with normal photographs. The depth data is not needed.

I agree with the author that the privacy implications of all-encompassing databases could be scary. But I disagree that this has anything to do with the iPhone X or its TrueDepth camera.
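
For anyone who wants to poke at this themselves, here's a minimal sketch of the front-facing ARKit flow (typed from memory, so treat it as illustrative; the class name is mine, the ARKit types are real). Note that what you get per frame is a fitted template mesh plus named expression coefficients, not a scan:

    import ARKit
    import UIKit

    // Sketch: front-facing ARKit on iOS 11 / iPhone X. The session hands
    // you an ARFaceAnchor whose geometry is a standard template mesh
    // warped to fit the face -- the same topology every frame.
    class FaceTrackingViewController: UIViewController, ARSessionDelegate {
        let session = ARSession()

        override func viewDidLoad() {
            super.viewDidLoad()
            guard ARFaceTrackingConfiguration.isSupported else { return } // TrueDepth devices only
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let faceAnchor as ARFaceAnchor in anchors {
                print("vertices:", faceAnchor.geometry.vertices.count)
                // Blend shapes are named expression coefficients in 0...1.
                print("jawOpen:", faceAnchor.blendShapes[.jawOpen] ?? 0)
            }
        }
    }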


> 2) The "TrueDepth" data is not really all that granular. It seems similar to the depth map you get from the rear-facing cameras on the "plus" sized models. Here's what the sensor data spits out: https://twitter.com/braddwyer/status/930682879977361408

As @braddwyer himself notes, you can probably get a much better mesh integrating over time. It depends how long it takes to capture a single frame, but I imagine that's not long, so getting an order of magnitude improvement is probably quite easy.

> 4) The scary part, the vast dystopian databases of facial fingerprints, is already being built with normal photographs. The depth data is not needed.

And yes ... after all, humans are quite capable of identifying people with high accuracy from 2D photographs. Depth maps are not required for there to be serious privacy issues with such databases.


@braddwyer is me :)

It gives you about 15 fps of depth data


Haha, cool! Hi :)

Given the small size of the laser projector, I imagine natural movement from the phone being hand-held would result in significant displacement of the projected dots over a 1s interval? Have you tried integrating the 15 frames to see what it looks like?


I haven't yet.

We submitted a game about 3 weeks ago using front-facing ARKit as its core game mechanic and it hasn't been approved by Apple yet.

I'm waiting to see if they're going to allow us to use the new technology in novel ways or not before I invest a lot more time in it.


Getting minute, subpixel movements can ironically give you MORE resolution if you process them over time, though you'd probably need some sort of "anchor" points.


That doesn’t seem ironic to me.


I think the irony being implied is that normally when you're shooting video and your camera is jittering, you're effectively losing resolution compared to a static camera because of motion blur, whereas this depth mapping benefits from minute movements. Though looking at individual frames of video is different than combining them into a single sharper image, I get the counterintuitive feeling they were driving at.


Could you stabilize this before integrating? Using feature points and matching them up, perhaps?


I imagine something like that would be necessary. The techniques would probably be ones related to those used in SLAM [1].

[1] https://en.wikipedia.org/wiki/Simultaneous_localization_and_...


> And yes ... after all, humans are quite capable of identifying people with high accuracy from 2D photographs. Depth maps are not required for there to be serious privacy issues with such databases.

Yes and no. We’re mostly good at that (with exceptions — I have to see someone a lot before I remember their face), but we evolved for small groups, and there are now enough people that doppelgänger is a profession.

On the other hand, databases are still a problem because collections of timestamped photos can reveal far too much about us once an identity is properly confirmed.


> And yes ... after all, humans are quite capable of identifying people with high accuracy from 2D photographs. Depth maps are not required for there to be serious privacy issues with such databases.

Which would be true if they were actually storing this on the cloud in a form they could access. As far as we know, they are not. That's the point of the "Secure Enclave".


The Secure Enclave doesn’t do much good if you are providing the face map to any developer that wants it.


> 1) The ARKit "Face Mesh" seems to be a standard model that is scaled and skewed to fit your face (for example, it ignores glasses, still works if you put your hand in front of your face, etc). It is _not_ a 3D scan.

This is how most (all?) state of the art acquisition methods for standard objects (faces, hands, etc) work. By warping a high res template to fit the data in some optimal manner, you get guarantees on the output topology without having to do tons of messy cleanup.


How does that work with people with non standard features? Think glass eyes, acid burns, missing fingers, etc.


It doesn't.

I'd be curious to know if face unlock had problems with vision impairment, i.e. no gaze vector.


The "require attention" feature can be disabled.


I just tried using Animojis with the TrueDepth camera covered. After a second the frame rate drops significantly (to roughly 10 fps) and the character's eyes glitch out. I'm convinced Animojis are doing something with the TrueDepth hardware. It still tracks head movement with just the camera, but it's significantly slower and more error-prone.


Initial reports suggested Animoji worked with the TrueDepth camera covered, but detailed reports of subsequent experimentation have revealed that TrueDepth is required at intervals, just not 100% of the time.


> I'm convinced Animojis are doing something with the TrueDepth hardware

Weren't they specially created to make use of that 3D cam?


Yes, but the parent comment was arguing that Animojis are a marketing gimmick, suggesting they could be enabled on other phones without the depth sensing hardware. I was sharing my experience as a counterpoint.


Re 3) I don’t know any details of Apple’s implementation, but typically computer vision algorithms integrate data from multiple sensors to generate a 3D model. The more data you have, the more robust the output will be.

It’s possible to generate reasonable 3D models of faces from a single photograph. [1]

The highest-resolution 3D scans I’ve seen are produced by aligning data from multiple high-resolution photographs.

The big problem with that approach is that it requires a lot of detail in the source material. Smooth surfaces, blurry images, or noise from poor lighting make it impossible for the algorithm to find features to align.

This is where the dot matrix projector comes in: by projecting a bunch of dots on your face, you get features that the algorithm can align, making the scan faster and more robust in low light.

[1]: http://kunzhou.net/2015/hqhairmodeling.pdf


And if you're interested in building 3D models from multiple photographs, try Helicon Focus. You take a focus-stacked set of images (basically get a macro lens, open it up pretty wide, and take pictures with the focal plane 1cm (etc.) apart until every part of your subject is in sharp focus), and it will look for the sharply focused parts to infer depth information for the stack. It can then build you a 3D model.

Pretty neat stuff, though I've never found any actual artistic or practical use for it.


Is there an iPhone app to capture such images, e.g. using the dual camera?


A skilled human with a decent tool can also make a pretty good 3D model from one face image in less than 5 minutes. https://youtu.be/Eq0tTzCwXNI


That is not a 3D model, much less a pretty good one.


That is a wireframe, but Blender can turn a wireframe into a mesh and vice versa, as the video title states.


Can it really? I thought he intended to use this as a reference when (manually) doing the actual modelling.


It looks like one of those old conversation pieces that had the grid of needles you could push your face or hand into to make a 3D image.


> 3) Apple is really good at marketing. It's been shown that, even if you cover the TrueDepth camera, features that "require" it still work fine (including Animoji and the apps that I've been developing using the front-facing ARKit APIs).

> 3.1) The lack of Animoji and front-facing ARKit seems to be a software limitation made for business reasons rather than a hardware limitation. See: Google's Pixel 2 portrait mode photos done using a single front-facing camera that have stacked up well against the ones from the iPhone X.

Does it also work in the dark without the depth camera?



I've done some experimenting with it in my own apps.

I was really surprised how well it does even when covering up the IR sensor prior to opening the app.

I don't doubt that they are using the IR data to improve things. But it does "good enough" without it.


I take anything Rene says with a very large grain of salt.

> The reason for the misconception comes from the implementation: The IR system only (currently) fires periodically to create and update the depth mask. The RGB camera has to capture persistently to track movements and match expressions. In other words, cover the IR system and the depth mask will simply stop updating and likely, over time, degrade. Cover the RGB, and the tracking and matching stops dead.

"...likely, over time, degrade."

1) He doesn't know.

2) It's Animoji, so why would it matter if it did degrade? There is already a stock 3D image of the Poop. It simply needs the RGB camera to track where your facial features are.


>The lack of Animoji and front-facing ARKit seems to be a software limitation made for business reasons rather than a hardware limitation.

The A11 chip has dedicated “neural engine” hardware which is used for Animoji and other facial recognition tasks.

How much could be done in the standard CPU on other devices I’m not sure.


The iPhone 8 and 8+ have the same A11 chip as the iPhone X.


Well that's not what I was sold at the keynote. Has this been verified? I thought my face was being scanned with 30k dots and all that.


There's a big difference between what Apple is using to perform FaceID scans and the APIs it exposes to developers. Apple has historically been very cautious about the access it gives to developers, especially where matters of privacy are concerned.


Ahh. I overreacted. This must be it.


Sure. But 30,000 is only 150x200. And not all of them are going to hit your face.


> .. could be scary. But I disagree that this has anything to do with the iPhone X or its TrueDepth camera.

Well, let's just say there's the scary fact that nobody trustworthy has audited this thing.

Like, should a company that didn't run a "doesn't allow root login on first try" test be allowed to make such wide-ranging decisions as face-scanning?

What if I don't want to have my face scanned, but nevertheless need to pick up somebody's lost-phone/detonation-device? Shall I just wear a mask?

The point is that we have moved beyond a zone where 'disagree/agree' means anything anymore. Our data is out there.

Not so sure I want my face involved where, preferably, my hands should be...


> What if I don't want to have my face scanned, but nevertheless need to pick up somebody's lost-phone/detonation-device? Shall I just wear a mask?

Well...yeah. If you're out in public and your face is visible, you don't have a reasonable expectation of privacy.


If you are an apologist for what is the equivalent of having hundreds of people in trenchcoats following every person on the planet, detailing their every public move and storing it forever, you don't have any reasonable expectation of your part in the documentary being underlaid with anything but sinister music.


I don't know if I'm an "apologist" for anything, but what you're describing has been the case for decades by now; it's an inherent property of cell phone networks. By 2005 we already all carried personally identifiable devices with microphones, geolocation, and cameras and a persistent connection to a network.


> what you're describing has been the case for decades by now;

Does that make it even the tiniest bit more acceptable, or does that mean it's really high time to stop it? Being an apologist kind of hinges on that, and no need to put anything in quotes.


"People should stop carrying mobile phones" is an interesting proposition, but a fairly tangential one.



So... How do I turn it off?


What does this have to do with "out in public"? I have by law, custom, and common sense an expectation of not being 3D-scanned in a gas station bathroom, even if I pick up the phone that the previous visitor had dropped on the floor.


And if the previous visitor was on a Skype call with someone while in that bathroom, dropped the phone on the floor, and walked out, and you walked in and picked up the phone, you would have the same issue.

Many privacy-conscious settings that I've been in prohibit phones entirely.


You don't have anything of the sort by law, and not even by custom. It's just that such technology wasn't available in a popular device before.


Honestly, I can't fathom why you're being downvoted, and this is exactly the predicament I'm most concerned about, personally.

Like, it's a cool technology - sure. But has nobody thought of the militarisation of it? Sheesh.


In most countries you have no right not to be photographed (by anyone) when in public (by definition being in public is not being in private). I fully support this with the exception of the homeless (and I think it does somewhat support the right to cover your face in public if you wish).


And in some states, like Virginia, it's not legal (for the most part) to wear a mask in public if you're over the age of 16.


While I love the idea of everyone walking around in masks, do that at the moment and you are likely to be arrested or shot.


> The scary part, the vast dystopian databases of facial fingerprints, is already being built with normal photographs. The depth data is not needed

Exactly what I've said from day one - that Apple's "FaceID is 50x more secure than Touch ID" claim, based on the False Acceptance Rate, was total bullshit. That only works if you're going to throw random data at the authentication mechanism.

But someone who's going to target you isn't going to do that. They're going to build a 3D profile of your face from your online photos or from CCTV cameras (to which not only the government has access, but hackers, too).

In practice, it's much more difficult to obtain a clone of someone's fingerprint than a clone of their 3D face.


> They're going to build a 3D profile of your face from your online photos or from CCTV cameras

Good thing I'm not Jason Bourne, and the most likely scenario for someone trying to get into my phone involves my sister's kids.


I'm honestly less worried about Apple than others. They've at least taken some measures to prove that they are willing to go some distance to protect privacy, even losing ground to competitors in voice recognition.

Now thinking about co's like Facebook that not only have access to far more imagery of faces tied to sentiments and moments but have shown time and time again that privacy is a secondary concern AND that they're willing to use any and all of that data to actively pursue vulnerable populations [1], I get quite nervous.

[1] https://www.theguardian.com/technology/2017/may/01/facebook-...


> I'm honestly less worried about Apple than others.

That's a pretty low bar. Yes, Apple at least gives lip service to security which other companies don't even bother to do, but Apple has had some pretty major security screwups lately (three in the last few weeks). You might be less worried about Apple than the competition, but you'd do well to be somewhat worried about them nonetheless.


> Apple has had some pretty major security screwups lately

What I'm about to write does not invalidate your statement and concern, but to me there's a huge gulf between a bug and a willfully-designed feature that explicitly follows a security anti-pattern.


Two other issues I'm worried about with iOS:

- MAC address tracking (RTS packets thwart MAC randomization [0]; not immediately clear if WiFi is fully off [1])

- All-or-nothing access to photos on the phone (append-only for apps that request it)

Discord in particular is bad for the photo permission, as I don't trust China's Tencent with access to all the photos on my phone (which can include lots of frequent location information!), but pasting images doesn't work in the app.

I had assumed the "Photos" permission meant "permission to prompt this dialogue" and "permission to save to Camera Roll".

[0] https://arxiv.org/pdf/1703.02874v1.pdf

[1] https://support.apple.com/en-us/HT208086


In iOS 11, there is a new permission that an app can ask for, which grants write-only access to the photo library. [0]

"To protect user privacy, an iOS app linked on or after iOS 10.0, and that accesses the user’s photo library, must statically declare the intent to do so."

[0] https://developer.apple.com/library/content/documentation/Ge...
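
For reference, here's roughly what that looks like from the app side (a sketch, assuming iOS 11 with the NSPhotoLibraryAddUsageDescription key in Info.plist; the function name is mine):

    import Photos
    import UIKit

    // With only the add-only permission granted, the app can append an
    // image to the library but cannot enumerate or read existing photos.
    func saveToLibrary(_ image: UIImage) {
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAsset(from: image)
        }, completionHandler: { success, error in
            print(success ? "Saved" : "Failed: \(String(describing: error))")
        })
    }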


Apple changed the WiFi in Control Center behavior in iOS 11.2 to be more clear: https://www.macrumors.com/2017/11/13/ios-11-2-beta-3-control...


That's actually a very good point about the frequent location information (or just location metadata in general). I've never thought about it, but I guess giving an app access to my photos gives them full access to the location metadata, and would allow them to put together a pretty accurate model of where I've been and where I live.


Does Tencent not have an extension that pops up in the sharing sheet for images?


Just checked again: not for Discord. No idea about other apps owned by Tencent.

It's notable that Discord was developed as an American startup, and it's not clear what Tencent's involvement is. Regardless, for me it's too much access for a chat app to have in exchange for the convenience of sharing a photo from my phone.


A better solution would be for Apple to provide a decent photo picker that functions at the system level, and require a separate (special) permission to access all photos with the appropriate warning if that app needs a fancy dancy photo picker.

Why do I need to give snapchat access to all photos ever just to post from my camera roll?


It already does work like this in iOS 11. Apps can present the System photo picker to you and receive only your selected photo while having their Photos access set to "Never".

If you want to try it out install the Wire messenger (if you make the account with a web browser you don't need to provide a phone number), and try to attach a photo but deny Photo library permissions. (Here's the buttons to press: https://imgur.com/a/gc5Iq). Other apps work this way on iOS 11 but this is the one that came to mind.
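
If anyone wants to verify this in their own app, the relevant API is just the stock picker; a rough sketch (class name mine, delegate wiring abbreviated):

    import UIKit

    // On iOS 11 the system picker runs out of process, so the app only
    // ever receives the single image the user chose -- even with its
    // Photos permission set to "Never".
    class PickerHost: UIViewController, UIImagePickerControllerDelegate,
                      UINavigationControllerDelegate {
        func pickPhoto() {
            let picker = UIImagePickerController()
            picker.sourceType = .photoLibrary
            picker.delegate = self
            present(picker, animated: true)
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [String: Any]) {
            let chosen = info[UIImagePickerControllerOriginalImage] as? UIImage
            picker.dismiss(animated: true)
            _ = chosen // only this one image crosses the app boundary
        }
    }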


For the longest time that was how I thought it worked :/


Indeed. Intent counts for nothing if the capability to secure the data is lacking. All companies claim to be secure, to respect your privacy yadda yadda...


> Now thinking about co's like Facebook ...

This is partly why I am not installing Facebook or Messenger on my iPhone X. Additionally, both apps are massive time wasters for me and, last time I checked, collectively take up a nontrivial amount of space on device. While I haven't been able to entirely exterminate Facebook from my life, restricting myself to accessing it from a "real" computer only has been most refreshing.


> co's like Facebook that not only have access to far more imagery of faces tied to sentiments and moments but have shown time and time again that privacy is a secondary concern

It's only a secondary concern in the sense that they're trying to find new and innovative ways to violate it. If it weren't for that, it wouldn't be a concern for them at all.


Are you sure? I mean, I know it's tinfoil, and the refusal to open phones for the FBI debunks it, but we've given Apple a database of our fingerprints and now our facial IDs, as well as constant real-time location data of our whereabouts.

It's probably not abused, but how can we really know in the post-Snowden world?


Apple has repeatedly stated that Touch ID and Face ID data does not leave the device, including for backups.

“Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.” (https://support.apple.com/en-us/HT208108)

“[Your fingerprint data] can’t be accessed by the OS on your device or by any applications running on it. It's never stored on Apple servers, it's never backed up to iCloud or anywhere else, and it can't be used to match against other fingerprint databases.” (https://support.apple.com/en-us/HT208108)


Well, 10 years ago I wouldn't have believed that my turned off computer would be used to film me by my own government.

These days, I don't trust any tech company.


Sure I'm sure. I trust Apple more than Facebook or Google, and I still don't want Apple to have my facial ID info.


The point that the article makes is that Apple aren’t doing much to give users control over third-party access to their “faceprint”, or what it is ultimately used for.


Bait.

And switch.

"Oh, but they won't" is not any kind of safeguard at all. We have no idea who will own Apple in 10 years' time; the entire management team could be changed. Even if you trust Apple now, which is probably naive, you're trusting all possible future Apples. That's just crazy. Insert any company, large or small, in the place of Apple and it is precisely and exactly the same. Apple are no different.


You know how some F2P games will allow you to watch video ads in order to e.g. earn extra in-game currency, activate a point multiplier, etc?

As it stands, you can start one of these ads and then turn your phone upside down, or look away. How long until an advertising provider makes use of the attention API and makes it so that you can't look away? Seems bleak.


"15 Million Credits" from Black Mirror vibes there.


This came to mind for me, too. Haha.


As far as I know there's no "Attention" API. It also seems intentional that Apple has omitted an eye-tracking API (even though it seems like that would be trivial for them to add from a technical perspective).

Is there anything stopping them from doing this with the front-facing camera on existing phones?


Apple would probably reject apps that ask for camera access just to track user attention, and even if they didn't, asking for camera access in an app that doesn't have a reasonable reason to want it is a huge red flag and will get the app uninstalled by a lot of people.


So come up with some legitimate use for camera access (e.g. simple AR level in game) and also use it for ads?


Are any apps already doing this? Facial recognition isn't exactly exotic technology, and pretty much all phones have a front-facing camera these days.

Face ID really doesn't change that.


I'd like to see them try, really. I am pretty sure that people would just not play these kinds of games, as it would be super annoying.


Reject such F2P games. There's lots of F2P games out there.


I called this on the day they announced all this stuff. It's definitely going to happen.


Clickbait headline. Apple isn't doing what the headline implies. The headline's hype also does not remotely match the contents of the article. Which leads me to discredit the article in general and look for a more reliable source on what is actually happening.


Let’s not jump to conclusions here based on assumptions and ignorance. What is likely being shared is a generic wireframe with pose and expression information. Not a face fingerprint as some are breathlessly calling it.


A precise wireframe could be as identifying as a fingerprint... If it has a very high mesh count, it is just a high-resolution 3D model of your face: much more information than what is required to face-ID you.

Look at this model for example; it has a medium-density mesh: http://image.shutterstock.com/display_pic_with_logo/279553/1...


Not sure why this is being downvoted. I haven't looked into the details of this case, but the more detailed the wireframe mesh the faces are being fit to, the more possibilities there are, and the lower the likelihood of collisions between two people. Think of it as a hashmap with the number of buckets being determined by the mesh quality. At a certain point, a few collisions won't hinder apps from capitalizing on the information.


Apple is very popular among the HN crowd. Any comment critical of Apple or skeptical of their intentions will be immediately rejected by a substantial number of readers.


So you mean to tell me that an app that gets camera permission can get at the things seen by the camera? Oh no!


Isn't this a bit like saying that Apple is sharing your photograph with apps [that use the camera]?


What’s interesting is that on the previous security method - fingerprint - no data was shared with apps.

If your face is now a security feature and the data is being shared with apps, that sounds like a security leak.

If an app collects your face data, can that data be subpoenaed by a court to attempt to unlock your phone? If a court could work with an outside party to use subpoenaed face data to 3D print your face, could they try to use that to unlock your phone?

I’m satisfied with the fingerprint scanner on my phone. I don’t feel like I need the change in tech. I understand if you’re really concerned about security use a passcode only, but it’s still true that the new face unlock is “differently secure”.


The thing is, I don’t think the 3D scanner really introduces that much functionality beyond what could be done with any normal front-facing camera and software. I’d bet that a few seconds of normal movement would provide enough parallax to build a decent 3D model. In fact, the Apple APIs may not even use the 3D scanner, given that Animojis apparently work with the scanner covered up (and don’t work in the dark).


>the Apple APIs may not even use the 3D scanner, given that Animojis apparently work with the scanner covered up (and don’t work in the dark).

Animoji uses all of the front sensors - RGB and TrueDepth.

From https://www.imore.com/yes-animoji-uses-truedepth-camera-syst...:

> the TrueDepth camera system captures a crude depth mask with the IR system and then, in part using the Neural Engine Block on the A11 Bionic processor, persistently tracks and matches facial movement and expressions with the RGB camera.


But face data is not shared with apps for security/unlock purposes.

Access to the live face data stream is provided for features like silly Snapchat filters ("turn your face into a lion" type crap) which the user may opt in to.


The problem is, apparently, the opt in question is "do you want to allow this app to use the camera" not "do you want to allow this app to use your face 3D measurements data."


I have a feeling that the average lay user might actually find the camera demand more invasive than the 3D measurements. In one they'd feel that the application is getting their picture, while in the other, just some face-contour blah blah.


The scariest part about this tech is not what my phone will do with my face, but what other people's phones will do with my face. Facebook, Snapchat, and Instagram were bad enough. Am I going to have to start wearing a mask in public just so I don't have my face tracked, sold to the highest bidder, and left in unsecured databases for hackers to obtain?


I can appreciate the desire to limit the proliferation of permission dialogs, but this seems like a case where the implications are different enough to warrant a separate one from the camera dialog.

This has some similarities to the location permission dialog, which was updated in iOS 11 to differentiate between "allow always" and "allow while using the app." Perhaps the camera permission dialog could be updated to "allow continuous access" and "allow when taking photos."


The problem here is more or less awareness of sharing, or how to prevent accidental sharing of face data. I can see sharing to some photo-editing app in anonymized fashion being useful for certain things, and better than, say, using a flat photo from Instagram or Facebook, but sharing faces on social media is a lot more intentional than a small and hidden agree-to-share-with-app button.

I think a better solution is, when an app specifically requests face data, a 2-3 second mandatory decision time with the default option set to off, prompting this decision in a different UI from the classic permission-request dialogue. That way the user knows the request is different and is given time to actually decide before accidentally tapping agree.


This article has an image comparing iPhone X mesh resolution to other 3D depth-sensing cameras, https://www.linkedin.com/pulse/warby-parker-should-you-worry...


I mean, didn't they demo the wireframe masks for Snapchat during the keynote? Didn't that imply that apps can see your facial wireframe?


Whaaaat? Does Apple explain the details of this? Like, at what moment do they share that data with other apps?


Is it controlled by app permissions?


Yes, same as the camera record permission (NSCameraUsageDescription).
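
In other words, the whole gate is the ordinary camera prompt. A minimal sketch, assuming iOS 11 and an NSCameraUsageDescription entry in Info.plist:

    import AVFoundation

    // One prompt covers both the RGB stream and the ARKit face mesh;
    // there is no separate "face data" permission.
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if granted {
            // The app can now run an ARFaceTrackingConfiguration session
            // and receive the fitted face geometry on every frame.
        }
    }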


This is 100% likely to be sent to advertising companies who will 100% attempt to use it to track you near their billboards et al.


And to develop their own new "fingerprints" and "big data" bases about who "owns the eyeballs".


Notice the deafening silence.


As I understand it, apps will have access to emotions and other abstracted info about the user, but not the "face" as a reproducible element. This will be interesting for engagement analysis.


> This will be interesting for engagement analysis.

Of course it will, which is why this is another very troubling development in phone software. It's bad enough that most (all?) interactions are logged and data mined in a modern app. Soon emotional data will be too.


Would you please not post unsubstantive comments? There's no more information content in this than silence has.


Give it a moment, everyone is in scrum.


Back from my scrum. It will take a lot to move me away from Apple products, but the continuation of issues from last week isn't helping at all. I don't own the iPhone X, and now it's not even a thought. The entire pitch of Face ID was that it was x times more secure. You can't just give away that data.


I'll certainly have to give this more thought, but I do agree that last week's issues, and well, iOS and MacOS in general, are not up to the quality that Apple users (myself included) expect nor that Apple themselves portray.


Time to switch to Android.


Situation is quite bad for all smartphones.

But for Android there is at least the f-droid.org repository, made of apps automatically compiled from source code. I prefer using these apps. And there's the fact that you can install what you want. On iOS you are locked into a censored App Store and have to pay to get a certificate to run your own app.

There is also a small chance for someone to develop an Android phone with fully open software in the future. There is 0% chance to have an open iPhone.


Not 100% true. On iOS you can install your own apps from Xcode for free; you have to pay the $100/year developer fee to distribute them on the App Store.


So let me get this straight: you can unlock your iPhone with your face and Apple is giving away a free high-quality 3D scan of your face to anyone who wants it, who may or may not also own a 3D printer?


You don’t have it straight. That isn’t what’s happening.


Something alarming occurred to me: Apple has to be saving all of the faces of people who come into Apple stores and do the Face ID demo. There's no reason for them not to do that, they'd be leaving good data on the table otherwise.


On the contrary, Apple explicitly would never in a million years want to do that. That's a massive liability with no upside whatsoever.

Google and Facebook and other such companies treat personal data as an asset and try to collect as much as they can.

Apple treats personal data as a liability and wants to have no more than they need to operate.


What would be so good about the data?


So instead of asking Apple API to authenticate a user via FaceID, apps are going to obtain this data themselves and record it forever? I remember Steve talking on stage about how Apple would always make the operating system retain control of sensitive sensors and only give the result to user-approved apps(asking for permission every time). That's all thrown out of the window now?


This is not FaceID data, nor is it a replacement for it - the data available to apps is less granular than that used by FaceID, and is designed to give app developers the ability to create things like Animoji or any other creative uses of the 3D sensor data.

>I remember Steve talking on stage about how Apple would always make the operating system retain control of sensitive sensors and only give the result to user-approved apps(asking for permission every time). That's all thrown out of the window now?

Why would you think that? Apps do request permission for this, and it can be revoked.


What is this 'facial wireframe' this Jeff Bezos owned newspaper is speaking about? Looks invented just to write this article.


It's unclear whether he's talking specifically about the ARSCNFaceGeometry[1] or the ARFrame.capturedDepthData[2] or some combination of both.

[1] https://developer.apple.com/documentation/arkit/arscnfacegeo...

[2] https://developer.apple.com/documentation/arkit/arframe/2928...
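
For the curious, here's a rough sketch of pulling both from a running face-tracking session (iOS 11; the inspect function is mine):

    import ARKit

    func inspect(_ frame: ARFrame) {
        // 1) The fitted template mesh behind ARSCNFaceGeometry.
        for case let faceAnchor as ARFaceAnchor in frame.anchors {
            print("mesh vertices:", faceAnchor.geometry.vertices.count)
        }
        // 2) The coarse depth buffer (~15 fps per the discussion above);
        //    only present on frames that actually carry depth data.
        if let depth = frame.capturedDepthData {
            print("depth map:", depth.depthDataMap) // a CVPixelBuffer
        }
    }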


The FaceID camera is infrared and can see in the dark [1].

According to the Post "Once you give it permission, an active app keeps on having access to your face until you delete it or dig into advanced settings. There’s no option that says, “Just for the next five minutes.”"

So a random developer can (kind of) look at us when we turn the light off. Not only that, those cameras also work in sunlight. You can go on YouTube and search "clothes transparent to infrared". It doesn't show much, especially considering that the app is not getting a real image, but still, somebody will be uncomfortable with that.

However, there are also legitimate and interesting applications for infrared cameras, and they've been on sale for a long time. What's changing is the number of those cameras in circulation.

[1] https://www.macrumors.com/2017/09/13/how-iphone-x-face-id-wo...


"Once you give it permission, an active app keeps on having access to your face”

Isn’t the definition of an active app one that is open and in the foreground?

The minute the phone sleeps, or the app is put into the background, access becomes more at Apple’s discretion.

I’d expect an app running in the foreground to be able to keep using the camera as long as needed.

Or is your point more: allowing once allows it every time the app opens, with no option to repeatedly prompt?


I have no direct experience of iOS, but according to the post it's as you wrote. It could be an app that needs the camera once for some sensible reason; then it can use it forever while active, even if it doesn't need it from the point of view of the user.


Users might notice the battery penalty.


App developers aren't given raw access to the IR camera though.


The permissions model is the same as the camera's.

I think the visual-spectrum cameras provide _a lot_ more sensitive data than the rough facial depth data. What does your phone “see” on a daily basis? Have you taken your phone into the bathroom with you lately?

Being able to handle the dark doesn't seem like that big a deal — most of the time people use their phone in well lit areas, and if they aren't, their phone generally lights up their face.


Tell me again how FaceID will be "so much more secure" than TouchID - when Apple itself is sharing those 3D facial profiles with third-party vendors (and governments).

In practice, fingerprint readers should still remain significantly more secure than any facial recognition technology.


Apps get a coarse depth map - this cannot be used to replicate your face in perfect 3D, nor can it be used to bypass FaceID.


I'm not a security expert, but this seems like giving out a coarse approximation of my username (one that has no password protection). I would think it is significantly easier to break into FaceID starting with the coarse depth map than with no depth map at all. But again, I could just be out of my depth on this topic, and the difference isn't that big of a deal?


It's possible, but I 3D-printed the face mesh from their API and it didn't work to unlock Face ID on my phone.

It did a pretty good job of scanning my face though: https://twitter.com/braddwyer/status/930594896523567104


I admit that I am pretty ignorant to the really technical aspects of security but it seems that using authentication data for anything other than authentication is poor practice. I can understand the desire to use facial stuff to make interesting technology but I would prefer knowing that it is only used for the purpose of unlocking my device and nothing else. It seems fingerprints are less interesting for other apps so there wasn't the same motivation to share it.

Am I being an alarmist or is it reasonable to be concerned about this?


There's a missing link between "third-party apps have access to the depth sensor data" and "depth sensor data is used to identify your face for FaceID". The iOS biometric authentication API is basically just a call for the OS to check your information and return whether or not it succeeded [1]. Third parties can't just take the biometric data and use it to bypass your authentication without having physical access to your phone.

[1] https://developer.apple.com/documentation/localauthenticatio...
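
For anyone unfamiliar, that API boils down to a yes/no callback; a minimal sketch (the reason string is made up):

    import LocalAuthentication

    let context = LAContext()
    var authError: NSError?
    // The app never receives biometric data -- only a boolean verdict.
    if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                 error: &authError) {
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your notes") { success, error in
            print(success ? "Authenticated" : "Denied: \(String(describing: error))")
        }
    }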



