This is what has been missing from every discussion of this issue that I've seen so far.
The face scan isn't "insecure" even if you're worried about border searches. Just turn off your phone when you get in the security line! A PIN is required on startup.
A PIN is also required when the phone is plugged into a new computer.
The rest of the time when you're going about your daily life, and are not worried about a government agent spoofing your face or pointing the phone at your face, you can use this nice feature.
Most people will be _less_ secure without it. They don't want to punch in a PIN every time they tap their phone to pay for coffee. So without the face scan feature, they will opt for no security at all.
The reboot/plug-in PIN requirements change the discussion quite a bit, but are usually ignored, seemingly so bloggers can state the obvious "but someone can spoof your face!"
As far as border searches go, border officers have the authority to request your PIN just as they have the authority to request your thumbprint/faceprint/etc. If you don't give it to them, you can be detained and/or your phone confiscated. Rebooting your phone won't help.
Personally, I would refuse to unlock my phone. My privacy and upholding civil liberties is worth being detained for a few hours (or even days).
Of course, how you prove you didn't cross the border is an open question. But you can in fact refuse the search on that claim. I suppose they may detain you then.
To clarify, US law pushes police right up to the edge. There is a preference for false positives rather than false negatives: it's more important to catch all of the criminals than it is to avoid inconveniencing some innocent people. The risk of letting one criminal go is seen as much greater than the cost of detaining a doctor for a couple of hours.
Now, we're in this weird time where that doctor can have 20 years of HIPAA-protected medical records in their pocket that they might be forced to disclose. Historically, that doctor may have had some records in a briefcase, but not tens of thousands.
They can even nab American citizens for drug possession:
"One of the people arrested was a U.S. citizen who fled the checkpoint and led the police on a five-mile chase. The unnamed man was arrested and charged with three felonies, including reckless driving, possessing a controlled substance, and endangering the welfare of a minor."
But even if you're right, this doesn't change my argument. Most people are less secure most of the time in the absence of biometric authentication. Because without it, they will opt for zero security. You can always use the pin in addition, for whatever that's worth.
regular face: regular unlock
right eyebrow raised a tiny bit: hide my sensitive stuff from a casual search*
*and after a few minutes, if I don't deactivate it, start deleting.
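A minimal sketch of that hypothetical duress-gesture feature, assuming made-up gesture names, modes, and a deletion grace period (none of this is a real API):

```python
import time

# Hypothetical: "regular face" gives a normal unlock, a subtle eyebrow
# raise gives a decoy unlock that starts a deletion countdown unless
# deactivated. All names and thresholds here are illustrative.

DELETE_GRACE_SECONDS = 5 * 60  # "after a few minutes, start deleting"

def unlock(gesture, now=None):
    """Map a recognized face gesture to an unlock mode."""
    now = time.time() if now is None else now
    if gesture == "regular":
        return {"mode": "full", "delete_at": None}
    if gesture == "eyebrow_raised":
        # Decoy mode: sensitive data hidden from a casual search; if the
        # duress mode isn't deactivated in time, deletion begins.
        return {"mode": "decoy", "delete_at": now + DELETE_GRACE_SECONDS}
    return {"mode": "locked", "delete_at": None}
```

The interesting design question is the grace period: too short and a nervous owner wipes their own data, too long and a careful searcher finds the sensitive material anyway.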
Your phone can easily be restored from an iCloud backup once you've got access to wifi. If there's data you don't want seen and you can somehow be compelled to do a restore, you can always restore from another device or backup.
"iTunes backups are still there for iOS devices, but performing a restore won’t transfer your apps from your Mac, but will instead download them over the Internet from Apple, which is, of course, likely to be slower."
My Android forces PIN entry after 5 wrong fingerprints.
So if you enrolled only one finger, chosen at random, an attacker trying five different fingers has a 50% chance of getting in.
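That 50% figure checks out: five distinct guesses out of ten equally likely fingers hit the enrolled one with probability 5/10. A quick sketch of the arithmetic (the function name and defaults are mine, just for illustration):

```python
from fractions import Fraction

def guess_probability(fingers_total=10, fingers_enrolled=1, attempts=5):
    """Chance that `attempts` distinct, randomly chosen finger tries hit
    one of the enrolled fingers, out of `fingers_total` equally likely
    fingers."""
    # Probability of missing on every attempt, drawing without replacement.
    miss = Fraction(1)
    remaining = fingers_total
    unenrolled = fingers_total - fingers_enrolled
    for _ in range(attempts):
        miss *= Fraction(unenrolled, remaining)
        unenrolled -= 1
        remaining -= 1
    return 1 - miss

print(guess_probability())  # 1/2
```

The product telescopes: (9/10)(8/9)(7/8)(6/7)(5/6) = 5/10, so the hit probability is exactly one half.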
I am not saying that I wouldn't mind at all if my phone was searched. But I can't think of anything in particular that I would be concerned about if it was. Sure, in theory the agent could remember some personal information and come back later and use that info, or pass it to someone for some nefarious purpose. But that's a pretty small-chance event. It's more or less the equivalent of thinking you will get sick because you find a hair in your food at the restaurant. You don't like it, and you send it back or demand a refund, but the actual harm is more mental in nature.
Or what am I missing here?
How do I know the agent isn't downloading the naked pictures of my wife? (There are lots of reasons to not keep such pictures on my phone, but "Because border agents may see them" should not be one of them)
And there's just the futility of it -- if I really had something to hide from the government, I wouldn't keep it on my phone (or if I did, I'd keep it hidden).
the actual harm is more mental in nature.
That doesn't make it any less real - the government should not make me feel violated.
Bikkannavar says he was detained by US Customs and Border Patrol and pressured to give the CBP agents his phone and access PIN. Since the phone was issued by NASA, it may have contained sensitive material that wasn’t supposed to be shared. Bikkannavar’s phone was returned to him after it was searched by CBP, but he doesn’t know exactly what information officials might have taken from the device.
I think the reason people focus on border searching so much is that it's presumed to be one of the easiest entry points for the government to spy on you.
Many security researchers, such as people who work on Tor and related technologies, have had their electronics searched and sometimes seized at the border. That makes it particularly relevant to the HN crowd.
For me, I seriously doubt the government would ever be interested in my data. But imagine that the CEO of my company is suspected of some kind of crime. Perhaps when I am at the border they try to image my phone, in order to get access to things which may allow them to control other company assets (servers, etc), which would get them closer to their target.
I doubt it would ever happen to me, but I'd rather take precautions. Being searched at the border seems much more likely to me than, say, the NSA/FBI targeting me remotely with malware.
The border agent accused her of prostitution. He wanted to get into her phone to see her latest Facebook Messenger and Tinder correspondence. She let him because, of course, it's easier than canceling her entire vacation plans.
Now, the tips in this thread wouldn't have helped her. But do you see how it's not just "I don't have anything to hide?" What about not letting some border agent power trip all over you? Reading your Tinder messages for fucks sake?
One such instance included a British guy who had shared a Facebook post recommending giving aid to refugees; Dubai police put him in jail over it. That was over a year ago, and I believe he's still there. Another instance involved a microscopic amount of cannabis found on the sole of a passenger's shoe; he got almost 4 years in prison.
There are many, many other instances. If locking your phone stops them from even chancing it, it might be worthwhile.
Most first level TSA/DHS people measure the cost/benefit of my assertion and back off.
I'd personally much rather the State fundamentally respected a right to private and family life, much like article 8 of the ECHR, but fat chance of seeing that in the USA any time soon.
TouchID moved the ball forward quite a bit, and FaceID will probably go even further.
Obviously neither provides ultimate security, but Apple is at a strategic advantage since they make both the hardware and the software. You can make the barn walls and roof super secure, but it does nothing if the front door is left open.
With TouchID, I have a complex passcode that I have to enter a couple times a week. It's less secure than some hypothetical setup where I have a complex passcode I have to enter every time I unlock the phone, but it's far more secure than what I was actually doing before.
I think that strikes a nice security median. If someone does get possession of my phone, I only need to stall for less than 24 hours.
The rest of the time, the fingerprint scanner works near perfectly. It's actually faster to use the fingerprint scanner than the standard slide to unlock, which is all I ever had setup on my previous phones.
Can I configure my iPhone to require TouchID, FaceID, and a PIN every time I unlock it?
They are though, if bad == insecure. Customs can make you unlock with fingerprint or face. If you can't lock yourself out, it's not secure.
Because they fall in the category of "will be used" as opposed to perfect security, which almost always falls in the category "won't be used"
(a) the world was only US citizens.
(b) nothing unconstitutional ever happened to the first group.
Allow me to repeat myself,
If you can't lock yourself out, it's not secure.
For example, some banks have time locks. Nobody gets into the vault unless it is within a certain window of time.
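A toy illustration of that kind of time lock, with a made-up daily window (the point is that access depends on the clock, not on who is asking):

```python
from datetime import time

# Bank-style time lock: the vault opens only inside a fixed window,
# regardless of credentials. The 9:00-10:00 window is just an example.
OPEN_FROM, OPEN_UNTIL = time(9, 0), time(10, 0)

def vault_opens(at):
    """True only during the configured window; everyone is locked out
    the rest of the time, including the legitimate owner."""
    return OPEN_FROM <= at <= OPEN_UNTIL

print(vault_opens(time(9, 30)))  # True
print(vault_opens(time(2, 0)))   # False: even the manager is locked out
```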
It's not secure if you can lock yourself out either.
A court could hold you in contempt for failing to unlock your phone, or for intentionally locking yourself out.
And a state actor, or even a mugger, could just hurt or even kill you in frustration if you don't open it for them.
This has been a trend of a vocal minority on the internet for as long as I've been connected to it. Remember "No wireless. Less space than a nomad. Lame"?
I hate linking to that site but this will always be golden: Apple's New Thing (a post from 2001) 
Fortunately Sony already has a patent for this exact scenario. Maybe that will deter Apple from building such a feature (lol).
I want to be able to set touch or PINs for certain apps, so that I can have multi level security. Why is there so much emphasis on one master password/touch ID/face ID instead of having multiple security checks?
- Your username is who you think you are.
- Your second-factor (faceprint, thumbprint, keyfob) is who you claim to be.
- Your password is your proof.
But even with just a 4-digit PIN, most people didn't have PINs set up on their devices, so increasing friction will not result in more security.
If you want max security now, you can simply turn off TouchID or FaceID and use a very long alphanumeric password.
Would be interesting to see whether username-less public systems suffer fewer intrusions. Some users would record their passwords in unsafe places, but they couldn't reuse a password, eliminating credential stuffing as an attack vector.
Of course, if you allow password resets you'd still have a kind of "backup" username presuming a person's email address could be used for the reset. But that wouldn't significantly weaken the security, arguably.
How bout: Touch ID + Face ID. Hahaa
I think the only way for Touch ID to come back is for it to cover the whole display, so it can authenticate every touch.
The phone could constantly monitor both Face ID and full-screen Touch ID at the same time, and if it detects anything funky going down it can panic and lock. Would close down attacks where someone grabs your phone off you while it's unlocked and quickly disables security.
Touch ID only works on the first try about half the time for me, I would love to bring that experience to all my interactions with my phone.
That being said, I do think that there could be a legitimate use case here. One could set up a particular "emotion" (a face pattern) associated with someone forcing them to unlock a phone using their face. I mean, if someone pulls a gun or a knife on me, I'll probably just do as they say and look at the phone, rather than risk an additional hole in my body. But unlocking a phone and sending a distress call is something I could live with.
I think that could be a nice feature, but it would add stress to a situation where you should just be focused on staying alive: trying to remember how to do that special thing or enter an alternate code.
You are right, though, that this requires some "friction" and probably some self control.
Might not be completely universal though...
Much appreciated to the original author - it takes a good deal of time and effort to write something that lucid. Thanks.
If someone has the PIN and the phone, they can get in without the person (without their biometrics). Fingerprints and face recognition increase the chances that an abusive spouse needs the other person present every time they access the phone.
Parents who have their children's passwords are in the same situation -- they can't snoop on their kid's biometrically secured phone (like reading a kid's diary in the old days). They have to have the kid open the phone, which means the kid knows it's happening.
This is such a stupid attitude. It is your way or nothing, right?
It's also useful to have a norm against doing these things even if you have 100% trust, because it's probably the case that people are too readily fooled (by themselves or a manipulative partner) into thinking they have 100 % trust when they actually don't. So having a norm against sharing everything protects the people who need it, even if you're not among those people.
No they don’t - you have always been able to use a PIN instead of TouchID, so knowing the PIN still works just as well with or without it.
http://nfcring.com is an example of what I have in mind.
What I'd like to see is this tied into an identity system, such that the ring (or other very-hard-to-misplace, but replaceable and discardable) token is not itself an identity, but rather an access token to an identity store which can present any given identity to any given system.
That might be a consistent identity across multiple sessions or unique identities on each session. The identity might be tied to some central certifying agency (e.g., a motor vehicles department or national pensions fund), or not.
There are several elements of this which I'd like to see developed further, including how keys might be reconstructed or recovered using a quorum system of trusted sources (divide your key into pieces, share those amongst friends, family, or some local authority, such that key loss need not equal data loss), and possibly via law enforcement.
I'm also looking at the possibility of a public ledger system which might allow for both workfactor requirements and public disclosure of keys being revealed. This may be a viable application of crypto, though I'm not entirely sure of this.
(The feature might also be optional -- you could take the risk of key loss, or allow for recovery. But the present situation with PKI of losing access to all previously-encrypted data in the event of key loss would be mitigated.)
There's also the requirement for devices to have support for near-field readers. I'm told this is already largely a reality, though my reading of specs for various mobile devices suggests otherwise.
The biggest challenges through all of this are not the technology itself, but the adoption, requirement, and enforcement of standards, including availability of tokens at low or no end-user price. Trust of the information ecosystem overall might be a suitable incentive for this to happen.
Stealing physical data itself is far harder than password appropriation.
And a PIN or password / passphrase, plus rate limiting, might still thwart scale attacks.
You're raising attack costs significantly.
Possible to countermeasure as well.
The way I'm envisioning it:
1. Physically touch the object you want to authenticate to. (E.g. Computer, payment terminal, smart lock, etc.) Watch uses capacitive coupling to bootstrap a Bluetooth connection to that device.
2. Device requests authentication & authorization from Watch.
3. Watch either authenticates you instantly (for lower security applications), or requests you to confirm the transaction with your fingerprint/face/PIN (higher security applications)
This method would also enable a lot of other neat tricks to further increase security, like checking your heart rate and refusing to authenticate if you're asleep or the watch isn't strapped to your wrist anymore, requiring additional authentication methods to unlock your watch after you take it off, displaying the dollar amount for monetary transactions on your watch when asking for approval, etc.
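The three steps above boil down to a challenge-response between the device and the watch. A minimal sketch, assuming a shared key provisioned at pairing time and purely illustrative function names (this is not any real Watch API):

```python
import hashlib
import hmac
import os

# Hypothetical watch-as-authenticator flow. The shared key would be
# provisioned between watch and device when they are paired.
SHARED_KEY = os.urandom(32)

def device_request_auth():
    """Step 2: the device issues a fresh random challenge."""
    return os.urandom(16)

def watch_respond(challenge, high_security, user_confirmed, on_wrist):
    """Step 3: the watch answers instantly for low-security requests, or
    only after on-watch confirmation (fingerprint/face/PIN) for
    high-security ones. Refuses entirely once taken off the wrist."""
    if not on_wrist:
        return None  # watch removed: require re-authentication first
    if high_security and not user_confirmed:
        return None
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def device_verify(challenge, response):
    """The device checks the watch's answer against the shared key."""
    if response is None:
        return False
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Using a fresh random challenge per request is what makes replaying a snooped response useless, which matters if the radio leg of the exchange is Bluetooth rather than the short-range capacitive channel.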
The problem with longer ranges, even just a few cm, is the prospect of snooping or triggering unintended authentications. My preference would be mm range.
The advantage of using a watch (or another device with a built-in screen) is that it avoids exactly that problem. When you authenticate, you have to physically press a button on the device, and the screen tells you precisely what it is you're authorizing.
Using capacitive coupling as the initial communication channel would also help with that, since you'd have to actually touch the object you want to authenticate to with your bare skin.
Do you have any refs on capacitive coupling? Is that essentially what touchscreen devices use? How does it fare in exposed/outdoor environments? I'm thinking of wide applications, and something that couldn't operate at, say, Tokyo Subway levels of use and demand wouldn't be particularly amenable to them.
(That's tabling the discussion of whether or not you'd want to have per-use charges for transit use or want to offer that as a public service, or only filter based on individuals, etc.)
Field range might be set by speed-of-light delays: roughly a nanosecond per 30 cm (about a foot).
Given a 4 GHz clockspeed, your time resolution is about 0.25 nanosecond, or 7.5 cm -- call it 3 inches.
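The arithmetic behind that figure, as a quick sketch (the function name is mine):

```python
C = 299_792_458  # speed of light in m/s

def range_resolution_cm(clock_hz):
    """Distance light travels in one clock period, in cm. This bounds how
    finely a round-trip timing measurement can resolve range."""
    return C / clock_hz * 100

# One 0.25 ns tick at 4 GHz corresponds to roughly 7.5 cm of travel.
print(round(range_resolution_cm(4e9), 1))
```

Note this is one-way travel per tick; a practical distance-bounding protocol measures a round trip, so the per-tick *range* resolution is half that again, which only strengthens the case that clock speed isn't the limiting factor for mm-range enforcement.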
There are a few other flags raised about that particular implementation, though the concept itself is the key point. The idea of a signet ring to authenticate, sign, access, pay, claim, and/or decrypt seems useful.
Maybe this use case isn't common, but my wife frequently needs access to my phone. Usually while driving, to change GPS routing, or playlist, or respond to SMS. With TouchId, she can do so without my PIN. With FaceId, she needs my PIN.
This strikes me as both less secure and quite annoying. Now, I have to repeat my PIN out loud while she types it into the device. Or, force her to memorize it (in addition to her own PIN, and I have to remember hers for the reverse situation).
Have you used it?
edit: how could I forget, it's the little-known app called Alipay. /s
This is a serious question: if there was nothing wrong with TouchID, then why was it removed from the new phone and replaced with FaceID?
I'm also concerned about the data Apple will collect. I assume information about your face has to be very detailed for FaceID to be secure. Is it really that much of a stretch to see an article in 6 months: FBI got a copy of the whole Apple face database and was able to identify and find a very dangerous criminal?
> Will Apple continue this progress and build in PinchID - a tiny needle that stings you to test if you are you, based on your blood/DNA?
I struggle to believe you when you say that's a serious question...
> I'm also concerned about the data Apple will collect.
The FaceID data will be stored in the secure enclave locally on your phone, just like TouchID data was. Apple will not collect it.
The Pixel handset has the fingerprint sensor on the back of the phone. It appears to work quite well. Much of this needless outrage could be obviated by allowing multiple simultaneous biometrics for auth; while taking the phone out of your pocket, place your finger on the sensor to initiate FaceID.
How would I use my phone on a desk without lifting it up? I can do that with TouchID, but if the sensor is on the back, I can’t.
That's very un-Apple.
Twins have different fingerprints.
You have to decide which type of biometrics to use when you choose your model, but that's fine for a lot of cases (such as those who wear burkas or those who have no fingerprints).
Hopefully the return period gives people enough time to be sure they've made the right choice.
I haven't tried training it with a wet fingerprint.
I find this humorous because for me, an Australian, San Francisco is quite cold and I wore gloves there.
And the touchscreen?
why would you unlock your phone if you're not going to look at it? I don't understand this argument.
I easily read any messages by glancing at the phone, never coming into any decent imaging range.
My phone can see the very corner of my head and about half of my eyebrow. I do not want that to be enough to unlock my phone.
Compare this to Touch ID, where you hold your phone to that box with your finger on the sensor and half a second later you've paid - very convenient.
I would guess that Apple has not yet tested the iPhone X with European payment terminals and will get a lot of angry emails once people try that combination.
PINs as discussed are not directly comparable.
You also wouldn't have to look so paranoid while entering the PIN, and the PIN by itself would be of little value.
For 'extreme' security situations, you might as well just have a long secret PIN and no biometrics.
longpin = unlock
I think you're right that the number of people who would use such a system is trivial (compared to sales).
Apple needs to create a system where stolen phones can be reported to them; Apple can then contact the owner to verify they are stolen, add them to a stolen list, disable calling/apps on those phones, and display an overlay on the screen: THIS PHONE IS STOLEN.
Every iPhone would come with a validate-phone feature, accessible even when locked, that can authenticate the iPhone for anyone thinking of buying it.
The potential buyer can check whether the iPhone is stolen by using this feature, which is allowed to connect to the internet and validate the phone.
They need to make it so stolen iPhones are worthless, so when you are getting mugged, criminals won't even want one.
Obviously there should be an option to transfer ownership of your phone, maybe with a 7-day waiting period.
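The proposal above amounts to a central registry with two queries and a cooling-off period. A minimal sketch, with all class and method names invented for illustration (this is not how Apple's actual Activation Lock works):

```python
from datetime import datetime, timedelta

class StolenRegistry:
    """Hypothetical stolen-phone registry: verified theft reports, a
    buyer-facing lookup, and ownership transfers gated by a waiting
    period so a victim has time to file a report."""

    WAITING_PERIOD = timedelta(days=7)

    def __init__(self):
        self.stolen = set()             # serial numbers verified stolen
        self.pending_transfers = {}     # serial -> datetime transfer completes

    def report_stolen(self, serial, verified_owner):
        # The vendor contacts the registered owner to verify the report
        # before flagging the device.
        if verified_owner:
            self.stolen.add(serial)

    def is_stolen(self, serial):
        """What the locked phone's 'validate' feature would query online
        for a prospective buyer."""
        return serial in self.stolen

    def request_transfer(self, serial, now):
        self.pending_transfers[serial] = now + self.WAITING_PERIOD

    def transfer_complete(self, serial, now):
        due = self.pending_transfers.get(serial)
        return due is not None and now >= due and not self.is_stolen(serial)
```

The waiting period is the load-bearing piece: without it, a mugger could "transfer" the phone to themselves before the victim gets home.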
Even a DFU restore of the device won't help a thief, as the activation process will simply ask for your iCloud login and will display a "Message From Owner" that you can set at icloud.com indicating the device was stolen, making it much harder for someone to purchase and claim ignorance about the origins.
This way if you're on the subway you can have your phone out without worrying about getting robbed.
If your house gets robbed, they would leave your phones because they can't be sold or reused.
You might not buy a phone from an internet cafe that sells 2nd hand phones, but you might buy a replacement screen from an internet cafe that does repairs.
Short of visibly, physically destroying the phone, though, there's nothing you can do to prevent this. A criminal doesn't care that the phone is a brick; they'll sell it to someone and be miles away with the money before the buyer realizes the phone is useless.
Then again, this might create lots of other problems.
The phone won't do anything until the person who turned on the lock turns it off again.
Why would I want to give a third party the ability to brick my phone? If Apple has that power, then a disgruntled Apple employee can use that power and a government can compel Apple to use that power.
My computing devices are my computing devices. Only I should have the ability to do anything to them.
> It's alarming not just because the number is so low, but because Dropbox holds such valuable information for so many people.
I'd suggest that Dropbox users somewhat self-select for those less concerned about security than others, and more concerned about availability.
Dropbox does not encrypt your data server side (or at the very least, can easily decrypt it). And they have proponents of warrantless surveillance on their board:
I think claims like this need to be backed up.
Now, obviously a biased source, but Dropbox itself says this:
"Each file is split into discrete blocks, which are encrypted using a strong cipher. Only blocks that have been modified are synced. Each individual encrypted file block is retrieved based on its hash value, and an additional layer of encryption is provided for all file blocks at rest using a strong cipher. Both dedicated internal security teams and third-party security specialists protect these services through the identification and mitigation of risks and vulnerabilities. These groups conduct regular application, network, and other security testing and auditing to ensure the security of our back-end network. In addition, our responsible disclosure policy promotes the discovery and reporting of security vulnerabilities." 
So we have files that are broken apart, each block encrypted, an additional layer of encryption at rest, and lots of security auditing in-house and outside, with incentives for people who discover flaws to report them. That seems pretty industry-standard to me, but I'd like to know more.
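The block-splitting and hash-addressing part of that description is easy to sketch. A toy illustration of how "only blocks that have been modified are synced" falls out of content addressing (the function names and the simplified flow are mine; real Dropbox encrypts each block and does far more):

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative fixed block size

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split a file into fixed-size blocks addressed by their hash.
    Returns the file's manifest (ordered block hashes) and the block
    store. Each block would be encrypted before upload."""
    blocks, order = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        blocks[digest] = block   # identical blocks dedupe automatically
        order.append(digest)
    return order, blocks

def blocks_to_sync(manifest, server_known_hashes):
    """Only blocks whose hash the server hasn't seen need uploading, so
    editing one block of a big file re-syncs just that block."""
    return [h for h in manifest if h not in server_known_hashes]
```

Note the flip side discussed below: because the provider holds the block encryption keys, this design permits dedup and search but also lets the provider decrypt your data.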
I really have some difficulty imagining a company like Dropbox, which knows how important the documents it stores are, being careless with security. Not saying they may not be, but it's going to take more than an HN comment that includes some politicized perspective about the Bush administration to convince me.
Furthermore, this article claims that Dropbox encrypts files on the server even more strongly than Google does. It also points out that user behavior is usually the main security hole, which will always be true with any service.
The issue with Dropbox is that they also have access to your encryption keys, which means they can easily decrypt and access your files, at their discretion.
According to Drew Houston (Dropbox CEO), they need access to your files to offer features like search, to better understand how you're using the service, to integrate with third parties, and for law enforcement. Some of these "trade-offs" are mentioned by Mr. Houston himself in this interview, responding to criticism from Edward Snowden a few years ago:
More to the point, giving Dropbox (and their affiliates and trusted third parties) permission to access your files is a key provision of the Dropbox terms of service:
Our Services also provide you with features like photo thumbnails, document previews, commenting, easy sorting, editing, sharing and searching. These and other features may require our systems to access, store and scan Your Stuff. You give us permission to do those things, and this permission extends to our affiliates and trusted third parties we work with.
As Mr. Houston said in the article referenced above, if you want better encryption there are alternatives.
Disclaimer: I work at Sync.com
I would rather say that Dropbox is used by many people without tech knowledge. And while they might be concerned about security, they often just don't know how important two-factor authentication is. At least that's what I see with some friends & family.
I don't keep confidential stuff in Dropbox because I know the company effectively has access to everything. Nonetheless, two-factor sounds interesting. So I look at this:
Right. Now I understand why so few people have it enabled.
I use two-factor/two-step verification on every single service I have, including all social media accounts, email accounts, etc. Most major services/sites these days provide it. The way Dropbox does it is no different than any others; it takes 2 minutes to set it up. What did you find difficult about it?
GP didn't say _they_ can't enable it after reading the help page. I think they are implying that the very detailed help page looks long and complicated to a non-techie (who might not even understand the benefit of going through such a hurdle in the first place).
Do any of your other systems handle recovery like that?
That's just a security layer against phishing or password leaks...
Don't get me wrong, I'd advise everyone to use it for anything remotely critical, because it's pretty easy to set up and live with, but it really doesn't help against state actors or hackers that have compromised the data servers.
As a personal user of no special interest, no one would use a potential Dropbox vulnerability to just get your data. If you secure your end you'll be fine. That's different for big corporations or people with a public profile (e.g. politicians). In this case you have to ensure that malicious actors with a lot of money and knowledge cannot gain access. But then again, self hosting is likely less secure than Dropbox or Google.
Because if someone steals your DB password, they still won't be able to login to your account.
Maybe they won't have to, if they also stole your data and found a way to decrypt it, but since those are different things, it is plausible that there could be a leak of login information without a leak of data, in which case your two-factor authentication would keep the attackers out of your data.
It doesn't have to. For 99.999% of people, the two most realistic threats are:
* There is a keylogger on some computer where you access your Dropbox, for example at a print shop,
* You use the same password on many sites, one gets compromised, and automated bots try to access your Dropbox account.
This is actually quite interesting, since it is a bit like the CAP theorem: when you increase confidentiality and integrity, you might affect availability negatively. Take Dropbox as an example. Since they don't have an efficient end-to-end encryption offering, they can offer you password resets (availability of data is good). Add in secure end-to-end encryption and password resets won't be enough; you also need good backups of the encryption keys to ensure access to the data.