How Apple and Amazon Security Flaws Led to My Epic Hacking (wired.com)
446 points by malachismith on Aug 7, 2012 | 253 comments

For the people that want to turn on two-factor authentication on their Gmail account, here's how to do it: http://support.google.com/accounts/bin/answer.py?hl=en&t... I highly recommend it.

Some of the common misperceptions I see:

Myth: But what if my cell phone doesn't have SMS/signal?

Reality: You can install a standalone program called Google Authenticator, so your cell phone doesn't need a signal.

Myth: Okay, but what about if my cell phone runs out of power (added: or my phone is stolen)?

Reality: You can print out a small piece of paper with 10 one-time rescue codes and put that in your wallet.

Myth: Don't I have to fiddle with an extra PIN every time I log in?

Reality: You can tell Google to trust your computer for 30 days and maybe even longer.

Myth: I heard two-factor authentication doesn't work with POP and IMAP?

Reality: You can still use two-factor authentication even with POP and IMAP. You create a special "application-specific password" that your mail client can use instead of your regular password. You can revoke application-specific passwords at any time.

Myth: Okay, but what if I want to verify how secure Google Authenticator is?

Reality: Google Authenticator is open-source: http://code.google.com/p/google-authenticator/
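For the curious, the algorithm behind Google Authenticator is small enough to sketch in a few lines of Python: RFC 4226 HOTP plus the RFC 6238 time-based wrapper. (The key below is the RFC's published test key, not a real secret.)

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, for_time=None, step=30):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step)

# RFC 4226 test vector: the ASCII key "12345678901234567890" at counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Since the whole scheme is just HMAC over a shared secret, any server that knows the secret can verify the code offline, which is why the phone needs no signal.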

Hmm. Maybe I should throw this up on my blog too.

I was reluctant to set up two-factor for a long time, perceiving it to be an unnecessary hassle. Then somebody tried to gain access to some of my accounts through my Apple ID. They were unsuccessful (I don't have any common passwords these days, so managing to send a password reset to my GMail wasn't terribly helpful) but it certainly made me paranoid enough to switch.

I currently have two-factor set up on two accounts. I won't say there's no hassle involved--authorizing a machine via SMS so that I can login and generate an app-specific password is a chore--but the peace of mind is well worth the hassle.

Did you also know that you can re-assign your Apple ID to an existing e-mail address? Like, say, one you have two-factor enabled for? Now people can socially engineer Apple's flawed policy all day but they'll need to steal my phone, too.

EDIT: Also, if you don't have backups, you might as well just delete everything yourself right now and use that as motivation to prevent the same thing from happening again. Hackers, tornadoes, spontaneous combustion and ghosts are all conspiring to destroy your data sooner or later.

Have you encountered any other sites that allow you to use Google Authenticator to generate OTPs?

Part of the reason I think two-factor authentication is a usability burden is because each "identity provider" wants to use its own protocol. Google uses an Android app. PayPal sent me a card. My brokerage has a keychain token available. Other companies use a "soft" RSA token that runs on Windows. But if everyone agreed on a protocol, then I could have everything in one place, which would make two factor authentication significantly more enjoyable to use. (I know there are standards: the question is, who other than Google follows them? :)

Good question. I have seen http://drupal.org/project/ga_login for Drupal, for example. Likewise, here's a write-up about using a YubiKey with Gmail's two-factor authentication: http://static.yubico.com/var/uploads/pdfs/Howto_GmailYubiKey...

I believe Google Authenticator is based on open standards and open source, so people could standardize on it if they wanted to.

> I believe Google Authenticator is based on open standards

The standards implemented are bona fide RFCs - HOTP (counter-based) was published in 2005, no less. I'm not sure how much lower the barriers for integration could be.

Lastpass also uses Google Authenticator. You can get server-side code so you can use Google Authenticator on your own server, or use it to offer two-factor authentication to a web service you are running. It's an open protocol, with an open source implementation. All you have to do is use it.

Amazon AWS supports the Google Authenticator app.

It is an open standard protocol, but I don't know in practice how many companies have compatible implementations.

A lot of the tokens (Google Auth included) follow http://www.ietf.org/rfc/rfc4226.txt. The problem is that everyone has a different way of provisioning the secret key, some of which are unsafe (I am looking at you, Google Authenticator! Yes, a QR code is a cool idea, but there's no guarantee that you are the only one who provisions the key using the image.)
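For reference, the provisioning QR code is just an encoding of an otpauth:// URI carrying the base32 secret, which is exactly why anyone who can see or intercept the image learns the key. A minimal sketch (account label and issuer here are made up):

```python
import base64
import secrets
import urllib.parse

# 80-bit random secret, base32-encoded as Google Authenticator expects.
secret = base64.b32encode(secrets.token_bytes(10)).decode()

# This is the URI the QR code encodes; whoever reads the image has the secret.
uri = ("otpauth://totp/"
       + urllib.parse.quote("Example:alice@example.com")
       + "?secret=" + secret + "&issuer=Example")
print(uri)
```

Displaying this only once, over TLS, on a screen no one else can see, is the entire security of the provisioning step.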

I designed Authy with that in mind. I wanted a way to have a single token for all accounts. Maybe we will add support for Google Authenticator, so you could import your Google Auth token into the app.

I use pam_google_authenticator to login (via SSH) to my Linux server, and so can you.
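For anyone wanting to try the same, the wiring is roughly this (a sketch assuming a Debian-style layout; run the `google-authenticator` setup tool once per user first to generate the secret and emergency codes):

```
# /etc/pam.d/sshd -- add the OTP module to the auth stack
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config -- make sshd issue the verification-code prompt
ChallengeResponseAuthentication yes
```

Restart sshd after the change, and keep a session open while testing so you don't lock yourself out.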

Meraki (full disclosure: my employer) has two-factor authentication for their network config/admin web interface; whatever the tool used (I haven't worked on that part of our codebase), it is compatible with Google Authenticator.

There's a Google Authenticator WordPress plugin: http://wordpress.org/extend/plugins/google-authenticator/

DreamHost uses Google Authenticator.

The bitcoin exchange MtGox supports securing your account with the Google Authenticator app.

Lastpass for one supports Google Authenticator. I am not sure about any other services.

As far as I can tell, this offers no way to use Google Authenticator. Only the Facebook for Android app.

FB doesn't use Google Authenticator.

Not quite the same as two factor auth (almost the opposite in fact), but I was extremely annoyed when gmail started relentlessly asking me to add a backup email address for password resets. Had the author not had an insecure backup email address, this wouldn't have happened either. Of all the passwords I'm likely to forget, gmail ranks near the bottom. The password to login to who knows where to get the gmail recovery email? The top.

That's absolutely true. I was horrified last week when I discovered my Gmail account (with a unique 30-character long password, 2-factor enabled, NEVER used unless on my MacBook at my house) had a "backup" email address to my Yahoo account from 8 years ago, with the nice password '1123581321'. I could've killed myself.

That's amazing. I've got the same combination on my luggage.

Why do you use the term 'application-specific passwords' although these passwords are not application-specific at all?

They are Google-generated passwords with a user label. And if you use 2-factor authentication, they are the weakest link in the chain, since they provide full access to a Google account except for the access paths protected by 2-factor authentication. In addition, every app can use such a password, not just the app you created the password for …

2-factor authentication is great and certainly recommendable, but you should not fool users by using the false term 'application-specific passwords'. In addition, more complex Google-generated passwords would be appreciated.

Application-specific is just a friendly name.

Also this password doesn't give you full access to your Google account. You cannot log into Google web apps this way (AFAIR). Thus you won't be able to mess with account settings (passwords etc). Before you can change critical account settings Google asks you to provide your traditional password again.

Your comment is a bit harsh if not FUD.

'Application-specific' is IMHO not just a friendly but a misleading name. They are simply not application-specific.

Using one of these so-called application-specific passwords, you can delete calendars, mails and contacts. That is critical enough for most users.

An additional concern is the usual 30-day authorization you give in order to avoid entering your 2-factor token again and again. Is there any way to de-authorize such a 30-day authorization?

Anyway, I don't rule out that my perspective might be too strict. For most users, the whole Google 2-step authentication system is probably a very important step towards improved security.

There's something that bugged me about two-factor the moment I activated it. The application specific passwords are stored in plain-text. How does Google know that it's actually e.g. Chrome accessing my mail with a given application-specific password?

If a hacker gets a hold of an old backup of mine, which includes a Pidgin configuration file I forgot to delete, which holds a plaintext password, can he get into my account with that?

If you didn't revoke the application-specific password. And you should be revoking them when you're no longer using them.

If you weren't using application specific passwords, of course, it would have been your actual Google password in that Pidgin config.

It's a little disappointing that the app-specific password isn't more secure, but it's certainly not less secure than foregoing them.

App-specific passwords are bypasses of 2-factor auth. Use them selectively and with care, revoke ones you're not using anymore. And replace them from time to time.

Two-factor authentication is as secure as you make it.

If you value security more than convenience, don't use application-specific passwords. Google certainly allows for that: consume their services only through the secure web interface. Is there another mainstream e-mail provider who supports that?

However, if you would like to use apps, gtalk, pidgin, etc. Google will still let you and it will still be more secure than before (revoke specific passwords, etc.).

If one of your application specific passwords (ASPs) is compromised your e-mail content will be compromised but NOT your account as long as your phone / token generator are under your control, allowing you to recover access by your own means. That is a big difference.

That's exactly why I don't use 2-factor. It's only as secure as a single complex token. I use a password manager with complex passwords. I fail to see the added security of enabling 2-factor in this case.

Since I've been downvoted without a response, let me elaborate on my concerns. I haven't seen the threat of application specific passwords (ASP) addressed properly. If an ASP is sniffed or somehow extracted from a device it seems like it's practically equivalent to a single-factor authentication password. I couldn't determine from Google's docs if an ASP will allow you to change a master password or not. Or if it could be used in the place of a master password. It seems like it can only be used through apps that utilize Google APIs. Does anyone know if one of those APIs allows for changing of the master password?

A quick search shows that others share my concern about ASPs: http://webapps.stackexchange.com/questions/13317/how-does-ap... http://tech.kateva.org/2011/07/massive-security-hole-in-goog... https://groups.google.com/forum/#!topic/chromebook-central/z...

When you need to make any changes to the 2-factor settings, you must enter your account password and the 2-factor password. The ASP cannot be used for this purpose.

I'm kind of wary of the ASP. It feels very backdoorish. And by the very nature of ASP, it is meant to be saved/stored on the computer system.

Me too. It seems to me that since an ASP allows me POP/IMAP access to a gmail account, it probably gives me enough access to run the "send a password reset" attack. I can't use that to 0wn the two-factor protected google account, but I can easily use it to 0wn the AppleID/Amazon/eBay/PayPal account using the gmail account for the recovery address…

Off the top of my head, I don't even know if it's possible to ensure my iPhone or GalaxySII is using SSL/TLS for its POP/IMAP connections (or if an attacker with brief access to my device could switch it to plain-text passwords and then sniff its authentication on the local wifi).

Gmail supports restricting POP/IMAP access to specific labels. Couldn't you set up filters to move password reset emails/all emails from services you care about to non-POP/IMAP folders, skipping the inbox?

A kludge, sure, but a possibility.

Then we should probably move towards (something like) OAuth in all our apps, be it web, desktop or mobile. This way a potential evildoer is left only with a very limited token. Even more, we can invalidate tokens based on time and device used.

Having OAuth or Two factor in Google Talk/Chrome or any application based login system shouldn't be THAT hard. Along with your password, there should be another text field stating "OTP" or some other friendly name.

2FA is equivalent to changing your password every 30 seconds. Yes, it's "only" as secure as that single token, but the entire point of doing 2FA on a separate hardware device is that even if you have every piece of malware installed on the machine you're logging in to, there is still an analog transfer of information from your phone(/keyfob/whatever) to your computer via your brain. It keeps your "password" in two parts, one of which lives in your head/password manager/muscle memory, and the other of which lives on your phone.

ASPs are certainly less secure than 2FA, but they only permit access to services, rather than account administration.

2FA is slightly more hassle for the knowledge that you are effectively protected against a wide range of attacks. It's very worth it.

That reminds me of discussions on desktop security, where root access is often seen as the holy grail – while the potential for damage with simple user access gets ignored …

2-step authentication is great but the (wrongly) so-called application-specific passwords are definitely the weakest link in the chain. They do not allow for a full takeover of a Google Apps account but there is still a lot of damage possible.

The two big players here (Vasco and RSA) both do this in hardware and software. So there are soft devices that run on desktops, mobile, etc.

Two factor limits the time window within which a password is useful. If one of your complex password's hashes gets exposed, someone would need to also know your ssl-only two factor auth cookie, and then reverse/bruteforce your password within the 30 day window the cookie makes it valid for - that makes the "current model" of releasing the hashes on pastebin and crowdsourcing the hash-cracking much more time critical.

Are you asking if a hacker could use the application-specific password to access your email account? I'm pretty sure the application-specific passwords are only good for the service using them (e.g. the first service to use a newly generated password is the only one allowed to ever use it), but that would be trivial to test for yourself.

That can't work - how would they identify the service using them? IP address is useless for this purpose, and other identifiers are either not available or very easily spoofed and thus equally useless. If somebody gets hold of the application-specific password, he'll be able to use said application.

You're right, I'm not sure how I thought that could work.

I just used the same password to login to my Talk account in Pidgin and later for my Android. This is really insecure, especially when Pidgin saves passwords in plain text.

You're perfectly welcome to use the same password multiple times-- it's as secure as you yourself make it. When Google generates an App password, it displays it once-- and then trashes it. On that same page is a list of the identifiers you've given all of your apps, and the last time that identifier's password was used to log into your account. At any point, you can trash a generated password, and anything using that password can't log into your account anymore.

If you use the same generated password for every application, then you're not in that much better a situation than you would be otherwise. If you use different identifiers for each application, you can identify exactly where a breach happened from by last login time, and disable that password entirely.

Kind of. This is the reason 2-factor makes me uncomfortable too: instead of having 1 username/1 password, it's actually 1 username/lots of passwords. And the passwords generated are all lowercase alphabetic characters (I assume to make them easier for users who don't/can't copy-paste).

It'd make me a lot more comfortable if I could lock each password down to a specific Google service (for instance: generate an OTP for Pidgin, and enable it only for Google Talk), they were a lot longer, and had special chars + numbers in them.

ASPs are 16 lowercase characters. That's 26^16 ≈ 4.36 * 10^22 (over 43 sextillion!) combinations. If you aren't confident in the size of that keyspace, I'm not sure what to tell you.

The point about permitting a password to be used for only certain services is absolutely a valid one, though.
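A quick sanity check of that keyspace arithmetic, with a rough brute-force estimate added:

```python
# An app-specific password is 16 lowercase letters: 26 choices per position.
keyspace = 26 ** 16
print(keyspace > 4.3e22)  # a keyspace of roughly 4.4 * 10**22

# Even at a billion guesses per second, exhausting it takes on the order
# of a million years.
years = keyspace / 1e9 / (365 * 24 * 3600)
print(round(years / 1e6, 1))
```

The real risk with ASPs isn't brute force but disclosure, since they are stored (or at least typed) in cleartext by the apps that use them.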

You are right, and that's what I'm going to do. But it's no less secure as long as you don't "remember" that token anywhere. A remembered password in Google Talk (desktop client) is just encrypted and can be easily recovered.

Edit: To add to my reply, I have never used the remember-password option in Pidgin in the past; however, enabling 2-step auth required me to do that. It'd be great if Google could somehow allow only the first app associated with a password for subsequent uses. Technically that looks very challenging (if at all possible).

Sounds as if Pidgin is the problem here.

I know that at first look it sounds like a Pidgin problem, but they do justify their situation quite well on their site: http://developer.pidgin.im/wiki/PlainTextPasswords

This is crazy; doesn't every current Linux distro already have a keychain-like thingy which is protected by the user password? Adium (which is also based on libpurple) uses it on the Mac, which means there is no plaintext password lying around.

There is - GNOME has Keyring, KDE has Wallet. But apps need to support it or have a plugin system for that. Pidgin has plugins for GNOME and Windows: http://developer.pidgin.im/wiki/ThirdPartyPlugins#Securityan...

They have a point - if Google really allows other apps to access the account via Pidgin's password, perhaps they should fix that.

That is not possible with the current 2-factor authentication system. App-specific passwords are only called 'specific', they are not really app-specific, i.e. you can use them with any app you like if 2-factor authentication with a token has not been implemented yet.

If Pidgin is your use-case, the only solution is not using Pidgin.


The inflexible account ordering in Authenticator is bugging me, since I recently added a 5th account (not 5 google accounts) and my phone only shows 4 at a time. I don't know if you have any influence over the people maintaining the app, but it looks trivial to fix, given the comments.

That's good feedback--I'll pass it on.

I was wary to set up 2FA until I learned that you can set it up even without a cellphone or your own computer. You can have the second factor be a voice call, so it can call a landline or dumbphone without an SMS plan. Plus, if you ever lose your phone or cancel your number, you can set up backup phone numbers. I set up my fiancee's phone number as a backup number in case I ever lose my phone.

Someone big got hacked this way recently - the attacker managed to social-engineer a call forwarding change, then used a "landline 2FA auth call" to gain the foothold they needed. (I think it was CloudFlare?)

It's similar to @mat's problem - Amazon assumed the last 4 CC digits were "non-identifying"; Apple assumed they were identifying.

How much effort do you suppose your phone company expends securing your voicemail or call forwarding? I'll bet it's less than would be considered "industry best practice" for securing your corporate dns records…

I heard about this hack too (http://blog.cloudflare.com/the-four-critical-security-flaws-...), but I disagree with you - the last 4 CC digits should be considered non-identifying.

First, let me explain a little bit of background on this "hack". From the article, they had 4 problems with their process that allowed them to get hacked badly:

1. AT&T was tricked into redirecting my voicemail to a fraudulent voicemail box;

2. Google's account recovery process was tricked by the fraudulent voicemail box and left an account recovery PIN code that allowed my personal Gmail account to be reset;

3. A flaw in Google's Enterprise Apps account recovery process allowed the hacker to bypass two-factor authentication on my CloudFlare.com address; and

4. CloudFlare BCCing transactional emails to some administrative accounts allowed the hacker to reset the password of a customer once the hacker had gained access to the administrative email account.

I'm not really sure what #3 is and #4 is irrelevant to our discussion, so I'll concentrate on points #1 and #2.

From #1, it follows that the attackers were able to obtain the phone number associated with the two factor auth. How did this work? My assumption is that it was a very targeted attack.

The article starts the attack at June 1st, 2012, but I believe (read: assume) that the attackers probably met the target of the attack beforehand and obtained his/her business card (with a cellphone number), which allowed them to perform #1 above.

So, given that this attack required a physical piece of paper (i.e. a business card) to be acquired from the target, it is not a stretch of imagination to say that if another attacker wanted to obtain the last 4 digits of someone's credit card, all they need to do is follow the person to a restaurant or gas station and get a payment receipt -- every receipt has the last 4 digits written on it. Some have the first four.

Therefore, I don't believe it was Amazon's fault for assuming the last 4 digits are non-identifying, but rather Apple's fault for treating them as identifying. To be clear, hindsight is 20/20, so I think it was relatively reasonable for Apple to assume that. However, I do expect Apple to change this policy in the future.

I think we're vigorously in agreement here - the last 4 CC digits are certainly not identifying, and I'm perhaps a little less forgiving than you in letting Apple off for thinking so. I'm also not happy with Amazon's assumption that they should be displaying them quite so easily (although at least they don't display them until you're far enough "in" to an account - and it's not easy to work out an alternative way to distinguish between several different CCs when you can have more than one linked to your account).

(And, there are many alternative and easier ways to acquire most people's cellphone number - no need to meet someone or get them to give you a business card… But your point still stands…)

My apologies, I somehow glossed over the Apple part of your post. I agree with everything you said, perhaps even about being forgiving of Apple using the last 4 digits as a verification mechanism.

It's true about a cellphone number -- from what I heard about this attack outside of this article, it was a very targeted attack, and the attacker knew exactly what kind of data to expect in the GMail account, which is what led me to conclude that the attacker probably knew or had met the victim. But likewise, good point about obtaining the cell number in other ways.

See above for Matt Cutts's answer: you can also use a YubiKey (a tiny USB device that pretends to be a keyboard and enters the code when you press a tiny button on top).

How well does this work if you are in, say, South-east Asia?

I have been using it in Vietnam for over a year without any major problems and I have suggested a few family members (who are not tech savvy at all) to use it too.

The app works very well on both Android and iOS (I personally use the iOS app because I flash new ROMs all the time and sometimes the Titanium Backup of the app doesn't work). I tried the SMS option a few times; the message normally arrives within one minute.

I am in Singapore and have been using it for a couple of months now. Works great.

There seems to be something funky with "application-specific passwords" (ASP) on Chrome. Let me explain the problem (that I documented to a friend ~1 month ago): I just revoked all Google Chrome keys, cleared out all of my history / cookies / passwords / forms, etc.

I went to a different computer that had previously had Chrome synced using an ASP, switched to my account, and went to settings. At the top, I get this error message: "Account sign-in details are out of date. Sign in again." That's good, since the ASP was revoked. But then I go down to advanced settings => passwords, and they're all still visible?!?! That's just WRONG! If the ASP login credentials have been revoked, access to all locally stored passwords needs to be revoked too!

Any idea what's going on? This seems like a flaw. The 2-factor authentication is still great though.

Did you also file it with Google? Definitely a nasty bug in implementation.

The friend was a GOOG employee, but not a member of the Chrome team. Indeed, I should file an official bug report... but the Matt Cutts bug reporting system is usually so much more responsive. ;-)

Myth: Right, but what happens in the very common scenario of my Android phone-- logged into Google with the Authenticator installed-- getting lost / stolen? Surely then 2-factor auth is basically useless?

(insert your answer below)

Reality: You go into Google account security and choose 'Clear the phone info and printable codes' and 'Forget all other trusted computers. Require a verification code the next time I log in from any other computer'.

I think the answer is that you can print out one-time codes on paper and put them into your wallet.

I would also recommend putting an unlock pattern on your phone to protect in case your phone is stolen.

I just set up 2-factor auth:

1) You still need to enter your password every time you log in.

2) You can add backup phones that can be called/texted with the verification codes

3) You can print out back-up codes that will always work (once)

4) If your phone is stolen and is using an application-specific password, you can revoke that password for that application.

What are you trying to protect against in this case?

If your phone gets stolen and it's logged in to your google mail without a lockscreen pin/code, then yeah - the thief can read your mail, 2fa won't help. They can also run your Authenticator app and see the current 6 digit number, but that's not useful without the password as well.

(I'm not sure how easy it is to extract the Google password from an Android or iPhone - I wonder if you can just switch them to non-TLS POP3 or IMAP and have them send a cleartext password over an unencrypted wifi connection?)

If the phone is rooted (as is the case with mine), then an attacker could change the list of trusted Certificate Authorities on the phone and then perform a MITM attack to get any passwords being passed over the air.

However, I think Google services use XMPP, if I'm not mistaken. In which case the password is never actually transmitted over the air: XMPP uses digest access authentication[1]. Short version: the server first sends a challenge to the client. The client hashes the challenge with a hash of the password and returns the result. The server performs the same operation and compares. So even with a MITM you'd get nothing. Furthermore, the client itself would never need to store the password either.

[1] http://en.wikipedia.org/wiki/Digest_access_authentication
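For anyone curious what "hashes the challenge with a hash of the password" looks like concretely, here's a stripped-down sketch of the HTTP-digest flavor of that handshake (all values illustrative; real digest auth adds nonce counters, qop, and more):

```python
import hashlib

def md5hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, nonce, method, uri):
    # HA1 never leaves the client, and the server can store HA1 instead of
    # the cleartext password. Only the final response crosses the wire.
    ha1 = md5hex(username + ":" + realm + ":" + password)
    ha2 = md5hex(method + ":" + uri)
    return md5hex(ha1 + ":" + nonce + ":" + ha2)

# Client answers the server's nonce challenge (made-up values)...
client = digest_response("alice", "example", "s3cret", "abc123", "GET", "/inbox")
# ...and the server recomputes the same hash and compares.
server = digest_response("alice", "example", "s3cret", "abc123", "GET", "/inbox")
print(client == server)  # -> True
```

Because the nonce changes per challenge, a sniffed response can't be replayed later, which is the property being described above.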

If you have two-factor auth enabled, your Android phone stores an app-specific password -- even if extracted, it wouldn't be terribly useful (assuming you revoke it).

You should be using POP or IMAP only on your phone so that you can revoke permission after it is stolen (when you log in to your account from a desktop and use a backup key from the printout). If you are logged in to your main Google account on your phone you are asking for trouble. While I will be sympathetic after it gets stolen and someone ruins your life, I won't be surprised.

If you're logged into your Google account from your Android, it's roughly the same amount of control as an IMAP account would give.

They can wreak havoc, but they cannot change your password and steal your account.

What if I lost my phone and didn't print backup codes? Will I lose my Google account forever?

There are other backup measures, including a trusted friend's phone as backup. But being totally locked out is a feature, not a bug.

Yes. The aim is that someone without authentication can't get into your account.

If you regain control of the phone number before someone finds the phone and uses it to seize control of the Google account, you wouldn't lose anything more than the phone.

Although enabling two-factor auth in GMail is great, I still fail to see how it would have protected his iCloud account. Sure, his GMail account wouldn't have been compromised, but what about his iCloud and Twitter? Why don't Apple and Twitter provide two-factor authentication? Why doesn't everyone do it these days?

If I'm reading the blog post correctly, his Twitter account was compromised via GMail. If his GMail account had not been compromised, they wouldn't have gained access to his Twitter feed (which was the true target of the attack).

They would still have been able to compromise his iCloud account and thus destroy the data stored on his computers.

Unfortunately, I think that in this particular case, having two-factor auth on GMail wouldn't have helped. His account was compromised by having a password recovery email sent to his iCloud address. Presumably, password recovery bypasses two-factor auth.

In this attack and the earlier CloudFlare attack, the attacker took advantage of inappropriate recovery email settings. While it's reasonable for consumers to enter recovery emails, I think that professionals should avoid enabling them. When a @gmail.com sends recovery mail to @me.com (or vice-versa), the attack surface is greatly increased.

Password recovery does not bypass two-factor auth.

Two-factor Google authentication would have had two benefits. First, the Gmail and Twitter accounts wouldn't have been hacked.

Secondly, the Wired article made this claim: "Because I didn’t have Google’s two-factor authentication turned on, when Phobia entered my Gmail address, he could view the alternate e-mail I had set up for account recovery. Google partially obscures that information, starring out many characters, but there were enough characters available, m••••n@me.com."

I don't know for sure whether that's true or not. But assume it is true. If two-factor authentication had been enabled, then the hackers would have had a much harder time guessing Mat's email address for iCloud and whether he had a @me.com email address at all.

I have two-factor authentication turned on and I can see this much (in a different web browser) without entering anything: "Choose how to get back into your account. Get a password reset link at my recovery email: uch•••••••@c••••.com"

The problem may be that "me.com" is so short that Google might display the full domain name. If that's the case, Google should fix it.

> ..whether he had a @me.com email address at all

I disagree. I think most iCloud users (80% of iOS users by Apple's count) got @me addresses when they upgraded to iOS 5 or Lion. I can use both my @gmail.com and my @me.com in the App Store to purchase, or to log in to icloud.com.

"Hackers would have had a much harder time"? No: mhonan@gmail.com mhonan@me.com

Gmail was not really needed to guess the name at @me.com.

Moreover, in his case, it seems he would have been better off not having the secondary e-mail address for recovery at Google. It turned out to be an anti-security measure.

It's not the mhonan part that would've been hard to guess, but the @me.com. A secondary email account could be anything. It could also very well not be enabled. Knowing that it is enabled and that it is an @me.com address was definitely something that helped the attackers.

Good point. But even two-factor auth wouldn't have saved him because the hacker got the customer support people to issue a temporary password. Apple (and others) need to implement better controls on how you reestablish identity once you've lost access.

Wouldn't the hacker still need the temporary password AND the Google Authenticator code? Or are you assuming that the customer support people could and would turn off two-factor auth while resetting the password?

Yes I'm assuming I call a company having lost all access. Would a company have a different way of establishing identity for someone who lost all access if they've implemented TFA?

Cue the "Apple locked me out of my OWN DATA!" screed in 3, 2, ...

I discussed the answer to your first question here: http://news.ycombinator.com/item?id=4348537

Sounds like a lot of effort, or at least a lot of things to consider. I wonder how many people were completely locked out of their accounts because they enabled 2-factor auth and didn't do all the right things.

The root problem in this story is that things are just too damn interconnected these days. And we're encouraged to interconnect them even further (using cellphones to authenticate email, in this case).

Edit: I think that my reply is not harsh enough. After reading the comments more closely, I see that this is a typical IT response to an IT failure. 1. Ignore the root cause. (Interconnectedness.) 2. Blame the user. (Implicitly, for not enabling two-factor auth.) 3. Suggest a workaround and dismiss any concerns about real-life scenarios. (E.g. losing your wallet and cellphone in an emergency. Emergencies like that happen more often than you'd think.) 4. Feel smug.

I've tried turning it on multiple times by using an iPod Touch, but I can't figure it out. The support article says the device is supported but the actual page only asks me for a phone number. Either I'm being obtuse or the steps for non-phone users aren't very clear.

There's no Google Authenticator for Windows Phone, but there's an equivalent: http://www.windowsphone.com/en-US/apps/021dd79f-0598-e011-98...

"Myth: I've heard two factor authentication doesn't work in IMAP and POP"

I've found this to be true - to a certain extent. I had two factor authentication turned on and found it to be a nightmare in OSX Mail. Failures to retrieve mail, asking for my password constantly, etc. I was resetting the application passwords every two days. I tried to research a fix, but in the end it became less of a hassle just to turn it off and have my email work 100% of the time.

It's probably a bug in Mail.app in Lion, but I can't say my experiences with two factor auth have been positive.

Definitely a bug in Mail.app, not 2FA. I had no problem with 2FA on IMAP on Lion, Mountain Lion, iOS 5 or iOS 6. But as I use Mail.app constantly I can assure you that it's desperately buggy. At times I had it lying there downloading gigabytes and gigabytes of Gmail again and again and again until I took pity and killed it.

Make sure you're using an application-specific password for any application that doesn't support two factor authentication - eg, Mail.app on Lion.

That's what they're for.

I remember having to generate them 2 or 3 times, but it did eventually work for me, on several Gmail accounts. That is with both Lion and Mountain Lion.

Matt, your list of objections and counter-objections is great – too good to be hidden away in a Hacker News comment thread and on your blog. It belongs somewhere within the sign-up flow itself.


Well that answered all my misgivings about 2 factor with Gmail. I'm setting it up now. Thanks!

convinced - i will set it up in a minute :)

I'm sorry for the journalist who lost all of his digital information, but I think/hope that this article will have a huge impact on the security practices of all large companies with an Internet presence.

The fact that they pieced together all this information from multiple sources, including Amazon's ability to add credit cards over the phone, to getting the billing address through domain name registration, to hacking into Apple iCloud really makes me feel... I guess depressed is the word.

We really have no control over our own data security. I've been super paranoid about things like identity theft, and I got my identity stolen, which is something I've been dealing with over the past 2 years or so. Somehow, my birthdate, addresses, etc were all wrong, and I had to jump through hoops to get it changed. As well, I currently have an unpaid credit card linked to my account, and the credit agencies and the collection agency won't remove it. The collection agency required me to submit 3 copies of my signature, a police record, copies of my identification, etc, before they'll remove it, even though THEY were the ones who made the mistake. I went to the police station to file a report, but they needed documentation that I didn't have, since I had already changed most of the information through the credit agencies. At this point, I froze all my accounts through the credit agencies, and I've given up.

The safety of my email, etc, is something that I also take extremely seriously, and now I'm being told that there's a possibility of being hacked via clever hackers piecing together information from various sources, each of which have different security procedures. We literally have no data security except "security through obscurity", meaning that the likelihood of being randomly hacked is low, but if someone wants your account, they can and will get it, pretty easily it seems.

The industry NEEDS to standardize on a very rigid set of protocols for things like what information they give out, how accounts are reset, how things like credit cards are added to accounts, what information they leak, etc. This is ridiculous.

Last time HN discussed this story, I said "turn on 2-factor authentication for your Google account".

Unsurprisingly, I got the exact reaction I'm seeing here when it has been suggested: lots of questions about how it works, people who think their situation is unique so it won't work for them, and people complaining that SMS is insecure.

1) Don't ask any more questions. Try it out; if you hate it, turn it off.

2) Your situation almost certainly isn't unique. You get 10 codes to print out, you can have (revokable) application-specific passwords that don't require the token. Try it!!

3) Use the smartphone application.

Don't ask any more questions - just try it out!

This seems like poor advice. If people have questions, they should be addressed, not "oh don't worry your pretty little head, smart people came up with this." Like the discussion about app-specific passwords above was very informative to me... all it takes is one of those getting sniffed or read off disk and someone can suck down all your email. Not exactly "fire and forget" security.

I'm not arguing that it is perfect.

I'm saying that people should turn it on, and try it. Most of the questions are the kind of things that would be solved by just trying it!

>Don't ask any more questions - just try it out!

Not even these questions:

Aren't we as tech people completely and utterly failing the world at large when the best possible response to this story is to turn on 2 factor auth on one of the many accounts a person has?

Is a very slight reduction to the attack surface really the best we can do?


Is a very slight reduction to the attack surface really the best we can do?

At the moment, yes, I think it is.

At some point in the future perhaps this may improve (although I wouldn't count on it).

Definitely print the codes! As a newbie I didn't, and as luck (or a 1/30 chance) would have it, I forgot my phone at home the same day my 30-day login window expired at work. Not a huge deal but a bit of a PITA, and there really was no way to log in until after I got home (which is obviously the point).

Now I have some codes squirreled away in a couple key locations.

Same here. I changed my phone provider and the details on how to activate my new subscription were sent to my Gmail account. When I connected to my Gmail account the session had timed out. This was some kind of cyclic dependency: I needed my phone to get the details on how to make the phone work. I felt incredibly stupid. So... print the codes!

(I managed to get the information by visiting GMail from my Linux box where the login didn't expire, but still)

Print 2 copies of the codes and take a screenshot of that page.

Then type "gpg -c sensitive.png" and use the same password as your Gmail account to secure it. Then put "sensitive.png.gpg" in "~/.ssh" or another place out of the way and forget about it, until the day comes that you'll need it.

Given how central (for better or worse) of a role email plays in safeguarding other accounts, the hassle of 2-factor auth for it is feeling like less and less of an annoyance.

About a month ago, one of my credit card accounts got hacked and was used to send money to someone else - the number itself wasn't compromised, it was the actual account. No doubt, the attackers tried to login and change my email password, but had to settle on the next best thing - spamming my email address with hundreds of emails per minute in an attempt to cover up the emails sent by my CC company.

Fortunately, the spamming wasn't very sophisticated and it only took me 30 seconds to filter it all to trash. I was on the phone with my credit card company within 10 minutes of the attack, which mitigated some of the damage.

I'm sure at some point weaknesses will be found in the 2-factor auth solution, but for now, it feels almost mandatory for important email accounts.

I've been using google two-factor auth for the better part of a year now, and the annoyance comes down to, once every 30 days or so, having to take 5 extra seconds during login to enter a code sent to my cell phone.

I can't _think_ of anything less of a hassle.

To reduce the risk of inconvenience, Google should really put an indicator on gmail that tells how many days left until the next authentication and allows you to renew the lease early to avoid having to pull out your phone at random. I only hope I don’t get kicked out of gmail at the most inconvenient times.

Google has actually made it even less of a hassle by instead trusting a computer forever instead of having the session last 30 days [1]. This can be seen two ways though: less of a hassle for the user, and less secure. I wonder why Google doesn't give the option for the session lasting 30 days or forever.

[1] http://i.imgur.com/A9Wu5.png

That's horrible. It was already very easy, I don't see the need.

You don't have as many computers as I do, or as long a password as I do, I suspect. Having to type a random long passphrase with special characters on the weird keyboards of multiple devices every month was a pain. Even worse, for devices I infrequently use, I ended up basically having to do this every single time I wanted to use the device.


Then use Dropbox to keep the .safe file synced across machines.

While I appreciate the concept, I don't think Dropbox is the company to trust for this kind of thing: http://arstechnica.com/security/2012/07/dropbox-confirms-it-...

So if someone wants access to all your passwords, they just need to compromise your Dropbox.

Dropbox plus passphrase brute force (or guessing), or Dropbox plus one of keylogger, compelled disclosure, shoulder surfing, ...

I consider the 1Password file sensitive enough that it shouldn't be online, especially not with dropbox. I'd prefer if there were physical protection for it somehow, too (like a smartcard or FIPS module, which wouldn't allow bulk-export normally, and which might impose other rules on use like 5 passwords per hour when outside my home network, etc.) Same way you handle high-security private keys.

(Ultimately I'm not going to be happy until I have a trusted tablet of some kind, but building that either requires being Apple or waiting for WP8 hardware to come out and investing about $5mm in some serious security upgrades. Maybe worthwhile, though, since it solves the general problem of trusting client devices.)

Not sure if PasswordSafe allows using a key file, as sbov mentioned above for KeePass, but if it's properly implemented, and you didn't put the key file into Dropbox, it would be pretty much impossible to brute force.

If it works at all like 1Password…no. Not if you have even a half-decent master password.

Something that has happened in the past on more than one occasion.

I like to use keepass with a long, memorable password and key file that I need to manually copy across machines.

So even if they get my password database and my password they don't have the key file. And if they can get the key file, they could have gotten the password database anyways.
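The key-file scheme described above can be sketched roughly like this. It is a toy key derivation for illustration only, not KeePass's (or PasswordSafe's) actual format; the point is just that the encryption key depends on both the passphrase and the key file:

```python
import hashlib, secrets

def derive_key(passphrase: bytes, keyfile_bytes: bytes, salt: bytes) -> bytes:
    """Toy sketch of the key-file idea (NOT KeePass's real KDF): the database
    key is derived from the passphrase AND the key file together, so an
    attacker who grabs the database from Dropbox and even guesses the
    passphrase still fails without the exact key-file bytes."""
    return hashlib.pbkdf2_hmac("sha256", passphrase + keyfile_bytes, salt, 100_000)

keyfile = secrets.token_bytes(32)   # generated once, copied manually, never synced
salt = secrets.token_bytes(16)
key = derive_key(b"long memorable passphrase", keyfile, salt)

# Right passphrase + wrong key file = wrong key:
assert derive_key(b"long memorable passphrase", secrets.token_bytes(32), salt) != key
```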

Until you really need to check your email for something urgent, and of course then the code will take some time to reach your phone, your phone will be out of battery, etc.

    "...but had to settle on the next best thing - spamming my email address with
     hundreds of emails per minute in an attempt to cover up the emails sent by
     my CC company."
Here's an article from just last month about this very technique: https://krebsonsecurity.com/2012/07/cyberheist-smokescreen-e...

Flooding someone's inbox (or telephone) is now available as a very affordable a la carte service.

2-factor auth has been cracked before [1] and will be again until there is a standard on how to implement it. With implementations differing between companies, a cracker can play one org's weaknesses off another's. Like in this case, using the last four CC digits, freely available from Amazon, to get into Apple.

If both companies agreed to a standard that made it obvious such practices were non-compliant, this wouldn't have happened.

1: http://blog.cloudflare.com/post-mortem-todays-attack-apparen...

> 2-factor auth has been cracked before

I wish people would stop bandying this about as if there was an actual flaw in the 2-factor app or the protocol or crypto algorithms used. The linked breach was likely due to a social engineering attack on phone company support staff. Yes, it's concerning, and something Google and the phone companies should be investigating, but no, 2-factor auth wasn't "cracked."

Someone who's more informed than I, is there really a weakness in smart-phone based two factor authentication if you choose NOT to use the SMS or voice based backup, out-of-band authentication option? I'm pretty sure that is an optional feature of Google's 2-factor auth, with printed backup codes being the alternative.

Given your iCloud account and/or root on the PC paired to your iPhone, I think it would be possible to compromise your Google Authenticator app. At the limit, jailbreak the connected phone, but I think it could be done more simply (all you need to do is run the Authenticator app and see the screen within 60 seconds, which should be possible from a connected, paired Mac).

On Android, way way easier, due to lack of secure device storage. Just get a copy of the disk image and then you've got the seed too.

It's still better than a password, but not as good as an actually secure independent factor. Sadly the SecurID sucks.

> On Android, way way easier, due to lack of secure device storage.

FYI: you can encrypt the boot flash drive in ICS and Jellybean; you need to type a passphrase to unlock/boot the phone.

Right, but the weakness vs. iOS and Blackberry is that it's all software encryption. You can get an encrypted image and then search the relatively small feasible password space (people use shorter passcodes, and often numeric, on mobile devices vs. desktops or online, due to the limitations of the input device and the need to unlock the device fairly frequently).

On iOS and Blackberry, you're authenticating to a security chip which has a device-specific key (long, random). On an iPad 2 or iPhone 4S or later, you can't make attempts without being physically on the phone, and this is limited to no more than 8 per second on the fastest iPad 3 CPU. This makes a 4 digit passcode on iPhone 4S (with wipe after 10 tries) potentially more secure than an 8 character random alphanumeric on Android. Online (well, device-online) vs. offline attack. I'm not sure about the latest Blackberry OS security chip status, but a few years ago it was similar, so I hope it hasn't gotten worse.

(There are ways, even on the latest devices, to prevent the device wipe on 10 tries, but no known public ways to do attacks without doing them on the device itself, or physically tampering with the device (which isn't impossible, but requires physical access and chip-level attacks). If your passcode is long enough, you'd have time to detect your loss and presumably invalidate any credentials stored on the iPhone.)
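A back-of-the-envelope version of the comparison above. The 8-guesses-per-second on-device figure is the one given above for the iPad 3; the offline rate of 1e9 guesses/sec is an assumption for illustration, not a measurement:

```python
# On-device attack on an iPhone with "erase after 10 failed attempts" enabled:
# the attacker gets exactly 10 guesses against a 4-digit passcode, full stop.
p_success = 10 / 10**4
print(p_success)            # 0.001 -> a 0.1% chance of success before the wipe

# Offline attack on an extracted, software-encrypted image: limited only by
# hardware. At an assumed 1e9 guesses/sec against an 8-character lowercase
# alphanumeric password (36^8 possibilities):
hours = 36**8 / 1e9 / 3600
print(round(hours, 2))      # ~0.78 hours to exhaust the entire space
```

This is why an online-only (device-bound) attack surface can make a short passcode stronger in practice than a much longer password exposed to offline cracking.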

You're talking way past the problem that's being solved by Authenticator. If your device is physically compromised with 2-factor auth you need to change your password, end of story. There's no point in discussing how the system holds up to a scenario the system is not intended to address. Password reset is assumed if you lose your damn smartphone.

When someone steals your credit card, you just cancel the card and move on with your life. You don't call the credit card's data protection technology a failure. And changing your iCloud/Google/really any password is an orders-of-magnitude better user experience than canceling a credit card.

> It's still better than a password, but not as good as an actually secure independent factor.

Companies like Google have real, statistically significant data on how much 2-factor auth reduces account compromise. Your claims to the contrary seem to be rooted in an academic (at best) perception of weakness in the technology.

The problem with Authenticator is that it is usually paired to a personal computer, on a personal phone (or a phone used for everything; the BYOD trend is a lot bigger with phones than computers). If your work accounts use Authenticator, even on relatively secure machines, compromising your personal laptop becomes enough to compromise the phone and thus work accounts, even if the personal laptop isn't used for those accounts. This is a bigger problem with the iPhone due to iTunes -- people pair with a machine which has a lot of music, may be used for general downloading, shared in a family, etc.

The attack can be done by pwning your personal computer, waiting for you to connect your iPhone via wifi or cable to it, and then remote-proxying the display on your phone to the attacker via the compromised personal computer. This would all be undetectable to the user.

Even a bad two factor system is better than passwords from a large service provider's perspective. Two factor using a phone isn't as secure as fully independent two factor for enterprise use.

Add to this that many high security environments don't allow phones, or that people carry only a single device (phone or maybe phone+tablet, often), and the "phone as two factor" becomes a lot less useful.

The big problem is having to carry multiple tokens, the cost of physical tokens (including replacement/management costs), and that no one makes a decent physical token at present.

iOS + some kind of "secure device-local mode" for the OS (which couldn't be remote-accessed for display, and which doesn't get pushed in backups, keystore-like) would make something like Authenticator much closer to a physical token in security.

The funny thing is WP8 actually has the tools to build this, and Enterprise (i.e. huge windows deployments with good device management) is the environment where it would be useful.

This is the most detached-from-reality crypto comment I've come across. Google Authenticator works. I really hate to break it to you - but it's actively working right now to protect millions of real users and saving enormous enterprises real money.

It seems like you refuse to accept any of that because if someone roots my laptop and proxies my phone's display over the internet then Google's 2-factor might as well be ROT13.

Maybe - just maybe - those two things don't need to be in conflict. But you're insisting on that conflict, not me.

I use Google Authenticator for my gmail/google account, but it's not an adequate replacement for hardware tokens, for the reasons I outlined above.

Wordpress, Lastpass, and a few other sites seem to support Google Authenticator as well, but it has very little adoption in the enterprise (compared to physical tokens, x509 certs, and passwords).

My point is that it doesn't matter how many factors your authentication system has, it is still broken if someone spinning a yarn via telephone can get full access.

> I wish people would stop bandying this about as if there was an actual flaw in the 2-factor app or the protocol or crypto algorithms used.

FUD will never go away. Don't let it get to you.

The weakness with 2-factor auth is that almost all of us with a smartphone use that phone for email. And that phone is the same one Google sends the SMS to...

Don't use SMS, use the Google Authenticator app. It's available on every mobile platform and implements open, RFC-specified OTP algorithms. And obviously works with Google's 2-factor implementation.

Edit: forgot to mention, also open-source.
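For the curious, the RFC-specified algorithm in question is TOTP (RFC 6238), built on HOTP's dynamic truncation (RFC 4226). A minimal sketch, checked against the RFC's published test vectors:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Minimal RFC 6238 TOTP sketch: HMAC-SHA1 over the current time-step
    counter, then dynamic truncation (RFC 4226) down to `digits` digits."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890" at T=59 gives 94287082.
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, digits=8, now=59))  # -> 94287082
```

Since the code depends only on the shared seed and the clock, anyone who extracts the seed (the Android disk-image scenario above) can generate valid codes forever, which is exactly the concern raised in this thread.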

On Google's "Enter your code" screen, if you click the "Don't have your phone?" link, you get a pop-up that gives you the following options:

   * Use a backup code. Learn more
   * Send to your backup phone number ending in ##
   * I cannot access any of my phones Learn more
I presume option #2 is the one cubicle67 is referring to.

So yes, if someone gets my phone, they can then gain access to my Google account. Grrrrr...

> So yes, if someone gets my phone, they can then gain access to my Google account. Grrrrr...

Only if they also know your password. That's why it's called "two factor authentication". Simply compromising your password is not enough. They also have to capture your phone.

Now, if you're stupid enough to write your password on the back of your phone.... you deserve everything that's coming to you.

There are plenty of valid theoretical cases being made in this thread that the phone is not a fully-independent second factor from the password. Syncing phones to laptops is a big one. If your phone is compromised and you're concerned at all, you really should just reset your password.

> I presume option #2 is the one cubicle67 is referring to.

I'm pretty sure the backup phone number is someone else's phone, not yours. I use my wife's phone number.

If you lose your phone (and thus access to the Google Authenticator app), you can send a code to the backup phone (for example, my wife's phone) allowing you to login.

Then why are you not using POP or IMAP with a separate password? What are they going to do with the auth code when they don't have your original password?

I'm not trying to defend their stupid choice of offering option #2, but rather trying to offer a solution to your current problem.

I find option #2 to be very useful, not stupid.

If my phone becomes unavailable (eg lost/ stolen/ dropped in a toilet) then I need a backup option to login. The backup options Google provides are:

   * Use a backup code
   * Use a backup phone number
   * None of the above, I still need help!

1. The backup codes are suggested to be printed and stored in a wallet; however you can put them anywhere you like.

2. The backup phone number can be somebody else's number. Your best friend, your partner, whatever.

3. If you still can't get a backup code, the third option is to go through Google's support team and recovery process. Selecting this option results in an advisory message stating the process could take from 3 to 5 days.

These options appear to be very sensible to me.

I guess that's fair, but since it seems like that's how it got gamed, they should definitely be more strict and send only to your primary or backup number.

> So yes, if someone gets my phone, they can then gain access to my Google account. Grrrrr...

If someone gets your wallet, they can use your credit cards to gain access to your credit. You cancel the credit cards. Move on with your life.

Your phone already matters as much as your wallet now - that's just reality. Secure your phone as best you can. Use full-disk encryption wherever backups are stored. Don't give your phone to people you can't trust. Change your password if you lose it or it gets stolen. Move on.

These are tools to solve problems, not shrines to worship.

>"the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification"

I don't see how this is an Amazon security flaw. The last four digits of my credit card are printed on receipts from just about every merchant I transact credit card purchases with. Treating such public information as if it were a PIN places the flaw clearly in Apple's court.

> First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry’s published self-check algorithm.) Then you hang up.

> Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account — not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but didn’t have anything to share by press time.

> And it’s also worth noting that one wouldn’t have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you’re giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
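The "published self-check algorithm" the quote refers to is presumably the Luhn checksum. It is purely a typo check; any number that passes it looks plausible, which is why a generator site can produce "valid" fakes:

```python
def luhn_ok(number: str) -> bool:
    """Luhn checksum, the industry's self-check algorithm (assumed here to be
    what the Wired piece means). It only catches transcription errors;
    passing it says nothing about a card actually existing."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # equivalent to summing the two digits of d
        checksum += d
    return checksum % 10 == 0

print(luhn_ok("4111111111111111"))  # True  (well-known test number)
print(luhn_ok("4111111111111112"))  # False (one digit off)
```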

This part seems relatively bad:

> Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account.

What I would say is that Amazon's security is in keeping with the recourse their customers have in regards to the transactions Amazon conducts - i.e. credit cards have fraud protection and disputed charges can be challenged and the money refunded when fraudulent charges are made. Amazon has balanced costs, risks and benefits for their stockholders.

The wiping of the author's devices was purely due to the level of Apple's security - a level which Apple established based upon the interests of their stockholders. To hold Amazon to a standard which protects Apple's customers (as the article implies) just doesn't hold water - Apple implemented remote wipe, Amazon didn't.

You are absolutely right, the blame here really does fall on Apple. As the article mentions, the information they got from Amazon could have been obtained from a local pizza joint as well.

Even so, this seems like a decent way to compromise Amazon accounts. Even though the danger involved when that happens is pretty minimal for the reasons that you mention, it should nevertheless be something that concerns them. Even just revealing purchase history is an issue, though of course unlikely to be a lifewrecker like the Apple situation. I can't imagine this process will still work in a few days. All I meant to say is that they have something to fix, not that they share significant blame.

After some additional thought, I suspect that Amazon has an additional layer of security in the form of algorithms which flag suspicious account activity just as credit card companies do.

Based on the account, it appears that Apple does not - customer support call + password recovery + wipe iPhone + wipe iPad + wipe Macbook did not raise a flag.

Any user can take over my Amazon account in five minutes. That's a security flaw, period.

Yes this is 80% Apple's fault, but Amazon doesn't have the right to give up my credit card digits. They are not public information as suggested earlier; they are only public if I choose to make it so (e.g. by my usage patterns).

They aren't giving up enough information for anyone to use the credit card (which is your card provider's and Amazon's concern). They are only giving up information which Apple foolishly accepts as top-secret. The final four digits are printed on pretty much every receipt I get, and even using a shredder won't often separate them. TBH, Apple's reliance on the credit card number at all (let alone the last four digits) is pretty silly.

The fact that anybody can steal somebody's Amazon account and publish their private purchases is reason enough.

Yes, it's a flaw that you can get into someone's account. I was just saying that the credit card information being that available is not a big problem in my mind. Amazon clearly think the credit card should be kept more secure than the account, otherwise the whole number could be shown rather than just the last four digits, and I agree.

My bank and a few other companies I deal with require some sort of pin/password in order to speak to someone over the phone. When I call, the conversation usually goes something like

    "Hello Mr 67, before we start I'll need your pin"
      "I have a pin?"
    "Yes, when you set up this account you were given a pin required for phone access"
      "Really? I have no idea what it is..."
    "That's ok. If you can just answer these other few questions.
     What's your mother's maiden name?"
      [redacted]
    "and your birthdate?"
      [also redacted]
    "thank you Mr 67, now how can I help you today? ..."

These "security" questions are usually, IMHO, the weakest link.

That's why you make stuff up when initially providing the answers to be used.

and then immediately forget them, and discover that they weren't really necessary anyway and your <service> lets you in with other questions.

My bank has a password - I never use that one anywhere else, but sometimes the bank calls me out of the blue to confirm some actions / bigger transactions and then I need it.

Turns out, when I can't remember it they tell me the first 2 letters!

They must have some advanced crypto where the customer support person can only see the first 2 letters but the rest of password remains securely hashed...

:) Even if it were securely hashed, giving out the first 2 letters reduces the range of guesswork substantially, especially when combined with wordlists. Also, the service guys have to see my password; after all, they are immediately able to tell me whether I "guessed" the right password.

I don't believe there is any hashing going on, after all, the bank in question is ANZ, they don't even use TANs for online-banking.

It turns out, a billing address and the last four digits of a credit card number are the only two pieces of information anyone needs to get into your iCloud account.

This is scary.

I have actual work to do, work that I have been putting off too long, so let's try crowdsourcing this question on HN:

What should one try to do to protect against this?

Hypothetical actions to take:

Make sure that an email address that's doing double-duty as a login identifier for a given service is unique to the service and appears nowhere on the web or in outgoing mail.

Take particular care to have a "recovery" email address that is used for nothing else. Don't forward it to your regular mail, naturally.

Enable two-factor auth for email if you possibly can.

Have a credit card that is only used for online stuff.

Can one get a second address that is used only as a billing address? How would one do that? (A P.O. box? Expensive! A friend's house? I fear that credit card companies will leak this address like a sieve no matter what I do.)

EDIT: Startup wizards, here's a Minimum Viable Product: a credit card that can only be used for online accounts - which you must whitelist as you add them, via two-factor auth with your phone - and that features two billing addresses: The real one where the bills go and a dummy one that still validates. (Is that even legal under the CC rules? Sigh.)

The other suggestions in the article: Disable Find my Mac, reduce coupling between your accounts… was there something else?

Alas, nobody who isn't crazy paranoid is going to bother jumping through all these hoops. (I have tried to fight that paranoia but I think I'm losing that battle.)

The most surprising thing I see out of this isn't the need for more robust authentication but for services that aren't so damn quick to do whatever you want.

Website: "Hey Bill, glad to see you today, what do you want to do"

Bill: "Delete _everything_ I've ever done on every system I have"

Website: "Of course! Let's get this started... beep boop bip and done!"

What about this:

1 - Kill request sent

2 - 48 hours is set on the clock so you can choose to cancel

3 - You can choose to pay $50 via credit card to have it happen immediately

4 - You are reimbursed $45 after a couple weeks

That might make it a little harder to have such hacks like this happen in the future.

I can just imagine the HN article when someone tries to delete his Facebook account because he disagrees with some new feature, and they won't do it for 48 hours. I've been on the receiving end of "DELETE MY ACCOUNT!!!1!!1" requests, and I know those people wouldn't respond well to "wait two days or pay up."

Can you delete your Facebook account instantly? When I deleted mine, it went into limbo for like two weeks before being permanently deleted. (I don't remember if there was no option for instant-delete, or if I just didn't care to.)

Facebook doesn't even allow you to delete your account. If a site were to remove the account from the public view, but delay the actual deleting by a few days (I'd prefer a week or more actually), you wouldn't notice the difference unless you were malicious. But I don't understand what the money is for actually.

> I don't understand what the money is for actually.

I was thinking about remote storage and devices. For example, a backpack is stolen with your phone/tablet/laptop and you need to issue a wipe to it NOW before they are compromised.

Requiring a credit card at least leaves a paper trail of some sort.

Once I moved and realized I forgot to cancel my phone and DSL. On the road, I used my cell to call the phone company to cancel. They did it for me immediately.

I was relieved it was so easy, but unnerved at how easy it was.

Maybe they had the cell associated with my land line, but I doubt it, since I got the line before I ever had a cell.

Standard procedure when setting up MDM for a company is to disable iCloud. All remote wipe/etc. done by your own servers, not by Apple.

Apple is really bad at running online services. It's a shame that they short-sightedly decided to go to war with Facebook and Google (who are good at services and bad at hardware) rather than playing more nicely together.

More importantly, standard procedure is also to keep multiple backup copies of important data. Because no matter who manages the infrastructure, the only reason the kill switch exists in a corporate environment is because there are scenarios where you plan to use it. Not to mention the very finite lifespan of all forms of modern mass storage, the relative ease of accidental deletion in most file systems, and so on.

For every person who loses "irreplaceable" data to malice, many more lose it out of simple incompetence.

Oh, sure, but I still think striking down with great vengeance and furious anger upon those who remote-wiped your devices would be justified, even with backups. If nothing else, it costs time to restore from backup, but if you were on a trip or something and had no convenient high-speed access to backups, it would be quite unpleasant. Especially if you had, say, an iPad, iPhone, and MacBook Air, and assumed at least one of them would be likely to survive the trip, and thus had no other local backups with you.

But yes, backups -- and not just online backups, but also offline backups.

An obvious thing to improve is for Amazon to stop handing out customers' accounts to identity thieves who apparently just need to call with name, billing address and e-mail to add a new credit card number, then call again with name, billing address and the new credit card number to gain access to the account. I'm sorry Amazon, but name, billing address and e-mail shouldn't be sufficient to hand over my account to a stranger.

Have a billing phone number. If a stranger tries to get access, call that phone number before you can get access.

This massively raises the bar for social engineering.

Not a solution, but I have a feeling Apple is going to be tightening their policy very soon. This story has gotten a lot of attention.

Honestly, the credit cards are the easiest part. I'd much rather someone get my credit card number than my email account.

I agree with you but I think the idea of having a credit card just for online stuff is to limit the ways in which someone can get access to any information about your credit card, like the last 4 digits. For example, someone who finds your credit card receipt at a restaurant would get the digits of a different card and couldn't get access to your AppleID.

It's actually a good idea. I'll think about doing that…

I wouldn't do anything too crazy. If you're the 1 in a billion (7 billion, actually) who gets targeted like this, they'll probably still get in so you're just wasting oodles of time and adding a good dose of constant aggravation for essentially nothing.

There's many good reasons to make backups however, not just the chance that an advanced hack like this can occur. Definitely not wasted time.

Give your parent's email as your recovery email address. If you need to reset, you can call them and talk them through clicking the link and resetting the password.

The bad guys then have to crack your parents email account too, and being older they will be less likely to have daisy-chained Google, Apple and Amazon accounts.

They will also more likely have picked a password that's easy to brute force.

run your own server.

...until the hacker calls up your registrar and convinces them to reset your domain management password.

...in your closet

A lot of receipts contain the last 4 of the card used, it looks like most iCloud users are one garbage bag away from being compromised.

Especially given that that information is available to anyone you've ever used that credit card with.

"Pay with your iCloud password" just doesn't have the same fuzzy feel, does it?

I think a lot of people are missing the forest for the trees in this discussion. The real interesting question is not how he got hacked, it's why it doesn't happen more often? None of the tricks listed in the article are particularly time sensitive, the fundamental patterns behind this hack go back at least several years and they relate to fundamental design interactions between complex systems that are difficult to impossible to change. So given all this, why him? why now?

The answer doesn't have anything to do with how he should have set up 8 factor authentication or how he should have had a Swahili-numeric password. The answer is that his hacker had extremely atypical motivations and that's the reason his life got destroyed.

The goal of this hacker was to pwn this guy's short, valuable twitter account. It's unlikely there's really any other hacker in the world who has that goal which is why such attacks are so rare. For most hackers, there's some sort of rational ROI calculation and if the ROI is negative, the hack isn't worth doing.

Nerds often have a hard time seeing that security is a holistic system. It's often composed of many flawed layers, stacked in depth, that together provide a statistically secure system. In real life, security comes from being able to push down the ROI through institutional mechanisms rather than personal ones. Credit cards are designed to be stolen and recovered from, investigations are able to target key players in the field and tough penalties means that the negative effects outweigh the positive gains.

All this has led to a black market rate of merely $2 - $3 per stolen credit card, meaning that there's not much motivation to hack in the first place.

Nerds naturally have a libertarian bent which makes them more inclined to believe encryption and technology is the solution to the problem when, in reality, it's a beefed up police state and American hegemonic decisions that can span the globe.

>it's why it doesn't happen more often?

It does. It happens all the time. Most victims don't have the luxury of writing a wired article about it and are stuck picking up the pieces on their own.

A smart hacker just steals the info, doesn't wipe out the data and reveal themselves.

Most people are nice?

"Moreover, if your computers aren’t already cloud-connected devices, they will be soon."

I disagree. You can and will (for the foreseeable future) be able to choose a computer/configuration that doesn't allow some remote third party to run arbitrary code on it or wipe it.

His devices were all wiped because he let a third party have that level of access.

I wonder, do any of these company send defensive communications when people try to unlock things like this?

Yes, I made that phrase up. So here's what I mean:

- "Amazon then allows you to input a new credit card." <-- Amazon should then send an email confirming this to your email address, a txt to your phone, and a smoke signal to your Tipi.

- "Next you call back, and tell Amazon that you’ve lost access to your account.", email, phone, Tipi. And a waiting period.

- When you call Apple's tech support, again: email, phone, Tipi.

Maybe I'm missing the obvious flaw in this plan, but since customer support (humans) seems to be one of the main weak links, it would make sense to presume that's where people will attack, and to then attempt to reach out with all communication mediums possible to make sure you're talking to the real deal.

We need people to be able to regain access after losing a password, and we need only the right people to have that. This is a very hard problem.

One thing that we should have is a "cool down" period. If you want to regain access to, say, your GMail account, then it will take 48 hours of waiting, and phone calls and emails will go out to your contacts before that is completed, so the real person has a chance to protest.

I don't understand how the MacBook data was permanently lost. Even if the files were deleted in the OS, they are recoverable by disk utilities. Unless they were encrypted. Which just goes to say that when you think the solution to your problem is encryption, you don't understand your problem.

If you ever reach the point that your account is so hard to recover that it requires human customer service intervention, the recovery process needs to be tedious and thorough.

"Okay, I'll need a notarized copy of a photo ID and once we have that, we'll give you a call to the number we have on file to confirm the change."

It's not perfect, but it would require an extremely dedicated and targeted attack to bypass, as opposed to "Hi, I'm your pizza delivery guy. I took a look at the receipt before I delivered your pie, and now I know the last 4 on your CC, your billing address, and your name. Let's go iCloud fishing!"

I agree, if you get locked out and need to regain access it should be as hard as possible to get back in.

On the flip side, we perhaps need to come up with something better than usernames and password for authentication. There are plenty of services where I simply cannot remember my password and/or username. I'm getting better about writing them down inside a password protected master file. But for many of those services I rely on the account recovery procedures; a vast majority of which are vulnerable once the attacker has access to my e-mail inbox.

The problem is simply that if things are easy enough to remember, they're easy enough to crack or brute force. If they're too hard to remember, people will forget them and have to recover them.

I use LastPass and just generate a new random password for every new account. If I ever forget my LastPass password, I am boned (since it's the encryption key for my data!), but I don't worry about forgetting passwords anymore, and I don't worry about RandomSite getting hacked and my password being leaked. It's not perfect, but it's good enough.
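The generate-a-random-password-per-site approach is easy to replicate even without LastPass; a minimal sketch using Python's `secrets` module (the symbol set here is an arbitrary choice, not anything LastPass specifically uses):

```python
import secrets
import string

def gen_password(length=20):
    """Generate a cryptographically random password."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces an independent, high-entropy password; store it
# in an encrypted password manager, never reuse it across sites.
print(gen_password())
```

The point is that nothing memorable survives a site breach: a leaked password from RandomSite tells an attacker nothing about any other account.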

On Apple devices, I believe that remote wipe is "Change the encryption keys for the block storage. It's as good as random data now."

That might also be why a PIN is available for "stopping" the wipe. (As an aside: the group got what they wanted, and one of the members even seems remorseful: they have the PIN necessary to unlock the device, but this was never touched upon.)

At the very least, that is why iOS devices take split seconds to "wipe", as opposed to the time it would take to write thirty-two billion nul bytes to flash.
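That's the "crypto-erase" idea: if the disk contents only ever exist as ciphertext, destroying the tiny key is equivalent to overwriting the whole drive. A toy sketch of the principle, using SHA-256 in counter mode as a stand-in for the real AES hardware (illustration only, not Apple's actual implementation):

```python
import hashlib
import os

def keystream(key, length):
    # SHA-256 over (key || counter) as a toy stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xcrypt(key, data):
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)                    # lives in the device's key storage
disk = xcrypt(key, b"family photos, tax records, ...")

# "Remote wipe" = erase these 32 bytes. The gigabytes of ciphertext
# are untouched but now indistinguishable from random noise.
key = None
```

Which is why the wipe is instant, and also why a retained PIN can plausibly reverse it: as long as Apple still holds a copy of the key, nothing is actually gone yet.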

I assume the remote wipe zeros out the data a few times, otherwise the feature is somewhat useless.

If you're trying to remote-wipe your computer so that a thief doesn't access your sensitive data, wouldn't you want the data to be lost permanently?

Could be. But that's a very different problem.

Old-school computer security breaks things down into the CIA categories:

Confidentiality is for things you want secret. Integrity is for things you want to not be altered. Accessibility is for things you want to be able to reach.

Honestly, very little data requires confidentiality. Yet that's what encryption is usually used for. I would, by an order of magnitude, rather have a hacker gain access to my family photos than have them deleted beyond my ability to recover.

I hate whole-disk encryption. In nearly everything in my life, the threat of losing access to my data is vastly worse than someone else accessing it.

Losing a laptop can and will happen at some point. At that point, if you don't have a backup you will lose access to your data period. Full disk encryption means nobody else will get to that data.

Keep an unencrypted backup in a secure location, not on a device you are bound to lose in a coffee shop or airport.

I encrypt the entire disk of my laptop. That can contain potentially important information, and it also has the best chance of being stolen or lost. I can keep relatively important piece of information on my laptop now after I installed TrueCrypt and encrypted my entire disk. It makes hibernating my laptop about 20x slower, so I stopped doing that, but it's completely worth it.

Interesting - hadn't heard that CIA thing before.

I run a business. A good deal of what is on my laptop I would put in the confidentiality category. I guess apps and settings would come under integrity.

Yeah, one of the points of CIA is to help you identify which problems you want to fix. A lot of business assets do require confidentiality, and you have to spend correspondingly more money and time dealing with it.

I don't have a blog and I don't know the proper convention for those "Show/Ask HN" posts so I suppose a comment here is the next best thing because my question is related.

After reading the "Yes, I was Hacked. Hard." post I updated several of my passwords and found that Netflix enforces a 10 character limit on their passwords. Does anyone have an idea why or how this could be the case? I would find it very ironic if they did this to save a few bits per user in their database considering they're a media streaming company.

Very likely it's just some sort of limit imposed by a security API or library call. Definitely not a way to save space. It's really idiotic - they should be extending it out to longer than that, but there are still some banks around that impose shorter limits than this (8 chars), so they are in good company.

Two-factor authentication is important for online security (and not just email accounts), but there are other lessons to be learned from Mat Honan's misfortune. I'm probably more extreme in my practices than most people, but I'm OK with the inconveniences.

- You can't rely on companies providing online services to have your best interests as their best interests.
- Take security seriously, because if you don't you won't know about an attack until it's done.
- Don't use a vendor's all-in-one services.
- Don't use "the cloud" as a backup source.
- Back up frequently.
- Don't use one email account for everything.
- Have an email account that is used for recoveries and nothing else... and keep it obscure, e.g. x90x90recovx@someotherhost.com
- Don't use personal credit cards for online purchases.
- If it's an option, don't store credit card details against your account; choose to manually enter them every time.
- Don't use the same credit card for multiple sources of online shopping/billing/etc.
- Don't give real answers to "security questions", such as your mother's maiden name or the name of your first pet.
- Don't provide real personal information (address, contact number, etc) to online services when you create an account.
- Don't use Facebook, Twitter, etc irresponsibly.
- Shut down if you're not at your computer.
- Encrypt your data.

Can somebody explain to me how it is that two-factor authentication would have prevented the hacker from seeing the author's recovery email address? Why would Google allow anybody to see your recovery email address without a password, and why would two-factor authentication prevent it. The author never explained this.

Some banks provide a service which allows you to create unique credit card numbers without actually having to get separate physical credit cards. Kind of like application-specific passwords, but for credit cards.

See here: https://www.citibank.com/us/cards/gen-content/messages/van/i...

Separate credit card numbers for Amazon and Apple would have prevented this hack.


This would be much more effective than the "Verified by Visa" theatre. "Virtual credit cards" with a limit and maybe even vendor limited (for example, create a virtual card and add some sort of vendor id for Amazon)

Too bad it can't be used for anything, for example, some airlines require you present your CC when traveling (if it's your cc and you're traveling)

Everyone focuses on Gmail 2-factor, but that should be added as an option for any online service. It's trivial for any web developer to use the Google Authenticator to offer 2-factor auth for your own service in just a few minutes. I made a demo a while back in less than an hour, all open source. http://dendory.net/twofactors
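For the curious, the algorithm behind Google Authenticator is just TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter, truncated to six digits. A minimal pure-Python sketch (the secret below is the RFC's published test key, not anything real):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // period))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" at time 59 -> "287082"
rfc_key = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_key, for_time=59))  # 287082
```

Since client and server derive the same code independently from a shared secret and the clock, no network round-trip (or SMS) is needed at verification time.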

Useful advice via http://notes.kateva.org/2012/08/net-security-is-completely-b...:

'We need to give Schneier a few drinks and get him to talk about this again. Failing that:

Backup for Darwin's sake. Don't enable remote wipe of Mac OS X hardware. Just encrypt it. Use Google two-factor (two-step verification) if you are a geek and can stomach it. Fear the Cloud. Keep the data you value most close to you. Don't use iCloud. Don't trust Apple to get anything right that involves the Internet and/or Identity.

Not being Schneier my advice isn't worth much, but fwiw I suspect the "solution" is:

Get rid of the secret security question. Strictly limit password resets. If someone lost last access, charge them $50 to go to bank, post office or notary to establish their identity. Incorporate biometrics (thumb print and speech probably).'

Some regions have data protection laws. This means in some places the standard security questions (like "What's your mother's maiden name?") are not enough to protect people's personal data. (Which is good).

However such laws also include access. You cannot use disproportionate means to require access. Biometrics would probably not be legal to protect things like photos etc.

The scariest part of this article IMO is that there is now a recipe posted for getting into any Amazon account. Imagine all the damage/harassment they could do once in there: buy all kinds of stuff and have it sent to you, spin up 20 EC2 instances and use them to perform illegal activities, etc., all while burning up cash on your credit card.

That to me seems much worse than having an imac wiped.

The fact is that Apple and Amazon have far more confused customers than targets for social engineering attacks. They are always going to have an "I forgot everything about myself and my account, please let me in!" option. All cloud service providers are going to have this.

With this in mind, it may not be wise to remotely link your MacBook such that it can be wiped by Apple Central Command. Do people seriously do that? A phone is maybe kind of reasonable for this kind of thing (only kind of), but your actual laptop? Is this a requirement of new versions of OS X or something? I don't know who would set this up willingly.

Any local data that you want to keep from attackers should be stored as ciphertext. Your secret key should be encrypted with a strong passphrase. Most thieves, even high-level corporate espionage-type thieves, won't know how to use GPG in the first place, but if they do, if you've done it right they won't be able to get in.

From the perspective of keeping ourselves safe in a world where all data is kept on (or hooked up to a remote control at) the server of a big faceless corporation, all plaintext should be considered public info. Just because they haven't published or leaked it yet doesn't mean they won't, and it doesn't mean that anyone with an interest can't go in and take it, or that they won't wreak havoc for an ultimately minor goal (like access to Twitter).

Encryption and backup. The two constantly repeated, never honored mantras whose inconveniences have plagued computer users for decades now. If people did these things correctly, hacks would rarely matter or jeopardize significant amounts of data. This is a field that is ripe for system-level disruption; Time Machine kind of helped with the backup, but we still don't have anything decent for layman's crypto (perhaps because the business models of companies are now so dependent on reading our information and selling it back to interested parties).

"Encryption and backup. [...] If people did these things correctly, hacks would rarely matter [...] This is a field that is ripe for system-level disruption"

Encryption and backups set in stone. An attacker may not be able to read your encrypted backups, but if he can delete them, you still won't be happy.

I think the only feasible solution is that of online, write-only backups. They need to be online so that devices can backup themselves when they deem that necessary; you cannot trust users to do any manual backup task. They need to be write-only because, otherwise, with online backups, an attacker could wipe all your backups. Semi-write only, in the form of "deleting backups older than a year" or "delay any deletes by a month" (to give the user time to report his phone to be stolen) or "delete only after three-factor authentication" probably is acceptable.
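The "delay any deletes by a month" policy is simple to express; a hypothetical sketch of how a backup service might gate its purge step:

```python
from datetime import datetime, timedelta

GRACE = timedelta(days=30)  # illustrative grace period

def can_purge(delete_requested_at, now=None):
    """A backup marked for deletion is only actually purged after a
    grace period, giving the real owner time to notice and cancel."""
    now = now or datetime.now()
    return now - delete_requested_at >= GRACE

req = datetime(2012, 8, 1)
print(can_purge(req, now=datetime(2012, 8, 15)))  # False: still in grace period
print(can_purge(req, now=datetime(2012, 9, 5)))   # True: 35 days elapsed
```

An attacker who compromises the account can still request deletion, but can no longer make the data vanish before the owner notices.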

"perhaps because the business models of companies are now so dependent on reading our information and selling it back to interested parties"

I think it is because online backup looks too pricey. People keep comparing the price of online storage to that of hard disks. For example Dropbox is about $1 per GB of storage per year. You can buy a SSD disk or a laptop for less than $1 per GB of storage. As this example shows, current solutions also do not protect well against attacks.

I am not sure that the options of having your own cloud, or of making a cloud with others (peer-to-peer backups) will make sense to Joe consumer. Users may not want yet another device at home, likely will not have the upload bandwidth (yet), and are a risk factor with respect to operations on such a device. A home device probably would have to be a custom device, not a PC. Users cannot be trusted to operate it in ways that keeps their data secure, so you must make it impossible for them to operate it.

"Epic Hacking"?

A whole lot of damage was done, yes - but an "epic hack"? Don't think so.

  epic: heroic; majestic; impressively great

"The disconnect exposes flaws in data management policies endemic to the entire technology industry, and points to a looming nightmare as we enter the era of cloud computing and connected devices."

This disconnect is unfortunately not limited to the tech industry. Every receipt you get when you pay with your credit card offline will display some part of your credit card number. The crazy thing is that there is no standard for it and everyone picks different digits! If you collect your receipts and then throw them all away at once without destroying them, anybody can put the numbers together.

I would say this is a much bigger problem and has been around for ages!

How can he not press charges against 'Phobia' and any of his stupid script kiddy friends? Maybe the police are too stupid to do anything and the FBI has too much other shit to do, but isn't there any legal way to get these bastards?

He probably doesn't want to risk further attacks. If he did want to get the police involved, I bet the attacker wouldn't be too difficult to find with all the services they logged into and phone calls they made. I mean, they could have used Tor for all of their Internet activity, but I bet they still used their home land line to make the phone calls.

It boils down to "who do you trust?" Ultimately you have to take some responsibility in ensuring the safety of your data and be cognizant of the weaknesses of each link. I backup my data onto an external HD. In the event of fire or that HD being lost or stolen, I have online backups of everything but video. I also have an older external HD backup stored at my parents' house 2 hours away. I trust myself to an extent and the cloud to an extent, but never either absolutely. My life is not Google or iCloud or Dropbox or Drobo.

When you perform a remote hard drive wipe on Find my Mac, the system asks you to create a four-digit PIN so that the process can be reversed. But here’s the thing: If someone else performs that wipe — someone who gained access to your iCloud account through malicious means — there’s no way for you to enter that PIN.

That sounds more like remote encryption to me. And a four digit PIN is easy to brute force (assuming that it isn't asking Apple for the decryption key once entered, which means you need internet access to reverse it).
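A four-digit PIN space is indeed trivially small; enumerating all of it is a one-liner:

```python
import itertools

# Every possible 4-digit PIN: just 10,000 candidates.
pins = ["".join(d) for d in itertools.product("0123456789", repeat=4)]
print(len(pins))  # 10000

# Even a throttled online attack at 1 guess per second clears the
# whole space in under 3 hours; offline it's effectively instant.
print(len(pins) / 3600)  # ~2.8 hours at 1 guess/sec
```

So the PIN only matters if Apple holds the key server-side and rate-limits attempts; as a standalone secret it's negligible.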

Most of the "security questions" can be answered by looking at the Facebook profile (of the person or his/her friends -- at least some have the info public). A motivated hacker can possibly crack even bank accounts using the facebook profile. The account/security is indeed in a big mess.

>If I had some other account aside from an Apple e-mail address, or had used two-factor authentication for Gmail, everything would have stopped here.

Are you sure? Do you trust the minimum wage customer service reps of your phone company to not be susceptible to social engineering?

The scary bit (well, one of many) is how easy it is to get access to someone's Amazon account by just knowing their email address and billing address. That lets you buy anything, see their entire order history and probably gives you access to all of AWS.

If they try to add a new address, Amazon will ask for the payment method to be re-entered.

Can someone explain the reasoning behind the implementation of those "remote wipes"? If Apple pulls the trigger, everything on my laptop is erased the next time it's online? I can't see any practical application for that.

It's to keep sensitive information from falling into the wrong hands.

Just logged into Amazon account and removed all of the cards I have on record. Suggest everyone else does the same.

That doesn't matter for the purposes of the crack. With your name, billing address and email, the cracker was able to add a new credit card number.

Don't forget to remove the credit card information from iCloud/.Me too.

Does this scare you?

Somebody else found out an iCloud flaw... http://m.smh.com.au/digital-life/consumer-security/aussie-ex...

http://whois.domaintools.com/emptyage.com reveals way too much about Mat Honan!

All the major registrars I know of offer free whois privacy services.

What does such a privacy service entail?

As far as setting it up, usually just a checkbox somewhere on the registrar's admin panel.

As far as the results, all of the contact information in the public whois record is replaced with the registrar's contact information. They will forward information on to you if absolutely necessary.

I keep whois privacy turned on for all my clients just to protect them from that damned Domain Registry of America scam.

Most important lesson as far as reducing vulnerability to social engineering is concerned: whatever service we subscribe to - we should always find out about their account retrieval process.

In other words, we should always ask "what is the password retrieval process for the new account you just opened?" This sounds like a big task and one where not all scenarios can be covered. But I think this is a good first step - as long as we are still dealing with passwords, federated identity, half-masked credit card #'s and security questions.

I think this exercise would help us be careful about our choice of passwords, answers, email ids.

What would be the most obvious downsides to this approach?

Can we please get the entire internet to agree to stop using email addresses as usernames. It's not a user, it's an email address!

Probably not. Emails make terrific usernames because they are unique, easy to remember, double as a communication identifier and make registration slightly easier. And I'm not exactly sure how that would help in this situation. Are you suggesting that iCloud and Gmail login with usernames (different from your email address)?

With every platform, there is compromise between convenience and security; when your platform has to reach many, many non-tech-y people, convenience is preferred.

There are easy trade-offs one can make between convenience and security. For example, identity verification on the phone with the last four digits of a credit card (Apple).

But then there are policies and technology that increase BOTH convenience and security. Say the difference between using SSH these days versus using, say, paper and an Enigma machine.

The inconvenience of Google Authenticator is minimal and the security provided is huge.

How is that going to help? Are people going to be expected to use a unique username per site? And password recovery is still going to let someone take over.

It won't solve the problem, and it also would not have prevented THIS issue. However, many, many people use one email address for more or less everything in their lives. It's best if someone has no pieces of the puzzle, rather than having it half solved for them already. Especially when it's something like your first and last name as part of the address.

On the contrary, for a great many sites (low impact) I'm happy that they finally figured out to use my email address as a username. As a usability feature, it's much nicer than having to guess at whether my standard usernames are taken.

There are many problems with this. Among them that I have some iTunes purchases associated with an email account that hasn't existed in /years/. There's no way to rename an Apple account. This same problem exists on most sites that use email as username - if your email address of choice ever changes, you're SOL on having a single identity anymore.

You can change your primary email address, renaming the account, at http://appleid.apple.com

Not only that, but I have my own domain name which forwards anything@mydomain.com to my primary email. And to keep spam under control, I use a unique email for each (low impact) service. Therefore, I often forget which email I used when signing up for that particular service (unless they sent me a confirmation mail, in which case I can usually dig it up).

If you use example.com@mydomain.com as the email address for your login to example.com, then they're easy to remember.
