Some of the common misperceptions I see:
Myth: But what if my cell phone doesn't have SMS/signal?
Reality: You can install a standalone program called Google Authenticator, so your cell phone doesn't need a signal.
Myth: Okay, but what about if my cell phone runs out of power (added: or my phone is stolen)?
Reality: You can print out a small piece of paper with 10 one-time rescue codes and put that in your wallet.
Myth: Don't I have to fiddle with an extra PIN every time I log in?
Reality: You can tell Google to trust your computer for 30 days and maybe even longer.
Myth: I heard two-factor authentication doesn't work with POP and IMAP?
Reality: You can still use two-factor authentication even with POP and IMAP. You create a special "application-specific password" that your mail client can use instead of your regular password. You can revoke application-specific passwords at any time.
Myth: Okay, but what if I want to verify how secure Google Authenticator is?
Reality: Google Authenticator is open-source: http://code.google.com/p/google-authenticator/
Hmm. Maybe I should throw this up on my blog too.
I currently have two-factor set up on two accounts. I won't say there's no hassle involved--authorizing a machine via SMS so that I can log in and generate an app-specific password is a chore--but the peace of mind is well worth it.
Did you also know that you can re-assign your Apple ID to an existing e-mail address? Like, say, one you have two-factor enabled for? Now people can socially engineer Apple's flawed policy all day but they'll need to steal my phone, too.
EDIT: Also, if you don't have backups, you might as well just delete everything yourself right now and use that as motivation to prevent the same thing from happening again. Hackers, tornadoes, spontaneous combustion and ghosts are all conspiring to destroy your data sooner or later.
Part of the reason I think two-factor authentication is a usability burden is because each "identity provider" wants to use its own protocol. Google uses an Android app. PayPal sent me a card. My brokerage has a keychain token available. Other companies use a "soft" RSA token that runs on Windows. But if everyone agreed on a protocol, then I could have everything in one place, which would make two factor authentication significantly more enjoyable to use. (I know there are standards: the question is, who other than Google follows them? :)
I believe Google Authenticator is based on open standards and open source, so people could standardize on it if they wanted to.
The standards implemented are bona fide RFCs - HOTP (counter-based) was published in 2005, no less. I'm not sure how much lower the barriers for integration could be.
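To show how little is involved, here's a minimal sketch of both standards: HOTP (RFC 4226) and TOTP (RFC 6238), which is just HOTP with the counter derived from the clock. This is my own illustrative implementation, not Google Authenticator's actual code; the final line uses the RFC 6238 test vector secret.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with counter = unix_time // 30."""
    t = int(time.time()) if at is None else at
    return hotp(secret, t // step)

# RFC 6238 test-vector secret, at T=59 seconds:
print(totp(b"12345678901234567890", at=59))  # → 287082
```

That's the whole protocol: share a secret once, then both sides compute six digits from the clock.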
It is an open standard protocol, but I don't know in practice how many companies have compatible implementations.
I designed Authy with that in mind. I wanted a way to have one token for all accounts. Maybe we will add support for Google Authenticator, so you could import your Google Auth token into the app.
They are Google-generated passwords with a user-supplied label. And if you use 2-factor authentication, they are the weakest link in the chain, since they provide full access to a Google account except for the access paths that are themselves protected by 2-factor authentication. In addition, every app can use such a password, not just the app you created the password for …
2-factor authentication is great and certainly recommendable, but you should not fool users with the false term 'application-specific passwords'. In addition, more complex Google-generated passwords would be appreciated.
Also this password doesn't give you full access to your Google account. You cannot log into Google web apps this way (AFAIR). Thus you won't be able to mess with account settings (passwords etc). Before you can change critical account settings Google asks you to provide your traditional password again.
Your comment is a bit harsh if not FUD.
Using one of these so-called application-specific passwords, you can delete calendars, mails and contacts. That is critical enough for most users.
An additional concern is the usual 30-day authorization you give in order to avoid entering your 2-factor token again and again. Is there any way to de-authorize such a 30-day authorization?
Anyway, I don't rule out that my perspective might be too strict. For most users, the whole Google 2-step authentication system is probably a very important step towards improved security.
If a hacker gets a hold of an old backup of mine, which includes a Pidgin configuration file I forgot to delete, which holds a plaintext password, can he get into my account with that?
If you weren't using application specific passwords, of course, it would have been your actual Google password in that Pidgin config.
It's a little disappointing that the app-specific password isn't more secure, but it's certainly not less secure than foregoing them.
App-specific passwords are bypasses of 2-factor auth. Use them selectively and with care, revoke ones you're not using anymore. And replace them from time to time.
If you value security more than convenience, don't use application-specific passwords. Google certainly allows for that: just consume their services only through the secure web interface. Is there another mainstream e-mail provider who supports that?
However, if you would like to use apps, gtalk, pidgin, etc. Google will still let you and it will still be more secure than before (revoke specific passwords, etc.).
If one of your application specific passwords (ASPs) is compromised your e-mail content will be compromised but NOT your account as long as your phone / token generator are under your control, allowing you to recover access by your own means. That is a big difference.
A quick search shows that others share my concern about ASPs:
Off the top of my head, I don't even know if it's possible to ensure my iPhone or GalaxySII is using SSL/TLS for its POP/IMAP connections (or if an attacker with brief access to my device could switch it to plain-text passwords and then sniff its authentication on the local wifi).
A kludge, sure, but a possibility.
ASPs are certainly less secure than 2FA, but they only permit services access, rather than account administration access.
2FA is slightly more hassle for the knowledge that you are effectively protected against a wide range of attacks. It's very worth it.
2-step authentication is great but the (wrongly) so-called application-specific passwords are definitely the weakest link in the chain. They do not allow for a full takeover of a Google Apps account but there is still a lot of damage possible.
If you use the same generated password for every application, then you're not in that much better a situation than you would be otherwise. If you use different identifiers for each application, you can identify exactly where a breach happened from by last login time, and disable that password entirely.
It'd make me a lot more comfortable if I could lock each password down to a specific Google service (for instance: generate an OTP for Pidgin, and enable it only for Google Talk), they were a lot longer, and had special chars + numbers in them.
The point about permitting a password to be used for only certain services is absolutely a valid one, though.
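On the "a lot longer, with special chars" point, it's worth running the numbers: 16 random lowercase letters actually carry more entropy than a shorter mixed-character password. A back-of-envelope sketch (the character-set sizes here are my assumptions for comparison, not a documented Google format):

```python
import math

# Bits of entropy = length * log2(alphabet size)
asp_bits  = 16 * math.log2(26)   # 16 random lowercase letters, ASP-style
full_bits = 8 * math.log2(94)    # 8 chars from all 94 printable ASCII chars

print(round(asp_bits), round(full_bits))  # → 75 52
```

So the complaint about missing special characters matters less than it seems; length dominates. The per-service restriction complaint, on the other hand, stands.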
Edit: To add to my reply, I have never used the remember-password option in Pidgin in the past; however, enabling 2-step auth required me to do that. It'd be great if Google could somehow allow only the first app associated with a password to use it for subsequent logins. Technically that looks very challenging (if at all possible).
The inflexible account ordering in Authenticator is bugging me, since I recently added a 5th account (not 5 google accounts) and my phone only shows 4 at a time. I don't know if you have any influence over the people maintaining the app, but it looks trivial to fix, given the comments.
It's similar to @mat's problem - Amazon assumed the CC last 4 digits was "non identifying", Apple assumed they were.
How much effort do you suppose your phone company expends securing your voicemail or call forwarding? I'll bet it's less than would be considered "industry best practice" for securing your corporate dns records…
First, let me explain a little bit of background on this "hack". From the article, they had 4 problems with their process that allowed them to get hacked badly:
1. AT&T was tricked into redirecting my voicemail to a fraudulent voicemail box;
2. Google's account recovery process was tricked by the fraudulent voicemail box and left an account recovery PIN code that allowed my personal Gmail account to be reset;
3. A flaw in Google's Enterprise Apps account recovery process allowed the hacker to bypass two-factor authentication on my CloudFlare.com address; and
4. CloudFlare BCCing transactional emails to some administrative accounts allowed the hacker to reset the password of a customer once the hacker had gained access to the administrative email account.
I'm not really sure what #3 is and #4 is irrelevant to our discussion, so I'll concentrate on points #1 and #2.
From #1, it follows that the attackers were able to obtain the phone number associated with the two factor auth. How did this work? My assumption is that it was a very targeted attack.
The article starts the attack at June 1st, 2012, but I believe (read: assume) that the attackers probably met the target of the attack beforehand and obtained his/her business card (with a cellphone number), which allowed them to perform #1 above.
So, given that this attack required a physical piece of paper (i.e. a business card) to be acquired from the target, it is not a stretch of imagination to say that if another attacker wanted to obtain the last 4 digits of someone's credit card, all they need to do is follow the person to a restaurant or gas station and get a payment receipt -- every receipt has the last 4 digits written on it. Some have the first four.
Therefore, I don't believe it was Amazon's fault for assuming the last 4 digits are non-identifying, but rather Apple's fault for assuming they are. To be clear, hindsight is 20/20, so I think it was relatively reasonable for Apple to assume that. However, I do expect Apple to change this policy in the future.
(And, there are many alternative and easier ways to acquire most people's cellphone number - no need to meet someone or get them to give you a business card… But your point still stands…)
It's true about a cellphone number -- from what I heard about this attack outside of this article, it was a very targeted attack, and the attacker knew exactly what kind of data to expect in the GMail account, which is what led me to conclude that the attacker probably knew or had met the victim. But likewise, good point about obtaining the cell number in other ways.
The app works very well on both Android and iOS (I personally use the iOS app because I flash new ROMs all the time and sometimes the Titanium Backup of the app doesn't work). I tried the SMS option a few times; the message normally arrives within one minute.
I went to a different computer that had previously had Chrome synced using an ASP, switched to my account, and went to settings. At the top, I get this error message: "Account sign-in details are out of date. Sign in again." That's good, since the ASP was revoked. But then I go down to advanced settings => passwords, and they're all still visible?!?! That's just WRONG! If the ASP login credentials have been revoked, access to all locally stored passwords needs to be revoked too!
Any idea what's going on? This seems like a flaw. The 2-factor authentication is still great though.
I would also recommend putting an unlock pattern on your phone to protect in case your phone is stolen.
1) You still need to enter your password every time you log in.
2) You can add backup phones that can be called/texted with the verification codes
3) You can print out back-up codes that will always work (once)
4) If your phone is stolen and is using an application-specific password, you can revoke that password for that application.
If your phone gets stolen and it's logged in to your google mail without a lockscreen pin/code, then yeah - the thief can read your mail, 2fa won't help. They can also run your Authenticator app and see the current 6 digit number, but that's not useful without the password as well.
(I'm not sure how easy it is to extract the Google password from an Android or iPhone - I wonder if you can just switch them to non-TLS POP3 or IMAP and have them send a cleartext password over an unencrypted wifi connection?)
However, I think google services use XMPP if I'm not mistaken. In which case the password is never actually transmitted over the air. XMPP uses Digest access authentication. Short version: the server would first send a challenge to the client. The client hashes the challenge with a hash of the password and returns the result. The server performs the same operation and compares. So even with a MITM you'd get nothing. Furthermore, the client itself would never need to store the password either.
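The core of that challenge-response idea can be sketched in a few lines. This is a toy illustration with names of my own choosing, not the actual XMPP SASL mechanism (the real DIGEST-MD5/SCRAM exchanges add client nonces, iteration counts, and more), but it shows why a man in the middle never sees the password:

```python
import hashlib
import hmac
import secrets

def client_response(password: str, nonce: str) -> str:
    # Client hashes the server's one-time nonce together with a hash of
    # the password; the password itself never crosses the wire.
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    return hashlib.sha256((nonce + pw_hash).encode()).hexdigest()

def server_verify(stored_pw_hash: str, nonce: str, response: str) -> bool:
    # Server stores only the password hash and repeats the computation.
    expected = hashlib.sha256((nonce + stored_pw_hash).encode()).hexdigest()
    return hmac.compare_digest(expected, response)

nonce = secrets.token_hex(16)                 # server issues a fresh challenge
resp = client_response("hunter2", nonce)      # goes over the wire
stored = hashlib.sha256(b"hunter2").hexdigest()
print(server_verify(stored, nonce, resp))     # → True
```

Because the nonce is fresh per login, a captured response can't even be replayed later.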
They can wreak havoc, but they cannot change your password and steal your account.
They would still have been able to compromise his iCloud account and thus destroy the data stored on his computers.
Unfortunately, I think that in this particular case, having two-factor auth on GMail wouldn't have helped. His account was compromised by having a password recovery email sent to his iCloud address. Presumably, password recovery bypasses two-factor auth.
In this attack and the earlier CloudFlare attack, the attacker took advantage of inappropriate recovery-email settings. While it's reasonable for consumers to enter recovery emails, I think that professionals should avoid enabling them. When an @gmail.com account sends recovery mail to an @me.com address (or vice versa), the attack surface is greatly increased.
Secondly, the Wired article made this claim: "Because I didn’t have Google’s two-factor authentication turned on, when Phobia entered my Gmail address, he could view the alternate e-mail I had set up for account recovery. Google partially obscures that information, starring out many characters, but there were enough characters available, m••••firstname.lastname@example.org."
I don't know for sure whether that's true or not. But assume it is true. If two-factor authentication had been enabled, then the hackers would have had a much harder time guessing Mat's email address for iCloud and whether he had a @me.com email address at all.
The problem may be that "me.com" is so short that Google might display the full domain name. If that's the case, Google should fix it.
I disagree. I think most iCloud users (80% of iOS users, by Apple's count) got @me addresses when they upgraded to iOS 5 or Lion. I can use both my @gmail.com and my @me.com in the App Store to purchase, or to log in to icloud.com.
Gmail was not really needed to guess the name at @me.com.
Moreover, in his case, it seems he would be better off not having the secondary e-mail address for recovery at Google. It turned out to be anti-security measure.
The root problem in this story is that things are just too damn interconnected these days. And we're encouraged to interconnect them even further (using cellphones to authenticate email, in this case).
I think that my reply is not harsh enough. After reading the comments more closely, I see that this is a typical IT response to an IT failure. 1. Ignore the root cause. (Interconnectedness.) 2. Blame the user. (Implicitly, for not enabling two-factor auth.) 3. Suggest a workaround and dismiss any concerns about real-life scenarios. (E.g. losing your wallet and cellphone in an emergency. Emergencies like that happen more often than you'd think.) 4. Feel smug.
I've found this to be true - to a certain extent. I had two factor authentication turned on and found it to be a nightmare in OSX Mail. Failures to retrieve mail, asking for my password constantly, etc. I was resetting the application passwords every two days. I tried to research a fix, but in the end it became less of a hassle just to turn it off and have my email work 100% of the time.
It's probably a bug in Mail.app in Lion, but I can't say my experiences with two factor auth have been positive.
That's what they're for.
The fact that they pieced together all this information from multiple sources, including Amazon's ability to add credit cards over the phone, to getting the billing address through domain name registration, to hacking into Apple iCloud really makes me feel... I guess depressed is the word.
We really have no control over our own data security. I've been super paranoid about things like identity theft, and I got my identity stolen, which is something I've been dealing with over the past 2 years or so. Somehow, my birthdate, addresses, etc were all wrong, and I had to jump through hoops to get it changed. As well, I currently have an unpaid credit card linked to my account, and the credit agencies and the collection agency won't remove it. The collection agency required me to submit 3 copies of my signature, a police record, copies of my identification, etc, before they'll remove it, even though THEY were the ones who made the mistake. I went to the police station to file a report, but they needed documentation that I didn't have, since I had already changed most of the information through the credit agencies. At this point, I froze all my accounts through the credit agencies, and I've given up.
The safety of my email, etc, is something that I also take extremely seriously, and now I'm being told that there's a possibility of being hacked via clever hackers piecing together information from various sources, each of which have different security procedures. We literally have no data security except "security through obscurity", meaning that the likelihood of being randomly hacked is low, but if someone wants your account, they can and will get it, pretty easily it seems.
The industry NEEDS to standardize on very rigid set protocols on things like what information they give out, how accounts are reset, how things like credit cards are added to accounts, what information they leak, etc. This is ridiculous.
Unsurprisingly, I got the exact reaction I'm seeing here when it has been suggested: lots of questions about how it works, people who think their situation is unique so it won't work for them, and people complaining that SMS is insecure.
1) Don't ask any more questions. Try it out; if you hate it, turn it off.
2) Your situation almost certainly isn't unique. You get 10 codes to print out, you can have (revokable) application-specific passwords that don't require the token. Try it!!
3) Use the smartphone application.
Don't ask any more questions - just try it out!
I'm saying that people should turn it on, and try it. Most of the questions are the kind of things that would be solved by just trying it!
Not even these questions:
Aren't we as tech people completely and utterly failing the world at large when the best possible response to this story is to turn on 2 factor auth on one of the many accounts a person has?
Is a very slight reduction to the attack surface really the best we can do?
At the moment, yes, I think it is.
At some point in the future perhaps this may improve (although I wouldn't count on it).
Now I have some codes squirreled away in a couple key locations.
(I managed to get the information by visiting GMail from my Linux box where the login didn't expire, but still)
Then type "gpg -c sensitive.png" and use the same password as your gmail account to secure it. Then put "sensitive.png.gpg" in "~/.ssh" or another place out of the way and forget about it, until the day comes that you'll need it.
About a month ago, one of my credit card accounts got hacked and was used to send money to someone else - the number itself wasn't compromised, it was the actual account. No doubt, the attackers tried to login and change my email password, but had to settle on the next best thing - spamming my email address with hundreds of emails per minute in an attempt to cover up the emails sent by my CC company.
Fortunately, the spamming wasn't very sophisticated and it only took me 30 seconds to filter it all to trash. I was on the phone with my credit card company within 10 minutes of the attack, which mitigated some of the damage.
I'm sure at some point weaknesses will be found in the 2-factor auth solution, but for now, it feels almost mandatory for important email accounts.
I can't _think_ of anything less of a hassle.
Then use dropbox to keep the .safe file synced across machines
I consider the 1Password file sensitive enough that it shouldn't be online, especially not with dropbox. I'd prefer if there were physical protection for it somehow, too (like a smartcard or FIPS module, which wouldn't allow bulk-export normally, and which might impose other rules on use like 5 passwords per hour when outside my home network, etc.) Same way you handle high-security private keys.
(Ultimately I'm not going to be happy until I have a trusted tablet of some kind, but building that either requires being Apple or waiting for WP8 hardware to come out and investing about $5mm in some serious security upgrades. Maybe worthwhile, though, since it solves the general problem of trusting client devices.)
So even if they get my password database and my password they don't have the key file. And if they can get the key file, they could have gotten the password database anyways.
"...but had to settle on the next best thing - spamming my email address with hundreds of emails per minute in an attempt to cover up the emails sent by my CC company."
Flooding someone's inbox (or telephone) is now available as a very affordable a la carte service.
If both companies agreed to a standard that made it obvious such practices were non-compliant, this wouldn't have happened.
I wish people would stop bandying this about as if there was an actual flaw in the 2-factor app or the protocol or crypto algorithms used. The linked breach was likely due to a social engineering attack on phone company support staff. Yes, it's concerning, and something Google and the phone companies should be investigating, but no, 2-factor auth wasn't "cracked."
Someone who's more informed than I, is there really a weakness in smart-phone based two factor authentication if you choose NOT to use the SMS or voice based backup, out-of-band authentication option? I'm pretty sure that is an optional feature of Google's 2-factor auth, with printed backup codes being the alternative.
On Android, way way easier, due to lack of secure device storage. Just get a copy of the disk image and then you've got the seed too.
It's still better than a password, but not as good as an actually secure independent factor. Sadly the SecurID sucks.
FYI: you can encrypt the boot flash drive in ICS and Jellybean; you need to type a passphrase to unlock/boot the phone.
On iOS and Blackberry, you're authenticating to a security chip which has a device-specific key (long, random). On an iPad 2 or iPhone 4S or later, you can't make attempts without being physically on the phone, and this is limited to no more than 8 per second on the fastest iPad 3 CPU. This makes a 4 digit passcode on iPhone 4S (with wipe after 10 tries) potentially more secure than an 8 character random alphanumeric on Android. Online (well, device-online) vs. offline attack. I'm not sure about the latest Blackberry OS security chip status, but a few years ago it was similar, so I hope it hasn't gotten worse.
(There are ways, even on the latest devices, to prevent the device wipe on 10 tries, but no known public ways to do attacks without doing them on the device itself, or physically tampering with the device, which isn't impossible but requires physical access and chip-level attacks. If your passcode is long enough, you'd have time to detect your loss and presumably invalidate any credentials stored on the iPhone.)
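The on-device vs. offline distinction is easier to see with rough numbers. The rates below are assumptions for illustration (the ~8/sec figure is from the claim above; the offline rate is a generic guess for cracking hardware), not measurements:

```python
# Rough brute-force arithmetic; rates are assumptions, not measurements.
online_rate  = 8      # guesses/sec, rate-limited by the device's security chip
offline_rate = 1e9    # guesses/sec for an offline attack on extracted data

pin_space   = 10 ** 4   # 4-digit passcode
alnum_space = 62 ** 8   # 8-char random alphanumeric

print(pin_space / online_rate / 60)       # minutes to exhaust all PINs on-device
print(alnum_space / offline_rate / 3600)  # hours to exhaust the big space offline
```

Exhausting the PIN space on-device takes about 20 minutes at that rate; exhausting the much larger alphanumeric space offline takes only a couple of days. That's the sense in which a rate-limited 4-digit passcode can beat an offline-attackable 8-character password.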
When someone steals your credit card, you just cancel the card and move on with your life. You don't call the credit card's data protection technology a failure. And changing your iCloud/Google/really any password is an orders-of-magnitude better user experience than canceling a credit card.
> It's still better than a password, but not as good as an actually secure independent factor.
Companies like Google have real, statistically significant data on how much 2-factor auth reduces account compromise. Your claims to the contrary seem to be rooted in an academic (at best) perception of weakness in the technology.
The attack can be done by pwning your personal computer, waiting for you to connect your iPhone via wifi or cable to it, and then remote-proxying the display on your phone to the attacker via the compromised personal computer. This would all be undetectable to the user.
Even a bad two factor system is better than passwords from a large service provider's perspective. Two factor using a phone isn't as secure as fully independent two factor for enterprise use.
Add to this that many high security environments don't allow phones, or that people carry only a single device (phone or maybe phone+tablet, often), and the "phone as two factor" becomes a lot less useful.
The big problem is having to carry multiple tokens, the cost of physical tokens (including replacement/management costs), and that no one makes a decent physical token at present.
iOS + some kind of "secure device-local mode" for the OS (which couldn't be remote-accessed for display, and which doesn't get pushed in backups, keystore-like) would make something like Authenticator much closer to a physical token in security.
The funny thing is WP8 actually has the tools to build this, and Enterprise (i.e. huge windows deployments with good device management) is the environment where it would be useful.
It seems like you refuse to accept any of that because if someone roots my laptop and proxies my phone's display over the internet then Google's 2-factor might as well be ROT13.
Maybe - just maybe - those two things don't need to be in conflict. But you're insisting on that conflict, not me.
Wordpress, Lastpass, and a few other sites seem to support Google Authenticator as well, but it has very little adoption in the enterprise (compared to physical tokens, x509 certs, and passwords).
FUD will never go away. Don't let it get to you.
Edit: forgot to mention, also open-source.
* Use a backup code
* Send to your backup phone number ending in ##
* I cannot access any of my phones
So yes, if someone gets my phone, they can then gain access to my Google account. Grrrrr...
Only if they also know your password. That's why it's called "two factor authentication". Simply compromising your password is not enough. They also have to capture your phone.
Now, if you're stupid enough to write your password on the back of your phone.... you deserve everything that's coming to you.
I'm pretty sure the backup phone number is someone else's phone, not yours. I use my wife's phone number.
If you lose your phone (and thus access to the Google Authenticator app), you can send a code to the backup phone (for example, my wife's phone) allowing you to login.
I'm not trying to defend their stupid choice of offering option #2, but rather trying to offer a solution to your current problem.
If my phone becomes unavailable (e.g. lost, stolen, or dropped in a toilet) then I need a backup option to log in. The backup options Google provides are:
* Use a backup code
* Use a backup phone number
* None of the above, I still need help!
1. The backup codes are suggested to be printed and stored in a wallet; however you can put them anywhere you like.
2. The backup phone number can be somebody else's number. Your best friend, your partner, whatever.
3. If you still can't get a backup code, the third option is to go through Google's support team and recovery process. Selecting this option results in an advisory message stating the process could take from 3 to 5 days.
These options appear to be very sensible to me.
If someone gets your wallet, they can use your credit cards to gain access to your credit. You cancel the credit cards. Move on with your life.
Your phone already matters as much as your wallet now - that's just reality. Secure your phone as best you can. Use full-disk encryption wherever backups are stored. Don't give your phone to people you can't trust. Change your password if you lose it or it gets stolen. Move on.
These are tools to solve problems, not shrines to worship.
I don't see how this is an Amazon security flaw. The last four digits of my credit card are printed on receipts from just about every merchant I make credit card purchases with. Treating such public information as if it were a PIN places the flaw clearly in Apple's court.
> Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account — not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but didn’t have anything to share by press time.
> And it’s also worth noting that one wouldn’t have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you’re giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
This part seems relatively bad:
> Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account.
The wiping of the author's devices was purely due to the level of Apple's security - a level which Apple established based upon the interests of their stockholders. To hold Amazon to a standard which protects Apple's customers (as the article implies) just doesn't hold water - Apple implemented remote wipe, Amazon didn't.
Even so, this seems like a decent way to compromise Amazon accounts. Even though the danger involved when that happens is pretty minimal for the reasons you mention, it should nevertheless be something that concerns them. Even just revealing purchase history is an issue, though of course unlikely to be a life-wrecker like the Apple situation. I can't imagine this process will still work in a few days. All I meant to say is that they have something to fix, not that they share significant blame.
Based on the account, it appears that Apple does not - customer support call + password recovery + wipe iPhone + wipe iPad + wipe Macbook did not raise a flag.
Yes this is 80% Apple's fault, but Amazon doesn't have the right to give up my credit card digits. They are not public information as suggested earlier; they are only public if I choose to make it so (e.g. by my usage patterns).
"Hello Mr 67, before we start I'll need your pin"
"I have a pin?"
"Yes, when you set up this account you were given a pin required for phone access"
"Really? I have no idea what it is..."
"That's ok. If you can just answer these other few questions. What's your mother's maiden name?"
"And your birthdate?"
"Thank you, Mr 67, now how can I help you today? ..."
Turns out, when I can't remember it they tell me the first 2 letters!
I don't believe there is any hashing going on, after all, the bank in question is ANZ, they don't even use TANs for online-banking.
This is scary.
What should one try to do to protect against this?
Hypothetical actions to take:
Make sure that an email address that's doing double-duty as a login identifier for a given service is unique to the service and appears nowhere on the web or in outgoing mail.
Take particular care to have a "recovery" email address that is used for nothing else. Don't forward it to your regular mail, naturally.
Enable two-factor auth for email if you possibly can.
Have a credit card that is only used for online stuff.
Can one get a second address that is used only as a billing address? How would one do that? (A P.O. box? Expensive! A friend's house? I fear that credit card companies will leak this address like a sieve no matter what I do.)
EDIT: Startup wizards, here's a Minimum Viable Product: a credit card that can only be used for online accounts - which you must whitelist as you add them, via two-factor auth with your phone - and that features two billing addresses: The real one where the bills go and a dummy one that still validates. (Is that even legal under the CC rules? Sigh.)
The other suggestions in the article: Disable Find my Mac, reduce coupling between your accounts… was there something else?
Alas, nobody who isn't crazy paranoid is going to bother jumping through all these hoops. (I have tried to fight that paranoia but I think I'm losing that battle.)
Website: "Hey Bill, glad to see you today, what do you want to do"
Bill: "Delete _everything_ I've ever done on every system I have"
Website: "Of course! Let's get this started... beep boop bip and done!"
What about this:
1 - Kill request sent
2 - 48 hours is set on the clock so you can choose to cancel
3 - You can choose to pay $50 via credit card to have it happen immediately
4 - You are reimbursed $45 after a couple weeks
That might make it a little harder to have such hacks like this happen in the future.
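The four steps above amount to a small state machine. Here's a minimal sketch of the idea in Python (all names are hypothetical; no real service works this way):

```python
import time

# Sketch of the proposed flow: a wipe request only executes after a
# 48-hour cancellation window, unless the owner pays to expedite it.
CANCEL_WINDOW = 48 * 3600  # seconds

class WipeRequest:
    def __init__(self, now=None):
        self.requested_at = now if now is not None else time.time()
        self.cancelled = False
        self.expedited = False

    def cancel(self):
        # Step 2: the real owner can cancel within the window.
        self.cancelled = True

    def expedite(self, paid_dollars):
        # Step 3: pay $50 (via a traceable credit card) to skip the wait.
        if paid_dollars >= 50:
            self.expedited = True

    def may_execute(self, now=None):
        if self.cancelled:
            return False
        if self.expedited:
            return True
        now = now if now is not None else time.time()
        return now - self.requested_at >= CANCEL_WINDOW

req = WipeRequest(now=0)
assert not req.may_execute(now=3600)           # still inside the window
assert req.may_execute(now=CANCEL_WINDOW)      # window has elapsed
req.cancel()
assert not req.may_execute(now=CANCEL_WINDOW)  # owner cancelled in time
```

The payment in step 3 matters less as revenue than as a paper trail: an attacker who expedites the wipe has to burn a traceable card to do it.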
I was thinking about remote storage and devices. For example, a backpack is stolen with your phone/tablet/laptop and you need to issue a wipe to it NOW before they are compromised.
Requiring a credit card at least leaves a paper trail of some sort.
I was relieved it was so easy, but unnerved by that very fact.
Maybe they had the cell associated with my land line, but I doubt it, since I got the line before I ever had a cell.
Apple is really bad at running online services. It's a shame that they short-sightedly decided to go to war with Facebook and Google (who are good at services and bad at hardware) rather than playing more nicely together.
For every person who loses "irreplaceable" data to malice, many more lose it out of simple incompetence.
But yes, backups -- and not just online backups, but also offline backups.
This massively raises the bar for social engineering.
It's actually a good idea. I'll think about doing that…
The bad guys then have to crack your parents' email account too, and, being older, they are less likely to have daisy-chained Google, Apple and Amazon accounts.
"Pay with your iCloud password" just doesn't have the same fuzzy feel, does it?
The answer doesn't have anything to do with how he should have set up 8 factor authentication or how he should have had a Swahili-numeric password. The answer is that his hacker had extremely atypical motivations and that's the reason his life got destroyed.
The goal of this hacker was to pwn this guy's short, valuable twitter account. It's unlikely there's really any other hacker in the world who has that goal which is why such attacks are so rare. For most hackers, there's some sort of rational ROI calculation and if the ROI is negative, the hack isn't worth doing.
Nerds often have a hard time seeing that security is a holistic system. It's often composed of many flawed layers, arranged in depth, that together provide a statistically secure system. In real life, security comes from being able to push down the ROI through institutional mechanisms rather than personal ones. Credit cards are designed to be stolen and recovered from, investigations are able to target key players in the field, and tough penalties mean that the negative effects outweigh the positive gains.
All this has led to a black market rate of merely $2 - $3 per stolen credit card, meaning that there's not much motivation to hack in the first place.
Nerds naturally have a libertarian bent which makes them more inclined to believe encryption and technology is the solution to the problem when, in reality, it's a beefed up police state and American hegemonic decisions that can span the globe.
It does. It happens all the time. Most victims don't have the luxury of writing a wired article about it and are stuck picking up the pieces on their own.
I disagree. You can and will (for the foreseeable future) be able to choose a computer/configuration that doesn't allow some remote third party to run arbitrary code on it or wipe it.
His devices were all wiped because he let a third party have that level of access.
Yes, I made that phrase up. So here's what I mean:
- "Amazon then allows you to input a new credit card." <-- Amazon should then send an email confirming this to your email address, a txt to your phone, and a smoke signal to your Tipi.
- "Next you call back, and tell Amazon that you’ve lost access to your account.", email, phone, Tipi. And a waiting period.
- When you call Apple's tech support, again: email, phone, Tipi.
Maybe I'm missing the obvious flaw in this plan, but since customer support (humans) seems to be one of the main weak links, it would make sense to presume that's where people will attack, and to then attempt to reach out with all communication mediums possible to make sure you're talking to the real deal.
One thing that we should have is a "cool down" period. If you want to regain access to, say, your GMail account, then it will take 48 hours of waiting, and phone calls and emails will go out to your contacts before that is completed, so the real person has a chance to protest.
I don't understand how the MacBook data was permanently lost. Even if the files were deleted in the OS, they are recoverable by disk utilities. Unless they were encrypted. Which just goes to say that when you think the solution to your problem is encryption, you don't understand your problem.
"Okay, I'll need a notarized copy of a photo ID and once we have that, we'll give you a call to the number we have on file to confirm the change."
It's not perfect, but it would require an extremely dedicated and targeted attack to bypass, as opposed to "Hi, I'm your pizza delivery guy. I took a look at the receipt before I delivered your pie, and now I know the last 4 on your CC, your billing address, and your name. Let's go iCloud fishing!"
On the flip side, we perhaps need to come up with something better than usernames and passwords for authentication. There are plenty of services where I simply cannot remember my password and/or username. I'm getting better about writing them down inside a password-protected master file. But for many of those services I rely on the account recovery procedures, a vast majority of which are vulnerable once the attacker has access to my e-mail inbox.
I use LastPass and just generate a new random password for every new account. If I ever forget my LastPass password, I am boned (since it's the encryption key for my data!), but I don't worry about forgetting passwords anymore, and I don't worry about RandomSite getting hacked and my password being leaked. It's not perfect, but it's good enough.
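Generating a per-site random password like the parent describes is a few lines with Python's `secrets` module (a generic sketch of the approach, not LastPass's actual generator):

```python
import secrets
import string

def random_password(length=20):
    """Generate a high-entropy password from letters, digits and symbols.

    secrets.choice() draws from the OS's CSPRNG, unlike random.choice(),
    which is predictable and unsuitable for passwords.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
assert len(pw) == 20
```

A 20-character draw from a ~70-symbol alphabet is well over 100 bits of entropy, so the weak point shifts to the master password, exactly the trade-off the parent describes.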
That might also be why a PIN is available for "stopping" the wipe. (As an aside: the group got what they wanted, and one of the members even seems remorseful: they have the PIN necessary to unlock the device, but this was never touched upon.)
At the very least, that is why iOS devices take a split second to "wipe", as opposed to the time it would take to write thirty-two billion null bytes to flash.
Old-school computer security breaks things down into the CIA categories:
Confidentiality is for things you want secret.
Integrity is for things you want to not be altered.
Availability is for things you want to be able to reach.
Honestly, very little data requires confidentiality. Yet that's what encryption is usually used for. I would, by an order of magnitude, rather have a hacker gain access to my family photos than have them deleted beyond my ability to recover.
I hate whole-disk encryption. In nearly everything in my life, the threat of losing access to my data is vastly worse than someone else accessing it.
Keep an unencrypted backup in a secure location, not on a device you are bound to lose in a coffee shop or airport.
I run a business. A good deal of what is on my laptop I would put in the confidentiality category. I guess apps and settings would come under integrity.
After reading the "Yes, I was Hacked. Hard." post I updated several of my passwords and found that Netflix enforces a 10 character limit on their passwords. Does anyone have an idea why or how this could be the case? I would find it very ironic if they did this to save a few bits per user in their database considering they're a media streaming company.
- You can't rely on companies providing online services to have your best interests as their best interests.
- Take security seriously because if you don't you won't know about an attack until it's done.
- Don't use a vendor's all-in-one services.
- Don't use "the cloud" as a backup source.
- Back up frequently.
- Don't use one email account for everything.
- Have an email account that is used for recoveries and nothing else... and keep it obscure. e.g: email@example.com
- Don't use personal credit cards for online purchases.
- If it's an option, don't store credit card details against your account; choose to manually enter it every time.
- Don't use the same credit card for multiple sources of online shopping/billing/etc.
- Don't give real answers to "security questions", such as your mother's maiden name or the name of your first pet.
- Don't provide real personal information (address, contact number, etc) to online services when you create an account.
- Don't use Facebook, Twitter, etc irresponsibly.
- Shut down if you're not at your computer.
- Encrypt your data.
See here: https://www.citibank.com/us/cards/gen-content/messages/van/i...
Separate credit card numbers for Amazon and Apple would have prevented this hack.
This would be much more effective than the "Verified by Visa" theatre. "Virtual credit cards" with a limit and maybe even vendor limited (for example, create a virtual card and add some sort of vendor id for Amazon)
Too bad it can't be used for everything: for example, some airlines require you to present your CC when traveling (if it's your CC and you're traveling).
'We need to give Schneier a few drinks and get him to talk about this again. Failing that:
Backup for Darwin's sake.
Don't enable remote wipe of Mac OS X hardware. Just encrypt it.
Use Google two-factor (two-step verification) if you are a geek and can stomach it.
Fear the Cloud. Keep the data you value most close to you.
Don't use iCloud.
Don't trust Apple to get anything right that involves the Internet and/or Identity.
Not being Schneier my advice isn't worth much, but fwiw I suspect the "solution" is:
Get rid of the secret security question.
Strictly limit password resets. If someone has lost all access, charge them $50 to go to a bank, post office or notary to establish their identity.
Incorporate biometrics (thumb print and speech probably).'
However such laws also include access. You cannot use disproportionate means to require access. Biometrics would probably not be legal to protect things like photos etc.
That to me seems much worse than having an iMac wiped.
With this in mind, it may not be wise to remotely link your MacBook such that it can be wiped by Apple Central Command. Do people seriously do that? A phone is maybe kind of reasonable for this kind of thing (only kind of), but your actual laptop? Is this a requirement of new versions of OS X or something? I don't know who would set this up willingly.
Any local data that you want to keep from attackers should be stored as ciphertext. Your secret key should be encrypted with a strong passphrase. Most thieves, even high-level corporate espionage-type thieves, won't know how to use GPG in the first place, but if they do, if you've done it right they won't be able to get in.
From the perspective of keeping ourselves safe in a world where all data is kept on (or hooked up to a remote control at) the server of a big faceless corporation, all plaintext should be considered public info. Just because they haven't published or leaked it yet doesn't mean they won't, and it doesn't mean that anyone with an interest can't go in and take it, or that they won't wreak havoc for an ultimately minor goal (like access to Twitter).
Encryption and backup. The two constantly repeated, never honored mantras whose inconveniences have plagued computer users for decades now. If people did these things correctly, hacks would rarely matter or jeopardize significant amounts of data. This is a field that is ripe for system-level disruption; Time Machine kind of helped with the backup, but we still don't have anything decent for layman's crypto (perhaps because the business models of companies are now so dependent on reading our information and selling it back to interested parties).
Encryption and backups set in stone. An attacker may not be able to read your encrypted backups, but if he can delete them, you still won't be happy.
I think the only feasible solution is that of online, write-only backups. They need to be online so that devices can back themselves up whenever they deem it necessary; you cannot trust users to do any manual backup task. They need to be write-only because, otherwise, with online backups, an attacker could wipe all your backups. Semi-write-only, in the form of "deleting backups older than a year" or "delay any deletes by a month" (to give the user time to report their phone stolen) or "delete only after three-factor authentication", is probably acceptable.
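The "delay any deletes by a month" policy is simple to express. A toy sketch (hypothetical names, not any real backup service's API):

```python
import time

GRACE_PERIOD = 30 * 24 * 3600  # deletes only take effect after 30 days

class BackupStore:
    """Semi-write-only store: writes always succeed, deletes are delayed."""

    def __init__(self):
        self.snapshots = {}        # name -> data
        self.pending_deletes = {}  # name -> time the delete was requested

    def write(self, name, data):
        self.snapshots[name] = data

    def request_delete(self, name, now=None):
        self.pending_deletes[name] = now if now is not None else time.time()

    def cancel_delete(self, name):
        # The real owner reports the compromise and rescues the data.
        self.pending_deletes.pop(name, None)

    def purge(self, now=None):
        # Only delete requests older than the grace period are honored.
        now = now if now is not None else time.time()
        for name, requested in list(self.pending_deletes.items()):
            if now - requested >= GRACE_PERIOD:
                del self.snapshots[name]
                del self.pending_deletes[name]

store = BackupStore()
store.write("photos-2012", b"...")
store.request_delete("photos-2012", now=0)   # attacker issues a delete
store.purge(now=86400)                       # ...but can't force it early
assert "photos-2012" in store.snapshots
store.cancel_delete("photos-2012")           # owner protests in time
store.purge(now=GRACE_PERIOD + 1)
assert "photos-2012" in store.snapshots
```

The point is that a remote attacker who owns your account still can't destroy data faster than you can notice and protest.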
"perhaps because the business models of companies are now so dependent on reading our information and selling it back to interested parties"
I think it is because online backup looks too pricey. People keep comparing the price of online storage to that of hard disks. For example, Dropbox is about $1 per GB of storage per year, while you can buy an SSD or a laptop for less than $1 per GB of storage. As this example shows, current solutions also do not protect well against attacks.
I am not sure that the options of having your own cloud, or of making a cloud with others (peer-to-peer backups) will make sense to Joe consumer. Users may not want yet another device at home, likely will not have the upload bandwidth (yet), and are a risk factor with respect to operations on such a device. A home device probably would have to be a custom device, not a PC. Users cannot be trusted to operate it in ways that keeps their data secure, so you must make it impossible for them to operate it.
A whole lot of damage was done, yes - but an "epic hack"? I don't think so.
epic: heroic; majestic; impressively great
This disconnect is unfortunately not limited to the tech industry. Every receipt you get when you pay with your credit card offline will display some part of your credit card number. The crazy thing is that there is no standard for it and everyone masks different digits! If you collect your receipts and then throw them all away at once without destroying them, anybody can put the numbers together.
I would say this is a much bigger problem and has been around here for ages!
That sounds more like remote encryption to me. And a four-digit PIN is easy to brute force (assuming it isn't asking Apple for the decryption key once entered, which would mean you need internet access to reverse it).
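"Easy to brute force" is very concrete here: a four-digit PIN has only 10,000 possibilities, so if the check is purely local and unthrottled, exhaustive search is instantaneous. A toy illustration (an unsalted SHA-256 of the PIN stands in for whatever the device actually stores):

```python
import hashlib

def crack_pin(target_hash):
    """Try all 10,000 four-digit PINs against a locally checkable hash."""
    for n in range(10000):
        pin = f"{n:04d}"
        if hashlib.sha256(pin.encode()).hexdigest() == target_hash:
            return pin
    return None

stored = hashlib.sha256(b"4821").hexdigest()
assert crack_pin(stored) == "4821"
```

This is why real designs either rate-limit attempts in tamper-resistant hardware or derive the key from something a remote server must supply; a bare local PIN check offers essentially no protection.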
Are you sure? Do you trust the minimum wage customer service reps of your phone company to not be susceptible to social engineering?
As far as the results, all of the contact information in the public whois record is replaced with the registrar's contact information. They will forward information on to you if absolutely necessary.
I keep whois privacy turned on for all my clients just to protect them from that damned Domain Registry of America scam.
In other words, we should always ask "what is the password retrieval process for the new account you just opened?" This sounds like a big task and one where not all scenarios can be covered. But I think this is a good first step - as long as we are still dealing with passwords, federated identity, half-masked credit card #'s and security questions.
I think this exercise would help us be careful about our choice of passwords, answers, email ids.
What would be the most obvious downsides to this approach?
But then there are policies and technologies that increase BOTH convenience and security. Consider the difference between using SSH these days versus, say, paper and an Enigma machine.
The inconvenience of Google Authenticator is minimal and the security provided is huge.
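Part of why the inconvenience is so minimal is that the codes are generated entirely offline. Google Authenticator implements TOTP (RFC 6238), which is small enough to sketch in full with the standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """RFC 6238 TOTP, the algorithm Google Authenticator implements.

    The shared secret is the base32 string you get when enrolling;
    codes are an HMAC-SHA1 of the current 30-second interval count,
    so no network access is needed to generate them.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below),
# Unix time 59 yields code 287082 at six digits.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59) == "287082"
```

Since both sides can compute the code from the shared secret and the clock, this is also exactly why the phone needs no signal, the first myth addressed above.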