Even with 2FA, Google accounts can be hacked with just a phone number (ello.co)
274 points by philipn on Oct 31, 2014 | hide | past | favorite | 123 comments

I work as a sales rep in-store for a telco. From a security perspective, it's ridiculous.

We use computer monitors which customers face from the same angle as us. I'm sure someone thought it would make the retail scenario more inclusive, but security-wise it's a mess. I can't verify account details without pulling up those same details for the customer to see. So I ask people for their details, click the button, and cross my fingers that they're right. If they're wrong, what then? They might legitimately not have known whose name the account was under. It might be under their dad's, mom's, partner's or business' name. It doesn't matter: the system has absolutely no design affordances for giving multiple people different levels of security privilege on accounts that are used by more than one person.

Furthermore, we have no organisational clarity about access privileges. Everyone makes up their own standards. Some people in the company are very strict, and won't do a SIM swap without photo ID or full ID over the phone. Some people will do one if the customer quotes the same last name and could theoretically be the account-holder's child. But does it matter, when any customer can easily find out name, DOB and address by coming in store, then call up and get the SIM changed over the phone? We do have account PINs, but very few people set them. And you could find one out in store if you were sharp-eyed.

There's a constant tension between providing a good customer experience and protecting security and privacy. But our commission is based partly on customer experience feedback scores, and if you're the one asshole who tries to follow all the rules (or follow what you decide should be the rules, because there aren't any, haha) then you're going to a) get bad feedback and b) alienate and make life difficult in the majority of ambiguous security events, which I'm sure involve 95-99% trustworthy people.

Anyone relying on two-factor auth with a phone number who uses my company is vulnerable. Simple as that. It would take a determined attacker a day to get control of your number. All you'd notice is that your SIM stopped working. By the time you'd gotten a new one re-activated, it would all be too late - and you'd still be vulnerable.

I'm not sure what telcos are like in other countries, but I doubt they're much better.

I feel bad for the telcos (and other agencies that try to keep our private info). I called up my ISP a few months back and was presented with a variety of security questions that I couldn't provide the answer to. I certainly didn't know the 4-digit passcode I created 2+ years ago and haven't used since. My first couple of guesses at my favorite movie were wrong. It was only after my second guess at my best friend during elementary school (probably worth a blog post on the changing winds of our memory) that I was able to access my account. The problem was that I threw all sorts of answers at the customer service rep on the other end of the phone, and they were willing to ignore all of my incorrect guesses in the hope I would eventually hit on something they could verify. But that is exactly the problem. If I weren't me, I wouldn't want someone getting as many opportunities as I got to eventually hit the right answer and prove they are me. So where do you draw the line between customer support and customer security without either enraging real customers or allowing people to illegally access customer accounts?

TL;DR Someone create a startup to better identify people remotely.

A mobile carrier's identity verification could be augmented by asking questions about who you called recently.

Remote identity verification over the internet is not perfectly solved, but FIDO's U2F is pretty good. One problem is that hardware tokens cost money, and most people won't buy them. To avoid getting locked out, you have to buy (and the service has to support) multiple hardware tokens, which protects against loss or breakage. To prevent targeted token theft attacks, the token needs some kind of biometric verification (iris scans would be good), but that gets very expensive in a device that needs to be reliable yet kept on a keychain.

That or something similar is the only way to provide verification while preventing the creation of a centralized identity database [I think a cryptographically assured identity verification system that dramatically limits identity repudiation would turn into a privacy nightmare dwarfing current identity database efforts being made by many companies]. With something U2F-like, each company or service stores their own verification seed value that is used in the future to verify you. It could be on a mobile device, like standard TOTP auth, or on a separate specialized hardware token. It could use pre-shared seed values and hashing, or nonces and asymmetric crypto.
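A minimal sketch of the "pre-shared seed values and hashing" variant described above, using an HMAC challenge-response. This is an illustration only, not the actual U2F protocol, and all function names here are hypothetical:

```python
import hashlib
import hmac
import os

# At enrolment, service and token agree on a random per-service seed.
# Each service stores only its own seed, so no central identity
# database is needed.
def enroll():
    return os.urandom(32)

# The token proves possession of the seed by answering a fresh nonce.
def token_respond(seed, nonce):
    return hmac.new(seed, nonce, hashlib.sha256).hexdigest()

# The service recomputes the HMAC and compares in constant time.
def service_verify(seed, nonce, response):
    expected = hmac.new(seed, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

seed = enroll()
nonce = os.urandom(16)          # fresh per login attempt
resp = token_respond(seed, nonce)
print(service_verify(seed, nonce, resp))  # True
```

The asymmetric-crypto variant would replace the shared seed with a per-service keypair, so the service never holds a secret that could impersonate you.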

"A mobile carrier's identity verification could be augmented by asking questions about who you called recently."

Except that could be socially engineered pretty easily.

Plus, if your phone dies, you're in for a major inconvenience, because no one remembers actual phone numbers any more. I remember the number I had as a kid, but nowadays, even though my mom lives in the same place, I reach her only through VoIP or cell, and both those numbers are stored on my phone, not in my head.

> Hardware tokens cost money

All your excellent examples will dwarf mine, but I'll still offer it as a very cheap option:

When I was at Fortis Luxembourg, the bank gave me a passive token: A card with a few dozen digits on it. At each login it would request 3 of those along with the password.

The key point of this is, it never transmitted the full key over the wire. So someone who intercepted the communication could never rebuild my full password.

Cost for the bank? A few cents. Security? The best I ever had from banks.
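The card scheme can be sketched like this. The card length and challenge size are guesses (the comment doesn't describe the actual Fortis layout); the point is that only a small subset of the key ever crosses the wire:

```python
import random
import secrets

CARD_LEN = 40  # assumed: "a few dozen digits"

# The bank prints these digits on the card it mails you.
card = [secrets.randbelow(10) for _ in range(CARD_LEN)]

def challenge():
    """Bank picks 3 distinct positions to ask for at login."""
    return random.sample(range(CARD_LEN), 3)

def verify(positions, answers):
    """Only 3 digits are transmitted, never the whole card."""
    return all(card[p] == a for p, a in zip(positions, answers))

pos = challenge()
print(verify(pos, [card[p] for p in pos]))  # True
```

An eavesdropper on one session learns only 3 of the 40 digits, so rebuilding the full card takes many intercepted logins.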

> Security? The best I ever had from banks.

I'm baffled. In Germany, the chipTAN method [1] [2] is pretty standard, which uses the bank card as a cryptographic element. And usually, German IT seems to be years behind the industry standard (e.g. I don't know a popular German e-mail provider that offers 2FA.)

[Edit] This is the best thing about chipTAN: Even if the computer is subverted by a trojan, or if a man-in-the-middle attack occurs, the TAN generated is only valid for the transaction confirmed by the user on the screen of the TAN generator, therefore modifying a transaction retroactively would cause the TAN to be invalid. [/Edit]

[1] In action: https://www.youtube.com/watch?v=5gyBC9irTsM&t=41s

[2] https://en.wikipedia.org/wiki/Transaction_authentication_num...
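The transaction-binding property from the [Edit] above can be modeled roughly like this. This is not the real chipTAN algorithm, just an illustration of why a man-in-the-middle who alters the transaction invalidates the TAN:

```python
import hashlib
import hmac

# Model: the TAN is derived from the secret on the bank card AND the
# exact transaction details the user confirms on the generator's
# screen, so it is only valid for that one transaction.
def generate_tan(card_key: bytes, recipient: str, amount: str) -> str:
    msg = f"{recipient}|{amount}".encode()
    digest = hmac.new(card_key, msg, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

key = b"secret-on-the-bank-card"   # hypothetical key material
tan = generate_tan(key, "DE89370400440532013000", "100.00")

# The bank recomputes the TAN over the transaction it actually received:
print(tan == generate_tan(key, "DE89370400440532013000", "100.00"))  # True
print(tan == generate_tan(key, "ATTACKER-IBAN", "100.00"))           # False
```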

So the chipTAN generator reads the details of the transaction optically, and then you just confirm them?

Pretty clever. On my chipTAN (Belgium, ING) I have to enter the number by hand (part of the account# of recipient, amount).

On the positive side, mine does ask for a PIN before generating the TAN, so it is probably a bit more secure (balanced against wear of the keys on the TAN generator, of course - so it's arguable which one is better).

Unfortunately, that sort of static information frequently is targeted for phishing. The bank can keep telling people that they will never ask for all the codes at once, but some subset of customers will happily comply with such a request in a badly written email.

Mind you, dynamic 2FA frequently only narrows the time window in which phishing is effective. Even with transaction-based 2FA, you'd need people to actually read the text message the bank sends them with the transaction authorisation code.

Telcos in Australia (I've worked for all of the big ones) are exactly as you've described.

Disable SMS for 2-step and SMS for password resets and use a 2-step mobile app.


After enabling 2FA, disabling SMS for 2-step and SMS for password resets, and ensuring that you don't have any phone number set as a way to get into your account, what is your plan for continuing to use your account if your phone is stolen?

Backup codes.

It's also possible to install the seed for the TOTP generator on multiple devices - all the ones I've bumped into have a mechanism for typing in a long-ish string as well as scanning a QR code. Record that string (secured like a password, in something like 1Password) and you can always re-seed another device to come up with the same codes. I've got all mine on two phones and an iPad - one of the phones is usually in my pocket, the other is almost always at home.
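For the curious, that "long-ish string" is just the base32-encoded TOTP seed, and each code is a pure function of the seed and the current 30-second window - which is why independently seeded devices always agree. A minimal RFC 6238 sketch from the standard library (the seed below is an arbitrary example):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((t if t is not None else time.time()) // 30)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = "JBSWY3DPEHPK3PXP"   # the string you'd type into each device
now = time.time()
print(totp(seed, now) == totp(seed, now))  # two "devices", same code: True
```

This also makes the clock-sync caveat concrete: two devices disagree only when their clocks fall in different 30-second windows.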

As always, it's a security/convenience tradeoff - I've gone from needing "something I know and something I have" to "something I know and any one of several things I have".

Your tradeoffs there may vary - if I were a political-dissident/whistleblower/drug-czar I'd probably consider the risk of losing access altogether preferable to opening up additional avenues for vulnerabilities - an NSA-level adversary would probably have a significantly easier time if they knew they only needed to stealthily subvert one of several devices (at least one of which I don't usually have on my person) to get access to all my tfa secured assets, but the additional risk if I'm protecting myself from 4chan-grade griefers or non-network-pervasive internet criminals is - for me - low enough to accept for the additional reliability and convenience of multiple authorised tfa token generating devices.

For all the sites that use TOTP, I have a screenshot of the QR code that was presented to me, encrypted with GPG (using symmetric encryption and a random password), and I put that encrypted file in my 1Password collection.

I feel reasonably secure about this (as secure as I'm feeling about all the passwords already there in 1password) and I have a huge advantage that changing my phone won't require remembering to disassociate all accounts first if I don't want to lose access to them.

As TOTP works without a back-channel, that QR code stays usable until I manually revoke the key on the respective web site.

In my experience, when setting up a new device, you have to scan the QR or type in a code, then verify a generated key or two to "confirm" the new device. I'm not sure if that's an optional step, but it seems like you'd need to log in first, thus creating a chicken-egg situation for yourself. I'm sure you could enroll another device (e.g. tablet that always stays in the house, SO's phone, whatever), but it doesn't seem like it'd work as you spelled it out.

Backup codes may be a good option if kept somewhere very safe.

The "enter a generated code to confirm" step is to confirm at the server end that you've got an identical seed - they (presumably) use that before committing that seed to your user account (to ensure you aren't about to lock yourself out). It's mot needed at the client end.

I've got at least Gmail, AWS (Amazon), GitHub, Dropbox, Zoho, and several TOTP-TFA-protected WordPress sites on 3 different devices using this method. It definitely works. I see additional devices start to generate the same codes when I add the same seed (so long as their clocks are reasonably synced...)

This is using the Google Authenticator app on iOS and Android; I _think_ any RFC 6238-compliant TOTP app that lets you type in a string to key it should "just work".

I have a similar method. When I setup 2FA on an account, I print out the QR code and scan this with the phone to verify it works. I then store the paper QR code in a safe place.

Or you could right-click and save the image of the QR code as a file, then put that file on a CD-ROM or flash drive.

I thought that would be an answer, but then if your phone is stolen and they get in, couldn't they simply invalidate your 2fa codes too?

Mind you, it's probably the best idea.

Simply stealing your phone isn't enough. They also need to know your password to change 2-step settings.

So you also need to make sure that your phone's browser doesn't have your Google password stored, and/or your phone's storage is encrypted with a strong-enough key.

Google has made me re-enter my password when modifying 2fa settings.

Sure, but if it's saved in the browser then it can be extracted from the browser.

Last I checked, this was not the case - and a major cause for concern.

Every time I go to https://www.google.com/settings/security and click on 2-step verification, I'm required to enter my password if I haven't done so in the last 5 minutes or so.

With this scheme someone can't access your account by stealing your phone. But you also can't regain access by pointing your phone number at a new phone.

Put a strong password on the phone. Not just a PIN. Touch ID makes that practical now.

If you have a targeted attacker then Touch ID is actually less secure.

Also other trusted devices can bypass 2factor.

Less secure, of course, but my desktop and laptop bypass two factor.

They can bypass 2-step to access your account, but they can't change your Google password.

I responded to a comment about, "what is your plan for continuing to use your account if your phone is stolen?"

Did you downvote and respond to the wrong thread?

I didn't downvote. My reply was to "other trusted devices can bypass 2factor": yes, they can access the account, but they can't change the password without knowing the current one.

(Accidentally deleted a comment of mine, this attempts to copy it)

Indeed, as advised by telcos themselves (at least all _my_ local telcos):


'"SMS is not designed to be a secure communications channel and should not be used by banks for electronic funds transfer authentication," Stanton told iTnews this week.'

Make sure you disable it in BOTH spots, or you are still vulnerable! Disabling mobile for account recovery still leaves it for 2FA. You need to do both.

What strikes me most in these stories is how you always have to find some higher-ranking company employee through personal connections in order to get even a tiny chance of taking your account back.

These companies build on their users but, when their users need them, they betray them.

People need to be much more aware of the fact that you don't own your gmail address, or your Twitter/Facebook/LinkedIn/Instagram/whatever account. Those companies encourage people to build their reputations and networks and "personal brands" inside their walled gardens, while repeatedly demonstrating that they won't lift a finger to help protect the user's custodianship of "their" usernames.

Unfortunately - when you explain this to people there's no really good answer to their immediate "so what should I do?" question.

I no more "own" the bigiain.com domain than I own "bigiain" on HN, or "bigiain@gmail.com". While I can ensure I keep paying for it's registration, I have no doubt that if Monsanto or Goldman Sachs or Apple launched an new thing and trademarked it "Bigiain", my registrar would fold instantly to a legal demand from their lawyers, and I'd be just as out-in-the-cold as all those people without friends-of-friends in high enough places at Instafacetwigoo to "fix things", or with publicity platforms like @mat behind them.

I suspect in the future, there'll be a well known way to tie your online activity/reputation/network to a strong public key (with some distributed blockchain-like revocation/renewal audit trail). If anyone's working on something like that - I'd love to hear about it...

I have been thinking about this quite a bit recently as well, and I do think the future looks like something you described. However, I have doubts. Outside of worrying about the big brands removing your access to your hard-earned reputation (which seems unlikely on a mass scale), what would be the other common use cases for a crypto identity key? As we know, for a majority of people to adopt a new technology, there has to be a very compelling use case. I am not sure a consolidated identity key solves any real problem; it may just be a cool tech thing that us hackers would like to see, similar to the problem bitcoin in general is having in achieving adoption.

I am curious if you guys have any really good thoughts on products that could implement a crypto identity key that solves a real life problem. Would love to discuss.

I think it's a good idea to own your own domain name, at least as a tech savvy user. You can still use Google Apps with it (Google for work now?).

That being said, I think it's a bit unfair to say companies won't lift a finger to help protect their users' usernames. On the technical security level, many companies put a lot of effort into things like 2FA, general internet security, etc. - in particular Google, but also Dropbox, GitHub, and others. On the service level (i.e. what happens when you have to talk to someone), everybody could probably improve quite a bit. OTOH that's costly and would ultimately need to be paid for by the customers somehow.

On the legal level, there isn't really anything these companies could do for you. If you do not own a trademark for your chosen domain name (account name, page name, ...), you'll lose it to someone who does [0]. That also won't change if you have all kinds of friends in all kinds of places - your problem then is basic trademark law, not the goodwill of some company (that has to adhere to the law, after all).

Disclaimer: I work for Google.

[0] possibly with the exception of the account or domain name being your legal name, but I don't think there's a general norm for that.

Technical measures to prevent account theft are always welcome, but they stop there: at prevention. As most of us know through experience, though, poop happens.

In our era, for many people an account at an online social network is part of their identity. Losing it can be devastating. An account at Google is even more; it is one's documents, emails, contacts, calendar, photos, various data and digital purchases. So it is very important that there is support when you need it. Is it really so costly? I don't know. How many cases of account theft are there every day if the technical (prevention) measures are good? Maybe affected users are willing to cover some of them?

> I have no doubt that if Monsanto or Goldman Sachs or Apple launched a new thing and trademarked it "Bigiain", my registrar would fold instantly to a legal demand from their lawyers

That particular problem can be solved by getting a domain that nobody else would want. In my case, I've registered my first name+last name.com, which will certainly never be considered for a trademark.

Depends. What is your first and last name?

I require your address, SSN, mother's maiden name and the name of your first pet to verify your answer.

Thank you, have a great day!

I have no idea why you're asking me that or why I was downvoted. I said nothing about identity verification, just trademark issues. Seriously, wtf?

He is attempting humour.

A swing and a miss, apparently.

Your idea sounds somewhat similar to what keybase.io does.

You could own bigian.bit using Namecoin I think, and use it for everything else (like your email).

> .bit is a top-level domain that was created outside of the most commonly used domain name system of the Internet, and is not sanctioned by ICANN

Ah, so it doesn't work for probably 99+% of the internet.

I'm just letting him know that the thing he envisions for the future already exists. We just have to convince browser vendors to embrace it.

It's the Internet's version of "privatize the gains, socialize the losses." Or, the older, "Tails I win, heads you lose."

This just happened to me. The same timeframe, the same vector of attack, but a different target. They wanted my Twitter handle. Fortunately it was an old handle that Twitter had locked down and was not transferable. The hacker succeeded in making me lose my handle for a few days, but some friends came to my aid and I was able to get resolution through Twitter support.

My telecom company was helpful at first, but then we began to see circle-the-wagons behavior from them. We were at least able to get the call forwarding off of the account, but they would not tell us any details about what had happened on the account.

Even after your story (and even now), I'm not exactly sure whether my hacker was able to forward the text messages, or simply routed the phone calls to his phone and, using Google's password reset process, got a robocall to accomplish the same thing.

All of this is seriously making me consider creating my own 2FA service, only slightly better.

One quick recommendation I would add would be to put a passcode on your account with your mobile provider. Just call them and say "I'd like to add a passcode to my account", so you can at least add one extra layer of security there.

I would advise having them note on the account that under no circumstances are they to authorize you without the passcode.

This is the weakest part of the chain. We all forget our passwords.

Every cell phone carrier out there CANNOT allow you to make changes to an account, or get any personal information from an account, without properly identifying yourself with an authorized name on the account plus the last four digits of the account holder's Social Security number OR an account PIN. If a customer cannot provide these over the phone, they need to visit a store with a photo ID matching the account holder to get access to the account.

Note, this isn't specific to any carrier; these are FCC regulations that poorly trained CSRs ignore.

Unfortunately it is not that hard to get someone's first name, last name, and their entire SSN (not just the last 4 digits), especially if you have a bit of money to spend and know the right places to look.

Personal aside: does anyone know if I could call Verizon support and ask them to require a specific passcode be used before accepting any call relating to my account? Before I call and ask, I'd like to know the odds of them actually agreeing and actually abiding by it.

This article is conflating two things:

- two factor login (you need password + sms text)

- account recovery (using only a phone) THIS IS DUMB.

I only use an alternate email for recovery (my wife and I cross). Thus, each recovery account is still 2FA secured.

There's already been a story floating around about a young kid charging his dad's credit card because of the phone recovery option (he had the android phone in this case). This is NOT the same as 2FA auth.

There's a balance between keeping others out and preventing yourself being locked out. Every time you add another factor, you also have to add another recovery option in case you lose that factor:

1) Password(A)

:| Hacker must break A

:| Losing A locks you out

2) Password(A) + SMS recovery(B)

:( Hacker must break A or B

:) Losing A and B locks you out

3) Password(A) + SMS(B) 2FA

:) Hacker must break A and B

:( Losing A or B locks you out

4) Password(A) + SMS(B) 2FA + SMS password recovery(B)

:| Hacker must break B

:| Losing B locks you out

5) Password(A) + SMS(B) 2FA + SMS password recovery(B) + Code sheet(C)

:( Hacker must break B or (A and C)

:) Losing B and (A or C) locks you out

6) Password(A) + SMS(B) 2FA + Code sheet(C) + 3rd channel password recovery(D)

:) Hacker must break (A and (B or C)) or (D and (B or C))

:) Losing (A and D) or (B and C) locks you out

Only the 6th option is unambiguously better than a single password. I guess using a friend's phone for password recovery and your own for 2FA would achieve that.
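The break/lockout claims above can be checked mechanically by enumerating which subsets of factors a hacker holds (or you've lost). A sketch for schemes 3 and 4, showing why adding SMS password recovery collapses the scheme to a single factor:

```python
from itertools import chain, combinations

def subsets(factors):
    """All subsets of the factor set, e.g. (), ('A',), ('B',), ('A','B')."""
    return chain.from_iterable(
        combinations(factors, r) for r in range(len(factors) + 1))

# Scheme 3: Password(A) + SMS(B) 2FA, no SMS recovery.
broken3     = lambda s: "A" in s and "B" in s   # hacker needs both
locked_out3 = lambda s: "A" in s or "B" in s    # losing either locks you out

# Scheme 4: adds SMS password recovery(B), so B alone resets the password.
broken4     = lambda s: "B" in s
locked_out4 = lambda s: "B" in s

# Every attack on scheme 3 also works on scheme 4...
assert all(broken4(s) for s in subsets("AB") if broken3(s))
# ...and scheme 4 admits an extra attack: the phone alone.
print([s for s in subsets("AB") if broken4(s) and not broken3(s)])  # [('B',)]
```

Encoding schemes 5 and 6 the same way confirms the summary: only scheme 6 shrinks both the break set and the lockout set relative to a lone password.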

You could also have 2 SIM cards in your phone, one number known, one for additional business. A lot of phones have sockets for 2 SIM cards, and the cost is almost nothing.

"and every so often, I would get authorization code texts for the Gmail account that was tied to my Instagram handle"

As far as I know these authorization texts are only sent when your Gmail username and password have been entered correctly. This would indicate that the attacker knew your long random password. Keylogger? From there they only need your 2fa to access your account.

If I'm not mistaken, the attacker set up call/message forwarding on the victim's number via the telco, then chose the "forgot my password" option, where an SMS from Google (now going to the attacker's phone) can be used to reset the password.

Well, I heard from a friend of mine that in Argentina the cellphone provider can access your info.

The case was this: he was cheating on his girlfriend, and a friend of hers accessed my friend's text message log, saw the evidence, and told the girlfriend about it. Apparently (though I never confirmed this) the friend who read the messages worked at my friend's cellphone provider.

Since then I've known I can't trust my cellphone ever again, though I'd always suspected this could be possible.

Of course your cell phone provider can access your call logs. Probably even fairly low-level workers can get full access under certain conditions.

And of course some workers will abuse that access for personal reasons.

What did you expect? That workers at a phone company wouldn't be able to access your account info? Ideally, it would be compartmentalized, but...

I would expect auditing if not compartmentalization, and big legal risks from unauthorized access that mean a majority chance of getting fired.

That's the problem: "low-level workers can get full access". I'd expect that to require someone of higher rank than a low-level worker.

Such a person could work there for as little as a few months (or even just weeks), do huge damage, and then what? No controls?

This is why I always recommend against using SMS-based 2-factor. Without even doing any serious research, it seemed pretty obvious to me from day one that at the very least someone like NSA/FBI could forge your number somehow with or without the carrier's help, but there's also the potential for other attackers to do it, too.

Call forwarding didn't even cross my mind, but it just goes to show how ridiculously broken SMS-based two-factor authentication really is then, and even worse than I thought.

Ideally what I'd want is an NFC ring or a smart band/watch that can use FIDO's U2F or a similar protocol that works through NFC, to do 2-step verification for me.

> Ideally what I'd want is an NFC ring or a smart band/watch that can use FIDO's U2F or a similar protocol that works through NFC

I've got my eyes on a Yubikey NEO for just this kind of use: https://www.yubico.com/products/yubikey-hardware/yubikey-neo...

Keep your eyes on the nfc ring too http://nfcring.com

Why would the FBI/NSA bother with that when whoever is doing the auth will probably give them whatever they ask for directly anyways?

Did you consider the Yubikey neo, a FIDO U2F NFC/Mifare keyring + usb keyboard? it pretty much does that.

This is a good reminder that your phone may not be as secure as you think. In many countries governments are able to get access or ask for this type of change to be made from the national telco's.

The steps you can take at the moment are to use a mobile app (or preferably a security key!) rather than SMS backup, and, if you're feeling especially uncharitable toward your phone company, change the backup number Google makes you enter to a Google Voice number rather than that of your actual phone - creating a circular situation where it can't really be used as a method for account recovery / hijacking.

This article brings up a question about protecting email addresses that I'm hoping a HN reader can answer.

I have a unique email address for PayPal--different from my normal email address--that I want to keep secret. The problem is that every time I make a purchase, the merchant gets this email address (in addition to the normal email address I gave to the merchant). I know that merchants get it because I get junk mail at my secret PayPal address from merchants I did business with.

Is there no way to make a PayPal payment without PayPal handing my email address over to the merchant?

As a related question, why do I have to trust the merchant to redirect me to PayPal's website to make the payment? There are many ways I can get fooled into entering my PayPal password directly into merchant's website (for example, the merchant opens the PayPal site in a frame or pop-up, so you can't verify that it's really PayPal). Isn't there a way I can open my own browser window, login to PayPal, and give some sort of invoice number to PayPal to direct payment to the merchant?

>(for example, the merchant opens the PayPal site in a frame or pop-up, so you can't verify that it's really PayPal) //

You can right-click the page in Firefox and choose "View Page Info", then on the Security tab you can see if it's really PayPal, see the certificate, etc. Someone could hijack right-click, though that takes a bit of effort. I think in FF Shift+right-click overrides the page's handler to give you the browser menu, but probably that's capturable by the site too.

Ctrl+I is the shortcut, but I don't think it handles frames.

Interesting question. I have no idea. I suggest you shop it around as a new "Ask HN" question and as a question on security.stackexchange.com.

This is why "2FA" is supposed to actually be two factors. If you're using a phone number for 2FA, then authentication still boils down to the same thing: Something you know.

It's still two factors. If someone has only your phone but not your password, they still can't log in. The problem here is that the phone number was also used as a password recovery option, which effectively means you only need the phone to log in. I suspect most gmail users with 2FA are doing this, which defeats the purpose of 2FA. It just becomes "different factor".

It's the password recovery by phone that's the weakness. But I think people getting locked out of their own accounts is probably a bigger problem for Google than people getting hacked, so they err on the side of saving you from getting locked out.

No, it's not two factors. Access to the phone number is entirely based on something you know.

Interesting, so adding 2FA actually decreased security... Well shit. Interesting case that shows just how unpredictable such things can be.

As far as I understand, though, 2FA increased the attack surface in this case. The web interface itself remains impenetrable, doesn't it (know your hard-to-guess password and you should be fine)? The mobile provider was the weakest link, and any system is only as secure as its weakest link.

It's similar to security questions in that way. If you answer them truthfully, then they can be used to attack you.

You should have the same answer for all security questions, and it should be an easy-to-remember passphrase about the service.

1. Email account with 2FA

2. Email randomized password stored in PasswordDatabase

3. PasswordDatabase is stored in CloudDrive

4. CloudDrive randomized password stored in PasswordDatabase

5. CloudDrive with 2FA

6. PasswordDatabase secured by weak password

7. 2FA codes from 2FApp

8. PasswordDatabase, CloudDrive, Email only available together on devices with a human-friendly password. Those 3 and the 2FApp are all on the phone, secured by human-friendly password, on me always.

(How do I make 8 mathematically stronger?)

3 combined with 6 sounds like a recipe for disaster if someone manages to compromise your CloudDrive account (probably not by breaking the password, but by social engineering or a method similar to the one in this article). If they get that, they have your encrypted password database, and if that has a weak password... you're totally SOL. The password database's password is one you want to be /very/ strong.

Do you recommend I memorize a GUID? It's not quite "Correct Horse Battery Staple"

Has any CloudDrive service been socially engineered? I didn't find any results in my rudimentary search.

Personally, I have a password that my password manager generated that I use for it. I had it written down in my wallet for a while, but after typing it multiple times a day for a while I memorized it and since destroyed the paper. It's a shorter password than what I use for my stored passwords, but I think it strikes a good balance. (And it's not a GUID, but if you think you could memorize that then it probably couldn't hurt. That's risky, though -- if you forget, there go all of your passwords for everything!)
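As an aside, generating that kind of manager-style random password takes only a few lines with a CSPRNG. This is just a sketch; the alphabet and length here are illustrative assumptions, not what any particular password manager uses:

```python
import secrets
import string

# Illustrative 67-symbol alphabet: letters, digits, and a little punctuation
ALPHABET = string.ascii_letters + string.digits + "!@#$%"
LENGTH = 14

# secrets (not random) draws from the OS CSPRNG, which is what you want for passwords
password = "".join(secrets.choice(ALPHABET) for _ in range(LENGTH))
print(password)
```

The key point is using `secrets` rather than `random`, since the latter's Mersenne Twister output is predictable.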

I don't know of any off the top of my head, but there was that time a few years ago when Dropbox accidentally let anyone in without a password. This isn't to pick on Dropbox, but security lapses happen and it's wise to have multiple layers of strong defense to reduce your risk. (Also, if someone compromises the email associated with your CloudDrive, they can use that to get your CloudDrive by invoking a password reset.)

EDIT: Wolfram|Alpha estimates the entropy of a password generated using the constraints I used for mine as roughly 85 bits (the relevant space would take 14 trillion years to enumerate). It actually has a pretty information-heavy password strength estimator (though I can't attest to its reliability as I'm not familiar with the internals).
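For reference, the back-of-the-envelope version of that estimate assumes a uniformly random password, where entropy is length times log2(alphabet size). The 14-character / 67-symbol figures below are illustrative assumptions that happen to land near 85 bits, and the guess rate is likewise an assumption:

```python
import math

def entropy_bits(length, alphabet_size):
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

bits = entropy_bits(14, 67)  # e.g. 14 chars over letters + digits + punctuation
print(f"{bits:.1f} bits")    # ~84.9 bits

# Years to enumerate the full space at a given guess rate
guesses_per_second = 1e5
years = 2 ** bits / guesses_per_second / (3600 * 24 * 365)
print(f"~{years:.1e} years")
```

Note this only holds for randomly generated passwords; human-chosen passwords have far less entropy than their length suggests.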


SMS is not two factor authentication, and should never be part of an authentication system.

This is precisely why I thought Digits was such a terrible idea (check my comment history, it's there.) SMS is so incredibly insecure that anyone relying on it should not consider themselves security savvy. SMS TFA is lipstick on a pig. Cellphones are so cheap these days, they should all come with a TFA app pre-installed. I'm also not too keen on websites making it so easy to change your username. The story of @N on Twitter comes to mind. Is anyone working on Digits without the SMS part?
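For contrast, an authenticator app derives codes locally from a shared secret and the clock, with no SMS (and hence no telco) in the loop. A minimal sketch of RFC 6238 TOTP, using the RFC's published test secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, t=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second periods since the epoch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59 the code is 287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

An attacker who hijacks your phone number gets nothing here, since the secret never leaves the device.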

It's not just SMS that's the problem though. In many cases any random person off the street can call up your telco, pretend to be you, and get all of your calls and voicemail forwarded to them.

This leaves so many open questions. Foremost: how did they guess his Gmail password? Is there a way to access Gmail without knowing the password? I.e., by sending a password reset code via SMS?

They enabled call forwarding and got Google to call with a password reset code.

So in order to prevent that attack vector, I need to make sure my phone isn't set up to receive calls to recover my account?

Thanks. I think I'll remove my phone number as a backup option.

Google periodically prompts me on login to add my cell phone number to increase my account's security.

I think someone is confused about what "increase" means.

(And this is all the more surprising because in general I see all sorts of parts of Google making great end-user security decisions.)

It probably increases security for the average user, but decreases it for a targeted user.

It's not clear how they got his phone number, though.

Phone book? Leaked address book? Leaked account database? Assume your phone number is public knowledge.

He said his Gmail account was basically firstname lastname; given a name, someone determined can generally make a fairly good guess at the phone number. Phone books, etc.

At least now, more people will accurately describe it as two-step verification rather than two-factor authentication.

They are entirely different. If SMS OTPs were actually 2FA, the hacker would have needed to steal the phone too.

The difference between two-step verification & two-factor authentication. https://ramblingrant.co.uk/the-difference-between-two-factor...

This sounds like an argument for Google adding hardware multi-factor auth. It's not a panacea, but it's a good starting point that can't be easily spoofed or hijacked.

They already have it: https://support.google.com/accounts/answer/6103523?hl=en

And it adds nothing, since it still has fallbacks to the existing systems.

You should be able to remove less secure authentication mechanisms via accounts.google.com after setting up a security key.

You still need to keep at least one backup method in case the security key is corrupted, broken, lost, etc.

Print out a recovery code and keep it somewhere safe.

You can have two or more security keys associated with your account on Google.

Oh really? I thought that it forced you to go back to the app if you use a non-U2F browser.

So is the takeaway that we should all disable SMS-based options for receiving 2FA codes, because it weakens your 2FA to the level of your (non-2FA) cell phone account?

I think when I enabled iCloud 2FA it included 2 channels for communication with my phone: one as a named iOS device (where the OS handles receiving and displaying codes), and another as just its phone number. Is that for SMS? Why would they even do that?

Yes. Not everyone using a Mac has an iOS device.

Yeah but why would they do it when they know that number is registered to an iOS device? Why the double entry? (Perhaps it's a fluke in my case.)

Also, what happens when your iOS device gets lost or stolen?

You use your backup 2FA codes (which you've stored in a few different locations -- all offline -- including probably your wallet) to get back into your account. From there, you re-seed the 2FA.

Wild... Having used Google Voice for a number of years, I switched to MVNO operators for my cell phone a few years back. Now I'm glad they don't allow call forwarding on those accounts.

Though it seems like a lot of work, it's hard to imagine going through this... with a similar mindset.

A bit late to the party but here's a good story about this: http://williamedwardscoder.tumblr.com/post/24949768311/i-kno...

Yet one more reason we shouldn't be letting telcos provide our phone numbers. They are painfully inadequate when it comes to security. And our mobile numbers are now probably the most important identifiers we have, due in no small part to the proliferation of SMS 2FA.

The lack of barriers in place for adding a forwarding number to a cellphone account is incredible. Maybe the attackers got the last 4 of his CC from a hacked data set? Or maybe the same for his social. From there they were able to authenticate with the telco rep.

Voicemail, then call forwarding; I wonder what's next. People often ignore many of their service settings and leave them as-is (me included), which potentially creates openings for intruders.

His recovery email address attached to the account must've also been hacked if he had two-factor on, as Google always starts the recovery process from that email.

This story sounds like déjà vu: https://news.ycombinator.com/item?id=7141532

Immediately called my mobile carrier and asked them to disable the ability to add/change call forwarding.

I appear to have missed something.

How did the hacker know his mobile number?

Domain name registration?

That's my guess based on a pretty easy to figure out WHOIS. But security by obscurity here only goes so far: if someone really wants your phone number, there's lots of ways to get it.

There is always a weak link. Ugh.

I need help unlocking my Google account on my phone.

The browser "back" button for the website is broken...

If it were me, I'd ditch the two letter handle, and with it the bother.

Of course it's not me.
