I know someone whose 2-factor phone authentication was hacked... (williamedwardscoder.tumblr.com)
172 points by willvarfar on June 12, 2012 | 72 comments



I use the Google Authenticator app, which generates the token on that specific device rather than delivering it over SMS. This gets around the problem of a cloned or hijacked phone number.

iOS: http://itunes.apple.com/us/app/google-authenticator/id388497...

Android: https://play.google.com/store/apps/details?id=com.google.and...
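
For the curious: the app implements TOTP (RFC 6238). Device and server share a base32 secret at enrollment time, and the code is just an HMAC-SHA1 of the current 30-second window, truncated to six digits. A minimal sketch in Python, with a made-up example secret:

    import base64, hashlib, hmac, struct, time

    def totp(base32_secret, digits=6, period=30):
        key = base64.b32decode(base32_secret, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                 # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # both sides compute the same 6 digits

No network is involved after enrollment, which is exactly why there's nothing for a phone company to intercept.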


I use this too, but I don't think it actually prevents the attack described in the article, at least in my case. When I set up 2-factor auth for my Google account, I also set up a series of backups in case I lost access to my phone. One of them was my phone number, and another was the phone number of a trusted friend.


It prevents social engineering of the phone company. One might hope that Google would be better at the security implementation (the BT operator apparently "validated" an incorrect password, which suggests the check was a quick hack: probably just a field in the customer record that operators weren't trained to handle properly).

Also: I'd hope that Google wouldn't allow a complete account reset based solely on the backup device. That should be a backup for your second factor only, you should still need the password.


You can remove your number as a backup source and restrict it to only use the app and the printed backup numbers.


Does the app notify you when authentication is attempted? The reason I still use SMS is that I will instantly get notified if someone has my password and attempts to access my account.


I was worried about someone getting into my account so I made this: http://blog.jgc.org/2011/06/my-email-canary.html
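
The listening side of a canary like this can be tiny: a script that serves a tracking pixel from a URL only your bait message contains, and alerts on any fetch. A minimal sketch (port and alerting method are placeholders; a real setup would email or SMS you rather than print):

    import base64
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # a standard 1x1 transparent GIF
    PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

    class Canary(BaseHTTPRequestHandler):
        def do_GET(self):
            # any hit on the bait URL means someone rendered the message
            print("canary tripped:", self.client_address[0], self.headers.get("User-Agent"))
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            self.wfile.write(PIXEL)

    HTTPServer(("", 8000), Canary).serve_forever()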


Whilst it's a cunning idea if it catches an unsophisticated crook who just dives in and starts looking for goodies, I'd expect the technique is common enough knowledge (e.g. image bugs dropped in spam to check account liveness) that a serious attacker would either slurp your account via IMAP/POP and browse with external resource loading disabled, or just enable that setting in your Gmail account itself, which exists for exactly this reason.

The main improvement I can see you've made is that it's real-time enough that you should be able to jump on it straight away and do something, rather than rely on the batch processes I suspect spammers use.

There was a cunning|creepy trick used by Facebook that I recall reading about not that long ago[1], which relied on Outlook autoloading bgsound attributes despite image-loading settings, but I don't know of any comparable holes in Gmail.

[1] http://pandodaily.com/2012/03/06/facebook-knows-when-you-ope...


Since I already get a text message each time I authenticate, I'd even be in favor of one that just texted me each time I authenticated a new computer. That way I could have the security of the app with the notifications of SMS.

I believe Facebook used to do just the notification part with some optional security feature where you had to name each new computer you used. They have 2 factor now, of course.


You can't log in from an untrusted computer without the app; the app is not connected to the internet, and there are no notifications.


So it's basically similar to an RSA token, but as an app (and capable of addressing multiple accounts).


You can still press "don't have your phone?" and send a code through SMS, unless there's a way to disable that.


Yes. You can disable it.


Wait what? What good would receiving an SMS be if you don't have your phone?


It goes to a backup number you add that is the phone number of a friend or other phone number you specify.

Don't do what I did and stupidly use your google voice number. facepalm



Is it possible that the app runs in a sandbox? Or is that already the case?


> Is it possible that the app runs in a sandbox? Or is that already the case?

Why does that matter?


So that other applications cannot steal the generated code. I think the easiest way to steal the code is to OCR a screenshot.


If you remember the great HN "iOS is faster; you're wrong because Android isn't slow" debate of a few months back, the primary reason Android runs slower than you'd expect is that apps have absolutely no access to each other's frame buffers. Apps can't take screenshots of other apps, for exactly this reason.


Does anyone know how this works? It reminds me of an RSA SecurID, but since it's only in software, can't it be reverse engineered?




Something like the Google Authenticator app instead of an SMS would remove the phone company from the equation. My bank also gives me an actual device on which I have to punch in my code to log into my bank account. These remove a lot of the social engineering options that thieves have.


It annoys me that a trick is missed with the secure fob. Imagine this: the challenge screen includes the amount you are authorising, and you type that amount into your secure fob along with the challenge code.

That's basically how the auth works with my online bank. I get a small calculator-sized device that reads my debit card. I have to enter the card PIN, a challenge code from the online transaction, and the amount, which then gives me a code to authorise the online transaction.

(The downside is that the devices are all identical - so anybody with one + a cloned card + my stolen login info can auth transactions - hey ho...)
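
The point of typing the amount into the fob is that the response then cryptographically commits to the transaction, so a man-in-the-middle can't substitute a different payment for the one you approved. The real CAP scheme banks use is chip-based and different in detail; a toy sketch of the principle, with an invented per-card key:

    import hashlib, hmac

    def sign_transaction(card_key, challenge, amount_pence):
        # the response covers challenge AND amount, so tampering with
        # either on the wire produces a code the bank will reject
        msg = ("%s:%d" % (challenge, amount_pence)).encode()
        mac = hmac.new(card_key, msg, hashlib.sha256).digest()
        return str(int.from_bytes(mac[:4], "big") % 10**8).zfill(8)

    # e.g. the bank shows challenge "31415926" for a £20.00 payment:
    print(sign_transaction(b"per-card secret", "31415926", 2000))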


That is beyond stupid. Why not have different signing/encryption keys on the individual fobs?


Cost/benefit I imagine. It makes the widgets much simpler to manufacture, distribute and use.

And the risk, from what I can see, is pretty low. It would have to be a very focused attack to clone my card & get all my auth info for my online banking account since the two sets of data (card + online auth) don't intersect anywhere normally.


Would you even trust the lowest-bidder factory where the card readers are made?


The factory isn't really an additional risk.

There's no networked man-in-the-middle attack possible via the readers (they're not connected devices). You can't change the algorithm (it needs to be the same one implemented by the online bank). The algorithm is already essentially public (the devices are identical and widespread).

Pwning the factory doesn't really give an attacker an advantage.


Does it read your card's magnetic strip, or does it use the card's chip for encryption services to generate the tokens? They're not the same thing. Card chips are not easy to clone and hold more data than the card number.


Good question. No idea :-)


Are you talking about the Barclays PINsentry? If so, it's a chip reader.


Nationwide's box. I don't have one to hand to go look.


Two-factor authentication is still, in my opinion, the strongest way to go. This case is really the phone company's fault; maybe they'll learn from this and start teaching customer support reps the difference between a correct password and an incorrect one.


Luckily for Gmail, it's virtually impossible to call anyone at Google. Maybe all the talk of "Gmail's lack of support" is actually a security feature :)


This isn't true. Phone support is provided for domain administrators of paying Google Apps customers.

http://support.google.com/a/bin/request.py


Actually I'd argue that it's a UK government problem -- the person responsible should have been thrown in jail long before he managed to figure out how to socially engineer multiple unrelated systems.

Society needs a certain level of trust to function; if the law ignores people who continuously try to defraud others, then they are just going to get better at it.


(TL;DR at bottom)

I see a lot of hacks of voice mails followed by requests for Google to use the second factor to reset the account...all by baddies, who then proceed to take over the account.

So, it seems to me that it's worse than having no second factor at all.

After all, why is it stronger to use two factors than just using a strong password from your laptop or personal devices - without ANY backup contact information or second factor linked to the Google account? Hear me out.

Most people who are 'targets' (consider a millionaire VC who gives his contact details out a lot) are already far more compromised if their personal laptop has a keylogger or remote screen software (backdoors) installed by someone who knows who this person is, targeted the laptop they do all their work and most of their shopping on, and has gotten in. Usually this hasn't happened. And if it HASN'T happened, typing the password on that laptop via https is secure and doesn't let anyone in. So, we have a stepwise function:

  ^
  |                                  
  |                                  
  |                                  you've been targeted and rooted: they can keylog
  |                                  and do A N Y T H I N G in your name from machine
 R|                                ------------------------------------------------->
 i|                               |
 s|                               |
 k|                               |
  |                               |
 o|                               |
 f|                               |
  |                               | 
 p|                               |
 a|                               |
 i|                               |
 n|                               |
  |                               |
  //                              //
  |                               |
  |                               |
  |                               |
  | (few people know it's your    |
  | laptop, none has rooted it)   |
  -================================------------------------------------------------->
                       Level you are breached
So either someone is in, looking over your shoulder (which a lot of people would like to be doing when you're a target), and can remote-control the machine, do stuff in your name (perhaps when you're not using the computer), keylog, and basically do anything you can do or have been doing...

or nobody is in yet. (Except as general malware that doesn't know who you are, nobody is remote controlling/keylogging you and checking those files.)

This is MOST of the cases. How would an attacker even know which of the millions of computers out there, all 'mostly secure' except against a targeted attack armed with details about the specific machine, is the one that belongs to this millionaire VC? They'd have to look through millions of computers to find him or her...

I'm not talking about a botnet you're part of that has millions of users. I'm talking about someone targeting you.

So basically, vulnerability exposure is a STEP function. At x = 0 it reads "no remote keylogging; even though I'm important, no one knows my computer's MAC address or what software it's running", with a corresponding low level of exposure. At the next step, "a keylogger is installed on my computer", there is a HUGE jump in exposure: totally fucked, since now they have my every credit card, can see my every email, etc. They can just watch my network and, whenever I make a purchase, make themselves a purchase too.

Since Google services can be accessed via https, between those two steps, aren't you "safe as long as no one is getting into your computer since they don't even know this computer belongs to a strong target?"

But with the second factor, you're adding a step in the middle of that leap:

-> Someone's figured out my phone number; now if they can hack my voice mail they can get Google to send a reset code to it, get the reset code, and take over my account.

The point is: they can do all this WITHOUT breaching the original second step (i.e. without even finding out what physical MAC address or, at a given time, IP address belongs to 'you', or what hardware and software you're even running).

Your email address and the phone number you use, meanwhile, are in some sense 'totally public', since that is where you're MEANT to be reached. Both are things you give out willy-nilly, unlike any information about which computer in America is yours.

So it seems to me that not introducing an insecurity step between step 0 and step 1 is the better solution: use a strong password, don't write it down anywhere, and use it only from computers that aren't especially linked to you or particularly 'tainted'.

Why make yourself a target with hackable 2-factor authentication?

TL;DR: your phone number is supposed to be public, your email is supposed to be public, the phone company is not a security token. Don't use the second Google factor.


2FA using phone calls/SMS is basically lameness (similar to KBA; it protects the bank against the vulnerability of huge numbers of users with bad passwords, and is a cheap compliance step, but it provides no additional security to a targeted victim).

Overarching all of this, there's a great opportunity to fix things in the desktop -> mobile transition; desktop OS security is IMO a lost cause, but mobile started from a much better place AND is progressing well.

2FA using a physical token, or, better, some kind of key storage device + secure I/O (to unlock and verify), is a reasonable increase in security.

2FA using a software application on a phone paired to your laptop (and hence having full ability to extract data, etc.) is somewhere in between; it depends on implementation details and use case. Requiring two distinct devices does help (especially if one is stolen but not the other), but over time people will move to mobile-only with some kind of cloud syncing, so it will make less sense.

The ideal is still something built into a mobile OS with hardware protection (e.g. iOS Keystore), storing either random long string passwords or some kind of public key credential, and either a trustable network proxy converting that to standard username/password to log into sites, or sites adopting this as a means of authentication (client cert auth sucked a lot in the past, true, but it doesn't have to suck).

Then, all your identity/presence (biometric, geofencing, heuristics, ...), backup, key recovery, etc. could be handled in one place, by one API.

That's what I was hoping Apple would do with Passbook/iOS6/iCloud, but it doesn't appear to be anything they care about. Only Apple could build this (due to how the platform works, you can't override things), since every app would need to use the API, and web browsing (via Safari) would be 90% of the use. Unfortunately Android has no platform security (and anything would be 2-3 years away, once MTM is available), BB is dead, BB10 is stillborn, and WP doesn't seem to care.


Apple cares too much about user experience than to foist this type of inanity on users.


Single sign-on is basically universally regarded as the ideal user experience.

Enter your passcode (or otherwise ID yourself to the device), and then everything "just works", with no need to remember or type passwords to every single site. Apple's already perfectly content to consider iPads and iPhones single-user devices, and with OS X, you can have multiple user logins with fast user switching.


> I see a lot of hacks of voice mails and then requests for Google to use the second factor to reset the account...all by baddies.

(Disclaimer: I work for a telephone and software-based 2factor provider)

If your telephone-based 2 factor authentication is being thwarted by voice mail hacking, the problem lies in the implementation of the phone call itself, not necessarily the method. Unfortunately, certain solutions are built to be "quick and dirty" and just play an automated message that spits out a temporary password 3 times and hangs up. These make it easy for attackers to scrape them out of voice mail boxes.

Properly designed solutions will actually require call affirmation (for example: the user will be asked to press a randomized DTMF digit before a temporary password is spoken). Certain locations (like North America) can take advantage of features like call-forward detection, which can prevent sensitive information from being delivered if a number is determined to be forwarded.

Ultimately, it may not be a silver bullet, but when it's done right you're left with something much more effective than a minimum effort/lowest possible cost approach.
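
As an illustration only (the say/read_digit telephony hooks below are stand-ins, not any particular provider's API), the affirmation flow amounts to something like:

    import secrets

    def deliver_otp(say, read_digit, otp, attempts=3):
        """Speak the OTP only after the listener proves presence by
        echoing a randomized DTMF digit; a voice mail box can't."""
        for _ in range(attempts):
            digit = str(secrets.randbelow(10))
            say("To receive your code, press %s now." % digit)
            if read_digit() == digit:
                say("Your one-time password is %s." % otp)
                return True
        say("Verification failed. Goodbye.")
        return False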


None of this applies to the user. There's nothing they can do about the process except not use it; my examples were specifically about Google's solution.

Several companies were in the news after they got burned with it.


"it seems to me that it's worse than having no second factor at all"

I agree with you totally. A second authentication factor that is much weaker than the first (prone to social engineering, or even to plain googling, like the endemic "first name of your favorite uncle" questions) can be worse than just having one strong factor. For some reason many companies think that building a bridge out of very many weak components makes a strong bridge. It doesn't; you have a weak bridge in the end.


I'm a pretty big fan of the rolling token 2-factor authentication model, with the app on your phone presenting you the rolling token. The Blizzard login app is the biggest single example that comes to mind. SMS really isn't secure, I think something like this could be a good next step to phase in.


This is the same design as RSA SecurID: http://en.wikipedia.org/wiki/SecurID

I had no idea you could also do Google's two factor auth with SMS messages. That seems really flaky.


This is the same as the Google Authenticator app that people are talking about.


The thing that bugs me about this model is that it's not challenge-response, so someone can play man-in-the-middle.

While it's possible to hijack someone's phone number, as demonstrated, it requires a relatively high amount of effort per target. Whereas if you compromise a network segment somewhere (with DNS and a rogue SSL cert or whatever you need), you could just sit there, farming authentication cookies. Have your MitM check the "authenticate this computer for 30 days" checkbox and you've got a nice little collection to work with.
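
To make the distinction concrete, here is a toy sketch (hypothetical names throughout) of challenge-response with the origin bound into the answer, which is exactly what a farmed one-time code lacks; a response minted while talking to a MitM's host won't verify at the real site:

    import hashlib, hmac, os

    def issue_challenge():
        return os.urandom(16)                      # fresh nonce per login attempt

    def client_respond(key, nonce, origin_seen):
        # the client signs the origin it actually connected to
        return hmac.new(key, nonce + origin_seen.encode(), hashlib.sha256).hexdigest()

    def server_verify(key, nonce, response):
        expected = hmac.new(key, nonce + b"accounts.example.com",
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)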


Are you familiar with methods that are resilient in the face of MitM attacks?


How would this help prevent the situation described in the article?


It does not rely on the phone companies; rather, it uses an app from Blizzard.


I hate to use the cliché, but: weakest link. SMS and dial-back systems rely on the security of the telco, which has little incentive to secure its users' accounts. These systems do not use encryption! Of course they are going to get owned.


I wonder how much publicity this type of breach is getting outside of the HN bubble? I'd guess not a lot, because I still have many friends who act like I'm paranoid just for using 2-factor at all.

At some point, though, shouldn't phone companies notice and beef up their end a little bit? Maybe we need another large phone hacking scandal to really lock down answering machine security. http://en.wikipedia.org/wiki/News_International_phone_hackin...


If RSA 2-factor tokens can be hacked (or "stolen", I guess, but the effect is the same), there's not much hope for the rest of us. Still, it's a whole lot better than not doing 2-factor.


The attack was on a phone-call message used for 2-factor authentication ("Your one time password is XXXXXX. Please use this to login now"), not an RSA token.



The point of multi-factor authentication is not that the additional factors are infallible. Rather, the point is, when one factor fails, there remain other factors still in place.

When RSA was compromised, accounts in which RSA was used as an additional factor stayed protected by their other, uncompromised factors, allowing time to replace the RSA factor.


Wish he had stated which UK bank, since most of the ones I am aware of do 2-factor authentication using a card reader device. They even seem to use identical card readers!


I looked into this a while ago and I believe that Lloyds TSB must be the bank as it uses telephone authentication as follows: http://www.lloydstsb.com/security/security_improvements_we_h...

"When you set up a new payment, we’ll give you a call to ensure that the instruction is coming from you.

Step-by-step payment security:

All you need is a telephone near you. You’ll be able to choose which number we call you on, provided it’s a number we already hold for you.

You’ll receive an automated call asking you to confirm details about your transaction.

Then you’ll need to enter a four digit number that will appear on your screen into your telephone keypad."

There's a Flash demo of this here: http://www.lloydstsb.com/new_internet_banking_demo/index.htm...


It sounds like Lloyds TSB to me too: not only is the process for sending money as above, but you don't need your card to log in (there is a User ID number, a password, and a "memorable phrase" of which you have to give three characters).

While it's convenient not to have to dig out the card reader when you want to log in, it does worry me that it is less secure and vulnerable to keyloggers.

Edit: Halifax is also the same, but then it is owned by Lloyds and has recently transitioned its backend to the same platform as Lloyds uses.


This also underscores why the whole "banking through SMS" idea is not trustworthy: the telecoms are not banks, and are essentially weak points in the security chain.

The only way to do this properly is device-generated codes, as with Google's Authenticator app (a shared-secret TOTP scheme), which keeps the telco out of the loop entirely.


The point is: Nothing is ever truly secure. Do what you can to avoid being the low-hanging fruit and you'll probably be OK.


No, I don't think that's the point. The point is more like "something I can access" is not a factor that's as strong as "something I know/am/have".


There was a "something I know" in this case: BT's password protection, which was broken in the first place. Had it worked as expected, intercepting the call would have been much more difficult.

Of course, if lots of money is at stake, it's not uncommon for the attacker to track you down, beat you up/kill you and steal your phone (or RSA keyfob) to finish the transaction. To get the PIN that goes with the keyfob they'll use lead pipe cryptography.

Dedicated people will get what they want. Google sending you an SMS is less of a risk than a bank calling you, because it's unlikely that anyone wants your Google account as much as your bank account. And on top of that, you're more likely to be kidnapped if you have an authenticator, i.e. a "something I have", that is difficult to steal.

The real point is: there is no such thing as secure. To protect your money, spread it out across multiple banking institutions with different methods of access, so that no single attack reaches all of it, and don't log into your savings account.


I liked this so much I added it to the article at the bottom :)


Thanks!


The way it's commonly implemented, two-factor authentication is definitely not "a step in the right direction".

Two-factor authentication using phone numbers is a huge privacy breach, especially when you're dealing with websites that have no business knowing your phone.

And rolling code tokens aren't feasible for anything except some really high-security applications. Even there, I doubt they are really much more secure than a USB stick with your passphrase-protected private key. Sure, you can't copy the token, but that doesn't just add to security; it also detracts from usability.


Bcrypt is no longer recommended.

NIST recommends PBKDF2.

In short, it appears that advances in hardware have made it possible to compute bcrypt hashes efficiently.

http://en.wikipedia.org/wiki/PBKDF2 http://security.stackexchange.com/questions/4781/do-any-secu...


Please read your references...

    The bcrypt key derivation function requires a larger (but 
    still fixed) amount of RAM and is slightly stronger 
    against such attacks, while the more modern scrypt key 
    derivation function can use arbitrarily large amounts of 
    memory and is much stronger. (wikipedia)

    Basically they recommend PBKDF2. This does not mean that 
    they deem bcrypt insecure; they say nothing at all about 
    bcrypt. It just means that NIST deems PBKDF2 "secure 
    enough" ... On the other hand, bcrypt comes from Blowfish 
    which has never received any kind of NIST blessing (or 
    curse). (stackexchange)
The general consensus here over the past few months/years is that bcrypt is good enough, scrypt is probably better, and PBKDF2 is pretty good. And that ALL of them are much better than hashing+salt.
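
For what it's worth, two of the three are a single import away in Python's standard library (bcrypt itself needs a third-party package); a usage sketch with illustrative work factors, which you'd tune to your own hardware:

    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # PBKDF2-HMAC-SHA256: cheap on RAM, so its cost is set purely by iterations
    pbkdf2_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)

    # scrypt: memory-hard, which is what blunts the hardware attacks above
    # (hashlib.scrypt requires a build against OpenSSL 1.1+)
    scrypt_hash = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)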


Is this a troll, or are you misinformed?


django-twostepauth (https://bitbucket.org/cogni/django-twostepauth/src/79bbf0ce3...) is a Django app that works really well and has an example app that makes it as easy as can be to try out. Setting up TFA with Google's Authenticator is a breeze.

Unfortunately, the release of Django 1.4/django-registration 0.8 (https://bitbucket.org/ubernostrum/django-registration/src/27...) complicates matters a bit, and I'm torn between figuring out a way to keep my TFA or just roll it back altogether and implement d-r, and see if it has support for TFA.

If you're using Django 1.3 with something other than the default password hashing, you should check out django-TSA.
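
Incidentally, enrolling a user against Google Authenticator is just a matter of handing the app a shared secret via an otpauth:// URI, usually rendered as a QR code. A rough sketch, with invented account/issuer names:

    import base64, os
    from urllib.parse import quote

    def provisioning_uri(account, issuer):
        secret = base64.b32encode(os.urandom(10)).decode()   # persist this server-side
        label = "%s:%s" % (quote(issuer), quote(account))
        return secret, "otpauth://totp/%s?secret=%s&issuer=%s" % (label, secret, quote(issuer))

    secret, uri = provisioning_uri("alice@example.com", "MySite")
    print(uri)   # render as a QR code for the app to scan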


Relatedly, it is possible to read the 2FA SMS message on Android without pattern-unlocking the phone; it appears briefly in the notification bar.

I don't know whether it amounts to a security vulnerability, but it certainly makes it that little bit weaker.


Message previews in the notification bar can be disabled.



