
I know someone whose 2-factor phone authentication was hacked...  - willvarfar
http://williamedwardscoder.tumblr.com/post/24949768311/i-know-someone-whose-2-factor-phone-authentication-was
======
pearkes
I use the Google Authenticator app, which generates the token on that specific
device rather than delivering it through SMS. This gets around the problem of
a phone number being cloned.

iOS: [http://itunes.apple.com/us/app/google-
authenticator/id388497...](http://itunes.apple.com/us/app/google-
authenticator/id388497605?mt=8)

Android:
[https://play.google.com/store/apps/details?id=com.google.and...](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&hl=en)
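
For context on how these apps work: Google Authenticator implements TOTP (RFC
6238), where the phone and the server share a secret and each independently
derives a short code from the current time, so no SMS or phone network is
involved. A minimal sketch in Python (the secret below is the RFC test key,
not a real one):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared secret (RFC 6238)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                        # 30-second window index
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both sides run the same function; the server simply checks the submitted code
against the value for the current (and usually adjacent) time window.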

~~~
gibybo
I use this too, but I don't think it actually prevents the attack described in
the article, at least in my case. When I set up 2-factor auth for my Google
account, I also set up a series of backups in case I lost access to my phone.
One of them was my phone number, and another was the phone number of a trusted
friend.

~~~
ajross
It prevents social engineering of the phone company. One might hope that
Google would be better at the security implementation (i.e. the BT operator
apparently "validated" an incorrect password, which suggests to me that it
was a quick hack, probably just a field in the customer record that the
operators weren't trained on).

Also: I'd hope that Google wouldn't allow a complete account reset based
solely on the backup device. That should be a backup for your second factor
only, you should still need the password.

------
gagabity
Something like the Google Authenticator app instead of an SMS would remove the
phone company from the equation. My bank also gives me an actual device on
which I have to punch in my code to log into my bank account. These remove a
lot of the social engineering options that thieves have.

------
adrianhoward
_It annoys me that a trick is missed with the secure fob. Imagine that: the
challenge screen includes the amount you are authorising and you type that
amount into your secure fob along with the challenge code_

That's basically how the auth works with my online bank. I get a small
calculator-sized device that reads my debit card. I have to enter the card
PIN, a challenge code from the online transaction, and the amount - which then
gives me a code to authorise the online transaction.

(The downside is that the devices are all identical - so anybody with one + a
cloned card + my stolen login info can auth transactions - hey ho...)
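
The scheme described above can be sketched roughly as follows. This is an
illustrative HMAC construction, not the actual CAP/EMV algorithm the real
readers implement; `card_key`, the message format, and the truncation to
eight digits are all assumptions:

```python
import hashlib
import hmac

def sign_transaction(card_key: bytes, challenge: str, amount_pence: int) -> str:
    # The response covers the challenge AND the amount, so a tampered
    # amount produces a different code and won't verify at the bank.
    msg = f"{challenge}:{amount_pence}".encode()
    digest = hmac.new(card_key, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 10 ** 8:08d}"
```

Because the amount is folded into the code, a man-in-the-middle who changes
the payment amount invalidates the code the customer typed in.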

~~~
3pt14159
That is beyond stupid. Why not have different signing/encryption keys on the
individual fobs?

~~~
rwmj
Would you even trust the lowest-bidder factory where the card readers are
made?

~~~
adrianhoward
The factory isn't really an additional risk.

There's not a networked man-in-the-middle attack via the readers (they're not
connected devices). You can't change the algorithm (it needs to be the same
one implemented by the online bank). The algorithm is already essentially
public (the devices are identical and widespread).

Pwning the factory doesn't really give an attacker an advantage.

------
shadesandcolour
Two-factor authentication is still, in my opinion, the strongest way to go.
This case is really the phone company's fault; maybe they'll learn from it and
start teaching customer support reps the difference between a correct password
and an incorrect one.

~~~
its_so_on
(TL;DR at bottom)

I see a lot of hacks of voice mails and then requests for Google to use the
second factor to reset the account...all by baddies. Who then proceed to take
over the account.

So, it seems to me that it's worse than having no second factor at all.

After all, why is it stronger to use two factors than just using a strong
password from your laptop or personal devices - without ANY backup contact
information or second factor linked to the Google account? Hear me out.

Most people who are 'targets' (consider a millionaire VC who gives his contact
details out a lot) are already far more compromised if their computer (the
personal laptop) has a keylogger or remote screen software (backdoors)
installed by someone who knows who this person is, has targeted the laptop
they do all their work and most of their shopping on, and has gotten in.
Usually this hasn't happened. If this HASN'T happened, typing the password on
that laptop via https is secure and doesn't allow anyone to get in. So, we
have a stepwise function:

    
    
      ^
      |                                  
      |                                  
      |                                  you've been targeted and rooted: they can keylog
      |                                  and do A N Y T H I N G in your name from machine
     R|                                ------------------------------------------------->
     i|                               |
     s|                               |
     k|                               |
      |                               |
     o|                               |
     f|                               |
      |                               | 
     p|                               |
     a|                               |
     i|                               |
     n|                               |
      |                               |
      //                              //
      |                               |
      |                               |
      |                               |
      | (few people know it's your    |
      | laptop, none has rooted it)   |
      -================================------------------------------------------------->
                           Level you are breached
    

So either someone is in, looking over your shoulder (which a lot of people
would like to be doing when you're a target), and can remote control / do
stuff in your name (perhaps when you're not using the computer), keylog, and
basically do anything you can do or have been doing...

 _or_ nobody is in yet. (Except as general malware that doesn't know who you
are, nobody is remote controlling/keylogging you and checking those files.)

This is MOST of the cases - how do you even know which of millions of
computers out there that are 'mostly secure' except against a targeted attack
with a lot of known information about the laptop target, is the one that
belongs to this millionaire VC? They'd have to look through millions of
computers to find him or her...

I'm not talking about a botnet you're part of that has millions of users. I'm
talking about someone targeting you.

So, basically, vulnerability exposure is a STEP function. It goes from "no
remote keylogging; even though I'm important, no one knows my computer's MAC
address or what software it's running" at x = 0, with a correspondingly low y
value of exposure, to, at the next step, "a keylogger is installed on my
computer" - a HUGE jump in exposure, to "totally fucked, since now they have
my every credit card, can see my every email, etc. They can just watch over my
network and whenever I make a purchase, also make themselves a purchase."

Since Google services can be accessed via https, between those two steps,
aren't you "safe as long as no one is getting into your computer since they
don't even know this computer belongs to a strong target?"

But with the second factor, you're adding a step there between that stepwise
leap:

-> Someone's figured out my phone number; now if they can hack my voice mail they can get Google to send a reset code to it, get the reset code, and take over my account.

The point is: they can do this WITHOUT breaching the original second step
(i.e. without even finding out what physical MAC address or, at a given time,
IP address belongs to 'you', or what hardware and software you're even
running).

Your email address and the phone number you use, meanwhile, are in some sense
'totally public', as that is where you're MEANT to be reached. Both are things
that you give out willy-nilly, unlike any information about which computer in
America is yours.

So it seems to me that not introducing an insecurity step between step 0 and
step 1 would be a good solution: use a secure password, don't write it down
anywhere, and use it from computers which aren't especially linked to you or
particularly 'tainted'.

Why make yourself a target with hackable 2-factor authentication?

 _TL;DR: your phone number is supposed to be public, your email is supposed to
be public, the phone company is not a security token. Don't use the second
Google factor._

~~~
rdl
2FA using phone calls/SMS is basically lameness (similar to KBA; it protects
against huge numbers of users with bad passwords being a vulnerability to the
bank, and is a cheap compliance step, but provides no additional security to a
targeted victim).

Overarching all of this, there's a great opportunity to fix things in the
desktop -> mobile transition; desktop OS security is IMO a lost cause, but
mobile started from a much better place AND is progressing well.

2FA using a physical token, or, better, some kind of key storage device +
secure I/O (to unlock and verify), is a reasonable increase in security.

2FA using a software application on a phone paired to your laptop (and hence
having full ability to extract data, etc.) is somewhere in between -- it
depends on implementation details and use case. Requiring two distinct devices
does help (especially if one is stolen but not the other), but over time
people will move to mobile-only with some kind of cloud syncing, so it will
make less sense.

The ideal is still something built into a mobile OS with hardware protection
(e.g. iOS Keystore), storing either random long string passwords or some kind
of public key credential, and either a trustable network proxy converting that
to standard username/password to log into sites, or sites adopting this as a
means of authentication (client cert auth sucked a lot in the past, true, but
it doesn't have to suck).

Then, all your identity/presence (biometric, geofencing, heuristics, ...),
backup, key recovery, etc. could be handled in one place, by one API.

That's what I was hoping Apple would do with Passbook/iOS6/iCloud, but it
doesn't appear to be anything they care about. Only Apple could build this
(due to how the platform works, you can't override things), since every app
would need to use the API, and web browsing (via Safari) would be 90% of the
use.
Unfortunately Android has no platform security (and anything would be 2-3
years away, once MTM is available), BB is dead, BB10 is stillborn, and WP
doesn't seem to care.

~~~
pbreit
Apple cares too much about user experience to foist this type of inanity on
users.

~~~
rdl
Single sign-on is basically universally regarded as the ideal user experience.

Enter your passcode (or otherwise ID yourself to the device), and then
everything "just works", with no need to remember or type passwords to every
single site. Apple's already perfectly content to consider iPads and iPhones
single-user devices, and with OS X, you can have multiple user logins with
fast user switching.

------
cantankerous
I'm a pretty big fan of the rolling token 2-factor authentication model, with
the app on your phone presenting you the rolling token. The Blizzard login app
is the biggest single example that comes to mind. SMS really isn't secure, I
think something like this could be a good next step to phase in.

~~~
smackfu
This is the same as the Google Authenticator app that people are talking
about.

~~~
keturn
The thing that bugs me about this model is that it's not challenge-response,
so someone can play man-in-the-middle.

While it's possible to hijack someone's phone number, as demonstrated, it
requires a relatively high amount of effort per target. Whereas if you
compromise a network segment somewhere (with DNS and a rogue SSL cert or
whatever you need), you could just sit there, farming authentication cookies.
Have your MitM check the "authenticate this computer for 30 days" checkbox and
you've got a nice little collection to work with.
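
One way to see why challenge-response with origin binding helps: if the device
signs both the server's challenge and the origin the browser is actually
talking to (the idea behind U2F-style schemes), a response harvested by a
phishing proxy at a different hostname won't verify at the real site. An
illustrative sketch, not any real protocol's wire format, using HMAC in place
of the public-key signature a real token would use:

```python
import hashlib
import hmac

def respond(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    # The token mixes the origin it is really talking to into the response.
    return hmac.new(device_key, challenge + b"|" + origin.encode(),
                    hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, expected_origin: str,
           response: bytes) -> bool:
    # The server only accepts responses bound to its own origin.
    expected = respond(device_key, challenge, expected_origin)
    return hmac.compare_digest(expected, response)
```

A plain rolling code, by contrast, contains nothing tying it to the site, so a
real-time man-in-the-middle can simply relay it.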

~~~
cantankerous
Are you familiar with methods that are resilient in the face of MitM attacks?

------
nowen
I hate to use the cliché, but: weakest link. SMS and dial-back systems rely on
the security of the telcos, who have little incentive to secure their users'
accounts. These systems do not use encryption! Of course they are going to get
owned.

------
mayneack
I wonder how much publicity this type of breach is getting outside of the HN
bubble? I'd guess not a lot because I still have many friends that act like
I'm paranoid just for using 2-factor at all.

At some point, though, shouldn't phone companies notice and beef up their end
a little bit? Maybe we need another large phone hacking scandal to really lock
down voicemail security.
[http://en.wikipedia.org/wiki/News_International_phone_hackin...](http://en.wikipedia.org/wiki/News_International_phone_hacking_scandal)

------
eli
If RSA 2-factor tokens can be hacked (or "stolen" I guess, but the effect is
the same), there's not much hope for the rest of us. Still a whole lot better
than not doing 2 factor.

~~~
pilom
The attack was on a phone call delivering the 2-factor authentication message
("Your one-time password is XXXXXX. Please use this to log in now"), not an
RSA token.
~~~
neilc
I believe the OP is referring to
<http://www.finextra.com/news/fullstory.aspx?newsitemid=22375>

~~~
Legion
The point of multi-factor authentication is _not_ that the additional factors
are infallible. Rather, the point is, when one factor fails, there remain
other factors still in place.

When RSA was compromised, accounts in which RSA was used as an additional
factor remained protected by their remaining uncompromised factors - allowing
time to replace the RSA factor.

------
drucken
Wish he had stated which UK bank since most of the ones I am aware of use
2-factor authentication using a card reader device. They even seem to use an
identical card reader!

~~~
jgrahamc
I looked into this a while ago and I believe that Lloyds TSB must be the bank
as it uses telephone authentication as follows:
[http://www.lloydstsb.com/security/security_improvements_we_h...](http://www.lloydstsb.com/security/security_improvements_we_have_made.asp?wt.ac=SECX70411)

"When you set up a new payment, we’ll give you a call to ensure that the
instruction is coming from you.

Step-by-step payment security:

All you need is a telephone near you. You’ll be able to choose which number we
call you on, provided it’s a number we already hold for you.

You’ll receive an automated call asking you to confirm details about your
transaction.

Then you’ll need to enter a four digit number that will appear on your screen
into your telephone keypad."

There's a Flash demo of this here:
[http://www.lloydstsb.com/new_internet_banking_demo/index.htm...](http://www.lloydstsb.com/new_internet_banking_demo/index.html?ibdm=8)

~~~
jdsnape
It sounds like LloydsTSB to me too; not only is the process for sending money
as above, but you don't need your card to log in (there is a User ID number, a
password and a "memorable phrase" of which you have to give three digits).

While it's convenient not to have to find the card reader when you want to log
in, it does worry me that it is less secure and vulnerable to keyloggers.

Edit: Halifax is also the same, but then it is owned by Lloyds and has
recently transitioned its backend to the same platform as Lloyds uses.

------
r00fus
This also underscores why the whole "banking through SMS" idea is not
trustworthy - the telecoms are not banks, and are essentially weak points in
the security chain.

The only way to do this properly is device-based code generation from a shared
secret, like Google's Authenticator app.

------
benburleson
The point is: Nothing is ever truly secure. Do what you can to avoid being the
low-hanging fruit and you'll probably be OK.

~~~
hythloday
No, I don't think that's the point. The point is more like "something I can
access" is not a factor that's as strong as "something I know/am/have".

~~~
willvarfar
I liked this so much I added it to the article at the bottom :)

~~~
hythloday
Thanks!

------
romaniv
The way it's commonly implemented, two-factor authentication is definitely not
"a step in the right direction".

Two-factor authentication using phone numbers is a huge privacy breach,
especially when you're dealing with websites that have no business knowing
your phone.

And rolling code tokens aren't feasible for anything except some really high-
security applications. Even there, I doubt they are really much more secure
than a USB stick with your passphrase-protected private key. Sure, you can't
copy the token, but that doesn't just add to security, it detracts from
usability.

------
freshfunk
Bcrypt is no longer recommended.

NIST recommends PBKDF2.

In short, it appears that advances in hardware have made it possible to
compute bcrypt hashes efficiently.

<http://en.wikipedia.org/wiki/PBKDF2>
[http://security.stackexchange.com/questions/4781/do-any-
secu...](http://security.stackexchange.com/questions/4781/do-any-security-
experts-recommend-bcrypt-for-password-storage)

~~~
mbreese
Please read your references...

    
    
        The bcrypt key derivation function requires a larger (but 
        still fixed) amount of RAM and is slightly stronger 
        against such attacks, while the more modern scrypt key 
        derivation function can use arbitrarily large amounts of 
        memory and is much stronger. (wikipedia)
    
        Basically they recommend PBKDF2. This does not mean that 
        they deem bcrypt insecure; they say nothing at all about 
        bcrypt. It just means that NIST deems PBKDF2 "secure 
        enough" ... On the other hand, bcrypt comes from Blowfish 
        which has never received any kind of NIST blessing (or 
        curse). (stackexchange)
    

The general consensus here over the past few months/years is that bcrypt is
good enough, scrypt is probably better, and PBKDF2 is pretty good. And that
ALL of them are much better than hashing+salt.
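
All three of the functions discussed are available off the shelf; PBKDF2, for
instance, ships in Python's standard library as `hashlib.pbkdf2_hmac`. A
minimal sketch (the iteration count is an illustrative figure, tune it to your
hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Derive a salted PBKDF2-HMAC-SHA256 hash for storage."""
    salt = os.urandom(16)                 # unique random salt per password
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def check_password(password: str, salt: bytes, iterations: int,
                   expected: bytes) -> bool:
    # Re-derive with the stored salt/iterations and compare in constant time.
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)
```

Storing the salt and iteration count alongside the hash lets you raise the
work factor later without breaking existing records.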

------
kmfrk
django-twostepauth ([https://bitbucket.org/cogni/django-
twostepauth/src/79bbf0ce3...](https://bitbucket.org/cogni/django-
twostepauth/src/79bbf0ce3af0)) is a Django app that works really well and has
an example app that makes it as easy as can be to try out. Setting up TFA with
Google's authenticator is a breeze.

Unfortunately, the release of Django 1.4/django-registration 0.8
([https://bitbucket.org/ubernostrum/django-
registration/src/27...](https://bitbucket.org/ubernostrum/django-
registration/src/27bccd108cde/docs/upgrade.rst)) complicates matters a bit,
and I'm torn between figuring out a way to keep my TFA or just roll it back
altogether and implement d-r, and see if it has support for TFA.

If you're using Django 1.3 with something other than the default password
hashing, you should check out django-TSA.

------
emmelaich
Relatedly, it is possible to read the 2FA SMS message on Android without
pattern-unlocking the phone -- it appears briefly in the notification bar.

I don't know whether it amounts to a security vulnerability, but it certainly
makes it that little bit weaker.

~~~
ajr44
Message previews in the notification bar can be disabled.

