Also: I'd hope that Google wouldn't allow a complete account reset based solely on the backup device. That should be a backup for your second factor only, you should still need the password.
The main improvement I can see you've made is that it's real-time enough that you should be able to jump on it straight away and do something rather than the batch processes I suspect spammers use.
There was a cunning/creepy trick used by Facebook I recall reading about not that long ago that relied on Outlook autoloading bgsound attributes despite image-loading settings, but I don't know of any comparable holes in Gmail.
I believe Facebook used to do just the notification part with some optional security feature where you had to name each new computer you used. They have 2 factor now, of course.
Don't do what I did and stupidly use your google voice number. facepalm
Why does that matter?
That's basically how the auth works with my online bank. I get a small calculator-sized device that reads my debit card. I have to enter the card PIN, a challenge code from the online transaction, and the amount, which then gives me a code to authorise the online transaction.
(The downside is that the devices are all identical - so anybody with one + a cloned card + my stolen login info can auth transactions - hey ho...)
And the risk, from what I can see, is pretty low. It would have to be a very focused attack to clone my card & get all my auth info for my online banking account since the two sets of data (card + online auth) don't intersect anywhere normally.
There's not a networked man-in-the-middle attack via the readers (they're not connected devices). You can't change the algorithm (it needs to be the same one implemented by the online bank). The algorithm is already essentially public (the devices are identical and widespread).
Pwning the factory doesn't really give an attacker an advantage.
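For intuition, the kind of scheme such a card reader could implement can be sketched as a keyed MAC over the PIN, challenge, and amount, truncated to a short display code. This is illustrative only: the real EMV/CAP protocol works off the card's own cryptogram and differs in detail, and the function names and secret below are made up for the sketch.

```python
import hashlib
import hmac

def response_code(card_secret: bytes, pin: str, challenge: str, amount: str) -> str:
    """Derive an 8-digit one-time code from the card's secret plus the
    details typed into the reader (hypothetical scheme, not real CAP)."""
    msg = f"{pin}|{challenge}|{amount}".encode()
    digest = hmac.new(card_secret, msg, hashlib.sha256).digest()
    # Truncate the MAC to a number short enough to read off the display.
    value = int.from_bytes(digest[:4], "big")
    return f"{value % 10**8:08d}"

# The bank, holding the same per-card secret, recomputes and compares.
code = response_code(b"per-card-secret", "1234", "59731842", "250.00")
```

This is why identical, widespread readers don't weaken anything by themselves: the secret lives on the card, and the algorithm is assumed public, so only card + PIN + login info together let an attacker authorise a transaction.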
Society needs a certain level of trust to function; if the law ignores people who continuously try to defraud others, then they are just going to get better at it.
I see a lot of voice mail hacks, followed by requests for Google to use the second factor to reset the account... all by baddies, who then proceed to take over the account.
So, it seems to me that it's worse than having no second factor at all.
After all, why is it stronger to use two factors than just using a strong password from your laptop or personal devices - without ANY backup contact information or second factor linked to the Google account? Hear me out.
Most people who are 'targets' (consider a millionaire VC whose contact details get around a lot) are already far more compromised if their personal laptop has a keylogger or remote-screen backdoor installed by someone who knows who they are, targeted the laptop they do all their work and most of their shopping on, and got in. Usually this hasn't happened, and if it HASN'T happened, typing the password on that laptop over https is secure and doesn't let anyone in. So we have a stepwise function:
Either you've been targeted and rooted, in which case they can keylog and do A N Y T H I N G in your name from your machine, or nobody is in yet: few people know it's your laptop, and none has rooted it. (Except as general malware that doesn't know who you are, nobody is remote-controlling/keylogging you and checking those files.)
This is MOST of the cases: how do you even know which of the millions of 'mostly secure' computers out there (secure except against a targeted attack armed with a lot of known information about the target laptop) is the one that belongs to this millionaire VC? They'd have to look through millions of computers to find him or her...
I'm not talking about a botnet you're part of that has millions of users. I'm talking about someone targeting you.
So vulnerability exposure is basically a STEP function. At x = 0 you have "no remote keylogging; even though I'm important, no one knows my computer's MAC address or what software it's running", with a correspondingly low y value of exposure. At the next step, "a keylogger is installed on my computer", there's a HUGE jump in exposure: totally fucked, since now they have my every credit card, can see my every email, etc. They can just watch my network and, whenever I make a purchase, also make themselves a purchase.
Since Google services can be accessed via https, between those two steps, aren't you "safe as long as no one is getting into your computer since they don't even know this computer belongs to a strong target?"
But with the second factor, you're adding a step there between that stepwise leap:
-> Someone's figured out my phone number; now if they can hack my voice mail they can get Google to send a reset code to it, get the reset code, and take over my account.
The point is: they get there WITHOUT breaching the original second step (i.e. without ever finding out what physical MAC address or, at a given time, IP address belongs to 'you', or what hardware and software you're even running).
Your email address and the phone number you use, meanwhile, are in some sense 'totally public', since that is where you're MEANT to be reached. Both are things you give out willy-nilly, unlike any information about which computer in America is yours.
So it seems to me that not introducing an insecurity step between step 0 and step 1 would be a good solution: use a secure password, don't write it down anywhere, and use it from computers which aren't especially linked to you or particularly 'tainted'.
Why make yourself a target with hackable 2-factor authentication?
TL;DR: your phone number is supposed to be public, your email is supposed to be public, the phone company is not a security token. Don't use the second Google factor.
Overarching all of this, there's a great opportunity to fix things in the desktop -> mobile transition; desktop OS security is IMO a lost cause, but mobile started from a much better place AND is progressing well.
2FA using a physical token, or, better, some kind of key storage device + secure I/O (to unlock and verify), reasonable increase in security.
2FA using a software application on a phone paired to your laptop (and hence having full ability to extract data, etc.) is somewhere in between, depending on implementation details and use case. Requiring two distinct devices does help (especially if one is stolen but not the other), but over time people will move to mobile-only with some kind of cloud syncing, so it will make less sense.
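Phone-app tokens of this kind typically implement RFC 6238 TOTP: phone and server share a secret, and each independently derives a short code from that secret and the current time, so nothing sensitive travels over the phone network. A minimal standard-library sketch (the base32 secret used in verification below is the RFC's published test value, not a real one):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)
```

Because the code is derived offline, hijacking voice mail or SMS delivery gains an attacker nothing; the remaining risks are the enrollment step and the device holding the secret.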
The ideal is still something built into a mobile OS with hardware protection (e.g. iOS Keystore), storing either random long string passwords or some kind of public key credential, and either a trustable network proxy converting that to standard username/password to log into sites, or sites adopting this as a means of authentication (client cert auth sucked a lot in the past, true, but it doesn't have to suck).
Then, all your identity/presence (biometric, geofencing, heuristics, ...), backup, key recovery, etc. could be handled in one place, by one API.
That's what I was hoping Apple would do with Passbook/iOS6/iCloud, but it doesn't appear to be something they care about. Only Apple could build this (due to how the platform works, you can't override things), since every app would need to use the API, and web browsing (via Safari) would be 90% of the use. Unfortunately Android has no platform security (and anything would be 2-3 years away, once MTM is available), BB is dead, BB10 is stillborn, and WP doesn't seem to care.
Enter your passcode (or otherwise ID yourself to the device), and then everything "just works", with no need to remember or type passwords to every single site. Apple's already perfectly content to consider iPads and iPhones single-user devices, and with OS X, you can have multiple user logins with fast user switching.
(Disclaimer: I work for a telephone- and software-based 2-factor provider)
If your telephone-based 2-factor authentication is being thwarted by voice mail hacking, the problem lies in the implementation of the phone call itself, not necessarily the method. Unfortunately, certain solutions are built to be "quick and dirty" and just play an automated message that spits out a temporary password 3 times and hangs up. That makes it easy for attackers to scrape codes out of voice mail boxes.
Properly designed solutions will actually require call affirmation (for example: the user will be asked to press a randomized DTMF digit before a temporary password is spoken). Certain locations (like North America) can take advantage of features like call-forward detection, which can prevent sensitive information from being delivered if a number is determined to be forwarded.
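The affirmation step can be sketched as: pick a random digit, ask the callee to press it, and only speak the code after a correct keypress. A voice mail box can't press keys, so the code is never left in a recording. The `call` object and its `say`/`read_digit` methods below are hypothetical stand-ins for a telephony API, not any vendor's actual interface:

```python
import secrets

def deliver_otp(call, otp, attempts=2):
    """Speak the OTP only after the callee proves they are live by
    pressing a randomly chosen DTMF digit (hypothetical call API)."""
    for _ in range(attempts):
        digit = str(secrets.randbelow(10))
        call.say(f"To hear your code, press {digit} now.")
        if call.read_digit(timeout=5) == digit:
            call.say(f"Your code is {', '.join(otp)}.")
            return True
    # No correct keypress: likely voice mail, so hang up without leaking the code.
    return False
```

Randomizing the digit matters: a fixed "press 1 to continue" prompt can be defeated by a voice mail greeting that plays a recorded tone, while a random digit cannot be pre-recorded.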
Ultimately, it may not be a silver bullet, but when it's done right you're left with something much more effective than a minimum effort/lowest possible cost approach.
Several companies were in the news after they got burned with it.
I agree with you totally. A second factor that is much weaker than the first (prone to social engineering, or even to googling, like the endemic "first name of your favorite uncle" questions) can be worse than just having one strong authentication. For some reason many companies think that building a bridge out of very many weak components makes a strong bridge. It doesn't; you have a weak bridge in the end.
I had no idea you could also do Google's two factor auth with SMS messages. That seems really flaky.
While it's possible to hijack someone's phone number, as demonstrated, it requires a relatively high amount of effort per target. Whereas if you compromise a network segment somewhere (with DNS and a rogue SSL cert or whatever you need), you could just sit there, farming authentication cookies. Have your MitM check the "authenticate this computer for 30 days" checkbox and you've got a nice little collection to work with.
At some point, though, shouldn't phone companies notice and beef up their end a little? Maybe we need another large phone-hacking scandal to really lock down voice mail security. http://en.wikipedia.org/wiki/News_International_phone_hackin...
When RSA was compromised, accounts in which RSA was used as an additional factor remained protected by their remaining uncompromised factors - allowing time to replace the RSA factor.
"When you set up a new payment, we’ll give you a call to ensure that the instruction is coming from you.
Step-by-step payment security:
All you need is a telephone near you. You’ll be able to choose which number we call you on, provided it’s a number we already hold for you.
You’ll receive an automated call asking you to confirm details about your transaction.
Then you’ll need to enter a four digit number that will appear on your screen into your telephone keypad."
There's a Flash demo of this here: http://www.lloydstsb.com/new_internet_banking_demo/index.htm...
While it's convenient as you don't have to remember the card reader when you want to login, it does worry me that it is less secure and vulnerable to keyloggers.
Edit: Halifax is also the same, but then it is owned by Lloyds and has recently transitioned its backend to the same platform as Lloyds uses.
The only way to do this properly is shared-secret 2-factor like Google's Authenticator app, where the codes are generated offline on the device and there is nothing for the phone network to intercept.
Of course, if lots of money is at stake, it's not uncommon for the attacker to track you down, beat you up/kill you and steal your phone (or RSA keyfob) to finish the transaction. To get the PIN that goes with the keyfob they'll use lead pipe cryptography.
Dedicated people will get what they want. Google sending you an SMS is less of a risk than a bank calling you, because it's unlikely that anyone wants your Google account as much as your bank account. And on top of that, you're more likely to be kidnapped if your authenticator is something that is difficult to steal without physical access.
The real point is: there is no such thing as secure. To protect your money, spread it out across multiple banking institutions with different methods of access, so that no single compromise reaches everything, and don't log into your savings account.
Two-factor authentication using phone numbers is a huge privacy breach, especially when you're dealing with websites that have no business knowing your phone.
And rolling-code tokens aren't feasible for anything except some really high-security applications. Even there, I doubt they are really much more secure than a USB stick with your passphrase-protected private key. Sure, you can't copy the token, but that doesn't just add to security, it detracts from usability.
NIST recommends PBKDF2.
In short, it appears that advances in hardware have made it possible to compute bcrypt hashes efficiently.
"The bcrypt key derivation function requires a larger (but still fixed) amount of RAM and is slightly stronger against such attacks, while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is much stronger." (Wikipedia)
Basically they recommend PBKDF2. This does not mean that they deem bcrypt insecure; they say nothing at all about bcrypt. It just means that NIST deems PBKDF2 "secure enough" ... On the other hand, bcrypt comes from Blowfish, which has never received any kind of NIST blessing (or
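For reference, PBKDF2 (the scheme NIST recommends in SP 800-132) is available in Python's standard library. A minimal sketch with a random per-user salt; the iteration count is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=600_000):
    """PBKDF2-HMAC-SHA256 with a random per-user salt. The iteration
    count is illustrative; tune it to your own hardware budget."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify_password(password, salt, iterations, expected):
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)  # constant-time comparison
```

Storing the salt and iteration count alongside the hash lets you raise the work factor later without invalidating old hashes: verify with the stored parameters, then rehash on the next successful login.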
Unfortunately, the release of Django 1.4/django-registration 0.8 (https://bitbucket.org/ubernostrum/django-registration/src/27...) complicates matters a bit, and I'm torn between figuring out a way to keep my TFA, or just rolling it back altogether, implementing d-r, and seeing if it has support for TFA.
If you're using Django 1.3 with something other than the default password hashing, you should check out django-TSA.
I don't know whether it amounts to a security vulnerability, but it certainly makes it that little bit weaker.