How I bypassed 2-Factor-Authentication on Google, Facebook, Yahoo, LinkedIn (shubh.am)
392 points by sounds on May 17, 2014 | 143 comments



I love how Google's response amounts to "well, if the password is compromised... anything is possible" logic, tagged as won't-fix. Facebook and LinkedIn, of all companies, immediately triaged and started fixing the issue.

Unacceptable response from a company promoting its services as identity and communication platforms.


That's a backwards and false reading.

1. Google said that if your telco is insecure, you are insecure. If you knowingly choose to use an insecure telco for sensitive communications, you can't expect every system to refuse to communicate with you.

2. From the article, Google already offers a secure 2FA, in fact Google invented and open sourced it! Do FB and LI even offer a secure 2FA at all?

> disable 2FA on Google via texts or phone calls, and enable Google Authenticator based 2FA


Yes, you can use 2FA with Facebook... via Google Authenticator. :) They also support SMS but you should really use Authenticator or equivalent.


Well, they didn't exactly invent the secure 2FA; it has been an RFC for a while.
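(For reference, the RFC is TOTP, RFC 6238, which builds on HOTP, RFC 4226. A minimal sketch of the algorithm Google Authenticator implements, using only the Python standard library; the printed value below is the official RFC 6238 test vector for the ASCII secret "12345678901234567890" at T=59, truncated to 6 digits.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP over HMAC-SHA1, as used by Google Authenticator."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # number of 30s steps since epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"12345678901234567890", for_time=59))   # RFC 6238 vector: prints 287082
```

Because both sides derive the code from a shared secret and the current time, no SMS or voice channel is involved at all, which is exactly why it sidesteps the voicemail attack in the article.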


Agreed. Google is usually very good with security fixes, so it's surprising that they're basically filing this as a non-issue.

They're right that it isn't exactly a flaw in their systems, but they still have a relatively simple way of mitigating attacks against the telcos' security.


Like that one time when Google discovered and fixed the Heartbleed bug before anyone else did:

http://www.smh.com.au/it-pro/security-it/heartbleed-disclosu...


We did not file this as a non-issue. We clearly replied we filed a bug internally and are investigating defense options... Also see https://news.ycombinator.com/item?id=7761372


I disagree, it is a flaw in their system, because they make the assumption that an aspect of the telephone system is secure when it's not.


The user makes that assumption. Google offers and promotes Authenticator for users who want a secure connection.


I submit that a negligible proportion of users is aware that voicemail may be used as a factor, so they're not actively making that assumption. I would say it's Google's responsibility to protect users from the many insecure voicemail systems of telcos, since extra security is the whole point of enabling 2FA.


Google is gradually becoming the kind of company they used to say they were not.


Becoming? Their support has always been terrible; it always felt like a «we have one genius engineer per two million of the likes of you, do you really think we're gonna bother?» mentality.


This is not customer-facing "read this script" support, though. This is "serious bug/security flaw" respond-to-this-so-your-customers-don't-get-screwed triage. They definitely used to respond to this type of thing a lot better.


They have been that company for a long, long time.


I've always thought that the "Don't be Evil" mantra/motto/whatever is akin to people in denial telling themselves that they are "good people".


If you need a rule to tell you not to be evil, you're definitely not good people.


I don't agree with that. It's a statement that sets the company's policy. While it's obvious that people should not be "evil", a company motto explicitly committing to being good is a welcome thing. Of course, Google is no longer that company, and they removed that policy statement after it became a mocking tool for critics.


Google is and always has been downright hostile to anything that comes from outside Google.


What a pathetic response (from Google)

This crappy reaction to any form of customer communications will be the eventual ruin of Google.


Their customer support may be bad, but their developer support is far worse.

If your application is ever suspended from Google Play, you will be greeted with a message directing you to an appeal form which lets you enter a maximum of 1000 characters to make your case. This is without having an exact idea of the reason your application was suspended in the first place. You are also advised that you may not ask any questions about why you have been suspended, or else they will not reply to your appeal.

A few hours later you will invariably receive the following email:

    Hi,

    We have reviewed your appeal and will not be reinstating
    your app. This decision is final and we will not be
    responding to any additional emails regarding this removal.

    If your account is still in good standing and the nature of
    your app allows for republishing you may consider releasing
    a new, policy compliant version of your app to Google Play
    under a new package name. We are unable to comment further
    on the specific policy basis for this removal or provide
    guidance on bringing future versions of your app into policy
    compliance. Instead, please reference the REASON FOR REMOVAL
    in the initial notification email from Google Play.
    Please note that additional violations may result in a
    suspension of your Google Play Developer account.

Sources:

* http://www.bytesinarow.com/2014/04/skyrim-alchemy-advisor-pr...

* http://blog.hutber.com/how-my-google-devlopers-account-got-t...

* http://arduinodroid.blogspot.de/2014/03/arduinodroid-is-temp...


Hah!

AdWords customers like me (at some point spending several hundred thousand euros a year) are eternally grateful that they actually bothered to implement an appeal form after many years. We got locked out for more than a year with no way to contact anyone responsible (in the good tradition of other Google services, I presume) and thus no way to appeal. Once the form got added, we got the lock removed within days... What are you Android developers complaining about! </cynical>


Thanks for sharing. So screw the Play Store then.

I wonder how long this can continue. At some point, one of those big walled-garden providers will run head-on into EU law with this kind of behaviour. If you hold control over a significant part of a market, the law will eventually (and hopefully) step in and prevent you from playing God.


The Play Store is not a walled garden - you can easily install apps on your phone without it (unlike the Apple Store, which is, and where you can't).


I wonder if the app store or play store will ever be considered to have control over a significant part though. I imagine Apple would argue nope on the grounds of smaller market share/install base, Google on the grounds of a smaller revenue share.


With Google you're lucky to get any reaction at all. Security issues are one of the few things where you can actually still get a response from a human being.


It feels like he got someone who didn't understand the full implications of the issue. Or did understand but didn't understand why it would be such an issue.

It seems the bigger the company the more likely that you will get a front line response that doesn't really grasp what you are raising.


The article says 'hence the best solution to fix this temporarily is to disable 2FA on Google via texts or phone calls, and enable Google Authenticator based 2FA, if you think your telco may be vulnerable.' I suppose you would also need to remove any 'backup' phone numbers, or the attacker could request a 2FA code to them?


Actually, I find it amazing that people still consider phone calls and SMS messages as trusted channels.

I fought long and hard with my bank to avoid using SMS one-time codes to confirm transactions, and I lost (stayed on paper lists of one-time codes as long as I could).


Maybe Google's incredibly irresponsible response hints at their general strategy regarding authentication on the internet: they are already pushing for the replacement of passwords. I'm quoting: "[Google plans] to release an ultra-secure and easy to use identity verification platform that eliminates the need for long, user-generated passwords. Dubbed U2F (Universal 2nd Factor), the consumer-facing side of this initiative will be a USB dongle called the YubiKey Neo. Built to Google’s specifications by security specialist Yubico, the YubiKey Neo is a small, durable and driverless device that requires no battery. Plugged into your computer’s USB port it will add a second, highly secure layer of verification when you point Google’s Chrome browser to your Gmail or Google Docs account. You’ll initiate the login by typing your username and a simple PIN. The browser will then communicate directly with the YubiKey Neo, using encrypted data, to authorize account access. With U2F verification, if someone wanted to login surreptitiously to your account, he or she would need to know your username and PIN while simultaneously having physical possession of that specific YubiKey Neo."

Furthermore, they had Regina Dugan, former DARPA director and now their VP of Engineering, Advanced Technology and Projects, on stage at All Things Digital in 2013 talking about electronic tattoos and edible passwords which would turn your whole body into an authentication token with an 18-bit signal. (Link to talk: https://www.youtube.com/watch?v=fzB1EcocAF8).

So maybe they just don't care because they won't use 2-factor-authentication for much longer anyway.


That will be a problem where USB ports are disabled (which is increasingly becoming a standard security practice). E.g. my workplace has separate PCs for accessing Gmail etc. but those still have USB ports disabled.


If USB ports are disabled, how are the keyboard and the mouse connected? PS/2?


Now that you mentioned it, I think only USB Mass Storage was disabled, not the ports themselves. So the keyboard/mouse etc continue to work.


So Yubikey would work. It identifies itself as a Keyboard to the OS.


These are HID devices, because they need to do challenge/response with the website that's trying to authenticate. The old OTP YubiKeys were keyboards. Better than nothing but phishable.


They can be permanently connected to the computer.


Yes but even such a solution would surely use some sort of existing internal interface such as USB or PS/2, even if in practice the wires are soldered directly to the motherboard? Otherwise you'd have to build special hardware, controller chips, develop and maintain the driver?


bluetooth?


Requires batteries. NFC is the obvious choice, and yubikey neo already supports that. Note that (going back to top-level parent comment) current yubikey neos cannot support U2F; all they say is U2F compatible devices will be available sometime [later] this year.


I imagine you could have a fallback to how YubiKey neo and 2fa works today: using a phone app to generate a code (in conjunction with the yubikey) and manually type that in.


I believe Google already uses YubiKeys for authentication internally; certainly some of the Google staffers I know have them for authentication.


Hmmm.... let passwords become unusable, and move to hardware crypto dongle? That sure sounds like a great way to remove any remaining anonymity on their services, once they tie shipping addresses to accounts.

// I like the idea of crypto dongles... when you control both sides


Except the key isn't tied to your account when you buy it, and you don't have to buy from Google...

> Consumer provisioning should allow users to buy a compliant token from a vendor of their choice, insert it into a computer where they’re already authenticated to a website, and register their token with a single mouse click.

http://www.computer.org/cms/Computer.org/ComputingNow/pdfs/A...

Furthermore, there is no serial number exchanged, AFAICT, so the key itself isn't identified by the server; the server only sees that it has valid credentials.


Ahh, was not aware of that. I think this would solve a lot of my concern.


> Note that the secure element never returns a previously generated public key in any new registration step. This makes tracking users across websites difficult by using the token as a supercookie that bypasses other anonymizing precautions.

So at least they make an effort to prevent that, and if they do not subvert this scheme you won't be trackable. But as the dongle is more or less a black box, it would probably be hard to ascertain that the generated public keys do not have some property in common that allows them to be associated with each other.


Google's response _The attack presupposes a compromised password, and the actual vulnerability appears to lie in the fact that the Telcos provide inadequate protection of their voicemail system. Please report this to the telcos directly._

... is reprehensible. It's a problem with the design as a whole, Google's customers are going to experience the flaw, and just passing the buck doesn't make the problem go away. I'm really disappointed in Google.


You cherry-picked one line that is not representative of Google's response.

Google's final response was: "[we] have filed a bug [and] will [...] take a look", see my coworker (Jeremy)'s reply from May 6 at http://static.shubh.am/2fadisclosure/google.pdf

Clearly there was initially a simple misunderstanding/miscommunication. The reporter, on April 30, gave Google a few Australian telcos presumed vulnerable. But then he clarified on May 3 that it is actually at least "a large majority of telco's in Australia and UK" that are vulnerable. So, based on this new information, Google replied "Thanks for explaining the potential scope of this issue... [we] have filed a bug".

I mean, compare this to the way it was reported to Facebook: the reporter told them right away that 2 of the top 3 telcos in Australia are vulnerable(!) Now if it had been worded like this when first reported to Google, the misunderstanding would probably not have happened.


Vodafone is the world's second largest mobile telecommunication company behind China Mobile. It is sloppy of Google to not recognise this name, as you suggest happened.


"I have filed a bug" sure sounds like a brush-off to me, especially when combined with the tone of the rest of the reply (which basically reads "Not our problem" -- while it might not be Google's own defect, it's definitely a problem for Google, and one that requires mediation).


All security reports receive an "I filed a bug" reply after a bug is filed.

https://www.google.com/search?q=%22Nice%20catch!%20I%27ve%20...

Not a brush-off.


The entire point of 2FA is to make password compromise insufficient to gain access to an account. Which makes this response just baffling.

I initially thought that the lack of any technical countermeasure might be a legitimate reason for Google to do nothing, but after I thought about it for a moment, I realized that when the 2FA system calls the user, it could simply prompt the user to do something before sending the verification code -- that way, the 2FA provider knows they aren't sending the code to voicemail.

Simple and effective, and later in the post I found the author and multiple companies came up with that idea on their own.


Better yet, make them repeat something (a nonce in the form of dial tones) back, to prevent replay attacks in the form of recorded responses (as was also pointed out in the comments there).
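A hypothetical sketch of that challenge step (function names like `make_dtmf_challenge` are invented for illustration; the actual telephony plumbing is omitted): the callback generates a fresh random digit nonce, reads it out, and only speaks the 2FA code if the callee keys the same digits back. A voicemail greeting is a fixed recording, so it can't echo a nonce it has never heard, and a replayed recording of an earlier call fails against the new nonce.

```python
import secrets

def make_dtmf_challenge(length=4):
    """Generate a random digit nonce the callee must key back via DTMF
    before the 2FA code is read out."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def verify_callee(nonce, echoed_digits):
    """True only if the callee echoed the exact nonce from this call.
    compare_digest avoids leaking match position via timing."""
    return secrets.compare_digest(nonce, echoed_digits)

# A live human hears "please enter 4 7 2 9" and keys it back: passes.
# A voicemail box (or a replayed recording) echoes nothing, or an old
# nonce: fails, and the 2FA code is never spoken onto the recording.
nonce = make_dtmf_challenge()
print(verify_callee(nonce, nonce))    # prints True
```

The design choice here is that the proof of liveness comes from the callee, not the caller, which is exactly the property the voicemail attack exploits the absence of.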


Why? How is that a Google problem, when you use voice as your backup password system and you're not securing it properly?


As a large company that serves millions of users in many of their most sensitive online endeavors, is it more responsible to throw your hands up and hope that many of the hundreds of telcos out there will fix their own security, or implement a relatively easy fix that will automatically secure all of your customers from this issue regardless of third-party security?


Most people are not aware of the attack vector (spoofing the phone number to reach the voice mail).


Seems to generally demonstrate that sending a token to a voicemail is secured at best by voicemail PIN; he's demonstrated effectively that it certainly doesn't require possession of the phone. That turns two factor authentication -something you know, something you have - into 'wish-it-was-two-factor-authentication' - something you know, and something else you know.

Sounds like the right approach is indeed to not give away 2fa codes when the recipient hasn't demonstrated that they have the user's phone in their hand


Quite right. Here's another way to reduce the strength of the "something you have" aspect:

You're at a one-on-one meeting with a powerful executive. She gets up to attend to some business outside the room, and leaves you with her cellphone on the table. Before the meeting you logged into Google on your own cellphone and set the wheels in motion, so as soon as she steps out you click "Verify" and Google calls her phone and interacts with "her" to verify the number. In just a minute or so you have convinced Google that her phone can be associated with your Google account.

Now this might not be so bad--what are you going to do, set up your 2FA to use the executive's phone? No. The "verify" step you did above was for Google Voice number porting (https://support.google.com/voice/answer/1065667?hl=en).

In a day or two, the executive's phone will stop working, as her plan will be automatically cancelled by her provider, as they are required to respond to Google's porting request (and they will probably not call to confirm). Now you can receive and even place all the calls you want--2FA verifications (including for Google), do some nefarious texting, etc.


Right, but to carry out that attack you have had to compromise two distinct factors - you had to figure out her password AND get access to her phone. The point of two factor authentication is precisely that that raises the bar. Someone who casually steals her phone is unlikely to be able to figure out her password; someone who compromises her password via the web is unlikely to be able to also steal her phone. But a determined, targeted attacker can certainly knock down both of those barriers.

EDIT - realized you meant that this would allow you to associate her phone with your account, without needing any additional compromise other than access to the phone. That is a more subtle case - effectively what you're doing is stealing her phone number, but without stealing her phone, which means she might not notice for a while that the number has even been stolen. Interesting that similar vulnerabilities in domain transfer exist and have been mitigated with the need to obtain porting codes from registrars, as well as having domains 'locked' by default. Is phone number porting not protected similarly? I seem to remember having to provide a password for my old AT&T account in the apple store when I ported my number to Verizon...


Your edit is correct. And the Google Voice instructions posted at the above link right now are clear: you do not need to interact with your old carrier to get your number on Google Voice. I have tried it myself.

I believe what I outlined turns "Something you know and something you have" into "Something you know and something you once had for a minute at a different point in time." This is a significant blow to the 2FA security model, because the victim may mistakenly believe that, even when you are left alone with her phone (especially if the phone requires a code to unlock, which is often not needed to answer a call), you would not be able to take control of her accounts unless you were simultaneously using a computer to do so. All you need is to click the "Verify" button on your Google account at the right time, which would likely go unnoticed.


But all 'something you have' authentication is vulnerable to the theft of the 'something you have' - your attack is just a slightly more sophisticated version of stealing the phone. I don't think it's particularly relevant to the use of a phone as a 2fa device.


Sorry, couldn't resist. Shameless self-promotion, but this is why companies shouldn't implement their own two-factor authentication. Getting everything right is hard, and chances are that you aren't reading about or informed of the latest attacks.

At Authy we are obsessed with two-factor authentication and spend a huge amount of time looking at what's happening in the ecosystem, which new attacks we need to be aware of, etc. It might look easy to build a quick two-factor authentication system, but history will repeat itself, and like passwords we'll see lots of bad and insecure implementations, because it's harder than people think.


I appreciate startup self-promotion as much as the next guy, buuuuuut are you saying multi-hundred-billion-dollar internet companies shouldn't implement their own 2F auth? They should instead trust the security of their hundreds of millions of users to you? Really?


I think Authy is saying that even multi-hundred billion dollar internet companies get it wrong, that's how hard it could be to properly build 2-factor. Don't try it yourself, use us.


> Don't try it yourself, use us.

I get it. What I'm saying is, if you're a multi-billion-dollar internet company whose business it is to manage hundreds of millions of users, you should keep security in-house and get it right, instead of outsourcing it to a startup.


I think specifically what they're saying is 'If even Google can get it wrong, then you should seriously reconsider implementing it yourself if you need it. That said, we know a lot more about it than most people and you can trust us more than you can trust something you build yourself.'


You should add a line in your response saying that 'Authy isn't vulnerable to the voicemail attack'. Good job for being dedicated to the service.


Please add public pricing to your website.


Authy is an awesome piece of technology, I definitely think some day it will be the standard.


[deleted]


Assuming you aren't compromising someone's account that was left logged-in or a stolen phone:

Note down all information about the account creation, frequent contacts, services used .. basically all dashboard data and then contact Google. If you have a secondary recovery address, that's even better.


so you're saying authy is better than google, etc and will never suffer from any security issue?


The point is that the company is dedicated to it, and they can't slip up or they outright lose customers, so they're more likely to pay attention to risks and the state of the art in multi-factor authentication.


Would recording a DTMF tone as your voicemail greeting get around the "press any key to get your code" prompt?


It depends on the quality of the codec used by the VM system. Any compression is likely to destroy the code. VoIP protocols usually send DTMF as specific messages for this reason. However I wouldn't be surprised if it worked.


Sending tones in band actually works reasonably well on mobile networks. It's not 100% reliable which is why alternate signaling is used, especially when the signal quality is low.


It would be interesting to see if the DTMF tone could be recorded, or if the voicemail system would capture it first and abort the recording. Maybe it depends on which tone is played?

Investigating the extra A, B, C and D DTMF tones may be of use in this scenario, just in case voicemail systems don't recognise them as commands but 2FA callbacks allow them for confirmation.


People don't tend to read comments - I asked about this same thing below, but, for example, PhoneFactor uses the # key, which is often used to end a recording; so "any key" is dangerous, # not as much.


A different issue - but around phone companies and security.

When your contract is running low, they call you and ask you to tell them your security information as part of re-signing you under new terms.

To emphasise: they cold call you and claim to be from the firm (which you don't know - it might be a phishing attack), and then ask you for your security details. They socialise their customers towards being vulnerable to phishing attacks.

I had this ages ago with an Australian provider (Telstra), and recently in the UK (O2).

Having followed up, I know it to be O2 policy that they are happy to do this. They... defended it on commercial grounds around practicality.

I'm sure that I'm not the only one to have thought this is crazy. I raised an incident, but as you'd expect it went nowhere. Has anyone else tried to pick it up with firms that do it?

Imagine a simple piece of legislation that banned calling and asking for security information. Would there be edge-cases that would make this a bad law?


When this has happened to me, I ask for proof that they are from the company concerned, or that they already have my details. So far they've always been happy to do this, although I always have to go first…


Here's an idea: get rid of voicemail completely. The only person, the ONLY person I know who has used mine is my father, and his message is always "Sorry I missed you, call me when you can." Which I know already, thanks.


I always wondered (and still do) why voicemail recordings weren't stored on the phone. I could understand it in the '90s (storage was expensive and phones weren't beefy enough to transcode efficiently), but by the beginning of the century it started to puzzle me.

Obvious advantages are:

- you never lose any information

- you can archive and transfer them to your personal devices (PC, etc.) for later use

- you don't have to call your voicemail and navigate an audio/keypad interface that is basically a huge timesink and a horrible experience


One disadvantage is that your phone would need to be turned on and in range to receive the message. This could be solved by a hybrid system where the carrier also stores it.

A perhaps more important factor is that when you call the carrier to get your voicemail, you sort of automatically know you'll have to pay for it. But if your phone silently downloads a message you may never check, not so much. So that creates a billing problem.


I've never understood why they weren't sent as MMS to your phone as soon as all the phones supported it (which was when, 2001-2003?).

In the pre-smartphone era, Japanese phones actually had an answering-machine feature built in that would receive and record the message to the microSD card. This had the big downside of not working when you're out of range or out of battery, which is still probably the #1 reason for a call to go to voicemail (IMO).


Congratulations - you have invented the answerphone.


I disabled it on my previous provider through a bunch of obscure operations that seemed like black magic to me.

Unfortunately, I haven't figured out how to disable it on my current provider.

I don't know why providers make this so hard for their users. Perhaps it's because the providers make money from those brief, useless voicemail connections? I assume the caller is billed just like he made a normal phone call.

I personally hate listening to a machine and playbacks of recorded messages, but my employer uses my voicemail sometimes so I guess I'll have to cope.


> Here's an idea: get rid of voicemail completely. The only person, the ONLY person I know who has used mine is my father

The only person you know is your father. There are millions of people who still rely on voicemail to communicate.

> and his message is always "Sorry I missed you, call me when you can." Which I know already, thanks.

You wouldn't know that if your mobile phone was out of range and didn't receive the missed call.


> You wouldn't know that if your mobile phone was out of range and didn't receive the missed call.

You could, actually. My provider sends a text message when that happens, which you receive when you're in range again.


You could, I can't. I use Verizon and they don't offer that feature.


My first idea as well; I have always had it disabled, and I can't remember the last time I actually came across one. It died in the nineties (an SMS is way better anyway). Although I bet all operators have it included in all plans in case you want to enable it.

And that's a problem of its own: it probably wouldn't be difficult to turn on anyone's voicemail with social engineering or a host of other techniques.

So I don't know if you suggest getting rid of it completely or just disabling it as a customer; the first will not happen overnight, and the latter might not help much (but probably can't hurt).


Others find it valuable. Here in a country where there is no notion of voicemail, all we have is a missed-call log. And then I have to call each one up in person to inquire what they wanted to talk about if they didn't SMS me right after, which few people do.

A voicemail system would be good here. Some people may feel it unnecessary, and are welcome to not use/disable it.


Here in the UK, it's easy to disable it. The number to do so usually becomes preloaded on phones, or you can phone up customer support and have them do it, or search for it online.


Presumably then it's also easy to enable it - can you enable it for an arbitrary phone by using one of the caller ID spoofing systems mentioned in the article?


I wouldn't think so, but I haven't tried.


I hate 2fa based on SMS or voice auth. Requiring proving control of a number is decent as an anti spam or flooding technique, but is horrible for authentication of anything high value enough to justify 2fa.

I also strongly prefer 2fa systems which allow me to enroll my own hardware token or use a software token (eg AWS IAM) vs systems which supply the seed so I can't (CloudFlare is depressingly the only service I use which still suffers from this, despite being otherwise pretty awesome.)


I really dislike SMS-based 2FA and do not use it at all.

Imagine you're traveling, now you may have to pay 75 cents just to login somewhere.

Google Authenticator-based 2FA avoids all these problems.


My problem is that banning cellphones and blocking cell signals is common in secure spaces, so cell based 2fa fails there, too.


How can you pay for incoming SMS?


Roaming, or USA.


There are no roaming charges for incoming SMS in the EU.


To clarify: there are no charges for incoming SMS to EU numbers, no matter where in the world your phone happens to be.


And Canada.


Interesting article, I've always thought that phones are one of the weaker links in the 2FA chain (but a lot cheaper than dedicated tokens).

The general use of SMS/voicemail has another potential weak point, which is when people start using VoIP services a lot. If an attacker has compromised someone's client computer with the usual set of trojans, and they use something like Skype to receive SMS and voice calls, 2FA which relies on tokens via SMS or voice could be easily compromised, as the attacker will already have access to them.


Dedicated tokens are actually cheap; you can get one for 10-20 bucks.


The problem with most dedicated tokens is you probably don't want to use the same token across multiple services. It is a financial and physical logistical cost to have one per service.

I'd love a cheap hw token which could support around 32 simultaneous totp seeds. It would cost an extra 2 digits on the display and maybe an extra button (but hold vs press could be multiplexed so you get both)


I use a YubiKey NEO which has the ability to store multiple TOTP tokens on it. You have to arse around with it first and install a jar to the token to give it that capability (I remember it's all official from YubiKey, they just don't ship the keys with that capability).

Then, I simply NFC it to my Android phone, and the YubiKey neo app shows the 2FA tokens for all the secrets on that key. You can also password encrypt the key, so that someone can't just steal the key off you.

Adding new secrets is also easy, I just scan the QR with the app, touch the key to the phone, give it a name and it's added.

I've got around 12 TOTP secrets on there, works very well.


That is kind of an annoying workflow; I'd strongly prefer something with a display and input vs the neo. (I tried using the neo as a pgp card, too, and just switched to the pgp cards)


It's maybe a little silly, but there are J2ME implementations you can stick on a candy bar phone.


Sure, but depending on who's paying and how many subscribers you have, it can be a decent-sized up-front expense. I'd argue it's well worth it in the long run, but a lot of companies would prefer something that is less expensive per user (to them), hence phone-based or soft tokens.


Probably should note, for all Australian readers:

    All vulnerable endpoints for Optus Voicemail have been fixed. Including the endpoint I used to bypass their initial fix.


Very comprehensive and well-written post. Also, good job on the due diligence of alerting all concerned parties and posting detailed responses.

Google's reply is too complacent. Even though this is the telcos' fault, Authy and Duo Security do a better job of mitigating it.


2FA based on mobile phones is a very weak method of security. It gets worse with smartphones: when you use your smartphone to access internet services (or worse, do banking) secured by 2FA, your 2FA is reduced to trojan-prone 1FA, since both factors live on the same device.

There are also known cases where attackers simply had the phone company send a "replacement" SIM card and intercepted it, so the second factor was trivially defeated. Some phone companies are more aware of this now, but the whole process is still very prone to social engineering.

So: 2FA is only a good idea when using a dedicated device; otherwise it just raises the barrier a little, and sometimes not as much as you think.


Why is 2FA based on a mobile phone running an Authenticator app a very weak method?


Wow, this Open Sans font is just plain unreadable on Windows (7) machines. It might be the fault of Windows' subpar font rendering, but here, the font is just too thin and blurry to be read.

Damn, how I miss the days of simple HTML pages with no styling at all. Not beautiful, but readable, usable, fast, and ... yes, somehow beautiful in a pragmatic and very nerdy sense.


Just checked on my hi-res Nexus 4 screen: While not as blurry as on my W7 machine, the font is still unreadably thin. Seriously, why do people believe it was a good idea to make the font so thin that you can only read it with severe eye pain?


Saves ink! ;-)


Web fonts are the new Flash. They make pages slow to load, hard to read and bring users no benefits. Thankfully they can be blocked easily with Adblocks or similar filters.


Hang on. The first step of this exploit is that "The attacker logs into the victims account on a 2FA enabled web application". How does the attacker do this if the account has 2FA enabled in the first place? And if the attacker can already log into the victim's site, why are the other steps even necessary?


At least if you have a recovery email address configured, you can only initiate the process that sends codes to phones via a link sent to the recovery email address.

So being able to intercept codes sent to a phone is not enough: you also need to have control over the recovery email address.


For Optus (and their MVNOs) it's fairly trivial to just nuke the voicemail completely with a call to a specific code. Seriously, who even uses it anyway? Bear in mind that in Australia we are charged a clear dollar a minute to retrieve voicemail messages.


Spoofing could re-enable it too, I guess?


They're a code that you type in rather than a number to call. I really have no idea how they work, but I imagine they're some sort of non-text non-call sort of thing? Don't really have anything sensible to google so I can't find out for sure. Hadn't thought of that though.


I imagine they are some kind of text, but not sure either...


Microsoft and other services don't use 2FA for all services. I'm sure this can also be exploited: log into a vulnerable service without 2FA and then, once logged in, change the password.

My favorite is services that use secret challenge responses to reset passwords. Some of those "secret" questions ask for colors, car makes, etc., which narrows the choices down to just a few.

It's unfortunate that many people carry the title of "software engineer" while completely lacking the basic analytical thinking and math skills needed to prevent such exploits!
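To put rough numbers on that (the figures below are my own illustrative assumptions, not from the comment): a "favorite color" drawn from about a dozen common answers carries under 4 bits of entropy, versus nearly 48 bits for a random 8-character alphanumeric password.

```python
import math

def entropy_bits(n_choices):
    """Entropy (in bits) of a uniformly random pick from n_choices equally likely answers."""
    return math.log2(n_choices)

print(round(entropy_bits(12), 1))       # "favorite color", ~12 plausible answers -> 3.6
print(round(entropy_bits(40), 1))       # "car make", ~40 common manufacturers -> 5.3
print(round(entropy_bits(62 ** 8), 1))  # random 8-char alphanumeric password -> 47.6
```

At 3-6 bits, an attacker who gets a handful of reset attempts per day can exhaust the answer space in well under a week.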


As far as I'm aware Microsoft (and presumably others) will require you to generate a one-time app password if a service does not support 2FA. And to generate that password, you need to be logged in with 2FA.


Not everywhere. For example, Skype doesn't do 2FA, and neither does the Windows 8 login if you use a Microsoft Account.


Yes, Google's response (if it is real) to this problem is embarrassing. "It is not our problem that we practically post your credentials in the open on some occasions" [as this post shows, using voicemail amounts to this with a significant number of providers]. That's dismissive of customers.


Which customers? Google's customers are advertisers. Users are the product.


Product is ad space, not users

And, by the way, there are paying customers


Hmm, around here in the EU pretty much no one uses voicemail (and it's disabled by default on most mobile accounts).

Is that such a huge issue in the US?


I wonder where in the EU you live then.

My experience in the Netherlands is that voicemail is often on by default, with at least a 'this is the voicemail of <telnr>, leave a message after the beep' greeting on it.


I moved to Germany five months ago and just last week my co-worker convinced me to disable voicemail on my cell phone. :)


I never use TFA if it's SMS-based. I don't understand why every company isn't using standard TOTP TFA with a secret key; that way I can manage who has access to my TFA codes very easily. I just scan a barcode on my phone once and I'm able to generate TFA codes for myself. What's better than that?
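For reference, the standard TOTP scheme (RFC 6238) behind those barcode-provisioned apps is simple enough to sketch with just the Python standard library: the app and the server each derive the same 6-digit code from the shared secret and the current 30-second window. A minimal sketch (the function name and signature are mine):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32.upper())
    now = time.time() if timestamp is None else timestamp
    counter = int(now // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890", T = 59s -> "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))
```

Note the secret never travels over the phone network after the initial QR scan, which is exactly why this sidesteps the voicemail attack in the article.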


Can those services that require interaction be tricked by hacking the voicemail and recording a greeting with touch tones in it? Fortunately, most that require # also use # to end the recording when you record the voicemail greeting, but per the examples, there are some that accept any key, which is probably vulnerable.


Third comment on this post, but anyway. Why are 2FA providers still using SMS and not USSD (http://en.wikipedia.org/wiki/Unstructured_Supplementary_Serv...)?


For a start, because USSD is provider-specific: unless you are the provider, you need to initiate it with a special USSD code from the phone, which you need to register in every network you want to operate in.


Nexmo offers it (although it's in Beta like pretty much every meaningful service): https://www.nexmo.com/tour/USSD.html


What would be the advantage of using USSD?


I'm not Kolev, so I don't know his reasoning. But I can guess.

USSD is interactive. That lets you keep the one-time password's validity time short since you know when the subscriber sees the one-time password. The other is that it's straightforward to require an interactive step, e.g. "press 7 to get a code".

The interactive step makes an attack based on SMS forwarding harder. The short timeout makes an attack based on having limited access to a victim's phone harder.

Not sure if the security improvements are worth the usability problems: people aren't familiar with USSD and, because it's not used as often, mobile terminal software for it is likely to be buggier.
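The short-validity point can be sketched as follows (an illustrative class, not any real provider's implementation): because an interactive USSD session tells the server when the subscriber is actually looking at the code, the code can be burned on first use and expired within seconds.

```python
import time

class ShortLivedCode:
    """One-time code whose clock starts when the subscriber sees it
    (interactive USSD session), not when it was sent (fire-and-forget SMS)."""

    def __init__(self, code, ttl_seconds=30):
        self.code = code
        self.issued_at = time.monotonic()
        self.ttl = ttl_seconds
        self.used = False

    def verify(self, attempt):
        ok = (not self.used
              and attempt == self.code
              and time.monotonic() - self.issued_at <= self.ttl)
        if ok:
            self.used = True  # burn the code on first successful use
        return ok
```

With SMS you don't know when (or whether) the message arrived, so validity windows are typically minutes long, which is exactly the window the voicemail attack exploits.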


Yes, that, plus, supposedly, apps like Facebook Messenger and Google Hangouts won't see those messages.


To use an automotive analogy, this is a bit like using social engineering techniques (e.g. pretending to be from the electricity company) to enter someone's house and then, once in, getting paperwork pertaining to their car. Theoretically you could then order a new set of keys from a locksmith, doorstep the locksmith when he arrives with the replacement key, and steal the car. Realising this stunt could work with any of the manufacturers the locksmith can cut keys for, you could then complain about how useless their cars were and that their locks were essentially broken. Of course, the attack would be entirely theoretical, as the car 'stolen' would be one's own (because you were testing this attack vector so you could blog about it).

He should have got a job with News International. Essentially their 'phone hacking' relied on standard, factory-set voicemail codes, and their 'work' only came to disgust the general public when they deleted voicemail messages off a murdered teenager's phone, thereby giving the parents false hope that she was still alive (since new messages could be left once the voicemail inbox was no longer full).

Had the 'journalists' at News International known about this little trick for 2FA back then, would they have really been able to glean anything useful? Yes; however, it would have been a one-time trick.

As soon as some junior royal (or footballer, or politician) realised that they could no longer log in to Facebook/whatever (because the password had been reset), they would have to reset it themselves, and they would have emails in their main inbox stating that their password had been changed. During this window the Facebook/whatever account could be thoroughly gone through, but ongoing access would be unlikely. So, in practical situations, e.g. getting scoops for 'newspapers', this technique is of limited use.


Am I missing something, or isn't it the case that once the Gmail password is compromised, the attacker could easily change the phone number or switch to Google Authenticator? Further steps regarding voicemail etc. would be moot after that.


You need to log in to change the settings. If two-factor authentication is enabled, a password alone isn't enough to log in. That is exactly what 2FA means.


Unfortunately, modifying the 2FA settings for a Google account doesn't require the second factor again if the user is logged in; the password is sufficient.


If your telco has broken voicemail, I suggest using Google Voice and doing the dance to forward your no-answer/busy calls there.


Why is it a phone call and not a text message?



