Unacceptable response from a company promoting its services as identity and communication platforms.
1. Google said that if your telco is insecure, you are insecure. If you knowingly choose to use an insecure telco for sensitive communications, you can't expect every system to refuse to communicate with you.
From the article, Google already offers a secure 2FA; in fact, Google invented and open-sourced it! Do FB and LI even offer a secure 2FA at all?
> disable 2FA on Google via texts or phone calls, and enable Google Authenticator based 2FA
They're right that it isn't exactly a flaw in their systems, but they still have a relatively simple way of mitigating attacks against the telcos' security.
This crappy reaction to any form of customer communications will be the eventual ruin of Google.
If your application is ever suspended from Google Play, you will be greeted with a message directing you to an appeal form which lets you enter a maximum of 1000 characters to make your case. This is without having an exact idea of the reason your application was suspended in the first place. You are also advised that you may not ask any questions about why you have been suspended, or else they will not reply to your appeal.
A few hours later you will invariably receive the following email:
We have reviewed your appeal and will not be reinstating
your app. This decision is final and we will not be
responding to any additional emails regarding this removal.
If your account is still in good standing and the nature of
your app allows for republishing you may consider releasing
a new, policy compliant version of your app to Google Play
under a new package name. We are unable to comment further
on the specific policy basis for this removal or provide
guidance on bringing future versions of your app into policy
compliance. Instead, please reference the REASON FOR REMOVAL
in the initial notification email from Google Play.
Please note that additional violations may result in a
suspension of your Google Play Developer account.
AdWords customers like me (at some point spending several hundred thousand euros a year) are eternally grateful that they actually bothered to implement an appeal form after many years. We got locked out for more than a year with no way to contact anyone responsible (in the good tradition of other Google services, I presume) and thus no way to appeal; once the form got added, we got the lock removed within days ... What are you Android developers complaining about! </cynical>
I wonder how long this can continue. At some point, one of those big walled-garden providers will run head-on into EU law with this kind of behaviour. If you hold control over a significant part of a market, the law will eventually (and hopefully) step in and prevent you from playing God.
It seems the bigger the company the more likely that you will get a front line response that doesn't really grasp what you are raising.
I fought long and hard with my bank to avoid using SMS one-time codes to confirm transactions, and I lost (stayed on paper lists of one-time codes as long as I could).
Furthermore, they had Regina Dugan, former DARPA director and now their VP of Engineering, Advanced Technology and Projects, on stage at All Things Digital in 2013 talking about electronic tattoos and edible passwords which would turn your whole body into an authentication token with an 18-bit ECG-like signal. (Link to talk: https://www.youtube.com/watch?v=fzB1EcocAF8).
So maybe they just don't care because they won't use 2-factor-authentication for much longer anyway.
// I like the idea of crypto dongles... when you control both sides
> Consumer provisioning should allow users to buy a compliant token from a vendor of their choice, insert it into a computer where they’re already authenticated to a website, and register their token with a single mouse click.
Furthermore, there is no serial number exchanged, AFAICT, so the key itself isn't identified by the server, only the fact that it holds valid credentials.
So at least they make an effort to prevent that, and if they do not subvert this scheme you won't be trackable. But since the dongle is more or less a black box, it would probably be hard to ascertain that the generated public keys don't share some property that would allow them to be associated with each other.
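One common design that achieves this unlinkability, as a sketch: derive each site's registration value from an internal master secret that never leaves the device. (Real U2F tokens use per-origin EC key pairs with attestation; the HMAC below is a simplified stand-in, and `Dongle` is a hypothetical name.)

```python
import hmac
import hashlib
import secrets

class Dongle:
    """Toy model of a serial-less token: per-site registration values
    are derived from an internal master secret, so two sites comparing
    notes cannot link the same physical key (assuming HMAC is sound)."""

    def __init__(self):
        self._master = secrets.token_bytes(32)  # never leaves the device

    def register(self, app_id: str) -> bytes:
        # Per-origin value; no serial number or global identifier is involved.
        return hmac.new(self._master, app_id.encode(), hashlib.sha256).digest()
```

Two different origins get unrelated values, while the same origin always gets the same one, which is exactly the black-box property the comment above notes is hard to verify from the outside.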
... is reprehensible. It's a problem with the design as a whole, Google's customers are going to experience the flaw, and just passing the buck doesn't make the problem go away. I'm really disappointed in Google.
Google's final response was: "[we] have filed a bug [and] will [...] take a look", see my coworker (Jeremy)'s reply from May 6 at http://static.shubh.am/2fadisclosure/google.pdf
Clearly there was initially a simple misunderstanding/miscommunication. The reporter, on April 30, gave Google a few Australian telcos presumably vulnerable. But then he clarified on May 3 that it is actually at least "a large majority of telco's in Australia and UK" that are vulnerable. So, based on this new information, Google replied "Thanks for explaining the potential scope of this issue... [we] have filed a bug".
I mean, compare this to the way it was reported to Facebook: the reporter told them right away that 2 of the top 3 telcos in Australia are vulnerable(!) Now if it had been worded like this when first reported to Google, the misunderstanding would probably not have happened.
Not a brush-off.
I initially thought that the lack of any technical countermeasure might be a legitimate reason for Google to do nothing, but after I thought about it for a moment, I realized that when the 2FA system calls the user, it could simply prompt the user to do something before sending the verification code -- that way, the 2FA provider knows they aren't sending the code to voicemail.
Simple and effective, and later in the post I found the author and multiple companies came up with that idea on their own.
Sounds like the right approach is indeed to not give away 2FA codes when the recipient hasn't demonstrated that they have the user's phone in their hand.
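A minimal sketch of that interactive gate, with `send_prompt`, `read_dtmf`, and `speak` as hypothetical IVR primitives: a voicemail box records audio but cannot press keys, so the code is never spoken into a mailbox.

```python
import secrets

def deliver_code_via_call(send_prompt, read_dtmf, speak):
    """Only reveal the 2FA code after the callee proves they are live.

    Returns the delivered code, or None if the keypress challenge failed
    (e.g. the call went to voicemail, which cannot send DTMF tones).
    """
    challenge = secrets.choice("123456789")  # random digit to press
    code = "".join(secrets.choice("0123456789") for _ in range(6))
    send_prompt(f"Press {challenge} to receive your verification code.")
    if read_dtmf() != challenge:             # voicemail: no keypress arrives
        return None                          # abort without speaking the code
    speak(f"Your code is {code}")
    return code
```

The key property is that the secret is generated per call and only crosses the wire after the challenge succeeds, so an attacker replaying the call into a mailbox gets nothing.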
You're at a one-on-one meeting with a powerful executive. She gets up to attend to some business outside the room, and leaves you with her cellphone on the table. Before the meeting you logged into Google on your own cellphone and set the wheels in motion, so as soon as she steps out you click "Verify" and Google calls her phone and interacts with "her" to verify the number. In just a minute or so you have convinced Google that her phone can be associated with your Google account.
Now this might not be so bad--what are you going to do, set up your 2FA to use the executive's phone? No. The "verify" step you did above was for Google Voice number porting (https://support.google.com/voice/answer/1065667?hl=en).
In a day or two, the executive's phone will stop working, as her plan will be automatically cancelled by her provider, as they are required to respond to Google's porting request (and they will probably not call to confirm). Now you can receive and even place all the calls you want--2FA verifications (including for Google), do some nefarious texting, etc.
EDIT - realized you meant that this would allow you to associate her phone with your account, without needing any additional compromise other than access to the phone. That is a more subtle case - effectively what you're doing is stealing her phone number, but without stealing her phone, which means she might not notice for a while that the number has even been stolen. Interesting that similar vulnerabilities in domain transfer exist and have been mitigated with the need to obtain porting codes from registrars, as well as having domains 'locked' by default. Is phone number porting not protected similarly? I seem to remember having to provide a password for my old AT&T account in the apple store when I ported my number to Verizon...
I believe what I outlined turns "Something you know and something you have" into "Something you know and something you once had for a minute at a different point in time." This is a significant weakening of the 2FA security model, because the victim may mistakenly believe that even when leaving you alone with her phone, especially if the phone requires a code to access (which is often not needed to answer a call), you would not be able to take control of her accounts unless you were simultaneously using a computer to do so. All you need is to click the "Verify" button on your Google account at the right time, which would likely go unnoticed.
At Authy we are obsessed with two-factor authentication and spend a huge amount of time looking at what's happening in the ecosystem, which new attacks we need to be aware of, etc. It might look easy to build a quick two-factor authentication system, but history will repeat itself: like passwords, we'll see lots of bad and insecure implementations, because it's harder than people think.
I get it; what I'm saying is, if you're a multi-billion-dollar internet company whose business it is to manage hundreds of millions of users, you should keep security in-house and get it right instead of outsourcing it to a start-up.
Note down all information about the account creation, frequent contacts, services used .. basically all dashboard data and then contact Google. If you have a secondary recovery address, that's even better.
Investigating the extra A, B, C and D DTMF tones may be of use in this scenario, just in case voicemail systems don't recognise them as commands but 2FA callbacks allow them for confirmation.
When your contract is running low, they call you and ask you to tell them your security information as part of re-signing you under new terms.
To emphasise: they cold call you and claim to be from the firm (which you don't know - it might be a phishing attack), and then ask you for your security details. They socialise their customers towards being vulnerable to phishing attacks.
I had this ages ago with an Australian provider (Telstra), and recently in the UK (O2).
Having followed up, I know it to be O2 policy that they are happy to do this. They... defended it on commercial grounds around practicality.
I'm sure that I'm not the only one to have thought this is crazy. I raised an incident but, as you'd expect, it went nowhere. Has anyone else tried to pick it up with firms that do it?
Imagine a simple piece of legislation that banned calling and asking for security information. Would there be edge-cases that would make this a bad law?
Obvious advantages are:
- you never lose any information;
- you can archive and transfer them to your personal devices (PC, etc.) for later use;
- you don't have to call your voicemail and navigate an audio/keypad interface that is basically a huge timesink and a horrible experience.
A perhaps more important factor is that when you call the carrier to get your voicemail, you sort of automatically know you'll have to pay for it. But if your phone silently downloads a message you may never check, not so much. So that creates a billing problem.
In the pre-smartphone era, Japanese phones actually had an answering-machine feature inside them that would receive and record the message to the microSD card. This had the big downside of not working when you're out of range or out of battery, which is still probably the #1 reason for a call to go to voicemail (IMO).
Unfortunately, I haven't figured out how to disable it on my current provider.
I don't know why providers make this so hard for their users. Perhaps it's because the providers make money from those brief, useless voicemail connections? I assume the caller is billed just as if he made a normal phone call.
I personally hate listening to a machine and playbacks of recorded messages, but my employer uses my voicemail sometimes so I guess I'll have to cope.
The only person you know is your father. There are millions of people who still rely on voicemail to communicate.
> and his message is always "Sorry I missed you, call me when you can." Which I know already, thanks.
You wouldn't know that if your mobile phone was out of range and didn't receive the missed call.
You could, actually. My provider sends a text message when that happens, which you receive when you're in range again.
And that's a problem of its own; it probably wouldn't be difficult to turn on anyone's voicemail with social engineering or a host of other techniques.
So I don't know if you suggest getting rid of it completely or just disabling it as a customer; the first won't happen overnight, and the latter might not help much (but probably can't hurt).
A voicemail system would be good here. Some people may feel it unnecessary, and are welcome to not use/disable it.
I also strongly prefer 2fa systems which allow me to enroll my own hardware token or use a software token (eg AWS IAM) vs systems which supply the seed so I can't (CloudFlare is depressingly the only service I use which still suffers from this, despite being otherwise pretty awesome.)
Imagine you're traveling, now you may have to pay 75 cents just to login somewhere.
Google Authenticator-based 2FA avoids all these problems.
The general use of SMS/voice mail has another potential weak point, which is when people start using VOIP services a lot. If an attacker has compromised someone's client computer with the usual set of trojans and they use something like Skype to receive SMS and voice calls, 2FA which relies on tokens via SMS or voice could be easily compromised, as the attacker will already have access to them.
I'd love a cheap hardware token which could support around 32 simultaneous TOTP seeds. It would only cost an extra two digits on the display and maybe an extra button (though hold vs. press could be multiplexed so you get both).
Then, I simply NFC it to my Android phone, and the YubiKey NEO app shows the 2FA tokens for all the secrets on that key. You can also password-protect the key, so that someone who steals it off you can't just use it.
Adding new secrets is also easy, I just scan the QR with the app, touch the key to the phone, give it a name and it's added.
I've got around 12 TOTP secrets on there, works very well.
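For context, the codes those seeds produce are plain RFC 6238 TOTP, which fits in a few lines of stdlib Python (SHA-1 and 30-second steps assumed, as in the RFC defaults):

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, step=30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to `digits` decimal digits."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This reproduces the RFC 6238 test vectors, e.g. `totp(b"12345678901234567890", for_time=59, digits=8)` gives `"94287082"`, so any hardware or software token holding the same seed will agree with it.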
All vulnerable endpoints for Optus Voicemail have been fixed. Including the endpoint I used to bypass their initial fix.
Google's reply is too complacent. Despite this being the telcos' fault, Authy & Duo Security are better at mitigating it.
There are also already known cases where attackers simply had phone companies send a "replacement" SIM card and intercepted it. Thus the second factor was trivially defeated. Some phone companies are more aware now, but all of this is still very much prone to social engineering.
So: 2FA is only a good idea when using a dedicated device; otherwise it just raises the barrier a little, sometimes not as much as you think.
Damn, how I miss the days of simple HTML pages with no styling at all. Not beautiful, but readable, usable, fast, and ... yes, somehow beautiful in a pragmatic and very nerdy sense.
So being able to intercept codes sent to a phone is not enough: you also need to have control over the recovery email address.
My favorite is services that use secret challenge questions to reset passwords. Some of those "secret" answers are colors, car makes, etc., which narrows the choices down to just a few.
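The arithmetic behind that weakness is worth spelling out. Assuming uniformly likely answers (real answer distributions are heavily skewed, so these figures are upper bounds):

```python
import math

def question_entropy(answer_space: int) -> float:
    """Bits of entropy if every answer were equally likely;
    real-world answers are skewed, so this is an upper bound."""
    return math.log2(answer_space)

# "Favorite color": maybe a dozen common answers
print(round(question_entropy(12), 1))      # ~3.6 bits
# "Car make": a few dozen brands dominate
print(round(question_entropy(40), 1))      # ~5.3 bits
# versus a random 6-digit code:
print(round(question_entropy(10**6), 1))   # ~19.9 bits
```

A handful of bits means an attacker exhausts the whole answer space in a few dozen guesses, which is why such questions are closer to a hint than a credential.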
It's unfortunate that many idiots carry the title of "software engineer" when they completely lack the basic analytical thinking and basic math skills needed to prevent such exploits!
And, by the way, there are paying customers.
Is that such a huge issue in the US?
My experience in the Netherlands is that voicemail is often on by default, with at least a "this is the voicemail of <telnr>, leave a message after the beep" greeting.
USSD is interactive. One advantage is that you can keep the one-time password's validity window short, since you know when the subscriber sees the one-time password. The other is that it's straightforward to require an interactive step, e.g. "press 7 to get a code".
The interactive step makes an attack based on SMS forwarding harder. The short timeout makes an attack based on having limited access to a victim's phone harder.
Not sure if the security improvements are worth the usability problems: people aren't familiar with USSD and, because it's not used as often, mobile-terminal software for it is likely to be buggier.
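The short-validity point reduces to a trivial server-side check; the 30-second TTL here is an assumed value that interactive delivery makes practical.

```python
import time

CODE_TTL_SECONDS = 30  # assumption: safe to keep short when delivery is interactive

def code_is_valid(issued_at, now=None, ttl=CODE_TTL_SECONDS) -> bool:
    """Accept a code only inside [issued_at, issued_at + ttl].

    A short window shrinks what an attacker gains from brief access
    to the victim's phone or mailbox; codes from the future are
    rejected too, guarding against clock confusion."""
    now = time.time() if now is None else now
    return 0 <= now - issued_at <= ttl
```

With SMS you cannot know when (or whether) the subscriber actually read the message, so the TTL has to be generous; interactive USSD removes that constraint.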
He should have got a job with News International. Essentially, their 'phone hacking' relied on standard, factory-set voicemail codes, and their 'work' only came to disgust the general public when they deleted voicemail messages from a murdered teenager's phone, in so doing giving the parents false hope that she was still alive (since new messages could be left once the voicemail inbox was no longer full).
Had the 'journalists' at News International known about this little trick for 2FA back then, would they really have been able to glean anything useful? Yes; however, it would have been a one-time trick.
As soon as some junior royal (or footballer or politician) realised that they could no longer log in to Facebook/whatever (because the password had been reset), they would have to reset it themselves, plus they would have emails in their main inbox stating that their password had been changed. During this time the Facebook/whatever account could be thoroughly gone through; however, ongoing access would be unlikely. So, in practical situations, e.g. getting scoops for 'newspapers', this technique is still of limited use.