SMS is not a secure 2nd factor. It is subject not only to technical attacks such as the one in the article, but also to a wide variety of social engineering attacks. Getting cell phone reps to compromise a cell phone account is apparently not hard, and has been used many times to take over online accounts.
SMS as a 2nd factor represents an engineering trade-off. Prior to its introduction, the only people who had access to 2FA were people who got $60 tokens from RSA. It blocks certain classes of attacks, but is vulnerable to others (like malicious or insecure carriers).
Now, Apple users can use their fingerprint as a 2nd factor (e.g. for Apple Pay), but fingerprints have the unfortunate property of not being rotatable if compromised.
And there are FIDO U2F security keys, but you still need to issue $18-$50 tokens to each user, and you need host application support.
> fingerprints have the unfortunate property of not being rotatable if compromised
Coworker has a wonderful term for this. He calls fingertips "amputationware". It really drives the point home.
But there is another very good reason to avoid fingerprints auth methods. The scanners are by design doing some level of fuzzy matching, so if/when[0] someone finds a way to generate an input that reproduces the signal pattern from the reader well enough, it can be fooled. (Yep, done already.)
My personal take on fingerprint authentication is that they are not passwords. They are usernames.
Fingerprints can certainly be "amputationware" if you refuse to give up your passcode and your threat model has a credible threat of amputations.
For most people, having their unlocked phone snatched out of their hands on the street[0] is a far more credible threat and somehow we don't have everyone saying you shouldn't unlock your phone outside.
Yeah - I was going to say something about people who suffer from extreme eczema (which I happen to be one of): they can have issues with skin just splitting open on the tips of their fingers. This is the reason I don't use the fingerprint scanner on my phone...
Do note however, that masterprints are not applicable for all current solutions.[1] Their statement is of course biased, but they nevertheless point out a significant flaw in the referenced study.
"If you want to transfer $1000 to account 1243567890, confirm with code ABCDEF".
Assuming an attentive user, this protects against malware silently changing amount and recipient on a transaction the user is making - an attack the banks were struggling with. It also makes phishing much harder, since the attacker needs to convince the user to enter the code after reading this message.
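A transaction-bound confirmation code like that can be sketched as a MAC over the transaction details, so malware that alters the amount or recipient invalidates the code the user was shown. This is a minimal illustration, not any bank's actual scheme; the key, encoding, and truncation are all assumptions:

```python
import hashlib
import hmac

def transaction_code(shared_key: bytes, amount: str, recipient: str) -> str:
    """Derive a short confirmation code bound to one specific transaction.

    Because the amount and recipient account are part of the MAC input,
    silently changing either field produces a different code than the one
    the user confirmed.
    """
    message = f"{amount}|{recipient}".encode()
    digest = hmac.new(shared_key, message, hashlib.sha256).digest()
    # Truncate to 6 letters for easy manual entry (illustrative encoding).
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return "".join(alphabet[b % 26] for b in digest[:6])
```

The bank sends the code alongside the human-readable amount and account number; the server recomputes the MAC over the transaction it actually received, so a tampered transaction simply fails to confirm.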
Code-based 2FA is vulnerable to phishing where the attacker relays the phished credentials in real time. German banks were doing 2FA long ago, in the form of sheets of paper with one-time passwords (called TAN). Attackers were phishing those, so banks switched to "indexed" (iTAN) OTPs: The bank tells you which OTP to use, so someone who has just phished 1 or 2 won't be able to use theirs. Phishers started relaying the stolen credentials to the bank in real time, looking which iTAN the bank asks for, then asking the victim for the same.
Given these known capabilities of attackers, any 2FA method that is not phishing resistant and not bound to a specific transaction is not usable, and probably not better than SMS. Anyone can make a phishing site, running attacks on phone networks is harder.
Edit: Just realized that U2F would not be a suitable solution for banks, since it does not authenticate specific transactions, so malware could still swap recipient and amount.
U2F was designed to protect login where you don't need more context except the site you're logging into and that's protected as part of the protocol. Abusing it for other purposes (tx confirmation) as you showed is not a good idea.
Transaction confirmation can be improved by using mobile banking app that displays the transaction info and has accept / reject buttons. The communication would go over HTTPS.
But the problem of trust goes deeper - how do you know that account X is the correct number for your intended recipient? You usually have this saved in your banking system, and when it is compromised you'd still be convinced that you're sending money to the correct account. Unless of course you have a paper backup of account numbers...
Except SMS is better than nothing, right? Yes, it's flawed. But it's a harder attack than simple password auth. An attacker has to target an individual, know their phone number, and be able to spoof their phone.
It depends. SMS is better than nothing as a second factor, but SMS has a weird way of worming its way into single-factor status. I think people should avoid SMS 2FA, and should be skeptical of the security of companies that offer only SMS and neither TOTP nor U2F.
This frustrates me endlessly about Namecheap. They only allow SMS, and, for something as valuable as my domain name (figuratively the keys to my kingdom), that's unacceptable.
Have you tried using a (free) Google Voice number for 2FA purposes? Google Voice has no/minimal customer support, so it should be almost as secure as your Google Account.
2FA altogether isn't necessarily better than nothing.
At its worst, it opens up a social engineering / customer support "lol lost my phone" attack vector that didn't exist before.
SMS 2FA can be defeated by hijacking someone's number. Happened to me. One day I opened my Macbook and saw the "A new iPhone has been activated for your iCloud account". They had access to my number for hours after I called up my carrier.
No - or yes, if you replace shared passwords with EC asymmetric crypto with added protections: test of user presence; taking the origin into consideration (a signature for google.com cannot be used on fake-google.com); and attestation (you can check remotely that a token is from e.g. Yubico, if you trust only them).
But what happens when thieves steal my phone? How do I authenticate then? Most places use SMS as a backup, which gets us back to the original problem.
People with popular YouTube accounts have to deal with this all the time and the advice right now seems to be to buy a burner phone on a false name[1] and never share the phone number with anyone, which is just crazy.
[1] Fraudsters are able to convince phone employees that they forgot their phone number and left their phone at home.
Easier: register with an SMS-enabled VoIP provider, like Twilio or voip.ms, and use said virtual SMS number as your 2FA. Rather hard to steal.
You can then set the service up to forward received SMS messages to your regular SMS number—but, if your phone is compromised/stolen, you can go back to the account and immediately turn off this forwarding.
---
Sadly, this approach reduces the security back to single-factor, since you get into the SMS account with something you know, rather than something you have.
You can at least treat the VoIP account like a password-manager: give it a long, unforgiving "master password." (Or a random password that's stored in a password-manager, if you use one.)
There's a guy who runs an SMS-to-web gateway with real telephony hardware. You pay by bitcoin and it's accessible over Tor. You pay, he pops in a topped-up PAYG SIM. You have a real mobile telephone number.
You can run your own private one at home with a Raspberry Pi/ttl serial adapter and a cheap SIM module. Or use a dongle.
AFAIK you can get a number from a different carrier and then transfer it to the VoIP carrier. When queried, it will still show up as "owned" by the original carrier.
It depends on how they classify; numbers are issued in blocks of 1000, and the allocations are public knowledge, but you can also do a real time carrier lookup, which changes after porting completes.
> Sadly, this approach reduces the security back to single-factor, since you get into the SMS account with something you know, rather than something you have.
I don't understand why Google Authenticator is supposed to be so conceptually different from a password. The codes it provides are a deterministic (and public!) function of a code shared between you and the party you're authenticating to. Or in other words, it's exactly equivalent to a password.
When I set up 2FA on github, they gave me a code to put into Google Authenticator, and also a bunch of "one-time reset codes" in case I lost my phone. Why wouldn't I just record the original code in whatever place I'm supposed to store the one-time reset codes?
1. Not shared across sites
2. Guaranteed minimum length
3. The portion going over the wire cannot be used more than once or after a few seconds
4. The phone app doesn't expose the seed or allow you to copy it off the device. Running on iOS has strong guarantees against access by other apps.
That's not perfect and U2F is a huge improvement but it's safer than SMS and given the number of google users who get attacked daily a reasonable incremental improvement.
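Point 3 above is enforced server-side: the verifier only accepts codes within a narrow time window and burns each accepted code so it cannot be replayed. A minimal sketch of such a verifier (the window size, in-memory replay store, and account handling are illustrative assumptions):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 6238 TOTP code for one 30-second time step (HMAC-SHA-1)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

used = set()  # (account, time step) pairs already accepted

def verify(account: str, secret: bytes, code: str, now=None, window: int = 1) -> bool:
    """Accept a code only within +/- `window` steps, and only once."""
    now = int(time.time()) if now is None else now
    step = now // 30
    for candidate in range(step - window, step + window + 1):
        if (account, candidate) in used:          # reject replays outright
            continue
        if hmac.compare_digest(totp(secret, candidate), code):
            used.add((account, candidate))        # burn the code once accepted
            return True
    return False
```

The small window tolerates clock skew, while the replay store is what makes an intercepted code worthless seconds later.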
Only your third point is actually a conceptual difference from passwords. In terms of the common idea that 2FA should be "something you have" plus "something you know", a TOTP seed is "something you know", not "something you have", but everyone talks as if the opposite were true.
1. Passwords are selected by the user; you cannot prevent reuse. While it's technically possible that a site could allow you to set the TOTP seed, nobody does.
2. Password entropy is notoriously hard to calculate programmatically – e.g. well known movie passwords or leet-speak are often judged as stronger than shorter true-random sequences – but you generate the TOTP seed.
4. As with #1, preventing reuse is hard. Knowing that all extant implementations offer fairly strong protections against accidents is a key difference.
Your final point depends on how you judge the whole system: "something you know" can be told to someone else, which isn't true of TOTP. It's also always valid and reusable, whereas the one-time code is closer to an on-demand verification. From the perspective of security boundaries, a desktop user accessing a separate token, phone, or arguably even an iOS user using a separate app is closer to something you have than something you know.
TOTP isn't the strongest form of MFA but it's closer to that end of the spectrum and, especially in consumer contexts, a huge improvement over nothing. In the real world, those kinds of security improvements are still meaningful even if they're not perfect implementations of a textbook concept.
> “something you know" can be told to someone else, which isn't true of TOTP. It's also always valid and reusable whereas the one-time code is closer to an on-demand verification
This is not correct. When I set up TOTP 2FA, the site generates a seed and tells it to me. It is always valid and always reusable, and it's no more difficult for me to communicate it to anyone else than it was for the site to communicate it to me.
Well, technically, everything that you have (and can provide as "proof") is susceptible to being interpreted as "something you know". If I know enough about that thing that you have, I can probably manufacture a good-enough replica that I can provide as "proof of ownership".
Think of the physical one-time tokens: if I know the data stored on them, I can replicate them too. So, are they "something you have", or "something you know"? The reason they're "something you have" is that it's easier to have physical ownership of the device than to know the "secret" of the device. But the same is true of Google Authenticator! In that respect, Google Authenticator can, indeed, be interpreted as "something you have".
Again, think of what that means in normal usage: it's technically possible but there's no UI to do it and in cases like iOS the system actively blocks it.
If you're building a consumer service, that's a big improvement over someone picking their dog's name and saving it in a Word doc on their desktop.
I don't know how google 2FA is implemented, but I'd just assumed the initial code is akin to something like an initialisation vector / nonce for a counter or CBC mode block cipher. With the counter being 'time', meaning the scheme works something like:
That's almost certainly an over-simplification / wrong; it doesn't seem very secure to just use the previous output as the IV for the next round. Maybe it chains all previous outputs together somehow, IDK. I should probably research this, given how much I rely on 2FA for security.
With Google Authenticator you would have to use your one-time codes to reset it if your phone is lost. With 1password the 2FA is no longer linked to the phone, this way you don't have to reset.
Also worth noting: Google Authenticator is just TOTP[1] (X=30, Digit=6, HMAC-SHA-1 IIRC). You can just copy the URL from the QR code and use it from whatever you like, as long as your clock is somewhat synced.
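Since it's plain RFC 6238 TOTP, a few lines of Python reproduce the same codes from the base32 secret embedded in that otpauth URL (the `now` parameter is added here for testability; omit it to use the current time):

```python
import base64
import hashlib
import hmac
import struct
import time

def code_from_secret(b32_secret: str, now=None, period: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code from a base32-encoded seed
    (RFC 6238 defaults: 30-second steps, 6 digits, HMAC-SHA-1)."""
    key = base64.b32decode(b32_secret.upper() + "=" * (-len(b32_secret) % 8))
    timestamp = int(time.time()) if now is None else now
    counter = struct.pack(">Q", timestamp // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Run against the RFC 6238 test secret, this produces the spec's published values, which is a handy sanity check that your clock and decoding are right.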
Typically, you are asked to print out backup codes when you enable 2FA. In addition, you could copy these backups codes into your password manager and sync your password file with multiple devices, so whenever one device is gone you can access all your passwords, including 2FA backup codes from other devices.
If these backup codes are in your password manager, you're effectively* back to 1FA. Same with putting the TOTP seed into it.
* Yes, technically you need the 1Password file as well. But someone who compromises your machine will get access to both. Or if you have it synced to your phone and both are unlocked with a fingerprint, all someone needs is your device and a little effort to fool the fingerprint sensor.
Not that I recommend SMS, but if thieves steal my phone I turn off its Google Voice connection and that particular phone stops getting texts for that number, which I can still receive just fine in Google Voice. More to the point, I do tend to feel like N-factor auth is often a sort of Matryoshka doll thing on one end or another more than anything.
The big problem that SMS solves that nothing else does is that you can be completely irresponsible/unlucky and it still works. You can lose your U2F key, lose any one-time backup codes on paper, and so long as you can convince your phone company that you are you, you're fine.
I don't think most people are responsible enough to deal with more secure MFA. Most people don't know how to keep custody of stuff like that.
> so long as you can convince your phone company that you are you you're fine.
Well I think the problem starts when someone else convinces your phone company that they are you. As we've seen several times now it becomes easier and easier to pull this trick.
As for U2F add more than one to your account (2 is minimum) and you are safe. The same applies to any kind of physical key (home, car, etc.)
I think there are plenty more people who would lose all their u2f keys than be attacked via the phone company. Sure, that's easy for you to manage, but not for most people.
I'd rather have to convince my bank I'm me, as they generally have much higher security standards than the phone companies, so there's less chance of someone else convincing them they're me.
I think a lot of this is close enough to the way mechanical keys work that it's not a huge leap. You can have more than one house key, you change the lock if a key gets stolen, etc. It's a little different, but not really more complicated.
One good thing is that we are moving towards where most devices will have USB type c as a port, so that mobile devices, laptops, and desktops can all share keys.
Well, people do have 10 fingerprints, but the bigger concern is how easy it is to find them and copy them. They're not secure. You'd actually be much more secure using toe prints as auth tokens, as gross and impractical as that sounds. Most biometrics are lame because they're so readily available. However, they are ridiculously useful because of how fast and conveniently they can authenticate you.
Honestly, this sounds like a much better technology: http://www.dailymail.co.uk/sciencetech/article-3220886/Forge... It transmits data through the human body. You could turn it on or off selectively, authenticate with anything you touch, and it would remain cryptographically secure.
Fingerprints are on file in police records. Whenever you get an US visa, a passport with biometrics or stuff gets stolen from the office and the police is called to investigate, you get fingerprinted.
I have high hopes for the W3C WebAuthn spec [1], it combines the security of FIDO U2F keys with the ease and ubiquity of devices with fingerprint scanners. Would be cool if Apple will support it too.
Security token devices for online banking are quite old now and came long before the trend of 2FA via SMS, and they did not cost $60. I have owned several by now, and the first one I owned came as part of a free student package from the bank.
Yup. I had one from Symantec and was never charged for it. If I lost it, it would cost something like $10. It was also quite small, close to a U2F key/USB flash memory stick, but it did not have a keyboard to type a PIN on. Most do.
This is how it works often in Europe, a paper slip with a bunch of one-time tokens. I don't see how any technical solution would be better in security or usability.
And now I'm trapped in this multi-app universe where every entity uses a different app. My employer uses Symantec, my school uses Duo, my bank has its own app (as does Steam), and a handful of sites use TOTP.
Duo and other similar tools, where you accept on the phone instead of getting a code and having to type it in, can be more convenient than typing; and provided you get the push, you can take care of it from the notification without having to find the app, so it's not terrible even if you need seven different apps.
I found that my Blizzard account 2fa app had been forgotten when I moved from my old phone, so I was locked out of an account I don't use very regularly. But they had an option to just text me a code, which worked and I got access to my account. I didn't know if I should be relieved because it worked or disappointed that their secure 2fa was so easily side stepped.
It would be secure if mobile companies gave us APIs to pull information about a number, like whether it has been ported recently or is being forwarded. Banks would even pay for it.
It's a clear text transmission on the signaling channel. Unless you encrypt the message from originator to recipient, having info on number ports or forwards won't change the risk or susceptibility. It's nothing new either; how do you think the NSA captures text messages around the globe?
That's not the scenario. If I want into your account and I only have the password I don't wait for you to try to log in, I just try to log in and sniff the message to "prove" that I have the phone.
I thought the whole point of multi-factor authentication is to reduce the risk of breach by having different unrelated means of authentication, each of which is assumed not fully secure.
Wouldn't acquiring the 1st factor access (say, normal user credentials) be subject to similar social engineering attacks ?
Not just the phone company. Naive (ie old) people have been robbed by receiving a text saying "This is the bank, what's your password?" "Thanks. Now, what's the code we just sent you?"
Banks here in the UK use your chip & pin based card as a second factor (or rather, as the two factors - the chip you have, the pin you know) - they give you a little card reader that can use the card and pin to provide a 2FA token for logging in or sign requests to send money.
It's a much better system. Of course, some banks don't use it to its full potential - many use it only for signing money transfers - but it's still pretty good. The readers are also cheap and standardised, so you can use any one of them for any account, which is useful.
In the US they have finally started rolling out chip-based cards. However, there's no PIN needed if you run the card as credit, defeating much of the security.
I just moved to the States from Canada. It surprised me how far behind payment technologies are here. Swiping is at least as common as the chip readers. I have been to two places that accepted tap, and it blew the employee's mind both times that I had a card capable of doing that.
I find this an especially entertaining juxtaposition with the transit systems. In Canada, you swipe on the bus and tap in the stores; in the US, swipe in store, tap on the bus.
As with all strange things, they can usually be explained by incentives.
Banks get paid far lower interchange rates for chip & PIN than they do for signature. Thus the system we have. So some banks can skim an extra 0.5-2% off the top of every single transaction in the US.
> Banks get paid far less interchange rates for Chip & PIN than they do signature.
This only applies if the PIN is being used for debit. I have two credit cards[0] that are issued by US-based financial institutions and, when the chip reader is equipped to do so, result in a prompt for a PIN when used. The interchange rate is as a normal credit card.
The bigger issue, as has been explained to me by two people I know who work in payment tech, is that credit card issuers in the States are utterly panicked by the idea of doing anything that might result in even a small fraction of a percent of their cardholders switching to another card. PINs are seen as introducing friction, or a previously unknown step, into the payment process (since US customers have been conditioned for 20 years to "enter a PIN for debit" and "sign for credit"). To now change to PIN-primary cards would result in confused customers, higher support costs, and plenty of customers switching either to another credit card or to (the horrors) another payment method entirely, like cash. So the theory goes, anyway. Therefore, US banks are loath to introduce cards that can be verified by PIN at all, much less ones that are primarily verified by PIN.
0 - They are not debit cards with VISA or MasterCard logos; they are actual credit cards accessing a line of credit. One comes from the State Department Federal Credit Union and the other from First Tech Federal Credit Union.
Interesting, I guess I always thought this applied to credit as well, just not as drastically. I stand corrected.
> Increased revenue: In the US, the difference in interchange rates between PIN-based debit and signature-based debit transactions can be substantial for larger dollar payments. Although Regulation II (a.k.a. the Durbin amendment) caused the card networks to reduce debit interchange rates for the largest card issuers to just 0.5% plus $0.22, this still results in interchange of about $0.72 on a $100 signature debit purchase. The same PIN-debit purchase might generate about $0.25, and possibly less in an area where there's substantial competition between PIN-debit networks. To be fair, this isn't true for credit card transactions (since there is effectively no such thing as a PIN-based credit card transaction in the US today and therefore no distinct interchange rates for PIN-based payments), but it does explain why there might be a general preference not to proliferate the use of PINs as the US card population and terminal base is converted to support EMV.[0]
In many ways, this is better, since a skimmer can't grab my PIN, and credit cards have rules requiring fraudulent purchases to be refunded immediately, while debit card charges are often much more difficult to reverse.
Skimming fraud is refunded in the Netherlands without any problem. Skimmers usually use the combination of the magnetic stripe and your PIN. Restricting bank cards to EU-only use reduced this fraud to around 15% of the original amount. See https://www.rtlnieuws.nl/nieuws/binnenland/skimmen-met-80-pr.... This was implemented around 2012.
In the Netherlands, magnetic swipe has basically been phased out. You are now supposed to use the chip (and obviously still PIN). I think this happened around 2014, basically, the 'swipe' part of almost all readers is now blocked by a sticker that instructs you to use the chip. I'm not sure if the sticker can be moved in case you really want to swipe.
Tap-payment is also being rolled out quite rapidly. It's about 1.5 years old and I'd say I now tap for 70% of all transactions.
Not from the issuer's perspective. They're mostly interested in combating forged cards, not stolen ones, because the former accounts for the majority of their losses.
A quick skim still makes this look much more secure than any alternative I've seen. Sure, it's not perfect, but a lot of those flaws are due to particular implementations, and most apply equally (or worse) to other systems.
My bank in Germany offers a similar system - of course, there are limits to the security as well but there is no public infrastructure, radio transmission or an OS involved.
Edit: I also don't need to make transactions often enough for it to be cumbersome - if I was travelling a lot for business and needed to make a lot of transactions, I'd worry about losing the device and being unable to do anything without it.
They aren't devices that connect to your computer. They are standalone things (a little like a calculator) with a screen that shows you a token to type in, like a 2FA app. Works with any web browser supporting a form with a text input.
They work perfectly fine. From the browser's POV, all that's going on is an input field. Since you manually type in the number from the device, there's no need for OS support either.
The Swedish one used to have Linux support, but apparently so few people were using it that they couldn't justify the cost, so they stopped supporting it a few upgrades ago.
They also seem to be trying in general to move everybody off the USB reader and smart card solution to a smartphone app based solution, so that the total number of people using the USB reader approach is down substantially.
I don't know about that particular implementation, but I can use my citizen card (also a smartcard) to login to government sites on Linux. The site uses a Java applet, which connects to libpcsclite to use the reader.
Sure, and a much more inconvenient one, because you have to carry this device with you everywhere. An even better system would be a living being at each ATM checking your credentials.
Except it's not, because they are very small, cheap devices that everyone has - generally a couple of them. I have one at home, one at work, one in my bag, and everyone I know has one I could borrow if I needed to.
Essentially all security is a trade-off against convenience. This is, in my eyes, a no-brainer: it's barely any more effort and much, much more secure.
What? I can't think of a single person who uses this, and I have lived here ten years. I once got one for a corporate account and it was atrocious, with required plug-ins for IE.
You must be thinking of something else. This is just a card reader that you type a number into that your bank tells you to type, and that in combination with typing your PIN in (and having your card inserted) spits out a number on a little screen, which you type back into your bank's website.
Takes about 1 minute to do, and you don't need it for every action. E.g. you use it for adding a new account payee and for making a first (or very large) payment to a payee.
You are clearly thinking of something else. These things are ubiquitous in the UK now. All the UK banks provide them freely with accounts and it's all easy to use and standard, no plug-ins or craziness, just an HTML form.
You don't need it everywhere. One card reader at home plus one at your workplace and you've covered most use cases. When you travel for long period of times it's not hard to carry one in your luggage.
It's the old military-grade 4 factor authentication: Something you know, something you have, something you are, and someone who shoots you if you try anything funny.
> Is the bank taking responsibility and covering the loss for their customers?
I think they should be required to. Although, isn't it possible that the sum of withdrawn amounts could be so large that the bank simply can't cover the loss, thus making it insolvent?
The problem with SS7 is that trust is assumed. Mobile carriers that have roaming agreements will have either a direct link or one via a hub. So what happened here is that the network of a foreign roaming partner was used to redirect the SMS traffic on the victims' carrier. I would not be surprised if it was an inside job.
With SS7 you can do fun things like query the last location update/logged-in base station for a mobile phone: due to roaming, carrier X can query for customers of carrier Y in another country. If you link up to one of the roaming hubs, you can pretty much get the location of anyone with a mobile phone. Feature phones included.
The roaming hole was known to the Russian blackhat scene, and was actively exploited for ages. I'd say the first number-hijacking services were offered around 2001 to 2005, when Russia had a paid-SMS content craze. Back then, the most apparent use was to steal somebody's number and then send paid SMS to your own number.
The fact that big IMEI databases gathered with Android scamware are now being sold around "social marketing" community only makes things worse.
The best known attack with that was an attempted hacking of British MPs personal emails a few years ago. Russian phone operator Megafon was complicit, yet they never even got a slap on the wrist.
Each and every email or blog site that has password reset through SMS (including WhatsApp and Gmail) is open to that attack.
The headline makes it sound as if abusing SS7 was all they needed to do, but in fact they had to have the other factor as well, so it really is not quite as scary as it at first appears. It also seems from the article that the thieves were able to log in to the accounts with just a password and only needed the SMS to sign transactions.
It's different here in Norway; the banks require two factor authentication to log in as well as signing transactions.
I don't claim it's perfect but at least no one can log in unless they control both factors.
> It also seems from the article that the thieves were able to log in to the accounts with just a password and only needed the SMS to sign transactions.
This "feature" was enabled by default for my Danish online bank as well. I've since disabled it.
I'm still a little irked that Google constantly reminds me to add a phone number as a backup for my email account. I already have google push login, OTP, as well as backup codes.
This proves that the phone can be more of a liability in the face of much better technology.
In case anyone doesn't know it, Amazon doesn't always have legit products. Sometimes sellers provide clones that look real enough to fool Amazon, if Amazon is even checking.
And clearly you don't want a fake security device.
Can you give flavor on this:
"Now we need to remove our phone number as backup method. (If you're curious why it's important to not have a phone number on your account, see the (https://techsolidarity.org/resources/security_key_faq.htm)[s... key FAQ].) "
Unfortunately, the security key FAQ contains no info on why you shouldn't have your phone number. I assume it's because phone-number migrations can be used to take control of your accounts?
EDIT: Yeah. Perhaps reading the article instead of just coming to the comments might have been useful.
It's pretty obvious in the context of this story. You can recover your Google password if you have access to SMS on that number and can answer the "security" question.
So long as you never switch or factory reset phones, because Google Authenticator, by design, never reveals the private keys. (I've locked myself out of accounts because I broke my phone and had to get a new one.)
Also, do you really trust your Android phone with your TOTP private key? How do you know there isn't malware running on it as root?
One advantage that software TOTP provides over hardware TOTP, is that you can print the QRCode w/ the private key, and lock it in a safe.
This allows you a trivial means of cloning and backing up the token. Unfortunately, this is against best practices, since you have multiple copies of the private key lying around; but this has been an acceptable trade-off for me thus far.
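That printable backup is just the provisioning URI the QR code encodes. A minimal sketch of building one (account and issuer names here are placeholders), assuming the de facto `otpauth://` Key Uri Format that Google Authenticator and similar apps scan:

```python
import base64
from urllib.parse import quote

def otpauth_uri(secret: bytes, account: str, issuer: str) -> str:
    # Authenticator apps read this URI out of the QR code; printing the URI
    # (or the QR image it encodes) and locking it in a safe is the backup.
    b32 = base64.b32encode(secret).decode().rstrip("=")
    return (f"otpauth://totp/{quote(issuer)}:{quote(account)}"
            f"?secret={b32}&issuer={quote(issuer)}")
```

Anyone who re-scans that QR code gets a working clone of the token, which is exactly the trade-off described above.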
Yes, not to mention that you can enroll more than one TOTP authenticator. There are legit complaints about Google Authenticator (foremost among them that it's a nightmare when you have more than 4-5 accounts), but "no backups" isn't one of them. Don't back up TOTP secrets.
You can authenticate more than one device, at least with Google accounts in the past. Meanwhile, I use Titanium Backup to back up Authenticator and restore it on a different device.
That does not provide an equivalent level of protection. If I can get you to enter your Google credentials into a site I control with a TOTP token, I own your email. That is not true for a security key.
The guide suggests adding Google Authenticator as a backup. Doesn't that mean this isn't any more secure than just using Authenticator? Why can't a crook say they don't have the key, and proceed to hack Authenticator?
Because that would require them to have physical access to your unlocked mobile device. It's equivalent to saying "why can't a crook just steal your security key".
The threat model this setup is protecting against is phishing. For that purpose, a security key is much better than TOTP (authenticator app).
That doesn't make any sense to me. If they require physical access to my unlocked phone, then isn't the phone just as secure as the yubikey? If so, why bother with the yubikey?
If the phone has other attack vectors, such as compromising the OS, and is indeed less secure than the yubikey, then doesn't having it as a backup just lower the bar for security to the phone? As far as I can tell, there's nothing stopping from someone just ignoring the yubikey if authenticator is also an option.
U2F security keys are a mutual authentication mechanism. The key authenticates the site as the site authenticates the key. Phone TOTP applications can't do that.
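To make the phishing resistance concrete, here is a toy sketch of origin binding. Everything in it is hypothetical and simplified: real U2F uses per-origin ECDSA key pairs and a browser-built client data structure, not HMAC. The point it illustrates is that the signature covers the page origin, which the browser (not the page) supplies, so an assertion produced on a lookalike domain never verifies at the real site:

```python
import hashlib, hmac, json

DEVICE_KEY = b"per-device-secret"  # stand-in for the key's credential

def sign_assertion(challenge: str, origin: str) -> bytes:
    # The browser supplies the origin, so a phishing page cannot lie about
    # it; the signature binds the challenge and the origin together.
    client_data = json.dumps({"challenge": challenge, "origin": origin},
                             sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, client_data, hashlib.sha256).digest()

def server_verify(challenge: str, signature: bytes) -> bool:
    # The real server only accepts assertions bound to its own origin.
    expected = sign_assertion(challenge, "https://accounts.example.com")
    return hmac.compare_digest(expected, signature)
```

A TOTP code, by contrast, is just six digits the user can be tricked into typing on any page.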
That still doesn't answer my question. If you read the guide that I am asking about, it advocates using security keys and also setting up phone TOTP as a backup.
But I also don't need to know too badly, so I think I'll just move on.
The subtlety is in how phishing attacks work. If you initiate a login yourself, unbidden by any outside request, the likelihood of you being phished in that scenario is epsilon. In that situation, the TOTP authenticator on your phone is fine. However, if you're logging in as a middle step in a series of steps to get something done (say, answering a request you received via Slack or an email), the likelihood that you could be being phished grows. In those scenarios, the mutual authentication done by the security key helps protect you.
The security key is slightly easier to use than the TOTP authenticator and it's what you'll tend to use most of the time. But if you happen to forget it at home or you're logging in to check your mail and your security key is halfway across the house but you have your phone handy or something like that, the TOTP backup option is convenient. You also need some kind of backup in case you lose the security key.
I agree with Maciej here: the stock message board "I'm too smart to leave any advice unchallenged" attitudes on these threads are doing a lot of people who face serious risks a lot of harm.
If you know what you're talking about and have a serious concern about this kind of advice, by all means present your argument. But if you don't, find another way to learn.
You make this point frequently, but it really seems out of place here on HN where you have near 100% technically competent users who aren't going to get phished, at least not in any way that a security key is going to protect against. (Thinking of the recent Google Docs incident.)
Security keys are great for journalists, activists, and high profile business people, but for your average geek it's an unnecessary amount of trouble, IMO. TOTP gets the job done.
The whole reason U2F exists is because technically savvy users were getting phished. Sophisticated phishing campaigns are basically indistinguishable from legitimate pages; targeted phishing campaigns will take advantage of the normal rhythms of your work and the identity of your coworkers. The fact that you're certain you're too competent to be phished probably makes you more vulnerable, not less.
+1. You find horror stories even on HN in the past how reckless Namecheap is.
I personally had my domains put on hold, with no traffic routed to my servers, when my ex-girlfriend chatted with them, gave them my username (no password), and claimed the account was hers because she obviously knew my full name and the address where I live. While they didn't give her access to my account, they kept my domain frozen for about five days until everything got sorted out.
Holy crap! I'll keep that in mind next time my registrations are up. This combined with their unwillingness to make proper 2FA a priority (a tweet told me they were 'setting up the infrastructure' 3 months ago) is a strong signal to look elsewhere.
Yep. Everyone has been saying SMS is not a secure channel for forever now, and this is only one of many possible attacks that can be used to trivially bypass SMS based auth. It's sad but true that in general banks have some of the weakest security on the internet, most online games do a better job protecting user accounts from unauthorized access.
The banks didn't get the memo from NIST and if anything they are getting worse - they are actually ramping up their use of callback and SMS authentication. Last month I had several apps force an SMS authentication because I hadn't logged in since paying bills the prior month. One app disabled Touch ID and forced an SMS auth before I could log in again. One bank locked me out of my account entirely because you can't log in or contact customer service without receiving an SMS code (no voice option), but their system refuses to send SMS to my number.
It would be great if the banks supported TOTP and U2F keys (or if they managed passwords correctly and didn't limit the length or force absurd character recipes).
I once applied for a job at a top 3 bank's IT security area and on the way to the interview room I noticed that every single desk had a well worn copy of Computer Security For Dummies. I think that may be the root cause of all banking security problems right there.
> One bank locked me out of my account entirely because you can't log in or contact customer service without receiving an SMS code (no voice option), but their system refuses to send SMS to my number.
I have this problem with one of my credit card issuers (not the two I mentioned elsewhere in this thread) and they're going to lose me as a customer as a result. I can't log into any online banking without receiving a phone call or SMS and I'm prompted to enter a number at which I can receive such a thing. The problem is, I am entering the number that I know the bank has but entering that number--or any other number I own--is met with "hmm, it doesn't look like that number belongs to you." Of course it doesn't considering my mobile phone service is paid for through my LLC and this is a personal credit card account.
When I tried calling them, I'm told that, again, I need to verify myself with an SMS and, no, the number I have used for a decade is not sufficient.
At least they've now returned to mailing me paper statements. Once I've verified that my last automatic payment has moved away from them as of next month, the card gets canceled. If I can't cancel by phone, I'll cancel by mail. If I can't cancel by mail, I'll just leave it in a drawer and watch my mail for any new statements until the card is canceled for inactivity.
It's sad but I have to agree. My local bank suddenly changed their Mastercard Securecode online verification scheme from a password to either SMS 2FA (for which they charge 9 cents per SMS and don't even support all numbers) or some really shitty mobile app which has a rating of 1.7 on the play store with tons and tons of people complaining that it just doesn't work and now renders their CC totally useless. I'm sure they are losing customers left and right. Mobile phones (especially Android) are incredibly insecure. Why not use these little token generators like in the past?
Another bank of mine can't issue proper bank reference letters anymore which are required in many cases to open other accounts or form a company. The same bank also stopped the Visa support of their debit cards so they are practically useless apart from using at the ATM.
Another bank with a business account can't issue credit cards anymore. For many transfers they require tons of verification and paperwork, and opening a new account gets harder and harder. I have to fill out a stupid W-8BEN form even though I have nothing to do with the US. It goes on and on.
It seems in the past 5 or so years banks in general have gone into a slow but steady self-destruct mode - especially with all that speculation in the debt casino. Banking is becoming a more and more frustrating experience even though it's so core to our society.
All of my banks are still Wish-It-Were-2FA and doing the silly "Security Question" bonus passwords dance. If someone were to point out to me an American bank that was doing the right thing technically, I'd probably switch in an instant, but at this point I've interacted with all of the major US banks and they all seem to be security idiots.
True but SMS was the only available 2fa for a long time. In fact, it's still largely the only available 2fa for most things (sadly). As bad as it is, it's better than just a straight password.
Blizzard Entertainment Group had better security for imaginary currency for five years than most financial organizations have today. They were handing out key fobs at conventions.
It's not that they aren't available. It's that only a couple places (like online brokerages) even bother.
The Sueddeutsche article claims that German customers were affected too. Most German banks I know of support TAN generators[1], which have no known practical remote attacks. Insert your card, scan the barcode on your screen, confirm the target IBAN and amount, and you get a unique TAN that is calculated from your transaction parameters.
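What defeats the SS7 attack here is that the TAN is a function of the transaction itself, not just of time. A rough sketch of the idea (the actual chipTAN derivation is card- and bank-specific; HMAC and the field layout below are assumptions for illustration):

```python
import hmac, hashlib

def transaction_tan(card_key: bytes, iban: str, amount_cents: int,
                    counter: int, digits: int = 6) -> str:
    # The TAN commits to the destination IBAN and amount, so an attacker
    # who intercepts it and alters the transfer invalidates the code.
    msg = f"{iban}|{amount_cents}|{counter}".encode()
    mac = hmac.new(card_key, msg, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)
```

An SMS code, by contrast, is valid for whatever transaction the session happens to contain, which is exactly what the thieves in the article exploited.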
My bank's 2FA literally comes on a piece of paper. A set of numbered codes, and the banking app/site tells me which code to use for any given transfer.
Yea, I have the same from two different banks. For me this seems to be the obvious 2FA solution, but comments here are mostly about elaborate technical approaches. Is there something I'm missing?
I was really surprised, and the first time I came across this. I gave it some thought, and concluded I am more comfortable with this solution over anything technical.
As long as that means your funds are insured and will be replaced after they're stolen via SMS phreaking, I suppose that's not the worst answer they could have given you. Though I wonder how long it would take to get the replacement funds...
I wonder how you would prove that a given transaction was fraudulent, and how much of a hassle that would be? And of course, that's assuming someone notices the transaction. If it was small enough, it could probably go unnoticed.
In August, Lieu called on the FCC to fix the SS7 flaws that make such attacks possible. It could take years to fully secure the system given the size of the global network and the number of telecoms that use it.
One of the newly discovered great sins of the early 21st century, is to disseminate insecure code. Before the public became widely aware of chemical pollution, I'm sure many polluters thought themselves innocent and environmentalists as pernicious busybodies.
Another feather in the cap for a dedicated 2FA solution such as Google Authenticator etc. that doesn't use SMS?
Though having replaced two phones since using that solution - it can be a pain to have to re-set it up with each provider every time. I can see that if someone is prone to losing their phone, it will become a major issue.
I think the problem is that all the companies whom I use 2FA for have totally different methodologies for re-setting it up on a new device. Whilst some have an automated way of verifying my identity and resetting the new device almost instantly, I have had a couple that needed talking to a human support rep (inconvenient, but understandable) and one company that needed another employee in the company to do a full 2FA verification themselves, and then talk to a company support rep on my behalf to verify my request to reset my 2FA settings! (WTF).
Thus, each time I replace my phone, I find myself actually culling the number of services where I use 2FA purely because it was too much of a pain to go through the reset process, and it was actually easier to drop 2FA with them altogether (or in one case actually drop the service altogether).
There are some more friendly options than Google Authenticator if it's a hassle to keep setting up over and over.
I keep my 2FA in 1Password, and it works pretty well. I can access the codes from the desktop app without getting out my phone, or from my phone or tablet if I'm mobile.
The only scary issue is that all that information is encrypted in Dropbox. And I use 2FA on Dropbox! Hello cyclic dependencies! As a result, Dropbox is the only 2FA that I don't store in 1Password.
You can announce "this IMSI roams through me" to the original carrier. Some operators have safeguards to prevent a locally active IMSI from being roamed; sometimes it is possible to have traffic routed both ways if they don't.
Edit: I had an idea for an improved sms 2fa, but comments gave persuasive reasons why google authenticator was better. Thanks for the comments!
The idea is basically a 3FA system where the bank sends you a one-time 6-digit number. You then translate that number using a user-seeded cryptographic hash function. This secret function is your third factor: it turns the received SMS code into the value you input at login.
Analysis: Security would increase; but ease-of-use would decrease, especially in regards to how a user would reset their password if they lose both their password and their program that calculates the cryptographic hash.
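For concreteness, the proposed translation step might look something like this (everything here is hypothetical; HMAC stands in for the "user-seeded" function, which the user would run in a local program rather than in their head):

```python
import hmac, hashlib

def translate(sms_code: str, user_secret: bytes, digits: int = 6) -> str:
    # The user receives sms_code over SMS, runs it through their secret
    # keyed function, and submits the translated value at login instead.
    mac = hmac.new(user_secret, sms_code.encode(), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)
```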
2FA is already a hassle for users. Now you want to make them do math too? This is not a solution. Just don't use SMS at all. Google Authenticator is a better solution than yours.
You make a good point about ease-of-use. I agree a phone app is much easier to use with a smartphone. However, people with flip phones couldn't install such an app. You might then argue the demographic with flip phones would either use an RSA device or not have 2FA enabled at all - which seems like a valid point.
Security-wise, having a secret user math function seems more secure than the Google app. I can give reasons why if needed.
Seems vulnerable to phishing. The attacker already uses phishing to get account number, password and phone number; now they just have to send a fake 2Factor message and observe how the number is translated.
Even if the function is lossy, it has very little entropy. Maybe even vulnerable to brute forcing...
I agree with the other poster, Google Authenticator looks like a better solution.
One, you'd need to use an app and something actually secure to combine the password (that's what you're proposing, a second password that mutates the token) and the 2FA token -- if the password was a simple algorithm like you're suggesting, attackers could guess it a good proportion of the time. This is a good example of why you (or I) shouldn't try and invent security measures; leave it to professionals.
Second, the regular passwords had already been compromised on these accounts. Presumably, at the time they phished the regular password, they could have phished the special 2FA password as well. It also means that 2FA could no longer be used as a password reset mechanism, because you need to have another password to use it. You've essentially made it 3FA.
Phreaking in 2017, interesting. The golden age of phreaking ended with SS7. SS5 was very insecure, people could just emit tones in certain frequencies and pull off tricks like calling for free. Maybe this is the beginning of a new era.
I think major websites should stop using SMS and ask for just an authenticator app or secure keys. SMS should be regarded as a bad security practice.
This is really scary... can banks please start using something like Google Authenticator? I was assuming that 2FA over SMS was the most secure thing ever... apparently that's not the case.
Nah, it's not the case [0] and hasn't really ever been. It's only now, due to its ubiquity with banking and other high-value sites that the incentives are there to abuse it.
As an aside, as someone who spends time between multiple different countries, SMS 2fa is a real pain to deal with.
This sounds a lot like attacks on the CAN buses within car systems. We can no longer afford to have zero-authentication, zero-authorization networks anywhere.
Well, the article explained that in this instance, it was via a foreign run telco.
So, essentially, to get in you just need to find the weakest telco connected to the main SS7 network and own them, and use their infra as a staging point.
I would much prefer something with end to end crypto like Signal. Of course, that creates problems with key rotation, but perhaps that could trigger additional validation of some sort.
There already exists a much better solution for 2FA: the OATH family's TOTP and HOTP. A local device hashes a shared secret with a counter or the current time, so no code ever has to travel over an out-of-band channel like SMS. These are already popularly implemented in Google Authenticator and Authy.
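Both algorithms are small enough to sketch in full. A minimal implementation following RFC 4226 (HOTP) and RFC 6238 (TOTP), with SHA-1, 6 digits, and a 30-second period matching Google Authenticator's defaults:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic
    # truncation to a 31-bit integer, reduced to the requested digit count.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP with the counter derived from the current time, so
    # client and server stay in sync without exchanging anything.
    return hotp(secret, int(time.time()) // period, digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields `"755224"`, the first published test vector.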
The article links to the Wikipedia page, which is a good starting point.
> Signalling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975, which is used to set up and tear down most of the world's public switched telephone network (PSTN) telephone calls. It also performs number translation, local number portability, prepaid billing, Short Message Service (SMS), and other mass market services.