Now, Apple users can use their fingerprint as a 2nd factor (e.g. for Apple Pay), but fingerprints have the unfortunate property of not being rotatable if compromised.
And there are FIDO U2F security keys, but you still need to issue $18-$50 tokens to each user, and you need host application support.
A coworker has a wonderful term for this. He calls fingertips "amputationware". It really drives the point home.
But there is another very good reason to avoid fingerprint auth methods. The scanners are by design doing some level of fuzzy matching, so if/when someone finds a way to generate an input that reproduces the signal pattern from the reader well enough, they can be fooled. (Yep, done already.)
My personal take on fingerprint authentication is that they are not passwords. They are usernames.
For most people, having their unlocked phone snatched out of their hands on the street is a far more credible threat, and yet somehow we don't have everyone saying you shouldn't unlock your phone outside.
Obviously the non-rotatable issue remains (although it's less liable to the amputation-style attack).
"If you want to transfer $1000 to account 1243567890, confirm with code ABCDEF".
Assuming an attentive user, this protects against malware silently changing amount and recipient on a transaction the user is making - an attack the banks were struggling with. It also makes phishing much harder, since the attacker needs to convince the user to enter the code after reading this message.
Code-based 2FA is vulnerable to phishing where the attacker relays the phished credentials in real time. German banks were doing 2FA long ago, in the form of sheets of paper with one-time passwords (called TAN). Attackers were phishing those, so banks switched to "indexed" (iTAN) OTPs: The bank tells you which OTP to use, so someone who has just phished 1 or 2 won't be able to use theirs. Phishers started relaying the stolen credentials to the bank in real time, looking which iTAN the bank asks for, then asking the victim for the same.
Given these known capabilities of attackers, any 2FA method that is not phishing resistant and not bound to a specific transaction is not usable, and probably not better than SMS. Anyone can make a phishing site, running attacks on phone networks is harder.
Edit: Just realized that U2F would not be a suitable solution for banks, since it does not authenticate specific transactions, so malware could still swap recipient and amount.
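The transaction-binding idea above can be sketched as a MAC over the transaction details. A minimal, hypothetical illustration (the function name and encoding are my assumptions, not any bank's actual scheme):

```python
import hashlib
import hmac

def confirmation_code(secret, amount, recipient):
    """Derive a short code bound to one specific transaction.

    Because the amount and recipient are MAC inputs, malware that
    silently swaps either one invalidates the code the user confirmed.
    (Hypothetical sketch -- real schemes like chipTAN differ in detail.)
    """
    msg = "{}|{}".format(amount, recipient).encode()
    digest = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return digest[:6].upper()  # short enough for a user to read back

secret = b"per-device shared secret"
code = confirmation_code(secret, "1000", "1243567890")
# Changing the recipient changes the code, so a tampered transaction
# cannot reuse the code shown to the user:
assert code != confirmation_code(secret, "1000", "9999999999")
```

This is exactly what a plain login-bound code (U2F, TOTP, SMS OTP) does not give you: the code says nothing about *which* transaction it authorizes.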
Either he acts as a transparent proxy between you and your bank without attacking, or the authentication fails.
Edit: with local malware you're doomed unless you have trusted external hardware with a dedicated screen.
U2F was designed to protect login where you don't need more context except the site you're logging into and that's protected as part of the protocol. Abusing it for other purposes (tx confirmation) as you showed is not a good idea.
Transaction confirmation can be improved by using mobile banking app that displays the transaction info and has accept / reject buttons. The communication would go over HTTPS.
But the problem of trust goes deeper - how do you know that account X is the correct number for your intended recipient? You usually have this saved in your banking system, and when it is compromised you'd still be convinced that you're sending money to the correct account. Unless of course you have a paper backup of account numbers...
At its worst, it opens up a social engineering / customer support "lol lost my phone" attack vector that didn't exist before.
SMS 2FA can be defeated by hijacking someone's number. Happened to me. One day I opened my Macbook and saw the "A new iPhone has been activated for your iCloud account". They had access to my number for hours after I called up my carrier.
People with popular YouTube accounts have to deal with this all the time and the advice right now seems to be to buy a burner phone on a false name and never share the phone number with anyone, which is just crazy.
Fraudsters are able to convince phone-company employees that they've forgotten their number and left their phone at home.
You can then set the service up to forward received SMS messages to your regular SMS number—but, if your phone is compromised/stolen, you can go back to the account and immediately turn off this forwarding.
Sadly, this approach reduces the security back to single-factor, since you get into the SMS account with something you know, rather than something you have.
You can at least treat the VoIP account like a password-manager: give it a long, unforgiving "master password." (Or a random password that's stored in a password-manager, if you use one.)
You can run your own private one at home with a Raspberry Pi/ttl serial adapter and a cheap SIM module. Or use a dongle.
I don't understand why Google Authenticator is supposed to be so conceptually different from a password. The codes it provides are a deterministic (and public!) function of a code shared between you and the party you're authenticating to. Or in other words, it's exactly equivalent to a password.
When I set up 2FA on github, they gave me a code to put into Google Authenticator, and also a bunch of "one-time reset codes" in case I lost my phone. Why wouldn't I just record the original code in whatever place I'm supposed to store the one-time reset codes?
1. Not shared across sites
2. Guaranteed minimum length
3. The portion going over the wire cannot be used more than once or after a few seconds
4. The phone app doesn't expose the seed or allow you to copy it off the device. Running on iOS has strong guarantees against access by other apps.
That's not perfect and U2F is a huge improvement but it's safer than SMS and given the number of google users who get attacked daily a reasonable incremental improvement.
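One concrete difference behind points 1 and 2: the TOTP seed is minted server-side at enrollment, so weak choices and cross-site reuse are impossible by construction. A sketch (the function name is mine, not any particular site's API):

```python
import base64
import secrets

def new_totp_seed(nbytes=20):
    """Mint a fresh random TOTP seed for one enrollment.

    Unlike a user-chosen password, the seed is generated by the site:
    160 bits of entropy here, and never shared across sites because
    each enrollment calls this independently.
    """
    return base64.b32encode(secrets.token_bytes(nbytes)).decode().rstrip("=")

seed_site_a = new_totp_seed()
seed_site_b = new_totp_seed()
assert seed_site_a != seed_site_b  # independent per enrollment
```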
1. Passwords are selected by the user; you cannot prevent reuse. While it's technically possible that a site could allow you to set the TOTP seed, nobody does.
2. Password entropy is notoriously hard to calculate programmatically – e.g. well-known movie passwords or leet-speak are often judged as stronger than shorter true-random sequences – but the site generates the TOTP seed.
4. As with #1, preventing reuse is hard. Knowing that all extant implementations offer fairly strong protections against accidents is a key difference.
Your final point depends on how you judge the whole system: "something you know" can be told to someone else, which isn't true of TOTP. It's also always valid and reusable, whereas the one-time code is closer to an on-demand verification. From the perspective of security boundaries, a desktop user accessing a separate token, phone, or arguably even an iOS user using a separate app is closer to something you have than something you know.
TOTP isn't the strongest form of MFA but it's closer to that end of the spectrum and, especially in consumer contexts, a huge improvement over nothing. In the real world, those kinds of security improvements are still meaningful even if they're not perfect implementations of a textbook concept.
This is not correct. When I set up TOTP 2FA, the site generates a seed and tells it to me. It is always valid and always reusable, and it's no more difficult for me to communicate it to anyone else than it was for the site to communicate it to me.
Think of the physical one-time tokens: If I know the data stored on one, I can replicate it too. So, are they "something you have", or "something you know"? The reason why they're "something you have" is that it's easier to have physical ownership of the device than to know the "secret" of the device. But the same is true about Google Authenticator! In that respect, Google Authenticator can, indeed, be interpreted as "something you have".
If you're building a consumer service, that's a big improvement over someone picking their dog's name and saving it in a Word doc on their desktop.
hash_fn(concat(IV, time)) -> func(somehash) -> [code seq. 1]
Then on the next round the IV is discarded and it becomes:
hash_fn(concat([code seq. 1], time)) -> func(somehash) -> [code seq. 2]
That's almost certainly an over-simplification / wrong; it doesn't seem very secure to just use the previous output as the IV for the next round. Maybe it chains all previous outputs together somehow, IDK. I should probably research this, given how much I rely on 2FA for security.
EDIT: The Wikipedia article on TOTP seems to suggest that you're right; the initial shared secret looks indistinguishable from a password (although the article is a bit vague): https://en.wikipedia.org/wiki/Time-based_One-time_Password_A...
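For the record, the actual TOTP construction (RFC 6238) has no chaining at all: every code is derived directly from the static shared seed and the current time, which is exactly why the seed is as sensitive as a password. A minimal sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: truncate(HMAC-SHA1(secret, floor(unix_time / step))).

    Note there is no chaining between codes; each one comes straight
    from the static shared secret, so the seed never stops mattering.
    """
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8) == "94287082"
```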
So, for example, if my phone is lost/stolen I can temporarily move Authenticator to a tablet using my laptop, which is already trusted.
There are some potential security drawbacks to this but access to a trusted device seems a reasonable compromise between security and convenience.
* Yes, technically you need the 1Password file as well. But someone who compromises your machine will get access to both. Or if you have it synced to your phone and both are unlocked with a fingerprint, all someone needs is your device and a little effort to fool the fingerprint sensor.
You could also buy a dual SIM phone and buy a throwaway sim card with a phone number you do not share.
I don't think most people are responsible enough to deal with more secure MFA. Most people don't know how to keep custody of stuff like that.
Well I think the problem starts when someone else convinces your phone company that they are you. As we've seen several times now it becomes easier and easier to pull this trick.
As for U2F add more than one to your account (2 is minimum) and you are safe. The same applies to any kind of physical key (home, car, etc.)
* How U2F works on their phone
* What if the U2F key gets lost or stolen (revocation)
* How to have a backup of the U2F key
* Are multiple identities possible (home/work/whatever)
Honestly, this sounds like a much better technology: http://www.dailymail.co.uk/sciencetech/article-3220886/Forge... It transmits data through the human body. You could turn it on or off selectively, authenticate with anything you touch, and it would remain cryptographically secure.
The challenging part is setting up the 2FA and integrating it into all your systems.
Wouldn't acquiring the 1st factor access (say, normal user credentials) be subject to similar social engineering attacks ?
It's a much better system. Of course, some banks don't use it to its full potential - many use it only for signing money transfers, but it's still pretty good. The readers are also cheap and standardised, so you can use any one of them for any account, which is useful.
I find this an especially entertaining juxtaposition with the transit systems. In Canada, you swipe on the bus and tap in the stores; in the US, swipe in store, tap on the bus.
Banks get paid far lower interchange rates for Chip & PIN than they do for signature. Thus the system we have. So some banks can skim an extra 0.5-2% off the top of every single transaction in the US.
This only applies if the PIN is being used for debit. I have two credit cards that are issued by US-based financial institutions and, when the chip reader is equipped to do so, result in a prompt for a PIN when used. The interchange rate is the same as for a normal credit card.
The bigger issue, as has been explained to me by two people I know who work in payment tech, is that credit card issuers in the States are utterly panicked by the idea of doing anything that might result in even a small fraction of a percent of their cardholders switching to another card. PINs are seen as introducing friction, a previously-unknown step in the payment process (since US customers have been conditioned for 20 years to "enter a PIN for debit" and "sign for credit"). To now change to PIN-primary cards would result in confused customers, higher support costs, and plenty of customers switching either to another credit card or to (the horrors) another payment method entirely, like cash. So the theory goes, anyway. Therefore, US banks are loath to introduce cards that can be verified by PIN at all, much less ones that are primarily verified by PIN.
0 - They are not debit cards with VISA or MasterCard logos; they are actual credit cards accessing a line of credit. One comes from the State Department Federal Credit Union and the other from First Tech Federal Credit Union.
> Increased revenue: In the US, the difference in interchange rates between PIN-based debit and signature-based debit transactions can be substantial for larger dollar payments. Although Regulation II (a.k.a. the Durbin amendment) caused the card networks to reduce debit interchange rates for the largest card issuers to just 0.5% plus $0.22, this still results in interchange of about $0.72 on a $100 signature debit purchase. The same PIN-debit purchase might generate about $0.25, and possibly less in an area where there's substantial competition between PIN-debit networks. To be fair, this isn't true for credit card transactions (since there is effectively no such thing as a PIN-based credit card transaction in the US today and therefore no distinct interchange rates for PIN-based payments) but it does explain why there might be a general preference not to proliferate the use of PINs as the US card population and terminal base is converted to support EMV.
 - https://www.quora.com/Why-is-the-USA-adopting-chip-signature...
In many ways, this is better, since a skimmer can't grab my PIN, and credit cards have rules where they must immediately refund fraudulent purchases, while debit card charges are often much more difficult to reverse.
Tap-payment is also being rolled out quite rapidly. It's about 1.5 years old and I'd say I now tap for 70% of all transactions.
Edit: I also don't need to make transactions often enough for it to be cumbersome - if I was travelling a lot for business and needed to make a lot of transactions, I'd worry about losing the device and being unable to do anything without it.
they look like this: http://l7.alamy.com/zooms/04452dfd35964790b44ee6514745de5c/n...
to transfer money you enter your PIN, then the target account number and amount to transfer, and it gives you a code you type into the browser
They also seem to be trying in general to move everybody off the USB reader and smart card solution to a smartphone app based solution, so that the total number of people using the USB reader approach is down substantially.
Sure, and a much more inconvenient one, because you have to carry this device with you everywhere. An even better system would be a living being at each ATM checking your credentials.
Essentially all security is a trade-off against convenience; this is, in my eyes, a no-brainer. It's barely any more effort and much, much more secure.
Takes about 1 minute to do, and you don't need it for every action. E.g. you use it for adding a new account payee and for making a first (or very large) payment to a payee.
Picking two at random:
Second comment: SMS should have been removed a long time ago considering the SS7 problems. Better to use a secure token.
Is the bank taking responsibility and covering the loss for their customers?
I think they should be required to. Although, isn't it possible that the sum of withdrawn amounts could be so large that the bank simply can't cover the loss, thus making it insolvent?
With ss7 you can do fun things like query the last location update/logged in base station for a mobile phone, due to roaming carrier x can query for customers on carrier y in another country. If you link up to one of the roaming hubs you can pretty much get the location of anyone with a mobile phone. Feature phones included.
The roaming hole was known to the Russian blackhat scene, and was actively exploited for ages. I'd say the first number-hijacking services were offered around 2001 to 2005, when Russia had a paid SMS content craze. Back then, the most apparent use of that was to steal somebody's number and then send paid SMS to your number.
The fact that big IMEI databases gathered with Android scamware are now being sold around "social marketing" community only makes things worse.
The best-known attack with that was an attempted hacking of British MPs' personal emails a few years ago. The Russian phone operator Megafon was complicit, yet they never even got a slap on the wrist.
Each and every email or blog site that has password reset through SMS (including WhatsApp and Gmail) is open to that attack.
It's different here in Norway; the banks require two factor authentication to log in as well as signing transactions.
I don't claim it's perfect but at least no one can log in unless they control both factors.
This "feature" was enabled by default for my Danish online bank as well. I've since disabled it.
Who would have guessed?
This proves that the phone can be more a liability in the face of much better technology.
I wouldn't buy a security device from amazon. You can buy that device on the official yubico website.
And clearly you don't want a fake security device.
Unfortunately, the security key FAQ contains no info on why you shouldn't use your phone number. I assume it's because phone-number migrations can be used to take control of your accounts?
EDIT: Yeah. Perhaps reading the article instead of just coming to the comments might have been useful.
Also, do you really trust your Android phone with your TOTP private key? How do you know there isn't malware running on it as root?
This allows you a trivial means of cloning and backing up the token. Unfortunately, this is against best practices, since you have multiple copies of the private key lying around; but this has been an acceptable trade-off for me thus far.
I own a yubikey but it's useless as a permanent device since it needs a full-size USB port - something my phone (and the latest-gen macbook) lack.
The threat model this setup is protecting against is phishing. For that purpose, a security key is much better than TOTP (authenticator app).
If the phone has other attack vectors, such as compromising the OS, and is indeed less secure than the yubikey, then doesn't having it as a backup just lower the bar for security to the phone? As far as I can tell, there's nothing stopping from someone just ignoring the yubikey if authenticator is also an option.
But I also don't need to know too badly, so I think I'll just move on.
The security key is slightly easier to use than the TOTP authenticator and it's what you'll tend to use most of the time. But if you happen to forget it at home or you're logging in to check your mail and your security key is halfway across the house but you have your phone handy or something like that, the TOTP backup option is convenient. You also need some kind of backup in case you lose the security key.
If you know what you're talking about and have a serious concern about this kind of advice, by all means present your argument. But if you don't, find another way to learn.
Security keys are great for journalists, activists, and high profile business people, but for your average geek it's an unnecessary amount of trouble, IMO. TOTP gets the job done.
Pretty unacceptable considering how important domain control is.
I personally had my domains put on hold and frozen, with no traffic being routed to my servers, when my ex-gf chatted with them and gave my username (no password) and claimed it was her account, because obviously she knew my full name and the address where I live. While they didn't give her access to my account, they sure froze my domains for about 5 days until everything got resolved.
Not long after I moved to NameSilo.
It would be great if the banks supported TOTP and U2F keys (or if they managed passwords correctly and didn't limit the length or force absurd character recipes).
I once applied for a job at a top 3 bank's IT security area and on the way to the interview room I noticed that every single desk had a well worn copy of Computer Security For Dummies. I think that may be the root cause of all banking security problems right there.
I have this problem with one of my credit card issuers (not the two I mentioned elsewhere in this thread) and they're going to lose me as a customer as a result. I can't log into any online banking without receiving a phone call or SMS and I'm prompted to enter a number at which I can receive such a thing. The problem is, I am entering the number that I know the bank has but entering that number--or any other number I own--is met with "hmm, it doesn't look like that number belongs to you." Of course it doesn't considering my mobile phone service is paid for through my LLC and this is a personal credit card account.
When I tried calling them, I'm told that, again, I need to verify myself with an SMS and, no, the number I have used for a decade is not sufficient.
At least they've now returned to mailing me paper statements. Once I've verified that my last automatic payment has moved away from them as of next month, the card gets canceled. If I can't cancel by phone, I'll cancel by mail. If I can't cancel by mail, I'll just leave it in a drawer and watch my mail for any new statements until the card is canceled for inactivity.
Another bank of mine can't issue proper bank reference letters anymore which are required in many cases to open other accounts or form a company. The same bank also stopped the Visa support of their debit cards so they are practically useless apart from using at the ATM.
Another bank with a business account can't issue credit cards anymore. For many transfers they require tons of verification and paperwork, and opening a new account gets harder and harder. I have to fill out a stupid W-8BEN form even though I have nothing to do with the US. It goes on and on.
It seems in the past 5 or so years banks in general have gone into a slow but steady self-destruct mode - especially with all that speculation in the debt casino. Banking is becoming a more and more frustrating experience even though it's so core to our society.
It's not that they aren't available. It's that only a couple places (like online brokerages) even bother.
> We're always on the lookout of how we can keep our members' accounts secure. Right now, the Mobile Texts are FFIEC compliant.
One of the newly discovered great sins of the early 21st century is to disseminate insecure code. Before the public became widely aware of chemical pollution, I'm sure many polluters thought themselves innocent and saw environmentalists as pernicious busybodies.
Though having replaced two phones since using that solution - it can be a pain to have to re-set it up with each provider every time. I can see that if someone is prone to losing their phone, it will become a major issue.
I think the problem is that all the companies whom I use 2FA for have totally different methodologies for re-setting it up on a new device. Whilst some have an automated way of verifying my identity and resetting the new device almost instantly, I have had a couple that needed talking to a human support rep (inconvenient, but understandable) and one company that needed another employee in the company to do a full 2FA verification themselves, and then talk to a company support rep on my behalf to verify my request to reset my 2FA settings! (WTF).
Thus, each time I replace my phone, I find myself actually culling the number of services where I use 2FA purely because it was too much of a pain to go through the reset process, and it was actually easier to drop 2FA with them altogether (or in one case actually drop the service altogether).
I keep my 2FA in 1Password, and it works pretty well. I can access the codes from the desktop app without getting out my phone, or from my phone or tablet if I'm mobile.
The only scary issue is that all that information is encrypted in Dropbox. And I use 2FA on Dropbox! Hello cyclic dependencies! As a result, Dropbox is the only 2FA that I don't store in 1Password.
The idea basically is a 3FA system where the bank sends you a one-time 6-digit number. You then have to translate that number using a user-seeded cryptographic hash function. This secret function is your third factor, which translates the received SMS code into the value you'll input at login.
Analysis: Security would increase; but ease-of-use would decrease, especially in regards to how a user would reset their password if they lose both their password and their program that calculates the cryptographic hash.
Security-wise, having a secret user math function seems more secure than the Google app. I can give reasons why if needed.
Even if the function is lossy, it has very little entropy. It may even be vulnerable to brute forcing...
I agree with the other poster, Google Authenticator looks like a better solution.
One, you'd need to use an app and something actually secure to combine the password (that's what you're proposing, a second password that mutates the token) and the 2FA token -- if the password was a simple algorithm like you're suggesting, attackers could guess it a good proportion of the time. This is a good example of why you (or I) shouldn't try and invent security measures; leave it to professionals.
Second, the regular passwords had already been compromised on these accounts. Presumably, at the time they phished the regular password, they could have phished the special 2FA password as well. It also means that 2FA could no longer be used as a password reset mechanism -- because you need to have another password to use it. You've essentially made it 3FA.
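To make the brute-force concern concrete: suppose (purely as a hypothetical) the "secret function" is drawn from the kind of family a human can compute mentally, say digit-wise add-a-constant with optional reversal. A single phished (input, output) pair recovers it:

```python
from itertools import product

def apply_fn(code, k, rev):
    """A toy 'human-computable' transform: optionally reverse the digits,
    then add k (mod 10) to each digit. Only 20 possible functions."""
    digits = code[::-1] if rev else code
    return "".join(str((int(d) + k) % 10) for d in digits)

def brute_force(observed_in, observed_out):
    """Enumerate the whole function family against one phished pair."""
    return [(k, rev) for k, rev in product(range(10), (False, True))
            if apply_fn(observed_in, k, rev) == observed_out]

# The attacker sees one SMS code and the value the victim typed back:
candidates = brute_force("123456", apply_fn("123456", 7, True))
assert (7, True) in candidates  # the "third factor" is recovered
```

Any function simple enough to compute in your head has a search space this small; that's the core of the objection above.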
I think major websites should stop using SMS and ask for just an authenticator app or secure keys. SMS should be regarded as a bad security practice.
As an aside, as someone who spends time between multiple different countries, SMS 2fa is a real pain to deal with.
So, essentially, to get in you just need to find the weakest telco connected to the main SS7 network and own them, and use their infra as a staging point.
> Signalling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975, which is used to set up and tear down most of the world's public switched telephone network (PSTN) telephone calls. It also performs number translation, local number portability, prepaid billing, Short Message Service (SMS), and other mass market services.