IMHO the UX for all this stuff is very confusing to non technical users. People lose their phones, don't print out the codes, or simply don't understand how this works and do silly things like trying to use codes from the wrong account.
Since introducing 2FA, requests from people to reset their 2FA have been a very regular thing for our support staff. Especially when it concerns paying users, saying no is not really an option, so resets are common. I've since trained our people to at least not do this blindly, but obviously social engineering is a big problem with all this stuff. If it happens to us, you can bet it's an extremely regular thing for basically every service that has 2FA.
But my biggest worry with this stuff on my own accounts is somebody talking support into resetting 2FA on them. I can do everything right and still get compromised because some underpaid support contractor falls for a social engineering attack.
It's quite confusing for technical users as well. Google has deprecated, or is in the process of deprecating, TOTP, and will make new users/accounts use something else based on its Android app (which is quite likely proprietary and/or tied to GMS, push notifications, etc.) if they don't use hardware keys. Or they'll force you to use SMS, which is also worse than TOTP. I think it's still possible to use TOTP if you jump through some hoops, but this is yet another rubbish policy.
That falls apart pretty quickly when you attach things like recurring billing to an account. Nobody is perfect, and at some point somebody, somewhere is going to want to cancel that billing, not have access to their account, and have checked your "never reset this account" box.
How do you deal with that very real edge case? Especially since that edge case can easily escalate to a lawsuit depending on the account in question.
That way in the very worst case if support gets socially engineered into removing the credit card details from the account the customer will get mildly annoyed as they have to login and reset it, but their whole account won't be taken over.
Security isn't binary, it's a spectrum and requires tradeoffs. 2FA isn't a silver bullet.
And if you're the email provider, you probably already have my phone number anyway and could/should send a text message to notify me about anything like this.
If you make this mistake, you need to then disable Advanced Protection, re-login to your phone, then download the Smart Lock app, and THEN re-enable Advanced Protection to get things working. Otherwise you’ll be locked out of your phone.
Does the phone need to pass SafetyNet to use a Yubikey over NFC with it?
I enrolled a key this way (USB-A key plugged into an adaptor on the USB-C port) once just to see if it works. Bizarrely Google will let you do that, but, having proved that it works, they then immediately tell you that you need to install Desktop Chrome before you can enroll any other keys. No idea why they think that's a good idea let alone necessary.
Unfortunately, for an AWS root or IAM account, the limit is still one per https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...
I don't like how fragile this situation is (not being able to use my preferred systems to enroll) so I did not enroll my non-work accounts, whereas for systems like GitHub where Security Keys just work fine, of course both my work and personal accounts are enrolled.
Security keys are supported by Google and Facebook on (almost) all browsers/devices. "Almost" means you still need to be a bit careful when buying a key to check exactly what it supports, but you'll find a combination that works for you. If you want some extra websites, there are also Twitter, Dropbox, GitHub, etc.
That said, I personally wouldn't want to enable my security keys on too many websites, because losing or breaking a key is a major pain. I much prefer to use OAuth and delegate authentication to Google or Facebook, and keep those two highly protected.
But the real value is for the most critical logins, such as to Google, or similar, where a breach would be tremendously harmful to you.
Edit: here https://lastpass.com/support.php?cmd=showfaq&id=8126
U2F works with dashlane I think?
tl;dr: You can use the yubikey to replace google authenticator wherever you're using standard TOTP codes.
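To illustrate why the replacement is interchangeable: standard TOTP (RFC 6238) is just an HMAC over a time counter, so any device holding the shared secret produces the same codes. A rough stdlib-only sketch (the secret below is the RFC 6238 test value, not anything real):

```python
import hashlib, hmac, struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) applied to a 30-second time counter."""
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Whether the secret lives in Google Authenticator or in a Yubikey's OATH applet, the output is identical, which is why the swap works anywhere standard TOTP is accepted.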
SQRL (as a whole deliverable package) is not finished yet though.
If the human doesn't do this apparently pointless busywork, their security is silently destroyed. If you ignore the fact that the SQRL app says "goodguy.example" while the phishing site you're on says "badguy.example", then you authorise the bad guys to log into your bank account. Oops.
In WebAuthn and U2F this doesn't work because the web browser was brought in to the party. "This" says the browser, "is badguy.example", and the user isn't asked to do any busywork with a risk of getting themselves in trouble. The bad guys get nowhere.
Train drivers do the same routes over, and over, and over. They get used to things that they know intellectually are just patterns, routines, not set down anywhere as facts. Hypothetical example: maybe at Oakwood junction on the Down Fast with the Express you always get a Caution, never Clear or Danger, if you're on time, because the signaller is moving a local out of your way ahead. Without any computer assistance, drivers, despite being trained about this specific problem, will look at a signal showing Danger at Oakwood junction on the Down Fast, and their brain says "No, this is always Caution" and _even though they saw a Danger signal_ and even though they are trained that if they aren't sure they must treat signals as "Danger" - they act as though it was definitely Caution.
With a computer helping, the driver can be "shaken" out of this mistake, for example the computer can sound an audible alarm that the train seems not to be slowing appropriately, and then since this alarm doesn't normally sound, the driver may go "Wait, was that Caution? Actually I think it was Danger! Oops" and brake. But we must be cautious with such aids, most of the British railways have a system that sounds a horn each time you're shown a signal that is NOT Clear. The driver acknowledges the horn, or if they do not the train autonomously brakes to a halt. But both "Caution" and "Danger" are not Clear, at our hypothetical Oakwood junction the driver would get used to hearing the horn, because the signal says "Caution", except today it is "Danger" ...
If you're talking about iframing the good guys' website inside the bad guys' website, then yes, that's an issue: goodguy should not allow iframing of their login page.
1. You, the victim, go to badguy.example, perhaps as a result of a phishing email, a sponsored link, or it's a typo squatter.
2. badguy.example tells you it's Good Guys. You need to log in (as is usual with Good Guys) so you go to the login screen...
3. When you do this the Bad Guys connect to goodguy.example and ask to log in too, they get a SQRL code for goodguy.example
4. Bad Guys (still pretending to be Good Guys) show you the SQRL code to log in to goodguy.example
5. You scan the SQRL code and press OK
6a. Now the Bad Guys are successfully logged in as you since this was their SQRL code for your account.
6b. You receive some error message or other stalling tactics to buy them some time.
The SQRL user has been taught, over, and over, and over, that they need to check the domain name shown in SQRL. They did, it said goodguy.example, as they expected. Unfortunately that was worthless because what mattered is that they were visiting badguy.example in their web browser.
Gibson is aware _this_ can only be fixed the way U2F/ WebAuthn fixed it, which is to modify the user's web browser, not just add a fun phone app. And once you've done that modification (Firefox, Chrome, Edge in beta, Safari to come) the QR code and phone app plays no useful role. It's a hangover from Gibson's original idea that doesn't quite make any sense once you fix the actual problem.
Gibson gives a pretty honest assessment of the problem and what the weaknesses are under the various configurations. When the login agent is on the same machine as the browser (which would be the normal case), the problem mostly goes away (if I understand it correctly).
It's too difficult to set up FIDO U2F for your own webserver. There is still no Apache module or nginx plugin that lets you protect a directory of your document root.
Also, U2F is not available for phpBB. There is a plugin, but it appears to have been unfinished and buggy since 2015.
Could someone shed some light on this? What is it that prevents a phishing page from simply proxying the crypto challenge from the website to your key and presenting the answer back?
During registration you receive several pieces of info: a keyHandle, the public key you use to verify future device responses, and an attestation certificate so you can verify the device vendor.
The interesting part is this: when you challenge the U2F device to sign an authentication request, it keeps a counter tied to your appId (the domain where the .js that deals with this is executed) and produces encoded JSON which is signed by the device using an EC private key.
The counter increases every time the device is challenged by that particular appId. The response is signed using the counter and a private key that lives on the device (which you can't extract or tamper with).
So a phisher / MITM would need the value of the private key and the moving part (the counter) tied to a specific appId. That's difficult, since they would have to intercept this during the enrollment process and on every subsequent authentication request.
What's important is that this protects not only against phishing but against replays too. Naturally, the U2F device isn't solely responsible for this; the verifying server implementation is a crucial part of the process.
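Server-side, the checks described above boil down to: reconstruct the signed data (appId hash, user-presence flag, counter, challenge hash), verify the signature against the public key stored at registration, and reject any counter that hasn't advanced. A rough stdlib-only sketch follows; note that real U2F signatures are ECDSA over P-256, but an HMAC stands in for the signature here so the example stays self-contained:

```python
import hashlib, hmac, struct

def signed_data(app_id: str, user_presence: int, counter: int, client_data: bytes) -> bytes:
    """The byte string a U2F device signs during authentication."""
    return (hashlib.sha256(app_id.encode()).digest()    # application parameter
            + bytes([user_presence])                    # user-presence flag
            + struct.pack(">I", counter)                # 32-bit big-endian counter
            + hashlib.sha256(client_data).digest())     # challenge parameter

def verify(key: bytes, app_id: str, client_data: bytes,
           counter: int, last_counter: int, signature: bytes) -> bool:
    """Check the signature AND that the counter moved forward (replay defence)."""
    if counter <= last_counter:
        return False  # replayed response or cloned device
    expected = hmac.new(key, signed_data(app_id, 1, counter, client_data),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# "Device" signs with counter=5; the server last saw counter=4, so this passes...
key = b"registration-time-key"  # stand-in for the device's keypair
sig = hmac.new(key, signed_data("https://goodguy.example", 1, 5, b"challenge"),
               hashlib.sha256).digest()
print(verify(key, "https://goodguy.example", b"challenge", 5, 4, sig))  # True
# ...but replaying the exact same response fails, because the counter didn't advance:
print(verify(key, "https://goodguy.example", b"challenge", 5, 5, sig))  # False
```

The counter check is what turns a captured response into a one-shot credential: it works once, and the stored `last_counter` invalidates it forever after.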
Disclaimer: I'm not affiliated with Yubico, but I implemented U2F (the dreaded JS part and the backend part) in 2015, several months after Chrome 38, the first version to support the U2F protocol, was released.
(If you use a Ledger device for U2F and subsequently restore a new (or a reset) device from your private seed the counter will be reset. Trezor has the same issue but allows you to manually set the counter to work around it.)
It's not pointless. Disregarding the counter only enables replay attacks; that is, the attacker must previously have captured a challenge/response. The phishing resistance is still retained, because it relies on the browser passing the origin to the U2F device, and the browser can't be fooled by similar URLs the way a human entering a TOTP token can.
That does assume the thing communicating with the security key is not compromised though (the browser/OS).
Not entirely sure about how Google Smart Lock on iOS works in this regard - since it communicates over BLE and could specify any origin, and then the process of going between the app/web on iOS can't be secured particularly well.
Also unclear how origins would work with regards to non-web applications. e.g. URL schemes aren't unique/owned/defendable.
(Sorry if this comes across as RTFM, but I figured the source is better than my attempt at explaining)
So, if by accident you're on phishing.com, the key will generate a signature containing phishing.com. When the attacker forwards it to google.com, Google won't verify the signature, and thus the attacker won't get access to your account.
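The rejection described above happens because the browser, not the page, writes the origin into the signed client data, and the relying party compares it against its own origin before accepting anything. A minimal sketch of that server-side check (the origin value and field layout are simplified for illustration):

```python
import json

EXPECTED_ORIGIN = "https://google.com"  # the relying party's own origin

def origin_ok(client_data_json: bytes) -> bool:
    """Reject any assertion whose browser-reported origin isn't ours."""
    return json.loads(client_data_json).get("origin") == EXPECTED_ORIGIN

# The browser fills in this field; a phishing page cannot forge it.
print(origin_ok(b'{"origin": "https://google.com", "challenge": "abc"}'))    # True
print(origin_ok(b'{"origin": "https://phishing.com", "challenge": "abc"}'))  # False
```

Since the signature covers the client data, the attacker can't just rewrite the origin field either: any edit invalidates the signature.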
There's an applet you can load into your own JavaCard presumably https://github.com/LedgerHQ/ledger-u2f-javacard
I wish they were more useful and programmable without a Java applet (eg. Custom C/C++ driver)
> They died when everybody ditched Java applets for web.
How's that relevant? Are you saying no one uses Java anymore?
It just didn't catch on; there's no thriving community around it. You have the software, you have the hardware (search for JCOP21 or JCOP3 on AliExpress), but no users.
There is a similar thing in SIM-cards, but that is under-utilized as well.
I've yet to receive mine, so I don't know whether they work or not.
Got any good resources on this subject? I'd be interested to know more :)
And you can buy smartcards in this formfactor e.g. https://www.cardomatic.de/epages/64510967.sf/en_GB/?ObjectPa...
I believe mobile payment also uses the SIM card, but I'm not sure.
For a more general overview see https://en.wikipedia.org/wiki/Universal_integrated_circuit_c....
There is apparently a toolkit to design applications to run on the SIM https://en.wikipedia.org/wiki/USIM_Application_Toolkit
I don't know of any other implementations. An SSH auth module using SIM would be pretty cool.
Also did I mention that if you use 2FA, Google's "find my phone" functionality asks you to use your phone to authorize the login before you can find it?
Yes. You heard that right. Use your phone to find your phone. Don't ask me how I know.
If someone has my password and is trying to track me down, all you have to do is let me know that this is happening with a notification on my phone so I can rapidly change my password by using the device I am holding in my hand.
If you are the kind of person where this small time window of physical location disclosure is a significant vulnerability, you shouldn't carry a Google-manufactured phone without putting it in an RF-proof bag anyway.
If my other method works, even if it's a huge pain (e.g. maybe my other FIDO key is at home and I've flown to Tokyo) then I do still have options. Maybe not options I _love_ but that's security for you. If you have a massive house fire that both destroys your bank documents and leaves you so badly burned you can hardly talk let alone sign your name, good luck the first time you try to withdraw cash from that bank account.
If you set Advanced Protection both distinct ways have to be FIDO Security Keys, and whilst it's conceivable you could own a phone that functions as _one_ FIDO key (I would expect Apple to do this for example) it doesn't make any sense to have a single phone serving as _both_ your keys, even a non-technical person can hopefully spot that.
A few years ago I was travelling abroad and had my suitcase stolen - which had my laptop (with FDE) and passport inside. Fortunately I had my phone and wallet on me, otherwise I really don't know what I would have done - I probably wouldn't even have remembered the name of the hotel I was staying at that night.
> Advanced Protection uses a stricter implementation than Google has offered in the past: Only those physical keys—along with a password—will unlock your account. If you lose them, you can't use a printed out backup code
Good point, although I think the parent comment was about Google's existing 2FA rather than the new advanced protection setting.
I guess keeping the codes with me at all times would work, but that doesn't seem like a great option (paper is easy to lose and easy to destroy). Maybe I should memorize some of them?
I am going to be switching to a USB-C one soon, though, and it only now occurred to me that I haven't really been keeping a "list" of all the sites where I've got them registered. Right now, not a lot of sites support it, so it won't be too tough to find them. But I should probably keep a list so I can at least be sure I've registered the new Yubikey everywhere the old one was registered.
That doesn't really address your question as to what happens when you lose your keys, but it's perhaps relevant that there are a few warts around replacing even keys you haven't lost.
The specifics depend on the use case, but even if you fall back to something less secure like an email and TOTP, you still come out ahead overall because most authentications are done by U2F.
I'd pay $2-3 max per piece. Especially since you need more than a few in order not to cause yourself more trouble than this is worth (to someone who already uses random unique passwords and emails for services).
U2F to me is useful for protecting from stolen credentials on the web. These attackers will never have physical access to my u2f device to clone it.
If you're worried about people getting physical access to your U2F key: they can simply use it if they also know the password, no cloning needed. And if they don't know the password, the key alone is useless. From that attacker's perspective, your security has been reduced to password-only, which is still good if you take care of your passwords.
Like what if i want to register 3 or 4 keys for advanced protection?
The spec strongly encourages providers to allow multiple keys, and allow you to nickname them.
As far as I know everyone allows as many keys as you like except Vanguard and Amazon AWS (which both also only accept Yubico keys)
Most sites do. Google and Github both allow multiple U2F keys (note that this means any key can be used to unlock your account, not that all are required).
Of all the large and popular sites that support U2F, the only one that I'm aware of that only allows one key is Twitter.
Well, right now I use passwords of the form j6lqPKQKQ1RHv87PES4iy5; it'd be nice if using U2F meant that I could securely switch to something like 'correct horse battery staple' instead …
In my view, this makes 2FA an essential security feature, not just a nice improvement over 1FA.
If your company decides that the New York employee named "Steve Smith" and the London employee "Stephen Smith" are the same person, and either should be able to request account password reset for email@example.com, both of these chaps are going to have a bad time. Google can _tell_ them this is a terrible idea, but GSuite is a company product, so ultimately it's their terrible idea if that's what they want to do.
The _technical_ features of Advanced Protection seem to be mostly: Use FIDO Security Keys (U2F/ WebAuthn), disable stuff we know is useful but insecure. You can opt into those technical changes for your GSuite, either for everybody or a selected group e.g. "Company Security Nerds" or "Executive Level Employees" or "Everybody except Pamela. Damn it Pamela". But the non-technical feature is hard and probably just not replicable at all.
So you can have an equivalent set of options configured, but it isn't exactly the same.
I’m interested in Advanced Protection for my work account - Can I enroll a G Suite Account in Advanced Protection?
With the help of your Administrator, it’s possible to replicate the features of Advanced Protection on a G Suite account. Take a look at this help center article to get started.
the help center link: https://support.google.com/a/answer/9010419
If you were to create a new GSuite domain today, it'd be allowed for all users.
I see no mention of SS7 attacks, is that a solved problem?