That sounds nice. But while using U2F/FIDO for a few years (with two HyperFIDOs, one for "daily" use attached to my key-chain, the other as a backup in a safe), I found the most common problem was that websites/services don't treat these keys as first class citizens. For example GitHub: I have my two keys set up there, but I can't opt out of SMS authentication.
If I knew I could use my keys at more services, I would already have upgraded the daily driver to an NFC variant (so I could use it with my phone).
So, to any webdevs on HN reading this: Take that shit seriously and implement 2FA ;-)
//edit: GitHub is just a single example; and it's possible to opt out of SMS if an authenticator app is used instead. That's what I mean by second class citizen: Security Key(s) + X is possible, and X alone can be configured, but Security Key(s) alone is(/are) not allowed.
Last year I got the Google Titan security keys and connected them to all of my work + personal accounts that support them.
The #1 weakness is the simple fact that many services don't allow you to disable alternate forms of 2fa.
GitHub is an example: you can always trigger the fallback SMS 2FA code.
Dashlane is another example (and arguably the most important). It's impossible to make your security key the only form of 2-FA. If you have a security key on your account, you must also have regular app-based 2-FA enabled as a fall back.
What's the point of using security keys with services that require regular SMS or app-based 2-FA as a fallback?
Edit: G Suite is one of the few services that got it right. G Suite has an optional security control that when enabled, forces authentication using security keys (explicitly forbidding alternate 2-fa methods).
Fastmail allows you to do this too. They have a long section in the documentation that strongly discourages it, and it seems like they will refuse to restore your account if you lose your 2FA, which is exactly what I want:
>> Why do I have to add a recovery phone number to set up two-step verification?
> Keeping your account safe from attackers is very important. But so too is making sure you don't get locked out of your own account. We are aware that SMS is not the most secure of methods for 2FA, and has been deprecated by NIST. However, for the majority of users, the risk of losing their two-step verification device is far greater than the risk of someone hacking their SMS. If you lose your phone, the TOTP key is lost but normally you can get a new SIM card with the same number from your carrier. We therefore believe requiring a phone as a backup option strikes the best balance of confidentiality (no one else can read your data) and availability (you can read your data) for the majority of our users.
> Please note, if two-step verification is enabled, access to the phone number itself is not sufficient to gain access to an account: you still need two factors (your password AND the SMS).
> Advanced users that understand the risk may remove the phone number from their account once two-step verification is enabled. Once the recovery phone is removed from the account, SMS is no longer an option as the second factor for login. If you choose to do this, we strongly recommend you write down or print your recovery code and store it in a safe location, and that you set up at least two security keys or authenticator devices. Should you lose access to all two-step verification devices and not have your recovery code, you may be permanently locked out of your own account.
Fastmail used to have a mechanism where you could hold down <alt> (I believe) on that screen and the phone number would no longer be a required field to continue creating your account. Maybe that's still possible.
I don't remember phone number being required while creating an account, they only wanted to collect it when enabling 2FA. If I recall correctly they had a little info button that said something like "we need this to start but you can remove it later if you _really_ want to."
> What's the point of using security keys with services that require regular SMS or app-based 2-FA as a fallback?
It addresses the problem with phishing. The main problem with Authenticator is not that an attacker knows the one-time password, but that an attacker tricks the user into entering the one-time password into their UI and uses it right away to take control of the user's account.
My understanding is that the issues that led to U2F being considered better than TOTP were mainly about TOTP being easily phished, compared to something which will only dispense the right code for the right challenge.
But if you never actually log in with TOTP and always use your U2F key, does it actually decrease security to have it as a backup/removal option, when you know that's the only reason you'd ever enter it?
It feels like its competition there is "convince phone support with a sob story" and TOTP feels like a clear step up from that.
TOTP is one thing. Having your backup TOTP key locked in a safe effectively stops it from being abused.
SMS 2FA, on the other hand, has real security issues[0][1]. In my experience, SMS 2FA is most commonly the required type of 2FA, before you can add TOTP/U2F as a secondary.
SMS 2FA is the hardest to lose or break, so forcing everyone to keep it enabled minimizes support costs for the provider.
The other thing driving SMS 2FA is that this is a clean way for the site/org to get their hands on a real phone number for you with no extra effort.
SMS 2FA has been repeatedly proven to be trivial to break with a simple phone call to a mobile provider, since, as far as I am aware, there is so far no downside at all for the providers. If someone jacks your SMS and drains your bank account, at least in the US your mobile provider just goes "oops". Until there is some penalty for them allowing your number to be ported without your consent, SMS is essentially useless for real 2FA security. Even if the account security were foolproof, though, it would still be vulnerable to SS7 routing attacks.
Using Authy may be a good idea. There is a slight risk of exposing the TOTP key to the cloud, but having it available on all devices (current and future) is very convenient. Authy has an option to encrypt the key using a user specified passphrase, so it is at least better than storing the raw TOTP key in the cloud.
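For anyone curious what that looks like in practice, here's a rough sketch (not Authy's actual scheme, just the general idea) of deriving a key from a passphrase and wrapping a TOTP secret with it before it ever leaves the device; it uses the third-party cryptography package, and the names, parameters and example secret are purely illustrative:

    # Hedged sketch of passphrase-wrapping a TOTP secret before syncing it.
    # NOT Authy's actual scheme: derive a key from the passphrase with
    # PBKDF2, then encrypt the secret with that key.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def wrap_secret(totp_secret: str, passphrase: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))
        return salt, Fernet(key).encrypt(totp_secret.encode())

    # Only the salt and ciphertext need to be synced to the cloud; the
    # passphrase (and therefore the raw TOTP secret) stays on the device.
    salt, blob = wrap_secret("JBSWY3DPEHPK3PXP", "correct horse battery staple")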
There's a small downside to TOTP even if you never use it in anger.
For U2F/ WebAuthn the relying party doesn't end up knowing any secrets. So if for example I get a month old database backup of Facebook, I don't learn how to log in with U2F as any of the users whose credentials I have, because Facebook can't do that either, only the legitimate users can.
For TOTP that stolen data gets me in, because I can synthesize correct TOTP codes for any user whose secret key I have stolen.
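To make that concrete, here's a minimal RFC 6238 sketch using only the Python standard library; the base32 secret is a made-up placeholder, but anyone holding a user's real secret (legitimately or from a leaked database) gets exactly the codes their authenticator app shows:

    # Minimal TOTP (RFC 6238) sketch: the code depends only on the shared
    # secret and the current time, so whoever holds the secret can mint
    # valid codes.
    import base64, hashlib, hmac, struct, time

    def totp(base32_secret: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(base32_secret, casefold=True)
        counter = int(time.time()) // step              # moving factor
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                         # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret, for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))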
But you are correct that social engineering attacks will be far more common. The particular concern with SMS is that the attack can target your mobile phone company, so that all the security at say Facebook or Twitter doesn't help you because your security was blown up at T-mobile or AT&T.
Interesting point - it's essentially a long lived secret.
Actually, what would happen if [large_company] had their TOTP secret revealed? Would they be forced to invalidate everyone's TOTP? They can't just disable it; they would have to somehow authenticate you a third way...
Ooh that's interesting! Coincidentally my very close relative works for their F16 program! Yeah they're one of the largest military contractors - not a company anyone wants to have a breach.
It's not one secret, it's a secret per pair. Imagine Anne and Barry both use TOTP with Google and Facebook: there would be one secret for (Anne,Google), one for (Anne,Facebook), one for (Barry,Google) and one for (Barry,Facebook).
A good implementation chooses the secret randomly. But both sides need to know what it is. In the example above Anne needs to know both her secrets (they'd be inside her Google Authenticator) and Facebook needs to know both their secrets (probably in a SQL database).
Stealing Facebook backups would get me the (Anne,Facebook) and (Barry,Facebook) secrets, but not the Google ones.
I know the client key is derived from the server key (so if Facebook's was released, your Google TOTP would still be safe).
I'm guessing it depends on the implementation, are the server secrets normally derived per user or shared across users (Talking about just a single service such as facebook)?
> I know the client key is derived from the server key
No. TOTP is a shared secret system, there is only one key.
> I'm guessing it depends on the implementation, are the server secrets normally derived per user or shared across users
Again no. The only reasonable thing to do is pick secrets entirely at random for each user. If the same secret is used then users can trivially impersonate each other, which makes no sense.
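A sketch of what sane provisioning looks like: every enrolment gets its own freshly generated random secret, with nothing derived from or shared with any other key (the names and data structure here are purely illustrative):

    # Per-user TOTP provisioning sketch: each (user, service) pair gets its
    # own random secret; nothing is derived from any other key.
    import base64, secrets

    def new_totp_secret(num_bytes: int = 20) -> str:
        """Fresh base32-encoded secret (RFC 4226 recommends at least 160 bits)."""
        return base64.b32encode(secrets.token_bytes(num_bytes)).decode()

    # The service stores the plaintext secret and hands the same value to the
    # user's authenticator app, usually as an otpauth:// QR code.
    enrolments = {
        ("anne", "example-service"): new_totp_secret(),
        ("barry", "example-service"): new_totp_secret(),
    }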
TOTP, unlike a password, can't be brute-forced. If the secret is lost, it's game over, but if the secret stays protected, you can't guess it, because you can only test the codes, not the secret itself. Brute-force for TOTP would only be possible if you could test 10^6 OTPs in the 60-second window, which would be mitigated by rate-limiting.
So it really doesn't do any harm to have TOTP enabled as a fallback if you never store the key anywhere.
It’s worth noting that TOTP secrets are more vulnerable than passwords on the service-provider side.
They have relatively tiny key space (generally 6 digits), so if the provider doesn’t protect against brute-force, it’s quick and easy to slam the whole key space until you win.
They also need the secret in a reversible format, since the service provider has to use the secret to calculate valid codes. So the most common password brute-forcing pathway (theft of a database full of password hashes) is rendered moot: you don’t have to brute-force the hashes because they aren’t hashes, they’re just secrets. At best, the TOTP seeds are encrypted at rest, but since whatever server validates tokens has to decrypt them to do so, most implementations in the wild either don’t encrypt or keep the decryption key right next to the data.
If an attacker can (for whatever reason, be it rate limiting or cost) only brute force 100 guesses per time period (say, 30s), they'll expect to be able to 'win' the challenge in about 58 hours.
The probability of guessing the challenge _wrong_ once is (1 - (100 / 1000000)). Every 30 seconds you get another chance. The probability of guessing the challenge wrong N times in a row is (1 - (100/1000000)) ^ N. Around chance number 7000, the probability that you've guessed it wrong all N times goes lower than 50/50. ~7000 * 30s in hours is around 58 hours.
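Sanity-checking that arithmetic (assuming independent random guesses, 6-digit codes, and 100 tries per 30-second window):

    # Verify the "about 58 hours" figure for 100 guesses per 30 s window
    # against a 6-digit (10**6) code space.
    import math

    guesses_per_window = 100
    space = 10 ** 6
    window_seconds = 30

    p_window_all_wrong = 1 - guesses_per_window / space        # 0.9999
    # Smallest N with p_window_all_wrong ** N < 0.5:
    n = math.ceil(math.log(0.5) / math.log(p_window_all_wrong))
    print(n, "windows, i.e. about", round(n * window_seconds / 3600, 1), "hours")
    # prints roughly: 6932 windows, i.e. about 57.8 hours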
At some point the 2FA protected system should stop accepting guesses entirely for a period - the same way you would lock an account for incorrect password guesses, or at worst rate limit down to a single guess per time period after a certain number of failed guesses.
regular "app based" is just OATH-TOTP in most cases. This uses a shared secret and the time to generate one time codes.
The Yubikey explicitly supports TOTP, and will happily store your secret on the key. You can then use Yubioath to pull codes from the Yubikey as needed.
I'm a huge fan of the ability to use either U2F or TOTP with the same hardware token.
> Github is an example, you can always trigger the fallback SMS 2fa code.
You can set up OTP (no SMS) and either keep it as a backup or destroy the key. I don't have SMS on GitHub.
Admittedly, it won't solve issues with all services. From the service's perspective, dealing with users who lost their security key (or don't have a backup one) is painful enough that they force you to have an alternative.
FYI, a YubiKey (but not a lower-end Security Key by Yubico) supports OTP in addition to U2F. I have my GitHub account set up with U2F and OTP as the two 2FA options, both from my YubiKey. No SMS. This also works for pretty much any site that supports app-based 2FA; e.g., AWS doesn't support U2F at all, but I use it with my YubiKey via OTP.
Yubico's OTP requires you to register your secret on their server and adds a mandatory third party to the authentication process, so it's literally the second worst authentication method a Yubikey supports right after a static password.
Yubico’s OTP protocol requires that. But Yubikeys also support storing TOTP and HOTP secrets, which do not add any 3rd party to the process.
Also, to say that using Yubico OTP is “the second worst method” is potentially true, but in a relatively uninteresting way: adding Yubico-OTP-based MFA is so much better than just using a static password that the gap between 1st and 2nd place is huge. The comparative benefit between Yubico OTP and TOTP is tiny by comparison.
I think the trouble is key loss. Even more than falling back to SMS is the fact that if you lose a key and don't have a recovery key file, you can basically beat the whole thing with social engineering.
In a corporate environment, it can be a bit different since if someone breaks their phone or Yubikey and needs me to disable MFA, you can make them call you, tell you the last board game you played together or the last project you helped them with, and verify their identity. But that only works in companies of ~100 developers. You get larger than that, and you can't even do that anymore, and need to fall back to pre-set security questions.
Services like Google, Github, etc. don't have huge support staffs for their free projects, so it makes sense (for them) to have these fallbacks.
Honestly, having security and unique passwords for everything (especially e-mail accounts that get password resets) is probably more important than MFA for these big services.
Hm, but you have an Authenticator App enabled. If I "Edit" my SMS Number, I get to choose whether I want to proceed with "Set up using app" or "Set up using SMS", but not "Set up using security key".
The point of having two keys is that there is no need for either an authenticator app or SMS.
It would rock to be able to avoid having the Authenticator app, but I think “U2F plus a TOTP device” is pretty solidly better than “U2F plus SMS”, and this comment tree is suggesting that GitHub doesn’t allow for disabling SMS.
If you do U2F with 2 keys, TOTP, and throw away the TOTP secret after you activate it, you’re exposed to a slight additional risk because the TOTP secret is still stored by Github, but that’s radically better than a world where they mandate SMS.
True, I just amended my initial comment to reflect the fact that GitHub is not as bad as I made it look, but the second class citizen point still stands, even if some workaround like "throw away the TOTP" is applied.
I agree with your overall point. I’m hopeful that bringing U2F/FIDO2 to iPhone will help continue to push these standards into being first-class citizens. Right now, I use TOTP for a ton of sites because I’m using an iPad Pro as my primary workstation, despite the fact that it has a USB-C port and I’ve got a pile of USB-C Yubikeys.
> The point of having two keys is that there is no need for either an authenticator app or SMS
I think they are striking a reasonable compromise. I am OK with using U2F and having TOTP authenticator as a backup plan.
I am definitely NOT OK with requiring SMS for authentication, though, which is unfortunately the direction all the corporate sheep seem to be taking (namely, banks).
In addition to what you're saying, I also would like to register multiple keys to services such that any one would work, not that both are required. I don't need them to be nuclear keys... I want a backup key stored in a safe and one on my keychain.
This seems to be very uncommon amongst service providers.
I’d say this is actually the most common. I’m not aware of any websites that allow >1 U2F/FIDO key in the configuration where you need to use all-of-them to log in. Sites either only support 1 key, or support multiple keys and you need 1 of them to log in.
Yep, I recently got Google's Titan pair of keys, and anywhere that supports hardware keys, I was able to register both, and I'm only required to use one to login.
Can you create one that doesn't forward to your phone? Could a US friend create one for you in a new account and hand that account over to you?
It's more about sinking that second factor somewhere that can't be redirected and only you can theoretically access by logging into Google Voice. You don't have to forward the messages to your actual phone.
> print your recovery code and store it in a safe location, and that you set up at least two security keys or authenticator devices. Should you lose access to all two-step verification devices...
but this can be configured to not forward the text/voicemail.
> Those are harder to port as long as you created it on Google Voice.
I hope this myth goes away soon. It is trivial for any bad actor to force-port a number, even if it is reclaimed a day or two later. If I wanted to get fired, I could yank your GV number (which is, most of the time, really at Bandwidth.com) in a few minutes; the medium-sized voice provider I am at does several of these types of ports a week to help customers trying to leave recalcitrant providers.
You're right, but I dislike that route for three reasons:
1. It should just be plain unnecessary to apply such a hack.
2. There is still a single weak point for all services.
3. While this protects against the run-of-the-mill "attacker activates another SIM card for my mobile number and steals my crypto cash from $online_wallet" coming up here every few months, a more sophisticated attacker can just route the SMS to himself via SS7 (I suppose the SIM variant is easier to pull off with fewer traces, and obviously requires much less technical expertise).
I agree with all of your points. Really Github should just let you remove any dependency on SMS entirely. I lucked out in that I never configured SMS as a fallback. I have backup codes printed out in a safe.
Porting is not disabled, though presumably it would be harder for an attacker to figure out your number if you only use it for 2FA purposes.
The main issue with Twilio numbers is that they can't receive from short codes. If the service you're using sends 2FA codes from a short code, they'll just disappear into the ether.
Of course, if you're just using this as a placeholder because the site requires SMS backup, then maybe it's ok that you can't receive on them.
Also, I have yet to find an Electron based app that supports it. On the other hand I was positively surprised when it just worked on Android (for Google login).
Say I want a USB-A key: I can choose from three, and they all have different prices. Are the Security Keys less secure than a FIPS or a 5? Maybe I'm just too much of an end user, but the multiple choices are, to me, the exact opposite of the easy-to-use principle. The whole product page is just confusing.
You want the Yubikey FIPS if you're using it in a context where FIPS compliance matters, such as US government. If not (such as for personal use), then don't bother.
The Security Key series is the budget minimal "works as a FIDO authenticator". It lacks the bells and whistles that come with the 5 series. AFAIK it is not "less secure," it just lacks bells and whistles.
The 5 series works as a FIDO authenticator, but it also includes PIV capabilities (smartcards, PGP, etc.), which allow you to do stuff like this: https://developers.yubico.com/PGP/SSH_authentication/.* The PIV capabilities matter more in the context of enterprise/government applications where smartcards have been in common use as a 2nd factor for many years, which makes having an all-in-one smartcard+FIDO device very convenient.
* Caveat: I recommend against using your Yubikey for SSH private/public key authentication unless your SSH setup also requires a password. Single physical factor is worse than having a password protected key pair in your ~/.ssh folder.
The Yubikey requires a password by default to use the ssh key stored on it and it will lock itself after 3 failed attempts. So I don’t think your caveat is valid.
I'd rather have my encryption key on hardware designed to keep anyone who finds it from brute-forcing it than just password-protected on a hard drive.
It’s not entirely obvious from the description, but the main difference is (I believe) that the cheap keys can only do FIDO 2-factor authentication whereas the expensive ones have secure storage so you do things like putting your pgp private keys on them.
If all you want is true 2-factor authentication for Gmail / Facebook / Github etc then the cheap keys are exactly what you need.
I was an early, enthusiastic adopter of Yubikeys at my work. Above and beyond the other issues people have mentioned, though, the one that kills the product for me is the frankly stupid OS integration. The key behaves like “just” a special kind of keyboard which types a long string of gibberish and then hits enter any time you touch the trigger.
I can’t tell you how many times I have accidentally bumped the thing and thereby entered my secret key in:
- text editors
- the URL bar of my browser (!)
- Slack chats (!!)
The solution seems obvious to me: make a new type of input field at the browser and OS level which accepts U2F input, then reject that input in any text field that doesn’t opt in.
This one issue has made the key way more of a liability than a simple authenticator app for me.
You can disable this OTP mode of the YubiKey with either the "Yubikey Manager" or the more advanced "Yubikey Personalization Tool" software. It's unrelated to FIDO U2F. It's the first thing I do with any new key as I find it similarly annoying. I accidentally tapped it once while working in Adobe Lightroom and it triggered a sequence of actions that went on for 30 seconds that I couldn't undo.
We had a few people asking for a double USB-A+C key when we launched Solo, but I personally don't feel particularly excited about a dual-plug key.
This said, if anyone here would like to give it a shot... our hardware is open source so relatively easy to add both plugs, firmware should just work, and I'm happy to support with manufacturing. I would highly recommend to test the market first (landing page, or even crowdfunding campaign).
I use USB-C devices almost exclusively and opted the other way around, carrying a USB-A variant with a C-A adapter. A few reasons:
* The A variant yubikey is crush resistant and mechanically simple for a longer life.
* The adapter acts as a form of connector saver for my yubikey, since I usually keep it physically attached to the yubikey itself. With this approach both wear-prone sides (C male and A female) are on the adapter, which costs as little as 1/50th the price of the yubikey.
I was looking into something like this, and while it probably works fine, these aren't allowed under the USB spec, so you're hoping whoever designed it did it right and it's not going to misbehave with some devices.
I'd strongly advise anyone considering putting one of these on their keychain to reconsider. The actual connector of my USB-C version has warped in my pocket over time and it's now not recognised.
The USB-A version is great. I've been using it for more than a year now and it has taken all kinds of abuse.
I'm considering keeping it and using a small USB-C -> USB-A dongle. I have to live a dongle life anyway (because of the stupid Apple decision to go all-in on USB-C, with users as hostages), so it doesn't matter that much.
This was my main concern when I bought my USB-C version as well. But what actually happened was the plastic casing crumbled (!?), eventually enough of the casing came off that the USB-C connector was only held on by some solder pads, which eventually failed.
For a pretty critical device, the USB-C version is rubbish.
There is, but the number of USB classes supported by iOS is limited.
I think implementing new ones is not possible for applications without licensing MFi (which is incurring fees per unit sold, as far as I understand).
The way they are doing it, Yubico doesn't need to pay/pass through that fee on each key sold, but only on the ones actually likely to be used with iOS devices.
While limited today, there are a bunch of applications in beta testing that leverage the key. I personally have a small keychain usb-c to A adapter that I use with my Yubikey Neo. I guess all in they decided a Lightning and USB A option wasn't as smart given most mobile devices are Lightning or USBC moving forward.
Are there any security cards that just use NFC (with a physical button, obviously)?
I think government-issued cards are good contenders for this. Perhaps it could even replicate certificate authority chain principles - certain cards could sign other cards and then be invalidated if compromised.
My local ID card is absolutely pathetic. I have no idea where to get a reader (although they are generic) and the worst part is the requirement to run a Java applet in your browser - something that has been dead for over 10 years...
> I think government-issued cards are good contenders for this.
For this, devices would have to support real NFC in the first place. iPhones/iPads don't allow app usage of NFC, and the flagship Samsung tablets don't ship with it at all.
Most modern Android phones, or at least the Samsung ones and probably Google ones I have used, allow for virtually full read/write NFC access as far as I can tell.
That would require NFC support in the first place. Just imagine: all flagship tablet models of Samsung do not ship with NFC. What kind of nonsense is that?!
What would the benefits of NFC cards be? To me it seems more insecure given that anybody walking closely by could theoretically communicate with the card.
Typically for most keys used over NFC there's still a PIN challenge or some physical interaction with the device. There are some exceptions to this for usability (e.g. building access keys stored in PIV cards or the PIV applet in Yubikeys), but in those cases too the keys are segregated by type to prevent misuse.
There's still one huge disadvantage with hardware-based FIDO U2F tokens: There's no good way to migrate from one to another. I've got three(!) Yubikeys of different generations on my keyring because I'm not sure whether I have enrolled the two newer ones to all the services I'm using.
The absolute dream is to have a single nonprofit OAuth identity provider against which people can prove their identities with FIDO U2F, then use the issued tokens to auth with services of their choosing. Building this network is incredibly hard, though (what website would accept an identity provider without any users, and what users would use an identity provider not accepted by any websites?) so the most popular implementations are hosted by Google or Facebook - but there you have all the obvious privacy issues.
This is something I'm trying to simplify with my product. The foundational security features of U2F (you can't interrogate the device to find out what other services are set up) make it basically impossible to migrate them to new devices, but all the other capabilities of the Yubikeys can be moved across.
This is really why U2F falls over in the Enterprise, at least from what I've seen. Customers want centralised management, but the U2F protocol just can't support that.
I thought Yubikey NEO supported NFC on iPhones. I remember reading there was some flaky support for a while, then a new SDK was released for iOS 11. Is there any advantage to using the pluggable Lightning Yubikey over the NEO? Perhaps better app support?
NFC support is spotty at the moment. The problem is that it's somewhat simpler to integrate FIDO/U2F via browser using the WebAuthn API than it is to implement a non-native FIDO SDK into your native app in a secure fashion. Once mobile browsers and iOS/Android web UI widgets get support for BLE and NFC WebAuthn, things should improve somewhat.
FIDO (WebAuthn) via browser abstracts some nuances from you, such as "how do you know you can trust the server you're talking to?" and "how do you deal with the difference between FIDO and FIDO2 authenticators?", alongside other details that the native SDKs don't abstract away from you.
(For background: part of my day job is implementing authentication systems.)
The NEO is a Yubikey v3. It supports NFC. v3 is the last FOSS one, but it does not support FIDO2. If you want a YubiKey with NFC which supports FIDO2, you need a YubiKey 5 (NFC version). Or a Solo with NFC (the Solo supports FIDO2 and is FOSS).
I happen to have one of the InCharge chargers as a keychain [1] and what is interesting is that it is 3 chargers in one: one end is always USB-A, the other is either USB-C or microUSB/Lightning. So they combine microUSB with Lightning on the same connector. I wonder why Yubico did not go with a similar design.
In my experience too with testing the NEO, they're just not as reliable to work with compared to the later models (both over NFC and USB). I believe the NEO was the first to bring in new features for the Yubikey, and the kinks weren't all ironed out yet.
If you have a NEO still, I'd recommend at least upgrading to a 4 if you can.
My NEO is my backup key. My YubiKey Nano 4 is my main key. I also got 2 Solos, but they only do FIDO2 (not SC) so they're not very useful for me. Though one will be, once I get my next phone (which will have NFC).
Does the 5C/Nano work with Android phones with USB-C?
I thought I was waiting for a 5C NFC, but maybe I've been overlooking the obvious. For some reason it never occurred to me that it might work plugged directly into my phone if only it had the right shape.
(I have a 5 Nano, which was great until I got a USB-C Macbook and broke several keychain adapters before giving in and buying one of those hubs that stick on one side converting the 2x USB-C to the same + USB-A, HDMI, and SD.)
This is what I do. The USB-C ones work on Android just fine but it's a little less intuitive; the YubiAuth app still complains if NFC is disabled and you might have to turn on OTG for the key to be recognised (which auto-turns off on some phones after 10 mins).
Just out of curiosity, is Apple migrating away from Lightning toward USB-C? $70 for one key is a bit steep especially if Apple eventually shelves Lightning on iPhones.
Regardless of what Apple plans to do (which really isn't an answerable question), I'd think what really matters is, what port does your phone have? If the choice is between a new $1,000 phone and this security key, $70 isn't that much.
If you're already planning to buy a new iPhone in September, you should probably wait to buy any accessories. (And in truth, my gut is that Apple will stick with lightning on iPhones for some time.)
It would be deeply frustrating if Yubico were to spend years coming up with a 2FA product that works with iDevices, and then a few months later Apple were to throw out the interface that product depends on and thus instantly make it completely obsolete.
(One would hope that Yubico and Apple have been in touch with each other at least the minimal amount that would be required to avoid such a fiasco. But given Apple's penchant for secrecy, who knows?)
I actually had the chance to try out a prototype at Blackhat. I was able to have the USB-C portion recognized on my Android phone (via the USB port), and it was recognized in Firefox and lsusb (though for some reason I was unable to register it, and I had u2f enabled in about:config).
I am tempted to buy it to see if I could get it to work on my phone, I would much rather have the USB-C work on my phone than an NFC.
Any usb-c key should work on Android. Firefox has been recently updated to support webauthn, so you no longer have to turn u2f on. Note that some sites, like Google, only allow you to register the key on Chrome, but then you can use it on Firefox too.
Partly; it doesn't support the new iPad Pro, and only works in the Brave browser.
> At launch, it’ll support these well-known password managers and single sign-on tools: 1Password, Bitwarden, Dashlane, Idaptive, LastPass, and Okta. And when using the Brave browser for iOS, the YubiKey 5Ci can be used as an easier way to log into Twitter, GitHub, 1Password’s web app, and a couple other services.
> Security keys offer almost unbeatable security and can protect against a variety of threats, including nation-state attackers.
Alright, I'm not a security expert, but I'm not completely illiterate to basic computer security. Anyone care to chime in how this is much more secure than a two-factor app?
Sure there's the obvious, nobody can just copy the two-factor app off my phone with all the codes and have the same codes (I've upgraded phones and taken my codes with me before...) but who's to stop someone from cloning a Yubico key?
Again I'm not an expert, but I do want to know if this is purely marketing hype or if there's some security to Yubikey and friends that I'm not aware of.
Also pardon me if I confused two-factor as the Google Authenticator app. Too many "factor" terms get a bit confusing after a while.
> who's to stop someone from cloning a Yubico key?
That is precisely what these devices are designed to stop. The device has a private key stored in hardware in a way that it cannot be retrieved by software. When you use one of these devices you dramatically decrease your number of attack vectors because now the attack has to happen physically. Someone has to actually steal your physical key. And because this is your "second factor", if that happens they still also have to have your password.
Two-factor apps get closer to this, but they usually can be copied. For instance, 1Password can be set up to mimic Google Authenticator.
> I do want to know if this is purely marketing hype
This is most definitely NOT marketing hype. It is the current security best practice.
I didn't know about 1Pass, I've been a paying member for a while. I'll try that out! But to put a finer point on your actual statement, I usually screenshot the MFA setup in case my phone gets lost, so I can easily re-set it up.
For those who aren't aware, the author of the post works for them. Maybe a founder; it's a little hard to tell. I've run across him doing exactly this in the past. Bad form, and (having looked at it in the past) the product he's flogging is amateurish.
Not quite - if someone steals your 1Password (or equivalent) database with TOTP seeds in, they need to crack the password on that, then have full access to everything. If they get the password from the other end (e.g. the site you log into), they can probably log into that specific site (they will have the TOTP seed), but not anything else. In general, there are more attackers looking at the site end than at the client password DB end.
Probably wouldn't be a bad idea to have a distinct password vault for TOTP seeds, although they'll be stored on whatever device you generate codes from anyway (hopefully in a secure way). At the very least, it might be helpful for moving between TOTP devices without needing to do the random steps required by different services!
Since, once unlocked, 1Password as of version 7 stores everything decrypted in memory, ANY attack on a host with an unlocked 1Password keychain which can exfiltrate memory across processes can steal everything.
It's pretty obvious why this changed, however, it is a major increase in exposure and a terrible change overall. It is discussed in [1] and mostly the answers aren't very satisfying as demonstrated in [2].
Lockheed had a great deal of their MFA keys compromised because the factory that manufactured them had been breached for a while and nobody had noticed. Supply chain attacks are performed constantly against large, known entities, and this case shows why they are so pedantic about security and justified in their paranoia. The problems have been execution of the policy and the costs of compliance.
Ah this sounds familiar... Probably saw the article here on HN ages back and forgot. I mostly ask cause I don't have one of these keys but if I were to consider getting one I'd want to know what a good option would be.
You should look for a vendor which understands that knowing the secret key inside the Security Key (that's how all the vaguely cheap ones work, they have a random secret AES key inside them, that's enough to do everything else securely) is a terrible idea and so they should arrange for the key to be chosen randomly and never recorded at all.
With SecurID and similar technologies vendors technically didn't need to retain the secrets inside those devices after they'd been manufactured and shipped, but you can see the practical temptation.
On a smaller scale, since the system doesn't use a shared secret, you could just swap a brand new Security Key with somebody else, or, if deploying to an organisation, just shuffle them and let people pick whichever one they want. You don't care which key you have; the more random the better.
> have they had external security firms try to steal the private key?
Definitely, it's an expensive vulnerability to simply reveal though. YubiKeys are proprietary so they can't be audited by non-contracted third parties. There are FOSS security keys that don't have that problem though.
The two factor app (say, TOTP) involves a _secret_ used to generate a changing value and so bad guys can do either of two things:
1. They can steal the secret, from you or from the other party (the service you're authenticating to)
2. They can relay your answer, without knowing the secret and since the answer is genuine their relayed answer will be accepted. Example you type the One Time Code into https://fakebank.example/ thinking it is your bank, then the bad guys simply paste that same code into https://realbank.example the actual site of your bank and it works - they get into your account.
FIDO Security Keys don't use a shared secret, instead they use public key crypto. Your Security Key proves it is still the same key as it was when you registered, but the remote site doesn't have any secrets so they could even _publish_ the data they store to authenticate you and that wouldn't matter to the system's security.
And it relies on the web browser to tie the FQDN into the data signed. If you press the button on the Security Key on https://fakebank.example/ the results are only proof for fakebank, operated by the bad guys, if they forward it to https://realbank.example/ it won't work because the name is wrong and it doesn't match.
Does that help? There are a bunch of other cool things about Security Keys, but those are the main ways it's simply better for security than a conventional TOTP-style second factor.
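If it helps to see the relying-party side, here's a hedged sketch of the two checks that make a relayed response worthless. The layout follows WebAuthn (clientDataJSON carries the origin the browser actually talked to, and authenticatorData begins with the SHA-256 of the RP ID), but the helper name and constants are illustrative, and signature verification and challenge bookkeeping are omitted:

    # Sketch of the WebAuthn origin/RP ID binding check on the server side.
    # A response captured on https://fakebank.example/ carries fakebank's
    # origin and rpIdHash, so forwarding it to the real bank fails here
    # before any signature is even checked.
    import hashlib, json

    EXPECTED_ORIGIN = "https://realbank.example"
    EXPECTED_RP_ID = "realbank.example"

    def bound_to_us(client_data_json: bytes, authenticator_data: bytes) -> bool:
        client_data = json.loads(client_data_json)
        if client_data.get("origin") != EXPECTED_ORIGIN:
            return False                      # browser signed for some other site
        rp_id_hash = authenticator_data[:32]  # first 32 bytes of authenticatorData
        return rp_id_hash == hashlib.sha256(EXPECTED_RP_ID.encode()).digest()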
Simple: security keys combat phishing, two-factor apps do not. A security key doesn't just authenticate you to a site: it also authenticates the site to you.
I think I get what you are trying to express, but rather than "it authenticates the site to you", I think that should be phrased "it prevents you from authenticating at a site pretending to be some other site".
evil.com can still claim to be mysite.com and trick users into using their security key there, no?
It would presumably have to trick the user into "registering again" since there is no valid key handle for FIDO2 (not sure about classic U2F), but I doubt that users would notice the difference in flows.
The real win is that this spoof would still not give evil.com a credential scoped to mysite.com, but a user would be none the wiser that they are on a different site than they expect (which can still be problematic depending on the nature of the site).
Suppose the key allows mysite.com to let you see your emails, send more emails, send money to people, etc. Basically mysite.com lets you see and do interesting things after you've authed.
Even if evil.com gets you to register and present your key, they cannot forward it to mysite.com. Even if they go to all the trouble to completely mitm you and the site looks identical to mysite.com, they cannot get the emails or get it to send money.
This is because evil.com cannot pretend to be you when they interact with mysite.com no matter what you've given them.
It's going to be hard to trick me into thinking I've logged into my gmail if none of my emails are there!
Unless they somehow convince you to put your yubikey in the mail and physically send it to them...
True, this would defend against many practical attacks and is a huge security win by itself.
I'd just be careful about overly relying on this property or calling it anything like mutual authentication:
If an attacker can make an educated guess about a user's account contents, they could still convince them to provide additional personal information once they let their guard down after authenticating.
Security keys are two factor as well. In fact there are multiple different types of 2FA with different security levels. SMS 2FA is one of the weakest. Push Login is moderate. Scanning encrypted barcodes is highest and also phishing proof like FIDO keys.
> Scanning encrypted barcodes is highest and also phishing proof like FIDO keys.
I have never seen a solution like this, but I can't see how it would avoid simple relay phish, it seems as though bad guys can just get themselves a legitimate encrypted barcode and have their victim scan it.
Maybe you haven't explained this solution properly, do you have a link?
Speaking as someone who writes code for authentication systems, it's not marketing hype. These physical U2F authenticators are more secure by design than the number based authenticator apps.
Since the Yubikey (and friends) operates by generating a private key on the device itself, inaccessible to software, it means that the key is hard to clone. They'd have to steal it off your person. Combined with generating a unique key pair for every website and checking that the website is what it claims to be, this means that the key is resistant to replay attacks, which number based authenticator apps are vulnerable to.
That isn't to say that it's immune to phishing, but it's much, much harder to phish someone who is using a Yubikey because an attacker would need to compromise someone's DNS configuration or SSL configuration to impersonate the website, at which point... why are they bothering with phishing?
These are basically tiny hardware security modules. The premise of an HSM is that you have a hardened processor that contains the secrets and performs asymmetric crypto operations on request. Ideally, the device is designed such that the secrets never leave a crypto boundary. The big expensive ones will offer facilities to transfer keys, but only over an authenticated, encrypted connection to another device certified by the manufacturer[1].
Yubico has some FIPS certified devices[2], which means that they've presented a design that shows the device has mechanisms to prevent secrets from being extracted, and they're only using algorithms known by NIST not to leak secrets.
> Also pardon me if I confused two-factor as the Google Authenticator app.
Multi-factor authentication is about managing risk, and discussions about risk are naturally fuzzy and vague.
I'll try a concrete analogy; consider firearms safety.
Some typical rules[3]: 1. keep the weapon pointed down range at all times, 2. keep your finger out of the trigger well, 3. treat the weapon as loaded at all times.
Each rule is a factor, and to accidentally hurt someone you have to violate all the rules at once.
Multiple factors work best if they are orthogonal, that is, when a given action results in only breaching a single factor. That's why factors tend to be phrased as "something you know," "something you are," "something you have".
The authenticator app and a Yubikey are doing the exact same thing: they're establishing the "something you have" factor.
The two factors work because an attacker must both obtain the device and get your password; if your phone holds both your passwords and your authenticator apps, the additional factor isn't minimizing that risk.
[1]: The automatic vendor lock-in makes it a great business model...
It isn't. SW can be updated and is usually open-source and mostly secure.
HW not so.
At least two HW backdoors have been found in chips such as this one in earlier versions, the first using an improper RSALib RSA variant (developed by NSA or Mossad). The source for the second, SCADA, is unknown.
So any nation state wanting the most people to use it, will advertise it as such. "unbeatable security and can protect against a variety of threats, including nation-state attackers."
Obviously they are biased – they are in the business of selling hardware, not software, but their reasoning still makes sense to me.
It's arguable whether "destructive updates" that invalidate all existing registrations should be allowed, but that could have its own problems in terms of availability if not clearly communicated.
On a related note, I wonder if there is actually a more secure way of preventing illegitimate firmware updates in JavaCard/GlobalPlatform than randomizing the card manager keys, which is what Yubico does for the NEO: https://www.yubico.com/2016/05/secure-hardware-vs-open-sourc...
It might not be impossible, but it's surely non-trivial. Add to that that it has to be done in an undetectable way, and the key owner needs to be parted from the key for the time it takes to make the clone.
You'd probably destroy the key's shell in the process of getting to the microchip (you can't get to the private key via the USB connector by design, so decapping is probably the only way to do it), so you would have to have a clone ready that looks exactly like the key you've just disfigured to get to the secret key.
Moreover, if a physically undetected clone is managed, it will be detected soon thanks to the spec: WebAuthn includes monitoring an always-increasing counter for each key/site pair, so one of the clones will start to fail.
So really there's no point in cloning. Straight up theft is the bigger concern.
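Roughly, the relying-party side of that counter check looks like this (storage and error handling are illustrative; authenticators that don't implement a counter report zero):

    # Sketch of the WebAuthn signature-counter check: the site remembers the
    # last counter value seen per credential, and a value that fails to
    # increase suggests a second device is signing with the same key.
    last_seen: dict[bytes, int] = {}   # credential_id -> last counter value

    def check_counter(credential_id: bytes, reported: int) -> None:
        previous = last_seen.get(credential_id, 0)
        if reported != 0 and reported <= previous:
            # Likely a cloned authenticator (or a badly broken one): flag it.
            raise RuntimeError("non-increasing signature counter; possible clone")
        last_seen[credential_id] = reported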
The Yubikey website is vague, but it seems like the lightning end only works with a few apps (1Password, Brave, etc). What do I do if I want to sign in to anything else that needs 2FA? Do I still need a TOTP app?
From what I've heard from Yubico the next version of iOS is going to make it far easier to communicate with the device. Integrations will probably still need to be added by the app developers though to take full advantage.
It's hard to say until we see it in action. I'm optimistic, but with Apple's track record of hobbling integrations in annoying ways I'm still a little hesitant.
I really do want them to work fully though so I can extend my product to mobile too. I know a lot of people now that have gotten rid of their computers and just use their phones for everything.
This is awesome, been waiting for this for quite a while. The Google Titan bluetooth support never worked on my phone so it’s cool that I can plug it in directly now with this YubiKey.
On an iPhone, will this work when you try to login to your google account in system prefs? It's been my experience with Bluetooth Fido keys that this does not work.
I've had a YubiKey5 which uses the standard USB for a while and never had issues. They’re also cheap enough to replace easily but I doubt it'll break by it being in your pocket with other keys unless you severely bend it or something.