If I knew I could use my keys at more services, I would already have upgraded my daily driver to an NFC variant (so I could use it with my phone).
So, to any webdevs on HN reading this: Take that shit seriously and implement 2FA ;-)
//edit: GitHub is just a single example; and it's possible to opt out of SMS if an authenticator app is used instead. That's what I mean by second-class citizen: Security Key(s) + X is possible, but while X alone can be configured, Security Key(s) alone is(/are) not allowed.
The #1 weakness is the simple fact that many services don't allow you to disable alternate forms of 2fa.
GitHub is an example: you can always trigger the fallback SMS 2FA code.
Dashlane is another example (and arguably the most important). It's impossible to make your security key the only form of 2-FA. If you have a security key on your account, you must also have regular app-based 2-FA enabled as a fall back.
What's the point of using security keys with services that require regular SMS or app-based 2-FA as a fallback?
Edit: G Suite is one of the few services that got it right. G Suite has an optional security control that, when enabled, forces authentication using security keys (explicitly forbidding alternate 2FA methods).
>> Why do I have to add a recovery phone number to set up two-step verification?
> Keeping your account safe from attackers is very important. But so too is making sure you don't get locked out of your own account. We are aware that SMS is not the most secure of methods for 2FA, and has been deprecated by NIST. However, for the majority of users, the risk of losing their two-step verification device is far greater than the risk of someone hacking their SMS. If you lose your phone, the TOTP key is lost but normally you can get a new SIM card with the same number from your carrier. We therefore believe requiring a phone as a backup option strikes the best balance of confidentiality (no one else can read your data) and availability (you can read your data) for the majority of our users.
> Please note, if two-step verification is enabled, access to the phone number itself is not sufficient to gain access to an account: you still need two factors (your password AND the SMS).
> Advanced users that understand the risk may remove the phone number from their account once two-step verification is enabled. Once the recovery phone is removed from the account, SMS is no longer an option as the second factor for login. If you choose to do this, we strongly recommend you write down or print your recovery code and store it in a safe location, and that you set up at least two security keys or authenticator devices. Should you lose access to all two-step verification devices and not have your recovery code, you may be permanently locked out of your own account.
It addresses the problem with phishing. The main problem with Authenticator is not that an attacker knows the one-time password, but that an attacker tricks the user into entering the one-time password into their UI and uses it right away to take control of the user's account.
But if you never actually log in with TOTP and always use your U2F key, does it actually decrease security to keep TOTP as a backup option, given that you know recovery is the only reason you'd ever enter it?
It feels like its competition there is "convince phone support with a sob story" and TOTP feels like a clear step up from that.
SMS 2FA, on the other hand, has real security issues. In my experience, SMS 2FA is most commonly the required type of 2FA, before you can add TOTP/U2F as a secondary.
SMS 2FA is the hardest to lose or break, so forcing everyone to keep it enabled minimizes support costs for the provider.
SMS 2FA has been repeatedly proven to be trivial to break with a simple phone call to a mobile provider, since so far there is, as far as I am aware, no downside at all for the providers. If someone jacks your SMS and drains your bank account, at least in the US your mobile provider just goes "oops". Until there is some penalty for allowing your number to be ported without your consent, SMS is essentially useless for real 2FA security. Even if the account security were foolproof, though, it would still be vulnerable to SS7 routing attacks.
For U2F/WebAuthn the relying party doesn't end up knowing any secrets. So if, for example, I get a month-old database backup of Facebook, I don't learn how to log in with U2F as any of the users whose credentials I have, because Facebook can't do that either; only the legitimate users can.
For TOTP that stolen data gets me in, because I can synthesize correct TOTP codes for any user whose secret key I have stolen.
But you are correct that social engineering attacks will be far more common. The particular concern with SMS is that the attack can target your mobile phone company, so that all the security at say Facebook or Twitter doesn't help you because your security was blown up at T-mobile or AT&T.
Actually, what would happen if [large_company] had their TOTP secrets revealed? Would they be forced to invalidate everyone's TOTP? They can't just disable it; they would have to somehow authenticate you a third way....
RSA got hacked for their SecurID information, so that the attackers could then turn around and get into Lockheed Martin:
Among other things, LM makes the F-35 fighter:
A good implementation chooses the secret randomly. But both sides need to know what it is. In the example above Anne needs to know both her secrets (they'd be inside her Google Authenticator) and Facebook needs to know both their secrets (probably in a SQL database).
Stealing Facebook backups would get me the (Anne,Facebook) and (Barry,Facebook) secrets, but not the Google ones.
I'm guessing it depends on the implementation: are the server secrets normally derived per user, or shared across users (talking about just a single service such as Facebook)?
No. TOTP is a shared-secret system; there is only one key.
> I'm guessing it depends on the implementation, are the server secrets normally derived per user or shared across users
Again no. The only reasonable thing to do is pick secrets entirely at random for each user. If the same secret is used then users can trivially impersonate each other, which makes no sense.
So it really doesn't do any harm to have TOTP enabled as a fallback if you never store the key anywhere.
They have a relatively tiny code space (generally 6 digits), so if the provider doesn’t protect against brute force, it’s quick and easy to slam the whole code space until you win.
They also need the secret in a reversible format, since the service provider has to use the secret to calculate valid codes. So the most common password brute-forcing pathway (theft of a database full of password hashes) is rendered moot: you don’t have to brute-force the hashes because they aren’t hashes, they’re just secrets. At best, the TOTP seeds are encrypted at rest, but since whatever server validates tokens has to decrypt them to do so, most implementations in the wild either don’t encrypt or keep the decryption key right next to the data.
Everything is fed into an HMAC after all. It's only the last step where truncation occurs.
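To make the two points above concrete, the HOTP/TOTP construction (RFC 4226/6238) fits in a few lines; note that the server needs the raw shared secret to run the HMAC, which is why it can't store a one-way hash of it. A minimal stdlib sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute a TOTP code (RFC 6238) from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    # The whole secret and counter go through HMAC-SHA1...
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # ...and only the last step truncates to a handful of digits
    # (dynamic truncation, RFC 4226 section 5.3).
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Run against the RFC 6238 test secret (ASCII "12345678901234567890" in base32), this reproduces the published test vectors, e.g. `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59)` yields `287082`.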
The probability of guessing the challenge _wrong_ once is (1 - (100 / 1000000)). Every 30 seconds you get another chance. The probability of guessing the challenge wrong N times in a row is (1 - (100/1000000)) ^ N. Around chance number 7000, the probability that you've guessed it wrong all N times goes lower than 50/50. ~7000 * 30s in hours is around 58 hours.
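Double-checking that arithmetic (assuming, as the 100/1,000,000 term implies, the attacker gets 100 guesses per 30-second window):

```python
import math

# Chance of missing a 6-digit code with 100 guesses in one 30-second window.
p_miss = 1 - 100 / 1_000_000

# Smallest N where the chance of having missed every one of N windows
# drops below 50%.
n = math.ceil(math.log(0.5) / math.log(p_miss))

hours = n * 30 / 3600
print(n, round(hours, 1))  # 6932 windows, i.e. roughly 58 hours
```

That matches the ~7000 chances / ~58 hours figure above, and shows why rate limiting (or locking the account after a few bad codes) is essential on the provider side.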
The Yubikey explicitly supports TOTP, and will happily store your secret on the key. You can then use Yubioath to pull codes from the Yubikey as needed.
I'm a huge fan of the ability to use either U2F or TOTP with the same hardware token.
You can setup OTP (no SMS) and either keep it as a backup, or destroy the key. I don't have SMS on Github.
Admittedly, it won't solve issues with all services. From the service's perspective, dealing with users who lost their security key (or don't have a backup one) is painful enough that they force you to have an alternative.
TOTP is secure, unlike SMS 2FA.
Also, to say that using Yubico OTP is “the second worst method” is potentially true, but in a relatively uninteresting way: adding Yubico-OTP-based MFA is so much better than just using a static password that the gap between 1st and 2nd place is huge. The comparative benefit between Yubico OTP and TOTP is tiny by comparison.
It’s also worth noting that Yubico has released the spec for validating their OTPs, so it’s possible to do so without a 3rd party: https://www.yubico.com/products/services-software/open-sourc...
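For a taste of what that validation spec involves: Yubico OTPs are written in "modhex", a keyboard-layout-independent hex alphabet, which a validator decodes before AES-decrypting the token (the AES step is omitted in this sketch):

```python
# Yubico "modhex" maps the 16 hex nibbles to characters that sit in the
# same place on nearly every keyboard layout.
MODHEX = "cbdefghijklnrtuv"
HEX = "0123456789abcdef"

def modhex_decode(s):
    """Translate modhex to ordinary lowercase hex."""
    return s.translate(str.maketrans(MODHEX, HEX))

def modhex_encode(s):
    """Translate lowercase hex to modhex."""
    return s.translate(str.maketrans(HEX, MODHEX))

# A 44-character OTP splits into a 12-char public ID and a 32-char
# encrypted payload; both halves are modhex.
```

For example, `modhex_encode("2a")` gives `"dl"` and decoding round-trips it back.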
In a corporate environment, it can be a bit different: if someone breaks their phone or Yubikey and needs MFA disabled, you can make them call you, tell you the last board game you played together or the last project you helped them with, and verify their identity that way. But that only works in companies of up to ~100 developers. Get larger than that and you can't do that anymore, and need to fall back to pre-set security questions.
Services like Google, Github, etc. don't have huge support staffs for their free projects, so it makes sense (for them) to have these fallbacks.
Honestly, having secure and unique passwords for everything (especially e-mail accounts that receive password resets) is probably more important than MFA for these big services.
The point of having two keys is that there is no need for either an authenticator app or SMS.
If you do U2F with 2 keys, TOTP, and throw away the TOTP secret after you activate it, you’re exposed to a slight additional risk because the TOTP secret is still stored by Github, but that’s radically better than a world where they mandate SMS.
I think they are striking a reasonable compromise. I am OK with using U2F and having TOTP authenticator as a backup plan.
I am definitely NOT OK with requiring SMS for authentication, though, which is unfortunately the direction all the corporate sheep seem to be taking (namely, banks).
This seems to be very uncommon amongst service providers.
One thing you can do to mitigate that problem is to create a Google Voice number. Those are harder to port, as long as the number was created on Google Voice.
It's more about sinking that second factor somewhere that can't be redirected and only you can theoretically access by logging into Google Voice. You don't have to forward the messages to your actual phone.
but this can be configured to not forward the text/voicemail.
I hope this myth goes away soon. It is trivial for any bad actor to force-port a number, even if it is reclaimed a day or two later. If I wanted to get fired, I could yank your GV number (which is, most of the time, really at Bandwidth.com) in a few minutes; the medium-sized voice provider I am at does several of these types of ports a week to help customers trying to leave recalcitrant providers.
1. It should just be plain unnecessary to apply such a hack.
2. There is still a single weak point for all services.
3. While this protects against the run-of-the-mill "attacker activates another SIM card for my mobile number and steals my crypto cash from $online_wallet" attack that comes up here every few months, a more sophisticated attacker can just route the SMS to himself via SS7 (I suppose the SIM variant is easier to pull off with fewer traces, and obviously requires much less technical expertise).
The main issue with Twilio numbers is that they can't receive from short codes. If the service you're using sends 2FA codes from a short code, they'll just disappear into the ether.
Of course, if you're just using this as a placeholder because the site requires SMS backup, then maybe it's ok that you can't receive on them.
Also, can anyone explain the multiple models to me? https://www.yubico.com/products/yubikey-hardware/
Say I want a USB-A key: I can choose from three, and they all have different prices. Are the Security Keys less secure than a FIPS or a 5? Maybe I'm just too much of an end user, but the multiple choices are, to me, the exact opposite of the easy-to-use principle. The whole product page is just confusing.
This might be more useful for you: https://www.yubico.com/products/yubikey-hardware/compare-pro...
You want the Yubikey FIPS if you're using it in a context where FIPS compliance matters, such as US government. If not (such as for personal use), then don't bother.
The Security Key series is the budget minimal "works as a FIDO authenticator". It lacks the bells and whistles that come with the 5 series. AFAIK it is not "less secure," it just lacks bells and whistles.
The 5 series works as a FIDO authenticator, but it also includes PIV capabilities (smartcards, PGP, etc.), which allow you to do stuff like this: https://developers.yubico.com/PGP/SSH_authentication/.* The PIV capabilities matter more in the context of enterprise/government applications where smartcards have been in common use as a 2nd factor for many years, which makes having an all-in-one smartcard+FIDO device very convenient.
* Caveat: I recommend against using your Yubikey for SSH private/public key authentication unless your SSH setup also requires a password. Single physical factor is worse than having a password protected key pair in your ~/.ssh folder.
If all you want is true 2-factor authentication for Gmail / Facebook / Github etc then the cheap keys are exactly what you need.
I can’t tell you how many times I have accidentally bumped the thing and thereby entered my secret key in:
- text editors
- the URL bar of my browser (!)
- Slack chats (!!)
The solution seems obvious to me: make a new type of input field at the browser and OS level which accepts U2F input, then reject that input in any text field that doesn’t opt in.
This one issue has made the key way more of a liability than a simple authenticator app for me.
The entire point of the default configuration is that the secret key is stored only on the device and never leaves it.
This said, if anyone here would like to give it a shot... our hardware is open source so relatively easy to add both plugs, firmware should just work, and I'm happy to support with manufacturing. I would highly recommend to test the market first (landing page, or even crowdfunding campaign).
Or just go 'full compat' and do NFC/USB-A/C. I could see something like that being handy for IT folks.
* The A variant yubikey is crush resistant and mechanically simple for a longer life.
* The adapter acts as a form of connector saver for my yubikey, since I usually keep it physically attached to the yubikey itself. With this approach both wear-prone sides (C male and A female) are on the adapter, which costs as little as 1/50th the price of the yubikey.
You might know what you are doing, but are you sure that's true for everybody who might find and use it in the future?
Please don't buy/produce these things.
I'm considering keeping it and using a small USB-C -> USB-A dongle. I have to live a dongle life anyway (because of the stupid Apple decision to go all-in on USB-C, with users as hostages), so it doesn't matter that much.
For a pretty critical device, the USB-C version is rubbish.
I have a YubiKey 5 NFC on the keychain and it has handled the abuse over the years just fine.
Seems the main questions to ask yourself are:
* is NFC desired?
* do you need/want USB-A or USB-C?
This product adds a Lightning option.
I think implementing new ones is not possible for applications without licensing MFi (which incurs fees per unit sold, as far as I understand).
The way they are doing it, Yubico doesn't need to pay/pass through that fee on each key sold, but only on the ones actually likely to be used with iOS devices.
I think they can actually reduce their offerings to two:
* USB-C and USB-A
* USB-C and Lightning
NFC on both variants.
I think government-issued cards are a good contender for this. Perhaps it could even replicate certificate-authority chain principles: certain cards could sign other cards and then be invalidated if compromised.
My local ID card is absolutely pathetic. I have no idea where to get a reader (although they are generic), and the worst part is the requirement to run a Java applet in your browser - something that has been dead for over 10 years...
For this, devices would have to support real NFC in the first place. iPhones/iPads don't allow app usage of NFC, and the flagship Samsung tablets don't ship with it at all.
I know they already let certain apps read poms' passports. Just a matter of setting a standard?
Sounds like, from the other comments, iOS 13 will resolve this.
This is really why U2F falls over in the Enterprise, at least from what I've seen. Customers want centralised management, but the U2F protocol just can't support that.
FIDO (WebAuthn) via browser abstracts some nuances from you such as "how do you know you can trust the server you're talking to?" and "how do you deal with the difference between FIDO and FIDO2 authenticators?" alongside other details that the native SDK's don't abstract away from you.
(For background: part of my day job is implementing authentication systems.)
I happen to have one of the InCharge chargers as a keychain, and what is interesting is that it is 3 chargers in one: one side is always USB-A, the other is either USB-C or microUSB/Lightning. So they combine microUSB with Lightning on the same connector. I wonder why Yubico did not go with a similar design.
If you have a NEO still, I'd recommend at least upgrading to a 4 if you can.
I thought I was waiting for a 5C NFC, but maybe I've been overlooking the obvious. For some reason it never occurred to me that it might work plugged directly into my phone if only it had the right shape.
(I have a 5 Nano, which was great until I got a USB-C Macbook and broke several keychain adapters before giving in and buying one of those hubs that stick on one side converting the 2x USB-C to the same + USB-A, HDMI, and SD.)
If you're already planning to buy a new iPhone in September, you should probably wait to buy any accessories. (And in truth, my gut is that Apple will stick with lightning on iPhones for some time.)
(One would hope that Yubico and Apple have been in touch with each other at least the minimal amount that would be required to avoid such a fiasco. But given Apple's penchant for secrecy, who knows?)
I am tempted to buy it to see if I could get it to work on my phone, I would much rather have the USB-C work on my phone than an NFC.
> At launch, it’ll support these well-known password managers and single sign-on tools: 1Password, Bitwarden, Dashlane, Idaptive, LastPass, and Okta. And when using the Brave browser for iOS, the YubiKey 5Ci can be used as an easier way to log into Twitter, GitHub, 1Password’s web app, and a couple other services.
Alright, I'm not a security expert, but I'm not completely illiterate when it comes to basic computer security. Anyone care to chime in on how this is much more secure than a two-factor app?
Sure, there's the obvious: nobody can just copy the two-factor app off my phone with all the codes and have the same codes (I've upgraded phones and taken my codes with me before...), but who's to stop someone from cloning a Yubico key?
Again I'm not an expert, but I do want to know if this is purely marketing hype or if there's some security to Yubikey and friends that I'm not aware of.
Also pardon me if I confused two-factor as the Google Authenticator app. Too many "factor" terms get a bit confusing after a while.
That is precisely what these devices are designed to stop. The device has a private key stored in hardware in a way that it cannot be retrieved by software. When you use one of these devices you dramatically decrease your number of attack vectors because now the attack has to happen physically. Someone has to actually steal your physical key. And because this is your "second factor", if that happens they still also have to have your password.
Two-factor apps get closer to this, but they usually can be copied. For instance, 1Password can be set up to mimic Google Authenticator.
> I do want to know if this is purely marketing hype
This is most definitely NOT marketing hype. It is the current security best practice.
I was storing backup codes in 1P before I realized that I was putting all my eggs in one proverbial basket.
Probably wouldn't be a bad idea to have a distinct password vault for TOTP seeds, although they'll be stored on whatever device you generate codes from anyway (hopefully in a secure way). At the very least, it might be helpful for moving between TOTP devices without needing to do the random steps required by different services!
It's pretty obvious why this changed, however, it is a major increase in exposure and a terrible change overall. It is discussed in  and mostly the answers aren't very satisfying as demonstrated in .
I guess it makes sense. By the time an adversary gets your key, you would have noticed and locked that key out of your account.
With SecurID and similar technologies vendors technically didn't need to retain the secrets inside those devices after they'd been manufactured and shipped, but you can see the practical temptation.
On a smaller scale: since the system doesn't use a shared secret, you could just swap a brand-new Security Key with somebody else, or if deploying to an organisation, just muddle them up and let people pick whichever one they want. You don't care which key you have; the more random, the better.
Definitely, it's an expensive vulnerability to simply reveal though. YubiKeys are proprietary so they can't be audited by non-contracted third parties. There are FOSS security keys that don't have that problem though.
1. They can steal the secret, from you or from the other party (the service you're authenticating to)
2. They can relay your answer, without knowing the secret and since the answer is genuine their relayed answer will be accepted. Example you type the One Time Code into https://fakebank.example/ thinking it is your bank, then the bad guys simply paste that same code into https://realbank.example the actual site of your bank and it works - they get into your account.
FIDO Security Keys don't use a shared secret, instead they use public key crypto. Your Security Key proves it is still the same key as it was when you registered, but the remote site doesn't have any secrets so they could even _publish_ the data they store to authenticate you and that wouldn't matter to the system's security.
And it relies on the web browser to tie the FQDN into the data signed. If you press the button on the Security Key on https://fakebank.example/ the results are only proof for fakebank, operated by the bad guys, if they forward it to https://realbank.example/ it won't work because the name is wrong and it doesn't match.
Does that help? There are a bunch of other cool things about Security Keys, but those are the main ways it's simply better for security than a conventional TOTP-style second factor.
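One way to see the origin binding concretely: the browser (not the page) puts the origin it actually connected to into the client data, and the authenticator's signature covers a hash of that data. A simplified stdlib sketch (real clientDataJSON has more fields and its own serialization rules; the example origins come from the comment above):

```python
import hashlib
import json

def client_data_hash(challenge, origin):
    # The browser fills in `origin` itself; the authenticator's signature
    # covers this hash, so the origin is baked into every assertion.
    client_data = json.dumps({
        "type": "webauthn.get",
        "challenge": challenge,
        "origin": origin,
    }, sort_keys=True).encode()
    return hashlib.sha256(client_data).digest()

# An assertion produced on the phishing page hashes differently from what
# the real site expects, so a relayed signature fails verification.
real = client_data_hash("abc123", "https://realbank.example")
phished = client_data_hash("abc123", "https://fakebank.example")
print(real != phished)  # True
```

In other words, even a perfect pixel-for-pixel phishing page gets a signature that only verifies for fakebank.example, which realbank.example rejects.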
For example, Google reported that deploying U2F blocked all phishing attempts against its employees: https://krebsonsecurity.com/2018/07/google-security-keys-neu....
evil.com can still claim to be mysite.com and trick users into using their security key there, no?
It would presumably have to trick the user into "registering again" since there is no valid key handle for FIDO2 (not sure about classic U2F), but I doubt that users would notice the difference in flows.
The real win is that this spoof would still not give evil.com a credential scoped to mysite.com, but a user would be none the wiser that they are on a different site than they expect (which can still be problematic depending on the nature of the site).
Even if evil.com gets you to register and present your key, they cannot forward it to mysite.com. Even if they go to all the trouble to completely mitm you and the site looks identical to mysite.com, they cannot get the emails or get it to send money.
This is because evil.com cannot pretend to be you when they interact with mysite.com no matter what you've given them.
It's going to be hard to trick me into thinking I've logged into my gmail if none of my emails are there!
Unless they somehow convince you to put your yubikey in the mail and physically send it to them...
I'd just be careful about overly relying on this property or calling it anything like mutual authentication:
If an attacker can make an educated guess about a user's account contents, they could still convince them to provide additional personal information once they let their guard down after authenticating.
I have never seen a solution like this, but I can't see how it would avoid simple relay phish, it seems as though bad guys can just get themselves a legitimate encrypted barcode and have their victim scan it.
Maybe you haven't explained this solution properly, do you have a link?
Since the Yubikey (and friends) operates by generating a private key on the device itself, inaccessible to software, the key is hard to clone. They'd have to steal it off your person. Combined with generating a unique key pair for every website and checking that the website is what it claims to be, this means that the key is resistant to replay attacks, which number-based authenticator apps are vulnerable to.
That isn't to say that it's immune to phishing, but it's much, much harder to phish someone who is using a Yubikey because an attacker would need to compromise someone's DNS configuration or SSL configuration to impersonate the website, at which point... why are they bothering with phishing?
Yubico has some FIPS certified devices, which means that they've presented a design that shows the device has mechanisms to prevent secrets from being extracted, and they're only using algorithms known by NIST not to leak secrets.
> Also pardon me if I confused two-factor as the Google Authenticator app.
Multi-factor authentication is about managing risk, and discussions about risk are naturally fuzzy and vague.
I'll try a concrete analogy; consider firearms safety.
Some typical rules: 1. keep the weapon pointed down range at all times, 2. keep your finger out of the trigger well, 3. treat the weapon as loaded at all times.
Each rule is a factor, and to accidentally hurt someone you have to violate all the rules at once.
Multiple factors work best if they are orthogonal, that is, when a given action results in only breaching a single factor. That's why factors tend to be phrased as "something you know," "something you are," "something you have".
The authenticator app and a Yubikey are doing the exact same thing: they're establishing the "something you have" factor.
The two factors work because an attacker must both obtain the device and get your password; if your phone holds both your passwords and your authenticator apps, that additional factor isn't minimizing the risk.
: The automatic vendor lock-in makes it a great business model...
: There are many more, but take a class on it rather than depend on the Internet.
At least two HW backdoors have been found in chips such as this one in earlier versions, the first using an improper RSALib RSA variant (developed by NSA or Mossad). The source for the second, SCADA, is unknown.
So any nation state wanting the most people to use it, will advertise it as such. "unbeatable security and can protect against a variety of threats, including nation-state attackers."
This happened with the Infineon backdoor, the SCADA backdoor, the earlier Crypto AG backdoors.
Obviously they are biased – they are in the business of selling hardware, not software, but their reasoning still makes sense to me.
It's arguable whether "destructive updates" that invalidate all existing registrations should be allowed, but that could have its own problems in terms of availability if not clearly communicated.
On a related note, I wonder if there is actually a more secure way of preventing illegitimate firmware updates in JavaCard/GlobalPlatform than randomizing the card manager keys, which is what Yubico does for the NEO: https://www.yubico.com/2016/05/secure-hardware-vs-open-sourc...
This should be impossible.
So really there's no point in cloning. Straight up theft is the bigger concern.
I really do want them to work fully though so I can extend my product to mobile too. I know a lot of people now that have gotten rid of their computers and just use their phones for everything.
To be clear, I'm not knocking the use of these keys. I have Yubi Nano on my laptop and love it.
The USB-C versions look much more fragile though...