I love the principle, but I can't use it with AWS, I can't use it with my bank, I can't use it with my domain registrar, and I can't use it with Office 365.
That's 0/4 of the high priority targets for me.
Edit: If anyone from AWS is around, please consider this. Your organisation has made huge headway in the field of security, but the AWS console logon is a very high profile target.
Similarly I remember the first organisation that really tried to get me as a user to care about account security with a hardware token was .. World of Warcraft. Followed a bit later by the use of TOTP in Steam. Banks meanwhile are either stuck in the past or have tried bad proprietary solutions.
Domain registrars are an odd one, because so rarely does one need to log in or do anything - but when you do it may be urgent.
> Banks meanwhile are either stuck in the past or have tried bad proprietary solutions.
Or they have a standardized API for accessing all accounts, and use your existing debit card’s security module to sign and authorize all messages you send via this API.
That’s FinTS, and every German bank supports it. Standardized, universal, open, and secure.
Oh, and KMyMoney and GNUCash also have plugins to directly access it, and there’s even a taskbar widget for KDE to always show your account balance.
Banks in the UK have been handing out hardware fobs. Not USB devices you put in, but a calculator like interface where you input the challenge they give you and you type the response back.
I guess it works since it doesn't need anything else special and can easily work with web flows.
Banks in the US also hand out hardware devices, especially for business accounts.
I have RSA token device from Wells Fargo for example.
The problem with these is that I need a special device for every single account. There are two business accounts I have access to and I have to deal with two hardware devices.
The ones in the UK are a standard - every account I have uses the same ones and they are interchangeable. Everyone I know had one - they are ubiquitous as the chip and pin standard is here.
I have 2 bank accounts in the UK, and neither of them provided a hardware fob.
It's frankly a bit disappointing, since storing secrets in their mobile app means that at the end of the day, when you do banking on your phone, you're really using only one-factor authentication.
That's interesting - I have accounts with three of the major banks and all of them have given me one (though they use them to different degrees: some only for signing transfers, some for everything including logging in).
I work for Duo Security. We have an SSO product that allows you to use U2F Yubikey tokens with AWS, Office 365, and anything else that supports SAML. Other companies offer similar products.
The highest profile target is your personal email account, because that's generally the account that can reset the others. Does your email have support for U2F or at least an authenticator app?
Edit: I've always felt strongly about having things like the AWS logon connected to a dedicated and otherwise unused mailbox, it helps minimise the risk discussed here.
> Edit: I've always felt strongly about having things like the AWS logon connected to a dedicated and otherwise unused mailbox, it helps minimise the risk discussed here.
Do you check this mailbox frequently? If not, won't you miss important communication?
I have my email protected by an SSH key plus TOTP (using the cool Google Authenticator PAM module), but I just realized I haven't enabled 2FA on the VPS control panel, so everything is protected only by a single password. sigh
When you are hosting your email / whatever on a VPS... access to your system is possible via a whole bunch of "not 2FA enabled" routes.
I would treat that as the least trusted storage available.
Yeah, maybe. My "threat model" is mostly drive-by automated attacks (like malware stealing my not-very-protected SSH keys) or overly broad warrants, not people specifically targeting me.
The website (AWS) doesn't use U2F in that scenario, so you don't get the phishing-resistance U2F brings to the table. It's just regular TOTP, with the secret stored on your Yubikey instead of your phone (or wherever else you'd store it normally).
As an addendum to this, I'll relate my own experiences.
The YubiKey NEO-n (and I'll assume the more recent 4 Nano) are phenomenal if you can (semi-)permanently spare a USB-A port. The ability to generate TOTPs and FIDO tokens without having to dig out your keys is an amazing convenience.
Unfortunately I can't say the same for their 4C, which I'm using with the new USB-C-only MacBook Pro. The plastic it's made from begins cracking apart within a month of use, and completely disintegrates within three months, rendering the device inoperable. To their credit, Yubico has replaced my device twice so far, but how this made it past their quality control I have no idea.
I deeply hope they fix the plastic durability on this device, as well as offer a nano version with USB-C support that I can leave in my laptop permanently.
I've got several (~7?) Yubikeys: one that Mt. Gox sent me several years ago (that would only work with their web site!), an old "Symantec VIP" key, a NEO that's permanently plugged into the keyboard on my workstation, a couple of Nanos (almost always plugged into a pair of laptops), and a couple of the U2F-only ones.
I don't think I've ever had any issues with any of them (there was a "bug" a few years back and they had to "re-issue" some keys) and I really don't have any big complaints other than they're a little expensive (but I keep buying them so apparently not too expensive), just a few minor things.
I've never actually used the U2F ones (browser usage is ~90% Chromium, ~10% Firefox, on Linux exclusively) but maybe I'll get to someday. The NEO and Nanos get used dozens of times a day as my SSH keys (GPG subkeys) are on them. They're also used for unlocking LUKS containers at boot (challenge/response, with a "passphrase"). I do wish it was easier to load (SSL/TLS) certificates on them -- and I wish they held more of them! -- but I have a bunch of physical ("real") smart cards so I just use those instead.
Ideally, I'd be able to put certificates on them and use them to authenticate (Open)VPN clients on both Linux and Windows. That might even be possible today but, if it is, it's likely way more complicated than it should be.
Oh, also I used the VIP and NEO at one point with LastPass and they worked great (but I switched from LastPass when LogMeIn bought them).
> I do wish it was easier to load (SSL/TLS) certificates on them
What do you mean by "easier"? Yubico PIV manager is super simple to use. Of course you have to generate the certificate with OpenSSL yourself but that's part of the process. And the PIV certificates are automatically seen by OS. I'm using them in browser, git can be switched to use system certificate store effectively having hardware protected access to repos. SSH can also use them either via OpenSC or Putty CAC (depending on your OS). I've done that and it works, it's not much harder than the OpenPGP setup.
The downside to doing this on my Nano is that the CCID mode causes the light on the Yubikey to blink constantly. The only way to shut it off is to disable CCID mode. Who thought this was a good idea?!?
> Oh, also I used the VIP and NEO at one point with LastPass and they worked great
Have you tried logging into LastPass on your mobile device? It doesn't require the token - effectively bypassing the point of requiring a hardware token in the first place.
That was my problem first too, but you can make it require the token on mobile too. I have it set up that way and I need to tap my Yubikey NEO to authenticate using NFC on Android. Works great.
I haven't, actually, but I'm not really that surprised (I moved off of LastPass two years or so ago, a few hours after they were bought). I think you can restrict access via TouchID (on the iPhone) but don't quote me on that.
FTA: "U2F Zero ... the only token on Amazon that has open source firmware (and hardware designs) ... you value the coolness factor of it being open-source"
Coolness factor? The openness of software and hardware is very important for a security device like this.
We should expect something so critical and simple to be easily auditable.
It's a myth that open source tools are more secure than closed source tools.
Use whatever tool is put out by the team with the most expertise and the least reason to do something untrustworthy (or alternatively: the most to lose). A backdoor can persist in open source tools just as easily as it can persist in proprietary ones, and teams which make open source tools are frequently smaller and underfunded for secure feature engineering when compared to teams which develop closed source tools.
> It's a myth that open source tools are more secure than closed source tools.
It's a myth that they are always more secure, but my experience has been that they are generally more secure. I'd love it if we had some statistics behind this.
I would argue that foss tools have an inherent interest in being more secure, while closed-source tools have an inherent interest in being less secure.
The "open source" development model is based on the claim that it produces superior software. Of course that claim will be made. But I agree that it is a fallacious claim. _However_, there are strong benefits of anyone being able to audit the code rather than just privileged entities (see below).
Security is also about trust. I can't trust non-free software as a matter of principle---the developers, by keeping something proprietary, are hiding something from their users. The intent is irrelevant---"trade secrets" or not, the fact is there.
Back to code audits: a common rebuttal is that you can audit certain proprietary software through the signing of nondisclosure agreements. The trust then shifts to the auditor, performing an extremely complicated job (depending on the software). Nobody can verify their work, or even the integrity of the work.
Certain types of changes can be "audited" by chance in Free systems: malicious commits/patches to free software is an extremely risky operation because of the chance that you may be found out. You can set yourself up for plausible deniability, but that can hurt your reputation, and the reputation of the project.
Since trust is an important aspect of security, non-free software is by default less secure _to me_. It isn't even an option.
> A backdoor can persist in open source tools just as easily as it can persist in proprietary ones
Just as easily? Citation needed. Especially when we are talking about a simple security device that can ship a tiny, well written, well documented codebase.
> teams which make open source tools are frequently smaller and underfunded for secure feature engineering when compared to teams which develop closed source tools
This is beside the point. The same code, developed by the same team, can be released, or simply made publicly auditable.
It is really a pity that FOSS is not yet a strict requirement for every security-relevant component. This is especially strange given that "Security by Obscurity doesn't work" and "Nobody can analyze code which they can't easily build and modify" are well established concepts by now.
Yes it does! This meme is both widespread and incorrect. Security by obscurity is a great layer for defense in depth. The only time it's not sane is when it's used as the only method. But that's also like saying whitelisted input or parameterized queries don't work - they don't work on their own, but all things being equal you can have very secure systems that function in an opaque way while still getting strong assurances through formal audits.
> Nobody can analyze code they can't build and modify.
Essentially no one analyzes open source code regardless, unless they have a financial incentive. Even if you are one of the vanishingly small number of people who inspect the code for every FOSS tool you use, you are, with probability approaching 1, incapable of identifying all potential backdoors or critical security failures.
You need to balance the Platonic ideal of open software for audits with the fiscal reality that secure software is highly expensive and typically emerges as the product of well funded teams with significant expertise. This describes very few open source projects.
These are currently Chrome only. The Firefox extension requires an external binary and doesn't work in Firefox 57 anyway.
I like the idea of a physical token, but it would need to be more universally compatible, which seems difficult unless you go the route of one of those RSA tags with an LCD display on it. But there's no open standard for those, except for TOTP, and if you wanted hardware TOTP you'd need a separate dongle for every site. Inconvenient. (I wonder if you could make a TOTP dongle that can store multiple keys?)
I do use TOTP, but I use 1Password, which means my keys are not confined to a single device. I wonder how much less secure this makes them, but it's probably still better than not using 2FA at all.
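For reference, TOTP (RFC 6238) is simple enough that a multi-key hardware device would mostly be a matter of secret storage plus a clock. A minimal sketch of the algorithm in Python, where the site name and enrolled secret in the table are hypothetical (the secret happens to be the RFC test-vector key):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the number of 30-second steps since the epoch."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A "multi-key dongle" is conceptually just a table of per-site secrets:
secrets = {"example.com": b"12345678901234567890"}  # hypothetical enrollment
print(totp(secrets["example.com"], 59))  # RFC 6238 test time T=59 -> 287082
```

Nothing in the protocol limits a device to one secret; the limitation in most hardware tokens is purely a storage/UI decision.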
TOTP is phishable. U2F isn't. It wouldn't be good for the world if TOTP became universal, as phishing would adapt.
However, the single TOTP hardware device for multiple sites that you postulate converges on the U2F design if you wrap the OTP in signatures so both sides know who they're talking to.
An upcoming Firefox release will get built-in U2F support, no need for an add-on.
It already supports a softtoken (i.e., implemented in software) behind a pref. Works well for me on e.g. GitHub. But that's unlikely ever to be enabled by default in a Firefox release.
Even with the current version (55), one would expect a lot of the groundwork to be in place. And sure enough, searching for "u2f" in about:config brings up a nice triplet of security.webauth.u2f* keys.
Somebody please make a Yubikey Nano like key for USB-C. Don't know what magic would be needed but need something that I can just carry with my laptop that doesn't stick out all the time. Or make it work over BLE.
It's mentioned in the last sentence of the article:
> We are working on an additional smaller YubiKey form factor with a USB-C design akin to the YubiKey 4 Nano, but do not yet have a time frame for availability.
Unfortunately they are basically unusable on Android. Only the Yubikey NEO has an NFC interface, and the others cannot be used even via the USB-C/OTG interface. The support would have to be implemented in Google Authenticator, but it seems they are not interested in it.
Teachable moment: Five paragraphs explaining what security keys are and why they're good, then it's off to Amazon to buy one with the search phrase "U2F security token". The term U2F appears nowhere in the previous five paragraphs.
I agree that the author should've at least explained what U2F stands for (at the least) but, FWIW, both Google and Facebook just refer to them generally as "security keys" (I presume because they {do|intend to} support other types of security keys?).
I think the decision to call them 'security keys' is to avoid jargon, and to give a helpful metaphor to how they work (you need a security key to unlock your account like you need a car key to unlock your car).
What I'd like to have is a TOTP hardware device that can store multiple keys.
Even though U2F is much more secure, it seems like the adoption is a bit slow, unlike TOTP which is broadly supported in my opinion.
However using TOTP with e.g. Google Authenticator makes it a pain when you lose/reset your phone, and it's harder to share, for example for administration of certain services at a company (say Heroku).
If I had a chance to buy a multi-key TOTP hardware device I could enable it on every service and then give the administrators one of the devices.
So yeah, ideally I'd prefer broader U2F support, but in the meantime I'd love a multi-key TOTP device, which I haven't been able to find unfortunately.
Closest thing I know about is the Protectimus Slim NFC [0], but I would need to buy two of them for each service, which gets expensive and unwieldy very fast.
The Yubikey NEO can store multiple TOTP keys. You have to use the Yubico Authenticator app to load or read them on an Android phone through NFC. There is also a desktop app for USB.
I was not aware of that and it does sound like a solution.
However I'd love to have a hardware device that I don't need an app to use. To load the keys and configure it I have no problem with, but to actually read the TOTP it would be great if I could just click a button and read a display instead of having to use extra infrastructure.
Kind of like my 2FA token for my bank, which just requires me to push a button and that's it.
Yeah, there are definitely workarounds, but I'd rather have a hardware device than a phone, in particular so I don't have to ask employees/administrators to load stuff on their phones.
I keep asking this on every key thread and don't get great answers - shouldn't we care which ones are actually audited by a 3rd party? Should we bother at all if they aren't? It appears that most aren't, yubikey and nitrokey are the two that are.
Of course we should. I'd also prefer that the code/firmware was open as well.
I would hope there are specific requirements in the spec/standard WRT protecting the keys and such but I haven't checked to see if that's the case.
Yubico is, according to their website, working towards FIPS 140-2 validation for at least a few of their devices if that has any value to you (no idea about any of the others).
That is incorrect. U2F relies on the browser to correctly report the domain you are signing into. If your browser is compromised, you can be phished. But it would have your cookies/tokens already, so probably no need to.
Then why not use a software token built into the browser or OS?
Having a hardware dongle is absolutely about preventing malware on the PC from making off with your key material. That's also why you usually have to press a button to complete the U2F flow.
You're not wrong, but for most sites, there is very little difference between extracting the U2F key and a session key, from the victim's point of view.
Exceptions to that rule include sites that do things like what GitHub calls "sudo mode", where you have to confirm certain security-sensitive actions with another U2F confirmation. This would require more effort on the attacker's side, as they'd have to trick the victim into performing a U2F confirmation. More effort, but far from impossible: simply display a fake login prompt for the victim, but instead of logging in, perform whatever malicious action you want to perform. Session keys might also be less persistent (they're limited to something like 30 days on Gmail, for example), so that's another small advantage if the attacker wants to keep their access over long periods of time.
Still, for the vast majority of sites and threat models, hardware keys aren't a whole lot better. If it's easier to get adoption for soft keys, that might be a worthwhile trade-off. Natively supported TPM/SEP-backed keys would probably hit the security/UX sweet-spot.
TOTP keeps authentication secure in the face of password reuse/compromise.
If you add a password manager to the mix, it keeps you safe in the event that only your password manager is compromised, unless you're storing your TOTP secrets in it.
Has anyone built a U2F + PIV device?
I need a device that does both U2F and digital signatures (as in standard PKI).
Only Nitrokey says they are working on something like that, and Yubico doesn't care to answer emails about this.
U2F/UAF/Webauthn are a really interesting chicken/egg problem.
Right now, not too many providers support these auth protocols, even though they are more secure than current 2FA alternatives and provide a better user experience. They aren't widely supported because that costs development time and many of their customers don't have security keys.
Customers won't buy security keys because it's an added cost that isn't supported by many websites.
The author of the post briefly mentions SoftU2F at the bottom of the post, but it's important to recognize how significant SoftU2F is to the U2F ecosystem.
We're just now starting to see consumer hardware come with HSMs built in. Apple SEP, Intel SGX, etc. are examples of this. SoftU2F _will_ be able to leverage these consumer HSMs to do secure crypto operations - see the pull request at [0] about storing keys in the SEP. This will effectively put U2F and UAF capability into the hands of your average consumer with no additional cost. Things can be just _built-in_, which is what it'll take to start seeing increased protocol adoption across browsers and service providers.
I'm stoked about the future of security keys and the associated protocols. Both external keys and HSMs built directly into consumer hardware.
TPMs are more complex. They can be used as a virtual smart card; for example, a TPM can store the client certificate used in HTTPS. U2F on the other hand is super simple by design. For example, the Yubikey uses one symmetric secret to generate P-256 keys deterministically. That means they don't need to store actual asymmetric keys; they can derive them from the secret on demand.
What does it use to generate the key? The secret and the URL/realm? I guess it's implicit that you won't ever need to regenerate one key without regenerating others?
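The general pattern is deriving each per-site private key from a per-device master secret plus the application identity, so the device stores nothing per site and simply re-derives keys on demand. Yubico's actual derivation isn't fully documented publicly, so the sketch below is illustrative only (the HMAC construction and the master secret are assumptions, not Yubico's real algorithm). It also shows the implicit trade-off asked about above: change the master secret and every derived key changes with it.

```python
import hashlib
import hmac

# Order of the NIST P-256 curve (the number of valid private-key scalars).
P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def derive_private_scalar(device_secret: bytes, app_id: str) -> int:
    """Deterministically derive a P-256 private scalar for one site.

    Illustrative only: Yubico's real derivation is not public. The point is
    that the same secret + same app_id always yields the same key, so the
    device never stores per-site keys; it re-derives them on demand.
    """
    digest = hmac.new(device_secret, app_id.encode(), hashlib.sha256).digest()
    # Map the digest into [1, order-1] so the result is a valid private key.
    return int.from_bytes(digest, "big") % (P256_ORDER - 1) + 1

secret = b"\x00" * 32  # hypothetical per-device master secret
assert derive_private_scalar(secret, "https://example.com") == \
       derive_private_scalar(secret, "https://example.com")   # deterministic
assert derive_private_scalar(secret, "https://example.com") != \
       derive_private_scalar(secret, "https://evil.example")  # per-site keys
```

So yes: regenerating one key without regenerating the others isn't possible under this design, because all of them flow from the single device secret.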
Unfortunately solutions based on SGX lose two advantages of hardware tokens: no physical touch button and no portability.
SGX doesn't provide any real advantage. For example Chrome team was evaluating storing Token Binding keys in SGX but eventually ended in using software store due to their (quite reasonable) threat model.
While I absolutely agree with you, it seems far more likely that native implementations by Apple, Google and Microsoft will dominate the market. Windows Hello is a great early example of this.
SoftU2F was designed fully conscious of this. It was largely developed to help push forward the idea of a software based token with every hope that, over time, native browser and operating system implementations push things fully into the mainstream.
I hope they do. Wouldn't that be incredible? Having _native_ U2F/UAF built right into the browser.
The hard part is that there's another more subtle chicken and egg problem when it comes to software implementations and consumer HSM. Google/Apple/Microsoft will likely only really push forward native implementations when there is enough market share for consumer hardware having HSM's built in to make it worth it with feasible fallback options.
My main concern is getting access when the security key breaks or I lose it.
For this, it's imperative that there's either another way to get access or that I associate a 2nd security key with the service I need access to. Of course, it's possible that the 2nd key breaks or is lost.
At least with hard drives, it's common for two drives manufactured around the same time to fail around the same time, so I'd be concerned about the possibility of that happening with the two security keys as well.
You might argue "well why bother with U2F, if you are going to set up TOTP anyway", to which I respond that using U2F is still a net win, because for the times you use U2F, you are safe from phishing attacks.
That in an emergency situation you have to use TOTP, and thus be vigilant that you aren't being phished, does not negate the benefits from having used U2F previously.
I can see that by enabling TOTP as a second-factor, it increases your attack surface. That is, you now have to care about whether your TOTP secret has been leaked. I consider this cost to be small, compared to the benefit of being able to fallback to TOTP. Others may decide this tradeoff isn't worth it.
It's still not clear to me how u2f prevents phishing. If I was being proxied through a site with a valid tls cert, they still gain my password and a session key.
U2F keys are linked to the associated domain (e.g. google.com or dropbox.com), so your U2F would not present your google.com key to a U2F prompt on googlehax.com
This stops the proxy attack you describe getting a session key, but not getting your password. Of course, the password alone is insufficient.
The browser passes the origin (protocol + hostname + port) to the key. This field is included in the data the key signs. The signature would not match for an authentication response obtained via phishing because of that.
Your original description is a bit ambiguous: were you referring to a case where the actual domain was being successfully man-in-the-middle'd with a valid TLS certificate? That's not something U2F aims to solve (and not something that people generally refer to as phishing).
Edit: Small correction: U2F makes use of TLS ChannelIDs, which, if I understand correctly, would help in that scenario (MitM with a trusted-but-attacker-controlled certificate on the "real" origin). I'm not sure if anyone other than Google makes use of this yet, as it requires a new TLS extension on the server.
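Concretely, the byte string a U2F key signs during authentication is, per the FIDO U2F raw message format, SHA-256(appId) || flags || counter || SHA-256(clientData), where clientData is JSON containing the browser-reported origin and the server's challenge. A sketch of that layout (the origins and challenge below are made up) shows why a response produced on a phishing origin can never verify against the real site:

```python
import hashlib
import json
import struct

def signed_auth_data(app_id: str, origin: str, challenge: str,
                     counter: int, user_presence: bool = True) -> bytes:
    """Build the byte string a U2F device signs (FIDO U2F raw message format)."""
    client_data = json.dumps({
        "typ": "navigator.id.getAssertion",
        "challenge": challenge,
        "origin": origin,           # filled in by the browser, not the user
    }).encode()
    return (hashlib.sha256(app_id.encode()).digest()   # application parameter
            + bytes([1 if user_presence else 0])       # flags (user presence)
            + struct.pack(">I", counter)               # signature counter
            + hashlib.sha256(client_data).digest())    # challenge parameter

# The relying party recomputes these bytes with the origin it expects; a
# response generated on a phishing origin covers different bytes, so any
# signature over it fails verification at the real site.
legit = signed_auth_data("https://example.com", "https://example.com", "abc", 7)
phish = signed_auth_data("https://example.com",
                         "https://example.com.evil.test", "abc", 7)
assert legit != phish
```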
No, you answered my question. I accepted that any/most/all authentication breaks down in some way of you can't trust tls certs or the browser/user agent.
I have 2 keys, one stored in a safe place. They are cheap enough that you can have several, but then it becomes a challenge to keep your various accounts up to date. I wish that openid was still well supported as I would like to have one authentication provider for all my online accounts.
So, when you join a new site that supports U2F do you have to retrieve your "extra" key and join it to the site? That seems quite fragile and fraught with danger. :(
But you wouldn't be using both hard drives at the same time. You put one in a safe (place) and only take it out for the occasional setup, maybe swap them every couple months.
This holds true for any 2FA solution, and is generally solved by services that use this kind of technology (e.g., in the form of FIDO U2F) to allow you to register more than one device (so you can have a backup key) and crucially, to allow you to generate a large set of security codes that can be used once to reset access to the service. These codes you would ideally print out and store with your other valuables, or backup in some other manner not dependent on your security keys.
I've been using a Yubikey as my second-factor password source for years. It's great. I would even have dropped the second factor if the Yubikey could unlock macOS FileVault.
What's a second-factor password? Well, basically the Yubikey just stores a long text string, and another, shorter string is stored in my brain. When I log in I enter the short string, then press the Yubikey.
To steal my data you need not only to steal the Yubikey but also to get my part of the password from me.
[-You would be correct if-] It seems you are correct when the protocol works as described in the parent comment.
[-But it so happens that the-] Of course in OTP mode, the YubiKey protocol protects against replay attacks by using a counter on the YubiKey. This (authenticated) counter value is included in the messages that are exchanged during the authentication - and hence any replays can be detected/ignored as the counter value will be less than or equal to the last received counter value.
Edit (deletions marked with [- -]): I had no idea people used modes other than OTP with their YubiKey...
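The server-side counter check described above amounts to a few lines: store the highest authenticated counter seen per key, and reject anything that hasn't advanced. A sketch (the key id is a hypothetical YubiKey public id; decryption and MAC verification of the OTP are assumed to have already happened):

```python
class OtpValidator:
    """Reject Yubikey OTPs whose (authenticated) counter has not advanced."""

    def __init__(self) -> None:
        self.last_counter: dict[str, int] = {}  # key id -> highest counter seen

    def accept(self, key_id: str, counter: int) -> bool:
        if counter <= self.last_counter.get(key_id, -1):
            return False        # replayed or out-of-order OTP: reject
        self.last_counter[key_id] = counter
        return True

v = OtpValidator()
assert v.accept("cccjgjgkhcbb", 12)      # first use: accepted
assert not v.accept("cccjgjgkhcbb", 12)  # replay of same counter: rejected
assert v.accept("cccjgjgkhcbb", 13)      # counter advanced: accepted
```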
Unfortunately parent is right. What you describe is using generated one time passwords. But there is no way (to my knowledge) to incorporate 2nd string into it.
Ex: right now - myPass<boop the yubikey><long password from yubikey followed by linefeed>
with otp - myPass<boop><hashed and signed one-time password that knows nothing about myPass>
I should note that currently I'm thinking of migrating to OTP and using the brain-string (the password I remember) for FileVault and mac login. I'll try using OTP for sudo, maybe keychain, will try to add a GPG subkey there and see how it goes.
I just recently bought a Feitian ePass FIDO NFC U2F security key. It works great with Google, Dropbox, GitHub and Facebook.
Considering Feitian ePass FIDO security key's pretty good quality (injection molded and sealed key), great price point (with the special, it was almost half the price of lowest Yubico keys yet still NFC capable), it's gonna be my next go-to choice for future U2F keys.
The last time I tried to use U2F on Firefox, I had to install an extension, and I had to change my user agent string to Chrome to even be offered U2F on most sites, including my employer's Duo-based auth system.
This caused no end of headaches, because my employer would ... helpfully... send me nagging email about using an out-of-date browser, due to the user agent strings specifying some older version of Chrome.
The nano is my favorite one as well. It's so unintrusive. Now I hope agl is writing this because he's looking into making these keys more useful, because right now the use cases can be counted on one hand.
I prefer my HyperFIDO Mini. The Nano sends OTP text if you accidentally brush into something, and it's very awkward to get out. The light is "blinding" ??? That is ridiculous...
In phishing, bad guys lure users to site B while users believe they are at site A.
Users enter their credentials and game over.
The important point is that users are deceived, but not the web browser. The browser knows perfectly well that the URL is amazon.com.something.com; the problem is that it has no expectation of which URL the user has in mind.
With a U2F key it's the same scenario, but the browser tells the key "this is for amazon.com.something.com" and the key incorporates that into its cryptographic input, so it's "bound" to that URL.
Later the bad guy presents the key's output to amazon.com, and now the server has an expectation that's not fulfilled: "I'm sorry, that's not for me".
Also, the scheme you describe requires the user to trust the web browser. But, if the web browser is, say, on a public computer, it can't necessarily be trusted.
They would also have to get a publicly-trusted TLS certificate for that domain. U2F is HTTPS-only, at least on Chrome. If the attacker can do that, you have bigger problems.
> Also, the scheme you describe requires the user to trust the web browser. But, if the web browser is, say, on a public computer, it can't necessarily be trusted.
No one has really solved secure authentication on a compromised system. I dare say it's not a problem that can be solved.
Can you reuse tokens, or use previous ones? It seems that, even if the browser were compromised, the key should still be safe, as the user has to use one response to log in. If the response can't be reused, and the challenge can't be reused to generate a new response, the user should be safe, no?
Basically, if the site sends a random nonce to sign, I don't see how you can reuse things. The most you can do is have the browser sneakily and remotely authenticate you on your computer (if you're the attacker), but the user still needs to press the button.
I think you're making this too complicated, unless I'm missing something. If your system is compromised, the attacker can steal your session key (from your cookies or whatever else the site uses). That's just about as good as your U2F key, with some defense-in-depth exceptions that I mentioned here[1].
Interactions with the device include the site domain (as determined by the browser). If you're browsing google.com.totallynotaphishingsite.example.org, the device won't use your Google.com enrolled key to auth, because the domain doesn't match.
It's not impossible to get a phishing site on the same domain as the real site, but that requires a site security error, or an active MitM with an improper cert. A much higher bar than simply tricking users.