* The hardware basically consists of a secure microprocessor, a counter which it can increment, and a secret key.
* For each website, e.g., GitHub, it computes an HMAC-SHA256 of the domain (www.github.com) with the secret key, and uses the result to derive a public/private keypair. This keypair is used to authenticate.
* To authenticate, the server sends a challenge, and the security key sends a response which proves that it has the private key. It also sends its counter value, which it increments on each use.
If you get phished, the browser would send a different domain (www.github.com.suspiciousdomain.xx) to the security key and authentication would fail. If someone somehow managed to clone the security key, services would notice that your counter values are no longer monotonically increasing, so you could at least detect that it's been cloned.
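The per-site derivation step can be sketched roughly like this (illustrative only -- real U2F devices use their own vendor-specific schemes, this just shows why a phishing domain gets a useless key):

```python
import hashlib
import hmac

def per_site_seed(device_secret: bytes, domain: str) -> bytes:
    # Same device secret + same domain -> same seed, so the keypair
    # derived from it is stable for the legitimate site. A phishing
    # domain produces an unrelated seed, hence an unrelated keypair.
    return hmac.new(device_secret, domain.encode(), hashlib.sha256).digest()

secret = b"device-internal secret, never leaves the hardware"
real = per_site_seed(secret, "www.github.com")
fake = per_site_seed(secret, "www.github.com.suspiciousdomain.xx")
# real != fake: the phishing site can't obtain signatures that the
# real site would accept.
```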
I'm excited about the use of FIDO U2F becoming more widespread; for now, all I use it for is GitHub and GMail. The basic threat model is that someone gets network access to your machine (but they can't get credentials from the security key, because you have to touch it to make it work), or someone sends you to a phishing website but you access it from a machine that you trust.
Most places that allow it require that you have a fallback method available.
Since 2FA only comes into play if the password is compromised, using a password manager should already mean that data breaches at unrelated sites aren't a risk.
So we're down to phishing and malware/keyloggers being the most likely risks -- and TOTP offers no protection against those. If you're already at the point of keying your user/pass into a phishing site, you're not going to second-guess punching the 2FA code into that same site. I'd even argue push validation like Google Prompt would be at significant risk from phishing, unless you are paying close attention to which IP address you're approving access for.
Sounds a little obvious to write it out, but it protects against someone stealing your password in some way that the password manager / unique passwords don't protect you against. Using a PM decreases those risks significantly, mostly because of how enormous the risks of password reuse and manual password entry are without one, but it certainly doesn't eliminate them entirely.
I know having someone malicious get into your account multiple times vs. once is likely worse, but it's hard to quantify how much worse -- and of course, using that one login to change your 2FA setup would make them equivalently bad.
Both seem true, and what to do to protect yourself more depends on what kinds of attacks you're interested in stopping and at what costs. Personally, PM + U2F seems the highest-security, fastest-UI, easiest-UX by far — https://cloud.google.com/security-key/
The best answer I have for where TOTP can provide value: you can limit a potential attack to a single login.
I wanted to say you could stop someone doing MitM decryption due to timing (you use the 2FA code before they can), but if they're decrypting your session they can most likely just steal your session cookie which gets them what they need anyway.
Logging in to a site on a public computer and the browser auto-remembers the password you typed
A border agent forcing you to log into a website (this scenario only works if you leave your second factor, which will most likely be your phone, at home)
Man-in-the-middle attacks of course, which are possible on insecure connections. With the prevalence of root certificates installed on people's computers by corporate policy, by shitty antivirus products, etc., it's very much possible to compromise HTTPS connections.
The TOTP 2FA code acts as a temporary password that's only valid for a short window, typically 30 seconds. A "one time password", if you will.
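For reference, the generation itself is tiny. A minimal RFC 6238 generator (SHA-1 and 30-second steps, the common defaults; this is a sketch, not any vendor's implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    # The "temporary password" is just HMAC(secret, current time step),
    # truncated to a few digits (RFC 4226 dynamic truncation).
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: at t=59 the 8-digit SHA-1 code
# for the ASCII secret "12345678901234567890" is 94287082.
print(totp(b"12345678901234567890", t=59, digits=8))  # 94287082
```

Anyone who knows the secret can generate the current code; anyone who only saw a past code cannot.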
Yes, it still strengthens security.
Read 1Password's article about it: https://blog.agilebits.com/2015/01/26/totp-for-1password-use...
Phishing sites collecting and using the 2FA creds in real time was discussed here, among other places: https://security.stackexchange.com/questions/161403/attacker...
With open-source tooling like https://github.com/ustayready/CredSniper readily available, you're only going to stop lazy phishing attempts.
You only get protection if you assume the scripts are just passively collecting information for use at a later time. If they're actively logging in to establish sessions while they're phishing, it's game over.
Shoot, you're right. Not sure what I was thinking. My bad.
There's obviously a class of attack that hardware tokens protect against (malware) that password managers can't entirely (unless your operating system has good sandboxing, like Chrome OS for example). But it really does protect against phishing to a degree, as well as against certain attacks (keyloggers, or malicious code running on a login page in the browser).
Hardware tokens are the winning approach, but even when you put TOTP into a password manager it is far from useless.
U2F defends against that sort of phishing as well.
As TOTP use has increased, the basic phishing toolkit has evolved to match. Attackers want accounts, not passwords, so they're just adjusting to get working sessions. The passwords were only ever just a means to an end.
Where I'm working now, we deal with several credential-loss incidents each month. Invariably, our users are tricked into authenticating via a bogus site. 2FA would protect the credentials from being used by unauthorised people. Our staff are encouraged to use password managers, but that does not help in this situation.
It doesn't prevent any sort of active phishing campaign, because the login process can just ask for and immediately use the TOTP credential. User gets a possible failure (or just content based on what they thought they were accessing), phisher gets account access.
Most of the smartphone based solutions are two-step auth -- it's just a different kind of secret that you know. If you use 1Password or Authy, your token is your 1Password/Authy credential.
The hardware based token approach is always going to be better, because the secret is part of the device and isn't portable. The Yubikey and challenge/response tokens are great as you need to have it present, you cannot easily get away with hijinks like putting a webcam on your RSA token.
I don't see a way in which having the possession factor be on my keys is stronger than having it be in my laptop. In fact, for sites that require it my U2F key is in my laptop (Yubikey nano-C).
(Aside: That doesn't limit the usefulness of having a possession factor that is portable between devices, just I don't think it is necessarily stronger)
This is actually why I very rarely opt into the 2FA features of websites - I figure I already have two factors protecting me, but not necessarily factors recognized by the site.
Marking something as "2FA enabled" is super easy in comparison.
I was saying doing that is easier than adding 2FA through 1Password.
Does it not protect against your password being compromised in some other channel? Sure you're probably not reusing passwords, but what if they compromised it some other way? What if the website had a flaw that allowed someone to exfiltrate plaintext passwords but not get at other application data?
Or to put it another way: if you're using a password manager, why use TOTP codes at all, if you believe there are no other attack vectors to get at the password that TOTP protects against?
TOTP is very useful! Just use a TOTP authenticator app on your phone, and don't put them in 1Password.
I was fully in that camp before I started talking with friends on red teams that were allowed to actually start using realistic phishing campaigns. Now I'm fully in the "U2F, client certs, or don't bother" camp.
Maybe I'm jaded, but it feels like the exploit landscape has improved enough that TOTP is as hopeless as enabling WEP on a present-day wireless network. Not only does setting it up waste your time, you're presumably doing so because you have the false belief it will actually offer protection from something. It may have been useful at one point, but those days are disappearing in the rearview mirror.
The only place I see TOTP still offering value is for people who re-use passwords, but only because it becomes the single site-unique factor for authentication.
Again, is there no attack vector that makes TOTP worthwhile when you're already using a password manager, yet not worthwhile if it's in your password manager?
I know I'm wrong because you know everything, but I can't get past this particular one. Unless the argument is "attackers aren't that lame anymore" -- then sure.
2-3 minutes is more realistic for real sites than 30 seconds, because there is usually a margin allowed for clock skew. But yes, each OTP expires, and that's a difference for an attacker who doesn't know the underlying secret.
TOTP is also not supposed to be re-usable. A passive keylogger gets the TOTP code, but only at the same moment it's used up by you successfully logging in with it. Implementations will vary in how effectively they enforce this, but in principle at least it could save you.
Caveat: The system may issue a long-lived token (e.g. a session Cookie) in exchange for the TOTP code which bad guys _can_ trade unlike the token itself.
I think there's also a difference with passwords on the other side of the equation. If I get read access to a credentials database (e.g. maybe a stolen backup tape) I get the OTP secret and so I can generate all the OTP codes I need, but in a halfway competently engineered system I do not get the password, only a hash. Since we're talking about 1Password, this password will genuinely be difficult to guess, and guessing is the only thing I can do because knowing the hash doesn't get me access to the live system. In this case 1Password is protecting you somewhat while my TOTP code made no difference. If you have a 10 character mixed case alphanumeric password (which is easy with 1Password), and the password hash used means I only get to try one billion passwords per second, you have many, many years to get around to changing that password.
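The "many, many years" claim checks out as a back-of-envelope (assuming a truly random password and the one-billion-guesses-per-second rate stated above):

```python
# Exhaustive search of a random 10-character mixed-case alphanumeric
# password at 10^9 guesses per second.
alphabet = 26 + 26 + 10            # a-z, A-Z, 0-9 -> 62 symbols
keyspace = alphabet ** 10          # ~8.4e17 candidates
seconds = keyspace / 1e9           # worst case at 1e9 guesses/s
years = seconds / (365 * 24 * 3600)
print(round(years, 1))             # roughly 26-27 years worst case
```

Even at the expected half of that, you have over a decade before a brute-forcer finds the password, versus zero protection for a stolen TOTP secret.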
Still, FIDO tokens are very much a superior alternative; their two main disadvantages are fixable: not enough people have FIDO tokens, and not enough sites accept them.
[Edited to make the password stuff easier to follow]
In the threat scenario we're discussing bad guys aren't "stealing data from my password manager" they just have the password and OTP code that were filled out, possibly by hand. They can do this using the same tools and techniques that work for password-only authentication, including making phishing sites with a weak excuse for why the auto-fill didn't work. We know this works.
Possibly by hand? You are definitely not discussing the same scenario as everyone else. They're talking about password and OTP being stored in the same password manager, both filled out at the same time all in software.
A key logger is stealing those bytes right out of the password manager's buffers. It takes more sophistication to dump the database, but it's a very small amount more.
In the real world users go "Huh, why didn't my autofill work? Oh well, I'll copy it by hand".
A "key logger" logs keypresses. That's all key loggers do. There are lots of interesting scenarios that enable key logging. You've imagined some radically more capable attack, in which you get local code execution on someone's computer, and then for reasons best known to yourself you've decided that somehow counts as a "key logger". I can't help you there, except to recommend never going anywhere near security.
If you put your TOTP secret on your phone (or Yubikey), then both your TOTP secret store (that is your phone or keychain) and 1Password store must be compromised in order to gain access to your account. TOTP is useful in this scenario.
If you put your TOTP secret in 1Password along with your site password, then only your 1Password store needs to be compromised. This is the scenario where TOTP becomes pointless.
Most MITM scenarios are going to result in giving up at least one TOTP code -- and that TOTP code will be used to obtain a long-lived HTTP session (I can't remember when Google last had me auth).
I think it's common for folks to think that TOTP means it's safe to lose a code because it is short-lived and invalidated when a subsequent code is used (usually), but it just takes one code for a long-lived session cookie to be obtained.
If an attacker is in a position to intercept your password via MITM, phishing, whatever, they're in a position to intercept your TOTP code. They're not going to sit on that code -- they're going to immediately use it to obtain their long-lived session while reporting an error back to you.
Note that a prover may send the same OTP inside a given time-step window multiple times to a verifier. The verifier MUST NOT accept the second attempt of the OTP after the successful validation has been issued for the first OTP, which ensures one-time only use of an OTP.
We're not talking about stealing single codes, but the entire secret.
With HOTP the answer is yes, because of ratcheting. A clone of the secret doesn't let you impersonate the original device, because their counters will conflict as both are used.
With TOTP the answer is no. You can make codes freely, and the clone is indistinguishable from the original.
The rule you cite is basically irrelevant. It just means that original and clone can't log in at the exact same time.
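A sketch of the HOTP ratchet being described (hypothetical verifier; code generation is standard RFC 4226): the server only searches forward from the highest counter it has accepted, so once the clone or the original logs in, the other one's next expected counter is stale.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over an event counter, dynamically truncated.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

class HotpVerifier:
    def __init__(self, secret: bytes, look_ahead: int = 10):
        self.secret, self.look_ahead = secret, look_ahead
        self.counter = 0                     # next expected counter value

    def verify(self, code: str) -> bool:
        # Only search forward from the ratcheted position.
        for c in range(self.counter, self.counter + self.look_ahead):
            if hmac.compare_digest(hotp(self.secret, c), code):
                self.counter = c + 1         # ratchet: c is never valid again
                return True
        return False
```

TOTP has no equivalent of `self.counter`: the verifier derives everything from the clock, so a cloned secret keeps working forever.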
Getting obsessed with a single unlikely threat leads to doing things that are actively counter-productive, because in your single threat model they didn't make any difference and you forgot that real bad guys aren't obliged to attack where you've put most effort into defence.
Second, any theoretical advantage still has nothing to do with ratcheting...
If I tell my phone number to my bank, my mom and my hairdresser, and you steal it from the hairdresser, this doesn't give you information about my bank account number, even though the bank stored that with the phone number.
Bad guys successfully phish passwords plus OTP codes. We know they do this, hopefully you agree that in this case they don't have the OTP secret. So in this case 1Password worked out as well as having a separate TOTP program.
Bad guys successfully steal form credentials out of browsers using various JS / DOM / etcetera flaws. Again, they get the OTP code but don't get the OTP secret, regardless of whether you use 1Password.
Bad guys also install keyboard monitors/ logs/ etcetera. In some cases they could just as easily steal your 1Password vault, but in other cases (depending on how they do it) that isn't an option. I believe it's "very unlikely" in reality that they'll get your 1Password vault unless it's a targeted attack.
A passive TLS tap also gives the bad guys the password plus OTP code but not the OTP secret. Unlike the former three examples this is going to be very environment specific. Your work may insist on having a passive TLS tap, and some banks definitely do (this is why they fought so hard to delay or prevent TLS 1.3) but obviously your home systems shouldn't allow such shenanigans. Nevertheless, while the passive tap can't be used to MITM your session it can steal any credentials you enter, again this doesn't include the OTP secret.
Second: A ratchet enables us to recover from a situation where bad guys have our secret, forcing the bad guy to either repeat their attack to get a new secret or show their hand. TOTP lets us do this when bad guys get one TOTP code but not the underlying TOTP secret.
I'm just going to focus on this, because it's not based on opinions of likelihood but simple facts. TOTP does not have a ratchet. If you copy the secret, you can use it indefinitely.
A ratchet is a counter (or similar) that goes up per use, so you can detect cloning. TOTP does not have this. It does not store any state. If I log in every day, and the attacker logs in every day, you can't look at the counters to see that something is very wrong, because there is no counter.
In the situation we care about (which you think hardly matters, but I believe evidence shows to be extremely common) bad guys do NOT have the TOTP Shared Secret, it's in your 1Password Vault and the bad guys can't access that.
What they do have is a code, a One Time Password typically six digits long.
Because TOTP produces a _One Time_ Password, if I use that code, or any subsequent code, the one the bad guys have is now useless even if it has not yet expired. This forms a ratchet.
Ratchets aren't about detecting cloning, they're about what happens if bad guys temporarily get access. Can we recover? In many systems we're permanently screwed, if there's a ratchet we may be able to recover. For example this is essential to the design of OTR and the Signal Protocol.
The idea is that if I lose access to my phone, I can decrypt that saved copy of the secret, and load it into 1Password temporarily until I get my phone back or get a new phone and get everything set back up.
Lots of reasonable people back up their secrets, or even clone them into multiple authenticator applications. I try not to.
Because if you lose access to the 2FA secrets, you lose access to your account. If that's just one account, recovery might be doable (depending on who ultimately is root on the machine). If it's your Bitcoin wallet or FDE, though, you're toast.
There's also a variety of protocols used for 2FA. I've seen: USB2, USB3, USB-C, Bluetooth, NFC.
As for how people do this: they use a second key, save their key on a cryptosteel(-esque) device (IMO overpriced, YMMV), a USB stick, a piece of paper, or gasp a CDROM. Where it's saved differs: could be next to a bunch of USB sticks, in a safe, at a notary (my recommendation, though it does cost a dime or two), in a basement under a sack of grain, ...
> How is this not an insane product concept?
I thought sanity died years ago.
Could you elaborate on how you do this in practice?
Any one gives you access. So you take one with you and put one in a drawer at home.
FWIW, this has not been my experience with 1Password at all.
Basically it becomes "just" replay prevention. Which is a nonzero benefit, but totally agreed that it's not at the same level as a separate generator of some kind.
Sure, you also get some additional resistance when your machine is hacked, but it's pretty marginal compared to the phishing benefit.
If they plug in their hardware token, the browser will give the token the domain name actually being visited, which won't match the legitimate domain, so the attacker can't use the response from the key to log in.
Sending a push notification requires GA to register for push notifications with a server that has the Apple APNS certificate or firebase key. Google would likely have to run this central server and provide a portal/cloud console API for developers to register for sending these push notifications.
Authy already does this, providing both the TOTP and the ability to send "is this you signing in? yes/no" push notifications; however, it charges for it: https://www.twilio.com/authy/pricing which is likely why not many providers actually use Authy and just generate a standard GA-compatible TOTP token.
The problem is that those push solutions require that the company have some means of communicating with the app that you're using to trigger the push and the confirmation (as far as I can tell). This technology works around that by letting the browser talk to the plugged in device, circumventing all of the network/api bits.
My company has something like that, through Symantec. When you need to authenticate, it sends a notification to your phone over the network for you to acknowledge.
It's terrible though: cell signal is horrible in our building, so the people who use it are constantly dealing with delays and timeouts. I opted for a hard token that has no extra network dependencies, and I'm happy with my decision.
It's probably because setting this up is more involved on the backend; setting up that key which you have to type in is fairly simple technically.
All parties involved must have the secret (this isn't public key crypto).
That means an app that can accept TOTP offline has the secret stored locally where it can be extracted.
But with regular TOTP and a software device on a smartphone, I can print out backup codes in case I lose my phone. This allows one to log in and reset their 2FA token. What happens if you lose your Yubikey or similar? I guess this doesn't matter as much in an enterprise setting where there is a dedicated IT department, but for individual use outside the enterprise, don't TOTP and a software device have a better story in case of loss of the 2FA device?
Get two, leave one in your safe deposit box. Every service I've seen that supports U2F supports multiple tokens.
It's convenient only when you physically have the security key; it's a hassle if you forgot or lost it.
If I've lost my keys, I have bigger problems.
You just need a U2F key to try it.
At least a year ago or so (the last time I checked), most services didn't appear to check the counter, and worked fine when it was reset.
And how is it complicated to store a single integer per account and check `counter <= previousValue` at each authentication to detect that it's not monotonically increasing? They already store that user's public key and key handle; they can store another 4 bytes.
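The check really is that small. A minimal sketch of the relying-party side (names are illustrative, not from any specific WebAuthn library):

```python
def verify_counter(stored: int, received: int) -> int:
    """Return the new counter value to persist, or raise on a suspect login.

    The counter reported by the authenticator must be strictly greater
    than the last value we stored; otherwise the key may be cloned.
    (Real WebAuthn also lets an authenticator report 0 to mean the
    counter feature is unsupported; that case is omitted here.)
    """
    if received <= stored:
        raise ValueError("counter did not increase: possible cloned authenticator")
    return received  # caller persists this alongside the public key
```

On every successful assertion the server replaces its stored value with the one it just accepted, so a clone that lags behind the original is flagged on its first login.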
In fact, the WebAuthn spec makes verifying this behavior mandatory. 
But there are many downsides, including:
Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.
The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least isn't using the same token.
State is only expensive when it adds a significant amount of die area or forces you to add additional ICs. If you need a ton of flash memory, you can't put it on the same die because the process is different, and adding a second IC bumps up the cost. However, staying with the same process you used for your microcontroller, you can add some flash with much worse performance... which is a viable alternative if you only need a handful of bits. Your flash is slower and needs more die area, but it's good enough.
> The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least isn't using the same token.
What kind of ridiculous threat model is this? "Alice logs into Facebook and GitHub, and Bob, who has compromised both Facebook and GitHub's authentication services..." Even then, it's not guaranteed, because the device might be using a different counter for Facebook and GitHub.
At least for YubiKey, it appears to use a global counter:
> There is a global use counter which gets incremented upon each authentication, and this is the only state of the YubiKey that gets modified in this step. This counter is shared between credentials.
Having a global counter does seem like it could weaken the ability to detect cloned keys. If an attacker could clone the key and know the usage pattern (e.g., there are usually 100 uses of non-Facebook services between uses of the Facebook service), then they might be able to use it for a while without being detected. Though, having service-specific counters may have worse security ramifications (e.g., storing which services the key has been used with).
Though if an attacker is going to that much trouble, they may as well just use the wrench-to-the-head method.
Maybe sites colluding with their ad providers to track people is not part of your threat model, but it definitely is for some people. Yes, I know Github does not host ads, so isn't a good example of this threat model.
The only YubiKey that works with mobiles (NFC) is $50.
The cheapest U2F key I could find (it only has a USB-A port) is $20.
The latter is an open source / open hardware design:
You can buy pretty decent (16Mhz, 2K EEPROM, 8K flash) microcontrollers for less than twenty cents (my numbers are from 7 years ago, things are probably cheaper, faster and bigger now). A few bytes of stable storage -- whatever you need to safely increment a counter and not lose state to power glitches -- are not going to add significantly to the cost of a hardware token.
> Authenticators may implement a global signature counter, i.e., on a per-authenticator basis, but this is less privacy-friendly for users.
Since you can have multiple keys on the same site, you could go one better, and have a per-key offset. When the key is rederived from the one-time nonce sent from the server, you'd also derive a 16-bit number to add to the 32-bit global counter. But even that wouldn't actually be enough to make correlating them impossible.
A large but finite set of independent global counters is a great idea, though. 256 32-bit integers is just 1 KiB of storage.
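A sketch of that scheme (purely hypothetical -- no shipping key is claimed to work this way): map each registered site onto one of 256 independent counters, so the values observed by unrelated sites are usually uncorrelated.

```python
import hashlib

# 256 independent 32-bit counters: 1 KiB of stable storage.
counters = [0] * 256

def bump(app_id: str) -> int:
    # Pick a bucket deterministically from the site identity, so the
    # same site always ratchets the same counter.
    idx = hashlib.sha256(app_id.encode()).digest()[0]  # 0..255
    counters[idx] = (counters[idx] + 1) & 0xFFFFFFFF
    return counters[idx]
```

Two sites still collide into the same bucket with probability 1/256, which is why even this wouldn't make correlation impossible, just much noisier.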
Our internal gmail might not require it every day, but most systems at Google do. You can't get very far without it.
Do you know why GNUK (the open source project used by Nitrokey and some other smart cards) chooses not to support U2F? I don't understand the maintainer's criticisms and I'd like to probe someone knowledgeable to find out more.
The point of GNUK is to move your GnuPG private key to a more secure device so it doesn't have to sit on your computer. With GnuPG, users are basically in control of the cryptographic operations: what keys to trust, what to encrypt or decrypt, etc.
With U2F, in order to comply with the spec you are basically forced to make a bunch of decisions that don't necessarily line up with GNUK's goals. You have to trust X.509 certificates and the whole PKI starting from your browser (CA roots and all that). Plus, U2F is basically a cheaper alternative to client certificates, but with GNUK you already have client certificates, so why go with something that provides less security?
To elaborate: With GnuPG, the reason you trust that Alice is really Alice is because you signed her public key with your private key. You can then secure your private key on a hardware device with GNUK. With FIDO U2F and GMail, you have to trust that you are accessing GMail, which is done through pinned certificates and a chain of trust starting from a public CA root. This system doesn't offer you much granularity for trusting certificates. Adding FIDO U2F to a system designed to support a GnuPG-like model of trust dilutes the purpose of the device. By analogy, imagine if you used your credit card to log in to GMail, maybe by using it as the secret key for U2F. The analogy isn't great, but you can imagine that even if you can trust that (through the protocol) GMail can't steal your credit card number, the fact that you are waving your credit card about all the time makes it a little less secure.
In general, people who work on GnuPG and other similar cryptography systems tend to be critical of the whole PKI, and I'm sympathetic to that viewpoint.
Unlike your GnuPG key, the FIDO token isn't an identity. The token is designed so that a particular domain can verify that it is talking to the same token it talked to last time, but nothing more. So e.g. if you use your GnuPG key for PornHub, GitHub and Facebook, anyone with access to all three systems can figure out that it's the same person. Whereas if you use the same FIDO token, that's invisible even to someone with full backend access to all three systems.
Here, you are saying that GNUK won't add FIDO U2F because the lead dev is critical of the whole PKI system. Thus, the GNUK user doesn't get defaults which allow them to easily bootstrap into the web services that are used by a large portion of the population.
I mean, that's fine and justifiable as individual projects go. But one could just as easily imagine the approach of these two projects switched so that Emacs reflexively reacted by choosing the most secure TLS settings for defaults, and GNUK being liberal with which protocols they add.
So what's the point of the GNU umbrella if users and potential devs essentially roll a roulette wheel to guess how each developer base prioritizes something as critical as security?
But if so, I don't see how that solves the problem of "user goes to site X for the first time, mistakenly thinking it's Github." That registers a new entry for the site and sends different credentials/signatures than it would send to Github. But site X doesn't care that they're invalid, and logs you in anyway, hoping you will pass on some secret information.
Am I missing something?
Verifying the authenticity of a site is something that has been demonstrated both to be nontrivial and also something that the majority of users cannot do successfully.
U2F/WebAuthn tie the identity of the requesting site into the challenge - by requiring TLS and by using the domain name that the browser requested. So if the user is being phished, the domain mismatch will result in a challenge that cannot be used for the legitimate site.
The system makes it impossible for phishing sites to log in to your account using your credentials. That's the threat model it guards against.
Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?
So it's an additional factor for authentication, not a way of identifying fraudulent sites to the user? Okay, but you also said:
>Entering 'secret information' that isn't user credentials just plain isn't part of it.
Which is it?
>Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?
You would think, but people have definitely entered information in similar circumstances. Also, there's always "sorry, server problem showing your emails, feel free to send in the mean time".
Although, actually reading the spec, it can double as a bit of extra authentication of the website. Any site has to first request registration (the client uploads a new, unique, opaque key handle/'credential id' to the server, along with its matching public key) before it can request authentication (the server provides the credential id and a challenge, and the client signs the challenge).
A credential ID is a unique nonce from the device's global counter, authenticated with a MAC. The real site will already have a registered credential ID, which the device takes, verifies, and uses the nonce to HMAC the private key back into existence.
A phishing site you've never visited before will have no credential ID. Any fake ones it tries to generate will be rejected since the MAC would be invalid. One from the real website won't be accepted either, because the MAC incorporates the website's domain, too. They'd have to get user consent to create a new key pair entirely, which a user could notice is completely different from what's normally requested at login. Then they'd have to consent again to actually authenticate.
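The key-handle check described in the two paragraphs above can be sketched like this (field sizes and MAC choice are illustrative; real devices differ in the details):

```python
import hashlib
import hmac
import os

def make_handle(device_secret: bytes, app_id: str) -> bytes:
    # Registration: mint a handle = nonce || MAC(secret, nonce || site).
    # The nonce lets the device re-derive the per-site key later; the
    # MAC binds the handle to this exact domain.
    nonce = os.urandom(16)
    mac = hmac.new(device_secret, nonce + app_id.encode(), hashlib.sha256).digest()
    return nonce + mac

def accept_handle(device_secret: bytes, app_id: str, handle: bytes) -> bool:
    # Authentication: only accept handles this device minted for this
    # domain. A fabricated handle, or a real one replayed by a phishing
    # domain, fails the MAC check.
    nonce, mac = handle[:16], handle[16:]
    expected = hmac.new(device_secret, nonce + app_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)
```

So the phishing site's only option is a fresh registration, which looks nothing like the normal login flow to the user.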
That was my original question: presumably there has to be some way for new websites to be registered on the system. Does it just categorically reject anything not on a predefined list? I mean, there are legit reasons to visit not-github! And new sites need to be added.
In order to say something is “fake”, that has to be defined relative to some “real” site you intend to visit, and I don’t see how this system even represents that concept of “the real” site. Phishing works on the basis that the site looks like the trusted one except that I’ve never been to it.
Put simply: I click on a link that takes me to a site I’ve never been to. Where does this system intervene to keep me from trusting it with information that I intend to give to different site, given that the new site looks like the trusted one, and my computer sees it as “just another new site”?
>What contradiction? It just plain isn't part of the threat model. Was that not clear?
Not at all. You said it “stops phishing sites from using your credentials to log in”. That implies some other secret that’s necessary to log in (making the original credentials worthless in isolation), and yet the next quote rejected that.
If you were just repeating a generic statement of the system's goals, which wasn't intended to explain how it accomplishes them, then I apologize for misunderstanding, but then I'm not sure how that was supposed to clarify anything.
Late edit: as in the other thread, I think I’m just being thrown off by this being mislabeled as a phishing countermeasure, when it’s just another factor in authentication that also makes use of phished credentials harder. Not the same as direct “detection of fake sites”.
The signed challenge-response you give to the phishing site cannot be forwarded to the real site and accepted, because you used the domain name as part of your response, and as part of key generation, so it doesn't match. That's all that meant. 'Credentials' included the use of a public/private key, not just the typed password.
During Registration the Relying Party (web site) picks some random bytes, and sends those to the Client (browser). The Client takes those bytes, and it hashes them with the site's domain name, producing a bunch of bytes that are sent to the Security Key for the registration.
The Security Key mixes those bytes with random bytes of its own, and produces an entirely random private elliptic-curve key; let's call this A (the Security Key won't remember what it is for very long). It uses that private key to make a public key B, and then it _encrypts_ key A with its own secret key (let's call that S; it's an unchangeable AES secret key baked inside the Security Key) to produce E.
The Security Key sends back E and B to the Client, which relays them to the Relying Party, which keeps them both. Neither the Client nor the Relying Party knows it, but they actually have the private key; it's just encrypted with a secret key they don't know, and so it's useless to them.
When signing in, E is sent back to the Client, which gives it to the Security Key, which decrypts it to find out what A was again and then uses A to sign proof of control with yet more random bytes from the Relying Party.
This arrangement means if Sally, Rupert and Tom all use the same Security Key to sign into Facebook, Facebook have no visibility of this fact, and although Rupert could use that Key to get into Sally's account, the only practical way to "steal" the continued ability to do so would be to steal the physical Security Key, none of the data getting sent back and forth in his own browser can be used to mimic that.
(In Yubikeys' case, E is actually a Yubikey-generated random nonce that's used to generate the private key by HMAC-ing it with S and the domain name, not an encrypted private key, but that's all opaque implementation details. E can be anything as long as it reconstructs the key.)
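That nonce-based variant can be sketched in a few lines, with HMAC standing in for the real P-256 scalar derivation (all names illustrative):

```python
import os
import hmac
import hashlib

S = os.urandom(32)  # stand-in for the device's unchangeable internal secret

def register(appid: str):
    """Registration: mint a random nonce E and derive private key A from it."""
    E = os.urandom(32)
    A = hmac.new(S, E + appid.encode(), hashlib.sha256).digest()
    return E, A  # a real device would return E plus the public key B, and forget A

def reconstruct(E: bytes, appid: str) -> bytes:
    """Sign-in: the same HMAC recomputes A from the E the site stored."""
    return hmac.new(S, E + appid.encode(), hashlib.sha256).digest()

E, A = register("https://www.facebook.com")
print(reconstruct(E, "https://www.facebook.com") == A)   # True: key reappears
print(reconstruct(E, "https://fakebook.example") == A)   # False: wrong origin, wrong key
```

Because the appid is mixed into the derivation, even a stolen E reconstructs the right key only for the right origin, and the device itself stores nothing per site.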
This challenge field, as well as the origin (determined by the client and thus protected from phishing), are turned into JSON in a specified way by the client. Then it calculates SHA-256(json) and sends this to the Security Key along with a second parameter that depends on exactly what we're doing (U2F, WebAuthn, etcetera).
You can see this discussed at the low level in FIDO's protocol documentation:
The Security Key doesn't get told the origin separately; it just gets the SHA-256 output. This allows a Security Key to be simpler (it needn't be able to parse JSON, for example), and the entropy from the Relying Party has been stirred in with the origin before the Security Key gets involved.
As well as the values B and E, a Security Key also delivers a Signature over the SHA-256 hash it was sent, which can be verified using B. The Client sends this back to the Relying Party, along with the JSON, and the Relying Party can check:
That this JSON is as expected (has the challenge chosen by the Relying Party and the Relying Party's origin)
That SHA256(json) gives the value indicated
That public key B has indeed signed the SHA256(json)
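Those three checks can be sketched as follows. HMAC stands in for the ECDSA verify-with-B step, and the field names are illustrative rather than the exact clientData format:

```python
import json
import hmac
import hashlib

def rp_verify(client_data_json: bytes, signature: bytes, key_B: bytes,
              expected_challenge: str, expected_origin: str) -> bool:
    data = json.loads(client_data_json)
    # 1. The JSON carries the challenge we chose, and our own origin
    if data.get("challenge") != expected_challenge:
        return False
    if data.get("origin") != expected_origin:
        return False
    # 2./3. The signature covers SHA256(json) and verifies under key B
    digest = hashlib.sha256(client_data_json).digest()
    expected_sig = hmac.new(key_B, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected_sig)

key_B = b"stand-in-for-public-key-B"
cd = json.dumps({"challenge": "abc123", "origin": "https://example.com"}).encode()
sig = hmac.new(key_B, hashlib.sha256(cd).digest(), hashlib.sha256).digest()
print(rp_verify(cd, sig, key_B, "abc123", "https://example.com"))   # True
print(rp_verify(cd, sig, key_B, "abc123", "https://evil.example"))  # False: wrong origin
```

A response captured by a phishing site fails the origin check even though the signature itself is valid, which is the whole point of baking the origin into the signed JSON.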
The reason they go to this extra effort with "challenge" and confirming signatures during registration is that it lets a Relying Party confirm freshness. Without this effort the Relying Party has no assurance that the "new" Registration it just did actually happened in response to its request, I could have recorded this Registration last week, or last year, and (without the "challenge" nonce from the Relying Party) it would have no way to know that.
Thanks for correcting me on how Yubico have approached the problem of choosing E such that they don't need to remember A anywhere.
[edited: minor layout tweaks/typos]
It doesn't identify fraudulent sites (TLS is the tool for that), but it won't give a properly signed login response for gmail.com to a request from the site fakegmail.com.
That's a poorly worded answer to your question, but here are some slides I made to explain the specification:
Can one usb device work on two separate accounts for a given domain, (e.g. work gmail and personal gmail), or do you need two of them?
Is there a requirement that FIDO be implemented on a hardware device?
Sites need to move to WebAuthn, which works with the same tokens and browsers (well, Chrome, Firefox, Edge) have either shipped or demo'd with a promise to ship. But right now today U2F works in a lot of places if you have Chrome whereas WebAuthn is mostly tech demos. The most notable website that has WebAuthn working today is Dropbox, seamless in Chrome or Firefox, any mainstream platform (including Linux) and all the tokens I have work. That's what everybody needs to be doing.
Also, YubiKey 4 is a great device. Set it up with GnuPG and you have "pretty good privacy" — with convenience. I recommend this guide for setting things up: https://github.com/drduh/YubiKey-Guide
The great thing about YubiKeys is that apart from U2F, you also use them for OpenPGP, and the same OpenPGP subkey can be used for SSH. It's an all-in-one solution.
And, if you lose your fob or your backup fob you're boned.
> The issue weakens the strength of on-chip RSA key generation and affects some use cases for the Personal Identity Verification (PIV) smart card and OpenPGP functionality of the YubiKey 4 platform. Other functions of the YubiKey 4, including PIV Smart Cards with ECC keys, FIDO U2F, Yubico OTP, and OATH functions, are not affected. YubiKey NEO and FIDO U2F Security Key are not impacted.
Structurally, actually making these tokens should be commoditized anyway. So on the software side, it needs to be not absolutely painful to rotate credentials. Something like a one-time-pad that you can use in "in case of fire break glass" situations.
You can register as many keys as you like within reason, you can give them names like "Yubico" or "Keyfob" or "USB Dildo" and any of them works to sign in.
Once signed in you can remove any you've lost or stopped using, and add any new ones.
The keys themselves have no idea where you used them (at least, affordably priced ones, you could definitely build a fancy device that obeys FIDO but actually knows what's going on rather than being as dumb as a rock) and there's no reason for your software like a browser to record it. Crypto magic means that even though neither browser nor key remembers where if anywhere you've registered, when you visit a site and say "I'm munchbunny, my password is XYZZY" it can say "You're supposed to have one of these Security Keys: Prove you still do" and it'll all just work.
The point I was getting at was "if your one Yubikey is stolen, what do you do?" If you fall back on password authentication, then your Yubikey based system was only as secure as the password mechanism protecting your account recovery mechanism.
The answer might be "provision two keys and stick one in a bank deposit box", etc. Regardless, there's an inherent problem that you want your recovery mechanism to be as hard to crack as your primary authentication mechanism, but you need it to not be an absolute pain.
I don't consider losing a Yubikey to be a serious problem, though it's important not to use it to generate RSA keys, as then you will not be able to make any backups. Generate your keys in GnuPG and load them onto the key, keeping backups in secure offline locations.
They also tend to propose you provision several other 2FA mechanisms, such as SMS or TOTP OTP. But yes, I always begin by enrolling two Security Keys, and then one of them goes back in my desk drawer of emergencies.
The vulnerability is real and still exists. There was even someone in this HN thread that was planning to use an old key fob Arstechnica sent him, specifically for the OpenPGP feature.
I should have split my backup and vulnerability comments into two, because they've sparked two unrelated debates. It started out as such a simple comment! :)
Maybe you're talking about the U2F applet of the Yubikey? Then it's not affected by the bug you posted. And you should have backup codes enabled.
There are lots of different ways to skin a cat, but no one has established a definitive solution or made it easy or obvious. Something like a YubiKey is only one part of a solution, and without something more you are at risk. Or, perhaps there's a way to build redundancy into the scheme so you're never in that situation to begin with. What if the concept of a backup was built into the key exchange, and losing your original didn't necessarily lock you out?
The app also supports SSH keys.
Works very well for me and the service is free. https://krypt.co/
Last month I tried to make an e-banking account in South Europe. In 2018.
- They required "6-12 characters as a password, and no special characters". You can't hash special chars?
- Apparently it's okay, because "2FA". Which is a "changeable via a call" 4-digit-code, which the bank employee knows "only" two digits.
I'd be far more inclined to trust Twitter or GitHub than my bank with my data.
For regular guys like me, I can't think of any online service more important to protect than my bank account.
The problem is that all of these things are a PITA to administer.
I wanted a VPN between our two offices. Cool. I'll buy some YubiKeys, type some command line magic on Linux and I'll be good to go ...
This stuff is fine if you have 100+ people and the resources to administer.
If you simply want to manually distribute stuff to <10 people, it's a nightmare.
Until I can set up something easily at the 10-person level and scale it gradually to 100+, this stuff is going to remain tractionless.
The way the article is written, it makes it sound like the physical key is a replacement for 2FA instead of just a hardware device that handles the second factor (while leaving the password component in place).
You can already use the same process on your GMail if you have a compatible U2F key.
Of course, more sensitive stuff (access to production, access to pay stubs, access to $cloud_erp) requires re-entering password plus the security key.
Sometimes 'replacing passwords' is used to mean 'replacing the traditional username and password login' as well.
This is a common misconception. The threat model of 2FA is not "I lost my device, and it is now in the hands of someone who knows the password".
The threat model of 2FA is one of:
1) "An attacker has gained remote access to my computer, but not physical access"
2) "I have been targeted by a sophisticated phishing attack, and I trust the machine that I am currently using"
TOTP (and even SMS) protects against (1) in most cases, though U2F is still preferable. U2F is the only method that protects against (2).
A bit of clarification: U2F protects against phishing attacks by automatically detecting the domain mismatch when a link from a phishing email sends you to g00gle.com rather than google.com, which is something a human might overlook while typing in both their password and the second factor they've been sent via SMS. However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, that would also alert them that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com. So this isn't exactly the only method that protects against the second scenario above... though I will concede that using a password manager in this way would sort of change 2FA from "something you know and something you have" to "these two somethings you have" (your PC with your saved passwords and your USB authenticator), which is something that might be worth considering.
Regardless, these physical authenticators are a huge step up from SMS and I'm very happy that an open standard for them is being popularized and implemented in both sites and browsers.
Lots of websites do weird modal overlays, domain changes and redirects, redesigns, or other tricks that break password autocompletion. I've never seen a secure password manager that's robust enough against all of these that it would eliminate the human factors causing the phishing opportunity here.
Apparently Google hasn't either, because that was their motivation behind developing these schemes.
Would you be able to elaborate on this? I'm not understanding the difference between TOTP and the physical key from the article for this scenario.
With U2F, there is communication between the browser and the device, requesting authentication for a specific origin hostname -- that can't (shouldn't) be fooled by a phish hosted at Google.com-super-secure-phishing.net
You say that, but the overwhelming body of evidence from real-life phishing attacks and red-team exercises demonstrates that even very technologically-literate engineers will not consistently notice.
Any "type in a code" or "approve this login (yes/no)?" authentication factor is technically vulnerable. All the phishing site needs to do is proxy the authentication to the actual site in real time.
These guys put together a great overview of the approach: https://www.wandera.com/bypassing-2fa/
I always thought the 2FA threat model was "Someone acquired my password" or else "someone has access to my email account and may try to do password resets by email."
> Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical Security Keys in place of passwords and one-time codes, the company told KrebsOnSecurity.
A more in-depth quote is later in the article: "Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their password at that site (unless they try to access the same account from a different device, in which case it will ask the user to insert their key)."
The parenthetical seems to imply that they're doing initial auth (and thus cookie generation) with password + U2F, and then re-validating U2F for specific actions / periodically without re-prompting for the password, similarly to how GitHub prompts for "sudo mode" when you do specific account actions.
They support x509 certs, which use PINs. Whether it needs the PIN once-per-session, once-per-15seconds, or once-per-use is configurable. The number of failures before it locks is also configurable. More details can be found here: https://developers.yubico.com/PIV/Introduction/
They also support TOTP/HOTP, where the computer asks the device for a code based on a secret key that the device knows. This can require a touch of the button.
EDIT: TOTP/HOTP modes do support a password, as cimnine pointed out. I'd forgotten about that setting.
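Those TOTP/HOTP codes are just the RFC 4226 HMAC-truncation applied to a time counter (RFC 6238); a minimal stdlib sketch, assuming the common SHA-1 / 6-digit / 30-second parameters:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) over the current 30-second time step."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret at time 59 seconds reproduces the published vector:
print(totp(b"12345678901234567890", t=59))  # 287082
```

Note there is no origin anywhere in this computation, which is exactly why TOTP codes can be phished and proxied while U2F responses can't.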
Yubico OTP is similar to TOTP/HOTP, and is the mode where the yubikey emits a gibberish string when pressed. The string gets validated by the server against Yubico's servers. This does not require a PIN.
The U2F mode does challenge/response via integration with Chrome and other apps. The app provides info to the USB device about the site that's requesting auth, then you press the device and it sends a token back. This is critical to the phishing protection: barring any vulns in the local app (which have happened before), you can't create a phishing site that will ask my Yubikey for a challenge that you can then replay against the real site. This mode requires physical touch but no PIN.
The PIN is used to configure the YubiKey itself.
The key (sic) thing about U2F isn't that it is new and special (it isn't -- it's plain old 2FA as used for more than a decade) but rather that it is practical to deploy for smaller organizations. You don't need to buy large quantities of keys. You don't need a special server. You don't need staff with special skills to deploy it. It works with "cloud" providers like Google and Facebook out of the box (same key as you use for your internal services).
However, the U2F and FIDO specs require a cryptographic assertion (with all that replay-attack mitigation stuff like nonces) that makes it so an attacker cannot reuse a token touch. I'd probably encourage a glance over this https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid...
Sadly the Wikipedia article doesn't have a good layman's explanation yet, but I'm sure it will soon.
Yes, at a high level it's still 2FA, but like most options in any factor of auth, it can be improved upon. (For a simple case, take fingerprint readers and look at the advances in liveness checks and how many unique points they require.)
Any other 2FA method can.
Short version: the keys are matched directly from the device to the site making it virtually impossible to phish unless you control the site itself.