Google: Security Keys Neutralized Employee Phishing (krebsonsecurity.com)
775 points by sohkamyung 63 days ago | 409 comments



For those interested, I recommend reading how FIDO U2F works. There's more in a security key than just FIDO U2F, but FIDO U2F is easily the most ergonomic system that these security keys support. Simplified:

* The hardware basically consists of a secure microprocessor, a counter which it can increment, and a secret key.

* For each website, e.g., GitHub, it computes an HMAC-SHA256 of the domain (www.github.com) under the secret key, and uses this to generate a public/private keypair. This keypair is used to authenticate.

* To authenticate, the server sends a challenge, and the security key sends a signed response which proves that it holds the private key. It also sends a nonce (a counter), which it increments on each use.

If you get phished, the browser would send a different domain (www.github.com.suspiciousdomain.xx) to the security key and authentication would fail. If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.
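
To make the derivation step concrete, here is a minimal sketch in Python (using the third-party cryptography package). It is illustrative only: real tokens use P-256 ECDSA and vendor-specific key wrapping, and the Ed25519 curve and dummy device secret are just assumptions to keep it short.

    # Sketch of U2F-style per-site key derivation (illustrative only).
    import hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    DEVICE_SECRET = b"\x00" * 32  # dummy stand-in for the per-device secret

    def keypair_for(app_id: str):
        # Site-specific seed: HMAC(device secret, app ID); never leaves the token.
        seed = hmac.new(DEVICE_SECRET, app_id.encode(), hashlib.sha256).digest()
        private_key = Ed25519PrivateKey.from_private_bytes(seed)
        return private_key, private_key.public_key()

    # The same token yields unrelated keypairs for the real site and a look-alike:
    _, github_pub = keypair_for("https://www.github.com")
    _, phish_pub = keypair_for("https://www.github.com.suspiciousdomain.xx")

Because the keypair is recomputed from the domain on every use, a phishing domain simply derives a different key, and its signature verifies against nothing the real site has on file.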

I'm excited about the use of FIDO U2F becoming more widespread; for now, all I use it for is GitHub and GMail. The basic threat model is that someone gets network access to your machine (but they can't get credentials from the security key, because you have to touch it to make it work), or that someone sends you to a phishing website which you access from a machine that you trust.


It's also tremendously more efficient to tap your finger on the plugged in USB than it is to wait for a code to be sent to your phone or go find it on an app to type in. I've added it to everything that allows it, more for convenience than security at this point.

Most places that allow it require that you have a fallback method available.


It's more efficient, but remember the point of this story: it mitigates phishing attacks, which code-generating 2FA applications do not.


1Password does mitigate it to some extent by automatically copying the code to your clipboard after filling the form; both of these only work on the right domain. Of course you can still copy the values from the app manually, but at least it hints at things being wrong.


I always wondered what the point of using 1Password for 2FA was. After all, if you store your 2FA secrets in 1Password to generate codes, haven't you just reduced your 2FA to one factor?


If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Since 2FA only comes into play for protection if the password is compromised, if you're using a password manager that should mean that data breaches at unrelated sites shouldn't be a risk.

So we're down to phishing and malware/keyloggers being the most likely risks -- and TOTP offers no protection against those. If you're already at the point of keying your user/pass into a phishing site, you're not going to second-guess punching in the 2FA code to that same site. I'd even argue that push validation like Google Prompt would be at significant risk of phishing, unless you pay close attention to the IP address for which you're approving access.


> If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Sounds a little obvious to write it out, but it protects against someone stealing your password in some way that the password manager / unique passwords don't protect you against. Using a PM decreases those risks significantly, mostly because of how enormous the risks of password reuse and manual password entry are without one, but it certainly doesn't eliminate them entirely.


It's not at all obvious to me, because 1Password passwords are stored in the exact same places that 1Password-managed TOTP codes are. You might as well just concatenate the TOTP secret to your password.


Having a TOTP secret would protect against theft of credentials in transit. The TOTP code is only valid once, so that credential exchange is only valid once. They wouldn't be able to create any additional login sessions with the information they've intercepted. However, if they could see that, there's a good chance they could also see a lot of the other information you're exchanging with that service.


It creates a race condition in transit: if they can use the code before you, then they win. It can be intercepted at the network level, but also via phishing attacks - there is no domain challenge or verification in TOTP.

I know having someone malicious get into your account multiple times vs. once is likely worse, but it's hard to quantify how much worse it is - and of course using that one login to change your 2FA setup would make them equivalently bad.


Not quite exactly "equivalently bad", since a user is more likely to notice a 2FA setup change than they are a phishing site's login error and then everything working as usual, but yeah, perhaps it's splitting hairs at that point.


Which is why I'm wary of using my password manager for OTP, and use a separate one. Not sure if it's too paranoid, but it doesn't make sense to me to keep the two in the same place.


There appear to be two points being conflated — 1/ 2FA secrets stored on a separate device from your primary device with a PM provide more security than those stored on one device, and 2/ once you use a PM with a unique password for every site, much of what OTP helps with is already mitigated.

Both seem true, and what to do to protect yourself more depends on what kinds of attacks you're interested in stopping and at what costs. Personally, PM + U2F seems the highest-security, fastest-UI, easiest-UX by far — https://cloud.google.com/security-key/


If you're using 1Password for storing your passwords, then yeah, it would make sense to use something else for your TOTP.


This is the thing I struggle with: name a scenario where you would have your unique site password compromised but not have at least 1 valid 2FA code compromised at the same time.

The best answer I have for where TOTP can provide value: you can limit a potential attack to a single login.

I wanted to say you could stop someone doing MitM decryption due to timing (you use the 2FA code before they can), but if they're decrypting your session they can most likely just steal your session cookie which gets them what they need anyway.


Because you accidentally type your password for site A into the login for site B.


Someone “hacking” the 1Password web service

Logging in to a site on a public computer and the browser auto-remembers the password you typed

A border agent forcing you to log into a website (this scenario only works if you leave your second factor, which will most likely be your phone, at home)


Usually in a higher security environment, we'll make sure the authenticator is a separate device (phone or hard token) and expressly forbid having a soft token on the same device that has the password safe.


> If you're using a password manager to have unique passwords for every site, what does TOTP 2FA even protect you against?

Man-in-the-middle attacks, of course, which are possible on insecure connections. With the prevalence of root certificates installed on people's computers as corporate policy, by shitty anti-viruses, etc., it's very much possible to compromise HTTPS connections.

The TOTP 2FA code acts as a temporary password that's only valid for a short window (typically 30 seconds). A "one time password", if you will.

Yes, it still strengthens security.

Read 1Password's article about it: https://blog.agilebits.com/2015/01/26/totp-for-1password-use...
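
For a sense of how small the mechanism is, here is a minimal RFC 6238 sketch using only the Python standard library (the Base32 secret is a made-up example):

    # Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step,
    # dynamically truncated to 6 digits per RFC 4226.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step                # the "temporary" part
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret; output changes every 30 s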


This would make sense if virtually every website in the world didn't react to the short-term TOTP code by handing back a long-term HTTP secret.


If there's no point improving client authentication until you've improved website security and no point improving website security until you've improved client authentication then neither will ever get better.


If there's a MitM attack, you've already lost. Sure, they can only login one time, but they're in once you provide the authentication steps.

Phishing sites collecting and using the 2FA creds in real time was discussed here, among other places: https://security.stackexchange.com/questions/161403/attacker...

With open source tools like https://github.com/ustayready/CredSniper readily available, you're only going to stop lazy phishing attempts.

You only get protection if you assume the scripts are just passively collecting information for use at a later time. If they're actively logging in to establish sessions while they're phishing, it's game over.


But don't many sites require a second authentication to modify access to the account (change password, add collaborator, etc)? In that case, an attacker would need a second one-time code.


Normally I believe they just require the password. The threat model there is someone leaving their account logged in.


> Normally I believe they just require the password.

Shoot, you're right. Not sure what I was thinking. My bad.


Yeah, that's why codes don't make for a good second factor. You should use something like FIDO or a client cert, such that a MitM can't continue to impersonate the client.


The point is that one-time passwords are only valid once. If your password is stolen, it's stolen. If a TOTP code is stolen, it's probably not even useful, because it's already invalid by the time they log in (including for time-based codes, in well-designed software).

There's obviously a class of attack that hardware tokens protect against (malware) that password managers can't entirely (unless your operating system has good sandboxing, like Chrome OS for example). But TOTP really does protect against phishing to a degree, as well as certain attacks (keyloggers, or malicious code running on a login page in the browser).

Hardware tokens are the winning approach, but even when you put TOTP into a password manager it is far from useless.


It only protects against the most naive phishing attacks, where the attacker just accumulates passwords for use at some later date. More sophisticated phishing attacks will just copy the OTP in real time:

https://www.schneier.com/blog/archives/2015/08/iranian_phish...

U2F defends against that sort of phishing as well.


Sure, but most people aren't targeted by advanced adversaries, so using your password manager for TOTP can be a lightweight way to make most hackers completely disinterested in attacking your account. U2F requires an additional investment. Depending on the type of physical security you want, it's normally a good idea to invest in at least n+1 U2F keys, so you have a spare key you can keep with you and permanent keys in all of your devices. (Obviously, the latter means that your U2F can be stolen easier, but the reality is that this is not nearly as big of a deal as stealing a password, since you can unprovision a U2F key immediately upon realizing that it's gone.)


Proxying the authentication isn't really an "advanced" attack. In a 19 minute video[0] the author of CredSniper[1] gives a complete walk-through for setting up his proof of concept tool, including building the login pages and registering for LetsEncrypt SSL certs. The hardest part still remains choosing the domain name and getting people to click the link, and still people find ways to overcome those hurdles.

As TOTP use has increased, the basic phishing toolkit has evolved to match. Attackers want accounts, not passwords, so they're just adjusting to get working sessions. The passwords were only ever just a means to an end.

[0] https://www.youtube.com/watch?v=TeSt9nEpWTs [1] https://github.com/ustayready/CredSniper


That attack doesn't work when using 1Password. 1Password refuses to fill on the wrong domain.


You may not be the best example of how this can help; it sounds like you have good security sensitivity.

Where I'm working now, we deal with several credential loss incidents each month. Invariably, our users are tricked into authenticating via a bogus site. 2FA would protect the credentials from being used by unauthorised people. Our staff are encouraged to use password managers, but that does not help this situation.


TOTP can protect against knowledge leakage, as it is a second factor. For example, it will prevent someone from successfully using a password shared with LinkedIn, associated with a corporate email address, to log into Gmail/O365.

It doesn't prevent any sort of active phishing campaign, because the login process can just ask for and immediately use the TOTP credential. User gets a possible failure (or just content based on what they thought they were accessing), phisher gets account access.


While that's true because you have a single point of failure, I think it's more likely that your passwords get leaked through a site's security than through 1Password's (depending on how you sync, or whether you use their cloud version), so it's still more (though not the most) secure: if they find your password in a database, they still don't have your 2FA code.


It's not multi-factor auth.

Most of the smartphone based solutions are two-step auth -- it's just a different kind of secret that you know. If you use 1Password or Authy, your token is your 1Password/Authy credential.

The hardware based token approach is always going to be better, because the secret is part of the device and isn't portable. The Yubikey and challenge/response tokens are great as you need to have it present, you cannot easily get away with hijinks like putting a webcam on your RSA token.


I’d say that a separate phone app with MFA codes that are only stored offline qualifies as a second factor, as you need both the phone and its access code (fingerprint etc.) to see the code.


It can, but users have the ability to undermine those controls in many cases via Authy, 1Password, etc.


I consider possession of my device and a knowledge challenge in the form of the password and pin to be two factors. Use of a biometric in lieu of password is also two factors.

I don't see a way in which having the possession factor be on my keys is stronger than having it be in my laptop. In fact, for sites that require it my U2F key is in my laptop (Yubikey nano-C).

(Aside: That doesn't limit the usefulness of having a possession factor that is portable between devices, just I don't think it is necessarily stronger)

This is actually why I very rarely opt into the 2FA features of websites - I figure I already have two factors protecting me, but not necessarily factors recognized by the site.


You could also use two separate systems for password and TOTP storage - one gets passwords, one gets TOTP, and the one with passwords explicitly does NOT get the password for your TOTP storage.


I think 1Password discourages it too; the option to add a 2FA code is pretty hidden.

Marking something as "2FA enabled" is super easy in comparison.


If you're not tracking your 2FA code through 1Password you get a big warning banner[1].

[1] https://i.imgur.com/uENh9oL.png


Just add the "2FA" tag and the banner goes away :)

I was saying doing that is easier than adding 2FA through 1Password.


If you're using 1Password-generated passwords and storing TOTP codes in 1Password, how are the TOTP codes not just theater?


Normally I would reply back and explain, but you know more about this than I do, so instead I will ask a question.

Does it not protect against your password being compromised through some other channel? Sure, you're probably not reusing passwords, but what if they compromised it some other way? What if the website had a flaw that allowed someone to exfiltrate plaintext passwords but not get at other application data?

Or to put it another way: if you're using a password manager, why use TOTP codes at all if you believe there are no other attack vectors to get the password that TOTP protects against?


The website and the password manager in this scenario are storing the exact same secrets. If you're going to store them in a password manager, it is indeed reasonable to ask "why bother"?

TOTP is very useful! Just use a TOTP authenticator app on your phone, and don't put them in 1Password.


> TOTP is very useful! Just use a TOTP authenticator app on your phone, and don't put them in 1Password.

I was fully in that camp before I started talking with friends on red teams that were allowed to actually start using realistic phishing campaigns. Now I'm fully in the "U2F, client certs, or don't bother" camp.

Maybe I'm jaded, but it feels like the exploit landscape has improved enough that TOTP is as hopeless as enabling WEP on a present-day wireless network. Not only does setting it up waste your time, you're presumably doing so because you have the false belief it will actually offer protection from something. It may have been useful at one point, but those days are disappearing in the rearview mirror.

The only place I see TOTP still offering value is for people who re-use passwords, but only because it becomes the single site-unique factor for authentication.


U2F addresses phishing and password theft. TOTP just addresses password theft. That doesn't make TOTP useless; password theft is a serious problem, routinely exploited in the real world.


But the secrets serve different purposes; they aren't the same. So why not keep them in the same place? I'll admit that it is less secure of course, since someone could compromise your 1Password. But it is still more secure than not using TOTP at all, is it not?

Again, is there no attack vector against which TOTP is worthwhile when you're already using a password manager, even when the TOTP secret lives in that password manager?


I'm not really sure I see how storing TOTP secrets in 1Password is materially any more secure than not using TOTP secrets and just letting 1Password generate a random password for that site.


A keylogger sniffs your password for site X; now they have your password for that site and can log in. If you also had a TOTP code, they can only log in for the next 30 seconds using that code, but they can't send out an email with your password in a CSV file to their friends and expect it to be usable.

I know I'm wrong because you know everything, but I can't get past this particular one. Unless the argument is that attackers aren't that lame anymore; then, sure.


Best not to assume anyone is infallible. If you don't put people on pedestals you've got less cleaning up to do later when they inevitably fall off. Yes that includes you (and of course me).

2-3 minutes is more realistic for real sites than 30 seconds, because there is usually a margin allowed for clock skew. But yes each OTP expires and that's a difference for an attacker who doesn't know the underlying secret.

TOTP is also not supposed to be re-usable. A passive keylogger gets the TOTP code, but only at the same moment it's used up by you successfully logging in with it. Implementations will vary in how effectively they enforce this, but in principle at least it could save you.

Caveat: The system may issue a long-lived token (e.g. a session cookie) in exchange for the TOTP code, and bad guys _can_ trade that, unlike the code itself.

I think there's also a difference with passwords on the other side of the equation. If I get read access to a credentials database (e.g. maybe a stolen backup tape) I get the OTP secret and so I can generate all the OTP codes I need, but in a halfway competently engineered system I do not get the password, only a hash. Since we're talking about 1Password, this password will genuinely be difficult to guess, and guessing is the only thing I can do because knowing the hash doesn't get me access to the live system. In this case 1Password is protecting you somewhat while my TOTP code made no difference. If you have a 10 character mixed case alphanumeric password (which is easy with 1Password), and the password hash used means I only get to try one billion passwords per second, you have many, many years to get around to changing that password.

Still, FIDO tokens are very much a superior alternative; their two main disadvantages are fixable: not enough people have FIDO tokens, and not enough sites accept them.

[Edited to make the password stuff easier to follow]


The scenario of "attacker has a key logger but doesn't steal the entire password database" sounds like enough of an edge case to ignore. If someone's stealing data from my password manager I'm going to assume full compromise.


Do you have anything - statistics, examples of popular toolkits, something like that - to show this is actually just an "edge case"?

In the threat scenario we're discussing bad guys aren't "stealing data from my password manager" they just have the password and OTP code that were filled out, possibly by hand. They can do this using the same tools and techniques that work for password-only authentication, including making phishing sites with a weak excuse for why the auto-fill didn't work. We know this works.


> In the threat scenario we're discussing bad guys aren't "stealing data from my password manager" they just have the password and OTP code that were filled out, possibly by hand.

Possibly by hand? You are definitely not discussing the same scenario as everyone else. They're talking about password and OTP being stored in the same password manager, both filled out at the same time all in software.

A key logger is stealing those bytes right out of the password manager's buffers. It takes more sophistication to dump the database, but it's a very small amount more.


You are, alas, not unusual in mistaking the autofill feature, which ordinary users are told is about convenience, for a security measure.

In the real world users go "Huh, why didn't my autofill work? Oh well, I'll copy it by hand".

A "key logger" logs keypresses. That's all key loggers do. There are lots of interesting scenarios that enable key logging. You've imagined some radically more capable attack, in which you get local code execution on someone's computer, and then for reasons best known to yourself you've decided that somehow counts as a "key logger". I can't help you there, except to recommend never going anywhere near security.


Ok, so you don't see any use for TOTP when using 1Password then?


The issue is with storing your TOTP secret in the same store as your password. The idea of using MFA is that multiple secret stores must be compromised in order to grant access to a service.

If you put your TOTP secret on your phone (or Yubikey), then both your TOTP secret store (that is your phone or keychain) and 1Password store must be compromised in order to gain access to your account. TOTP is useful in this scenario.

If you put your TOTP secret in 1Password along with your site password, then only your 1Password store needs to be compromised. This is the scenario where TOTP becomes pointless.


Isn't that a less likely scenario? Or at least, a subset of the possible compromises, meaning you have materially improved your security in _some but not all_ cases. I don't disagree that it's best to not have TOTP in 1password, but isn't it still _better_ than not having TOTP at all?


I understand that, but isn't it still better to store it in 1Password than to not have TOTP at all? At least you're still protected against other attacks, right?


Marginally, sure, but what "other attacks" are you looking to protect against?

Most MITM scenarios are going to result in giving up at least one TOTP code -- and that TOTP code will be used to obtain a long-lived HTTP session (I can't remember when Google last had me auth).

I think it's common for folks to think that TOTP means it's safe to lose a code because it is short-lived and invalidated when a subsequent code is used (usually), but it just takes one code for a long-lived session cookie to be obtained.

If an attacker is in a position to intercept your password via MITM, phishing, whatever, they're in a position to intercept your TOTP code. They're not going to sit on that code -- they're going to immediately use it to obtain their long-lived session while reporting an error back to you.


No, I use TOTP and I use 1Password. But my TOTP secrets live in Duo's iPhone application.


And I also store them separately. But don't you agree that storing them in 1Password is still better than nothing, as there are still some use cases that you are protected against that way?


No, that's where you lose me. If you're using 1Password to generate passwords in the first place, then I really don't see how using it for TOTP accomplishes anything. To me, it looks like you could literally concatenate the TOTP secret to the 1Pw-generated password and have the same level of security.


In particular, OTP codes are intended to be single use; they're a ratchet. If a site does this properly, then any OTP code you steal from me is not only worthless when it naturally expires, it's also worthless once I use that code or a subsequent code to authenticate. If you used a passive keylogger, that may mean that by the time you get the key events, the OTP is already useless. Likewise for shoulder-surfing attacks.


TOTP != HOTP


Nevertheless, RFC 6238 (TOTP) specifically tells implementers that:

Note that a prover may send the same OTP inside a given time-step window multiple times to a verifier. The verifier MUST NOT accept the second attempt of the OTP after the successful validation has been issued for the first OTP, which ensures one-time only use of an OTP.
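
A sketch of what that rule looks like on the verifier's side; the in-memory store and the ±1-step skew window are assumptions for illustration:

    # Verifier-side sketch of the RFC 6238 one-time rule: remember the last
    # accepted time-step per user and reject any code at or before it.
    import base64, hashlib, hmac, struct, time

    def totp_at(secret_b32: str, time_step: int, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        digest = hmac.new(key, struct.pack(">Q", time_step), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    last_step = {}  # user -> last accepted time-step (in-memory for the sketch)

    def verify(user: str, code: str, secret_b32: str, step: int = 30, skew: int = 1) -> bool:
        now = int(time.time()) // step
        for t in range(now - skew, now + skew + 1):       # tolerate clock skew
            if hmac.compare_digest(totp_at(secret_b32, t), code):
                if t <= last_step.get(user, -1):
                    return False                          # replay: MUST NOT accept
                last_step[user] = t
                return True
        return False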


The question is whether there is any point in having an OTP secret if it's stored in the same location as the password.

We're not talking about stealing single codes, but the entire secret.

With HOTP the answer is yes, because of ratcheting. A clone of the secret doesn't let you impersonate the original device, because their counters will conflict as both are used.

With TOTP the answer is no. You can make codes freely, and the clone is indistinguishable from the original.

The rule you cite is basically irrelevant. It just means that original and clone can't log in at the exact same time.


You've short-circuited by assuming the threat model is a bad guy breaks into 1Password. But there's no reason to insist upon this very unlikely threat model, there are other threats that _really happen_ in which having both OTP and a password under 1Password saves you.

Getting obsessed with a single unlikely threat leads to doing things that are actively counter-productive, because in your single threat model they didn't make any difference and you forgot that real bad guys aren't obliged to attack where you've put most effort into defence.


First, I don't agree that if the attackers have access to the password, guessing that they have access to data stored with the password is "very unlikely".

Second, any theoretical advantage still has nothing to do with ratcheting...


First: Fuzzy thinking. The attackers have access to _a copy of the password_. The copy they got wasn't necessarily anywhere near the OTP secret.

If I tell my phone number to my bank, my mom and my hairdresser, and you steal it from the hairdresser, this doesn't give you information about my bank account number, even though the bank stored that with the phone number.

Bad guys successfully phish passwords plus OTP codes. We know they do this, hopefully you agree that in this case they don't have the OTP secret. So in this case 1Password worked out as well as having a separate TOTP program.

Bad guys successfully steal form credentials out of browsers using various JS / DOM / etcetera flaws. Again, they get the OTP code but don't get the OTP secret regardless of whether you use 1Password

Bad guys also install keyboard monitors/ logs/ etcetera. In some cases they could just as easily steal your 1Password vault, but in other cases (depending on how they do it) that isn't an option. I believe it's "very unlikely" in reality that they'll get your 1Password vault unless it's a targeted attack.

A passive TLS tap also gives the bad guys the password plus OTP code but not the OTP secret. Unlike the former three examples this is going to be very environment specific. Your work may insist on having a passive TLS tap, and some banks definitely do (this is why they fought so hard to delay or prevent TLS 1.3) but obviously your home systems shouldn't allow such shenanigans. Nevertheless, while the passive tap can't be used to MITM your session it can steal any credentials you enter, again this doesn't include the OTP secret.

Second: A ratchet enables us to recover from a situation where bad guys have our secret, forcing the bad guy to either repeat their attack to get a new secret or show their hand. TOTP lets us do this when bad guys get one TOTP code but not the underlying TOTP secret.


> Second: A ratchet enables us to recover from a situation where bad guys have our secret, forcing the bad guy to either repeat their attack to get a new secret or show their hand. TOTP lets us do this when bad guys get one TOTP code but not the underlying TOTP secret.

I'm just going to focus on this, because it's not based on opinions of likelihood but simple facts. TOTP does not have a ratchet. If you copy the secret, you can use it indefinitely.

A ratchet is a counter (or similar) that goes up per use, so you can detect cloning. TOTP does not have this. It does not store any state. If I log in every day, and the attacker logs in every day, you can't look at the counters to see that something is very wrong, because there is no counter.


I goofed by using the word "secret" in the ratchet description after earlier choosing "secret" to mean the TOTP Shared Secret.

In the situation we care about (which you think hardly matters, but I believe evidence shows to be extremely common) bad guys do NOT have the TOTP Shared Secret, it's in your 1Password Vault and the bad guys can't access that.

What they do have is a code, a One Time Password typically six digits long.

Because TOTP produces a _One Time_ Password, if I use that code, or any subsequent code, the one the bad guys have is now useless even if it has not yet expired. This forms a ratchet.

Ratchets aren't about detecting cloning, they're about what happens if bad guys temporarily get access. Can we recover? In many systems we're permanently screwed, if there's a ratchet we may be able to recover. For example this is essential to the design of OTR and the Signal Protocol.


How is that materially different to storing the password and TOTP in 1Password?


Obviously, because if you compromise my desktop, you still won't have my TOTP secrets.


But what if I seize your phone at a customs inspection, or a traffic stop? Don't I then have password and OTP?


Do you see any problem with using a phone TOTP authenticator, but when setting it up saving a copy of the TOTP secret in a file encrypted with my public gpg key?

The idea is that if I lose access to my phone, I can decrypt that saved copy of the secret, and load it into 1Password temporarily until I get my phone back or get a new phone and get everything set back up.


Before people started storing their TOTP secrets in desktop applications so they could auto-fill them in their browsers, this question used to be the front line of the 2FA opsec wars. I was a lieutenant in the army of "if you want to back up 2FA secrets, just enroll two of them; a single 2FA secret should never live in more than one place". I think that battle is already lost.

Lots of reasonable people back up their secrets, or even clone them into multiple authenticator applications. I try not to.


> Lots of reasonable people back up their secrets, or even clone them into multiple authenticator applications. I try not to.

Because if you lose access to the 2FA secrets, you lose access to your account. If that's just one account, recovery might be doable (depending on who ultimately is root on the machine). If it's your Bitcoin wallet or FDE though, you're toast.

There's also a variety of protocols used for 2FA. I've seen: USB 2, USB 3, USB-C, Bluetooth, NFC.

As for how people do this: they use a second key, save their key on a cryptosteel(-esque) device [1] (IMO overpriced, YMMV), a USB stick, a piece of paper, or gasp a CDROM. Where it's saved differs: it could be next to a bunch of USB sticks, in a safe, at a notary (my recommendation, though it does cost a dime or two), in a basement under a sack of grain, ...

[1] https://cryptosteel.com


What the actual fuck is this "cryptosteel" thing?


There's a FAQ on the bottom of the page.


I know, I read it. What the actual fuck is this? Who would spend money on this? How is this not an insane product concept?


> Who would spend money on this?

https://www.kickstarter.com/projects/zackdangerbrown/potato-...

https://en.wikipedia.org/wiki/Juicero

etc.

> How is this not an insane product concept?

I thought sanity died years ago.


It costs $199 and you can't even store '@' with it!



Thank you for the link; that was actually an informative comparison instead of cursing.


> if you want to back up 2FA secrets, just enroll two of them

Could you elaborate on how you do this in practice?


Just like the first one. Most U2F web sites let you register multiple keys.

Any one gives you access. So you take one with you and put one in a drawer at home.


Parent's argument is that it mitigates phishing - i.e. your normal workflow is that you go to a site and your credentials are automatically filled in, so you'd be suspicious if that doesn't happen. In my experience, the autofill breaks so much that I've started copying my password in manually all the time.


> In my experience, the autofill breaks so much that I've started copying my password in manually all the time.

FWIW, this has not been my experience with 1Password at all.


I use LastPass but have had the same experience - autofill is very, very reliable


The TOTP code does not add anything to phishing mitigation.


Depends on the exact attack. If it's a full MITM (including TLS), no. If it's a fake website that doesn't forward after password-based authentication, yes. U2F would also detect that the domain is incorrect, though so does my password manager - though that's based on a browser extension. I suppose if the browser gets misled, so would the password manager. And that did happen with LastPass (an XSS attack, IIRC).


Seems like they'd still protect you from anything that records your password and TOTP, but doesn't gain access to your store? E.g. a website gets some JS injected that skims your login. Which doesn't seem all that unlikely.

Basically it becomes "just" replay prevention. Which is a nonzero benefit, but totally agreed that it's not at the same level as a separate generator of some kind.


They are time-limited, at least? But yes, I've had similar arguments with coworkers who've started using 1Password for TOTP in the same way.


It seems odd to focus on physical tokens, when this could just as easily be built into the Browser/OS.

Sure, you also get some additional resistance when your machine is hacked, but it's pretty marginal compared to the phishing benefit.


I think part of it is the secure element. Apple is moving that direction with TouchID on MacBooks.


Can somebody please explain to me why hardware tokens/U2F mitigate phishing whereas 2FA does not? My imagination fails to show me a mode of attack that would be relevant here...


You can phish someone by getting their username/password, using that to log in to the targeted service, and then convincing the user to type their 6-digit 2FA code into the phishing page.

If they plug in their hardware token, the browser will give the token the domain name actually being visited, which won't match the legitimate domain name, so the attacker can't use the response from the key to log in.
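
On the server side the corresponding check is tiny, because the browser (not the page) writes the origin into the signed client data. A sketch; the expected origin and the 'origin' field follow the U2F/WebAuthn client-data format:

    # Sketch of the relying party's origin check: the clientData blob is
    # assembled by the browser and covered by the token's signature, so a
    # phishing page cannot lie about where the request came from.
    import json

    EXPECTED_ORIGIN = "https://www.github.com"  # example relying party

    def origin_ok(client_data_json: bytes) -> bool:
        client_data = json.loads(client_data_json)
        return client_data.get("origin") == EXPECTED_ORIGIN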


A phishing attack can often involve local compromise, making the user install malware etc. In that case, it's a simple attack variant to spoof the USB communication and get valid credentials whenever the user uses the key.


It can, but at that stage it's no longer a phishing attack, it's a full remote compromise. Your average phishing attack is just a web page.


Thanks! I imagine instead of via USB to hardware token, the query could theoretically go via my PC's Bluetooth to my phone?


One thing I don't understand is why apps like Authy or Google Authenticator aren't using push notifications to allow you to directly auth via unlocking or TouchID instead of having to go through the app. If you really want the user to type something, then you can still use a push notification for easy app access


It's mainly an issue with infrastructure and syncing a 2FA code to a specific phone or app.

Sending a push notification requires GA to register for push notifications with a server that has the Apple APNS certificate or firebase key. Google would likely have to run this central server and provide a portal/cloud console API for developers to register for sending these push notifications.

Authy already does this, providing both the TOTP and the ability to send "is this you signing in? yes/no" push notifications, however, charges for it: https://www.twilio.com/authy/pricing which is likely why not many providers actually use Authy and just generate a standard GA-compatible TOTP token.


Ah! Thanks for this answer. Makes sense.


Some do. A lot of companies that use Duo will set that up for their internals.

The problem is that those push solutions require that the company have some means of communicating with the app that you're using to trigger the push and the confirmation (as far as I can tell). This technology works around that by letting the browser talk to the plugged in device, circumventing all of the network/api bits.


> One thing I don't understand is why apps like Authy or Google Authenticator aren't using push notifications to allow you to directly auth via unlocking or TouchID instead of having to go through the app. If you really want the user to type something, then you can still use a push notification for easy app access

My company has something like that, through Symantec. When you need to authenticate, it sends a notification to your phone over the network for you to acknowledge.

It's terrible though: cell signal is horrible in our building, so the people who use it are constantly dealing with delays and timeouts. I opted for a hard token that has no extra network dependencies, and I'm happy with my decision.


Thank god they don't. I had to recently extract the keys from Duo's store for precisely this reason. All the notification crap is proprietary and uses GCM. It won't work on AOSP.


Logging in to Google works exactly like this at the moment.

It's probably because setting this up is more involved for the backend; setting up the key which you have to type in is fairly simple technically.


TOTP is designed to be usable even while offline.


Lord, I hope there aren't apps trying to use TOTP offline. TOTP works by using a shared secret between user and service. This secret and the current time are used to generate a code.

All parties involved must have the secret (this isn't public key crypto).

That means an app that can accept TOTP offline has the secret stored locally where it can be extracted.


But is that trade off worth it? Is the ability to work offline worth giving up a simple prevention of phishing attacks?


Full circle here, since FIDO U2F has phishing-resistance like push notifications and lets you work offline like TOTP. "Offline" in the sense that everything besides whatever you're authenticating against can be offline.


Push notifications offer no phishing resistance. The attacker can present a fake login experience and conduct a real login behind the scenes at the same time. If you think you’re logging in, you’ll approve the push for them.


If you are offline, how do you transmit the TOTP value-of-the-moment to the location where the protected resource is?


That's the 'T' in 'TOTP': it's Time-Based (if the clocks aren't synced it doesn't work)


A push notification to help you open the app doesn't prevent the app being usable offline though.


Authy used to have push notification support for LastPass but it has stopped working for me.


Okta provides this capability.


MS authenticator does that


That's the single reason I got a smart watch: just to have my 2FA codes on my wrist instead of getting my phone out of my pocket (I'm using Authenticator+).


Well, remember the entire point of U2F is to not be phishable. Authentication codes—and your smart watch—are phishable.


Any specific recommendations for a smart watch exclusively for 2FA?


Pebbles are quite cheap and offer 7 days of battery. The company went bankrupt, but you can still use the devices with rebble. The Pebble 1 is about 40€ and the rare Pebble 2 about 120€.


I learnt from this thread that my smart watch (Garmin Vivoactive HR) also has 2FA apps available for it (e.g. https://apps.garmin.com/en-US/apps/c601e351-9fa8-4303-aead-4...). I love the watch (long battery life, tons of apps including one from the famous antirez, built-in GPS, etc.) so I am thrilled that I have 2FA options for it as well.


There are probably better options. I'm using a Huawei watch first gen. Rock solid construction, awesome (always on) display, enough battery life, android wear 2, no proprietary strap bs. It's about 100 USD for B Stock.


This is not normally considered a “hacker-friendly” option, but I use an Apple Watch. The above-mentioned 1Password has a watch app, so by using it for my passwords and TOTP codes, I maximize my personal convenience.


>"It's also tremendously more efficient to tap your finger on the plugged in USB than it is to wait for a code to be sent to your phone or go find it on an app to type in."

But with regular TOTP and a software device on a smartphone, I can print out backup codes in case I lose my phone. This allows one to log in and reset their 2FA token. What happens if you lose your Yubikey or similar? I guess this doesn't matter as much in an enterprise setting where there is a dedicated IT department, but for individual use outside of the enterprise, doesn't TOTP with a software device have a better story in case of loss of the 2FA device?


> What happens if you lose your Yubikey or similar?

Get two, leave one in your safe deposit box. Every service I've seen that supports U2F supports multiple tokens.


I see. That doesn't help much if you're on the road travelling, unfortunately. At least with TOTP backup codes, someone at home can read you a printed backup code in order to disable and reset your TOTP.


Almost every site that I've set it up with actually requires you have a backup method (app, codes, sms, etc).


> I've added it to everything that allows it, more for convenience than security at this point.

It's convenient only when you physically have the security key; it's a hassle if you forgot or lost it.


I have one. It's attached to my key chain.

If I've lost my keys, I have bigger problems.

It's convenient.


You can (and generally do) have multiples of these tokens. They are easily revoked. The FIDO-only tokens are $15.


It doesn't have to be your only method.


If you're interested in seeing how it works in action, I built a U2F demo site that shows all the nitty-gritty details of the process - https://mdp.github.io/u2fdemo/

You just need a U2F key to try it.


> If you somehow managed to clone the security key, services would notice that your nonces are no longer monotonically increasing and you could at least detect that it's been cloned.

At least a year ago or so (last time when I checked) most services didn't appear to check the nonce and worked fine when the nonce was reset.


If you can reset the nonce without resetting the key, you can probably retrieve the key easily if you can read the traffic. The service should not need to check the nonce, and adding that much state is going to be complicated.


It's not that kind of nonce. It's not even called that formally, it's called the 'signature counter.' It's just a part of the plaintext signed with the keypair. There is zero risk of what you're talking about.

And how is it complicated to store a single integer per account and perform a comparison if `counter <= previousValue` at each authentication to see if it's not monotonically increasing? They already store that user's public key and key handle, they can store another 4 bytes.

In fact, the WebAuthn spec makes verifying this behavior mandatory. [0]

[0] https://www.w3.org/TR/webauthn/#signature-counter
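
Server-side, the whole check is a few lines; a sketch along the lines of the WebAuthn rule, with an in-memory dict standing in for the credential store:

    # Sketch of the signature-counter check: a counter that fails to increase
    # strictly signals that the authenticator may have been cloned.
    stored = {}  # credential_id -> last seen counter (in-memory for the sketch)

    def counter_ok(credential_id: bytes, new_counter: int) -> bool:
        previous = stored.get(credential_id, 0)
        if new_counter != 0 and new_counter <= previous:
            return False  # non-increasing: possible clone; flag or reject
        stored[credential_id] = new_counter
        return True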


The counter feature is dubious. You correctly describe the upside - if Bob's device says 4, then 26, then 49 it's weird if the next number is 17, and we may suspect it's a clone.

But there are many downsides, including:

Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.

The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least not with the same token.


> Devices now need state, pushing up the base price. We want these things to be cheap enough to give away.

State is only expensive when it adds a significant amount of die area or forces you to add additional ICs. If you need a ton of flash memory, you can't put it on the same die because the process is different, and adding a second IC bumps up the cost. However, staying with the same process you used for your microcontroller, you can add some flash with much worse performance... which is a viable alternative if you only need a handful of bits. Your flash is slower and needs more die area, but it's good enough.

> The counter makes tokens weakly trackable. If Facebook knows your token's counter was 205 when you signed in at work this morning and 217 when you signed in from your iMac this evening, somebody who visited GitHub at midday with counter 213 might be you; someone with counter 487 definitely isn't you, or at least not with the same token.

What kind of ridiculous threat model is this? "Alice logs into Facebook and GitHub, and Bob, who has compromised both Facebook and GitHub's authentication services..." Even then, it's not guaranteed, because the device might be using a different counter for Facebook and GitHub.


> the device might be using a different counter for Facebook and GitHub.

At least for YubiKey, it appears to use a global counter:

https://developers.yubico.com/U2F/Protocol_details/Key_gener...

> There is a global use counter which gets incremented upon each authentication, and this is the only state of the YubiKey that gets modified in this step. This counter is shared between credentials.

Having a global counter does seem like it could weaken the ability to detect cloned keys. If an attacker could clone the key and know the usage pattern (e.g., there are usually 100 uses of non-Facebook services between uses of the Facebook service), then they might be able to use it for a while without being detected. Though, having service-specific counters may have worse security ramifications (e.g., storing which services the key has been used with).

Though if an attacker is going to that much trouble, they may as well just use the wrench-to-the-head method.


The main benefit is that the number should always be increasing. The moment either key uses an old number, the service knows the security device was cloned. The attacker will have to be sure not to increment it more than the target or else an attempt by the target would notify of the cloned key.


Keep in mind that "Bob" in this threat model is Facebook (because they are one of the entities that tries to track what everyone is doing everywhere). So it only needs to get the Github nonce. Collusion on the part of Github is, I suspect, much more likely than Facebook compromising Github's login flow.

Maybe sites colluding with their ad providers to track people is not part of your threat model, but it definitely is for some people. Yes, I know Github does not host ads, so isn't a good example of this threat model.


U2F tokens are already cheap enough to give away. We carry them around in sacks and hand them out at events.


They are not that cheap.

The only YubiKey that works with mobiles (NFC) is $50. The cheapest U2F key I could find (it only has a USB-A port) is $20.


There are at least two U2F keys I was able to find on Amazon which were under $10 USD:

https://www.amazon.com/dp/B01N6XNC01 https://www.amazon.com/dp/B01L9DUPK6

The latter is an open source / open hardware design:

https://github.com/conorpp/u2f-zero


Neither of them works with smartphones, hence they're not of practical use.


I think you might be having trouble with the concept of a U2F token. The mail application on your phone doesn't need a token; it already has a long-term cryptographic binding to the server.


> Devices now need state, pushing up the base price.

You can buy pretty decent (16Mhz, 2K EEPROM, 8K flash) microcontrollers for less than twenty cents (my numbers are from 7 years ago, things are probably cheaper, faster and bigger now). A few bytes of stable storage -- whatever you need to safely increment a counter and not lose state to power glitches -- are not going to add significantly to the cost of a hardware token.
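
To make that concrete, the classic trick needs only two slots: always overwrite the slot holding the older value, so a torn write can never destroy the last good count. A sketch, with files standing in for two EEPROM locations:

    # Power-glitch-tolerant counter sketch: each slot holds (value, CRC);
    # reads pick the highest valid slot, writes go to the older slot.
    import os, struct, zlib

    SLOTS = ("slot0.bin", "slot1.bin")  # stand-ins for two EEPROM cells

    def _read(path):
        try:
            value, crc = struct.unpack("<II", open(path, "rb").read(8))
        except (OSError, struct.error):
            return None
        return value if crc == zlib.crc32(struct.pack("<I", value)) else None

    def read_counter() -> int:
        values = [_read(p) for p in SLOTS]
        return max((v for v in values if v is not None), default=0)

    def increment_counter() -> int:
        new = read_counter() + 1
        target = min(SLOTS, key=lambda p: _read(p) or 0)  # older/invalid slot
        blob = struct.pack("<II", new, zlib.crc32(struct.pack("<I", new)))
        with open(target, "wb") as f:
            f.write(blob)
            f.flush()
            os.fsync(f.fileno())                          # commit before returning
        return new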


I can think of a few ways to reduce that tracking risk. The token could use a per-site offset (securely derived from the per-site key) added to the global counter, and/or could have a set of global counters (using a secure hash of the per-site key to select one). I don't know how much that would increase the cost, or if there's anything in the standard mandating a single global counter.


Nothing mandates it. In fact, it's specifically discouraged in the WebAuthn spec:

> Authenticators may implement a global signature counter, i.e., on a per-authenticator basis, but this is less privacy-friendly for users.

Since you can have multiple keys on the same site, you could go one better, and have a per-key offset. When the key is rederived from the one-time nonce sent from the server, you'd also derive a 16-bit number to add to the 32-bit global counter. But even that wouldn't actually be enough to make correlating them impossible.

A large but finite set of independent global counters is a great idea, though. 256 32-bit integers is just 1 KiB of storage.
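
A sketch of that last idea; the slot has to be chosen from the credential itself so the token always bumps the same counter for the same site (the sizes and hash choice are assumptions):

    # Per-credential counter slots: hash the key handle to pick one of 256
    # independent counters, so counter values observed at different sites
    # are much harder to correlate. 256 x 32-bit = 1 KiB of state.
    import hashlib

    counters = [0] * 256

    def next_count(key_handle: bytes) -> int:
        slot = hashlib.sha256(key_handle).digest()[0]  # stable slot per credential
        counters[slot] += 1
        return counters[slot]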


Touch to auth is also the part that Google ignores for some strange reason. Their high security Gmail program defaults to remembering the device! There isn't a way to disable it either.


I guess I don't need to touch-to-auth when I start every work day ;)

Our internal gmail might not require it every day, but most systems at Google do. You can't get very far without it.


Thanks for the explanation!

Do you know why GNUK (the open source project used by Nitrokey and some other smart cards) chooses not to support U2F? I don't understand the maintainer's criticisms[0] and I'd like to probe someone knowledgeable to find out more.

[0] https://lists.gnupg.org/pipermail/gnupg-users/2017-March/057...


I am having some trouble understanding as well. Here is what I understand.

The point of GNUK is to move your GnuPG private key to a more secure device so it doesn't have to sit on your computer. With GnuPG, users are basically in control of the cryptographic operations: what keys to trust, what to encrypt or decrypt, etc.

With U2F, in order to comply with the spec you are basically forced to make a bunch of decisions that don't necessarily line up with GNUK's goals. You have to trust X.509 certificates and the whole PKI starting from your browser (CA roots and all that). Plus, U2F is basically a cheaper alternative to client certificates, but with GNUK you already have client certificates, so why go with something that provides less security?

To elaborate: With GnuPG, the reason you trust that Alice is really Alice is because you signed her public key with your private key. You can then secure your private key on a hardware device with GNUK. With FIDO U2F and GMail, you have to trust that you are accessing GMail, which is done through pinned certificates and a chain of trust starting from a public CA root. This system doesn't offer you much granularity for trusting certificates. Adding FIDO U2F to a system designed to support a GnuPG-like model of trust dilutes the purpose of the device. By analogy, imagine if you used your credit card to log in to GMail, maybe by using it as the secret key for U2F. The analogy isn't great, but you can imagine that even if you can trust that (through the protocol) GMail can't steal your credit card number, the fact that you are waving your credit card about all the time makes it a little less secure.

In general, people who work on GnuPG and other similar cryptography systems tend to be critical of the whole PKI, and I'm sympathetic to that viewpoint.


U2F really, really isn't at all like client certificates. The certs baked into tokens are for _batches_: the specification says (and famous implementations like those from Yubico do this) that a batch of thousands of similar keys should have the same certificate. It exists to say e.g. "This is a Genuine Mattel Girl Power FIDO Token" versus "This is a Bank Corp. Banking Bank Security Token". Relying parties are discouraged from examining it, since in most cases there's no reason to care.

Unlike your GnuPG key, the FIDO token isn't an identity. The token is designed so that a particular domain can verify that it is talking to the same token it talked to last time, but nothing more. So e.g. if you use your GnuPG key for PornHub, GitHub and Facebook, anyone with access to all three systems can figure out that it's the same person. Whereas if you use the same FIDO token, that's invisible even to someone with full backend access to all three systems.


In the last GNU related post about Emacs, a security person suggested changing defaults related to TLS to address many of the known dangers with the current PKI situation. There, the lead developer apparently didn't want to change the defaults because that would somehow be top-down aggression against user choice in the style of the TSA at the airport.

Here, you are saying that GNUK won't add FIDO U2F because the lead dev is critical of the whole PKI system. Thus, the GNUK user doesn't get defaults which allow them to easily bootstrap into the web services that are used by a large portion of the population.

I mean, that's fine and justifiable as individual projects go. But one could just as easily imagine the approach of these two projects switched so that Emacs reflexively reacted by choosing the most secure TLS settings for defaults, and GNUK being liberal with which protocols they add.

So what's the point of the GNU umbrella if users and potential devs essentially roll a roulette wheel to guess how each developer base prioritizes something as critical as security?


I appreciate the summary, but it's still a bit unclear to me. What do you mean by "for each website"? Certainly that doesn't mean every website in existence, so there must be some process by which a new website is registered with the hardware and the key communicated to the site?

But if so, I don't see how that solves the problem of "user goes to site X for the first time, mistakenly thinking it's Github." That registers a new entry for the site and sends different credentials/signatures than it would send to Github. But site X doesn't care that they're invalid, and logs you in anyway, hoping you will pass on some secret information.

Am I missing something?


Normal MFA is the user answering a challenge. Hopefully that challenge came from the expected site, but it is up to the user to verify the authenticity of the site. If the username/password/OTP challenge came from someone actively phishing the user, the phisher can use the user's responses to create a session for its own nefarious purposes.

Verifying the authenticity of a site is something that has been demonstrated both to be nontrivial and also something that the majority of users cannot do successfully.

U2F/WebAuthn tie the identity of the requesting site into the challenge - by requiring TLS and by using the domain name that the browser requested. So if the user is being phished, the domain mismatch will result in a challenge that cannot be used for the legitimate site.


Solely going by GP's summary, nothing needs to be 'registered with the hardware' because the public/private keypair is deterministically generated on-the-fly, cheaply, with a PRNG every time it's needed. Only two things are ever in the nonvolatile storage on the device: the secret key used as entropy to generate those keypairs, and the single global counter.
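Going by that description, a toy sketch of the derivation might look like this (illustrative only: a real device derives an elliptic-curve keypair, whereas here the "private key" is just the raw HMAC output, and DEVICE_SECRET is a made-up stand-in for the on-chip secret):

    import hashlib
    import hmac

    DEVICE_SECRET = b"burned-in-at-manufacture"  # never leaves the device

    def derive_private_key(domain: str) -> bytes:
        # Same secret + same domain -> same key every time; nothing to store.
        return hmac.new(DEVICE_SECRET, domain.encode(), hashlib.sha256).digest()

    real = derive_private_key("www.github.com")
    fake = derive_private_key("www.github.com.evil.example")
    assert real != fake  # the lookalike domain gets an unrelated key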

The system makes it impossible for phishing sites to log in to your account using your credentials. That's the threat model it guards against.

Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?


>The system makes it impossible for phishing sites to log in to your account using your credentials.

So it's an additional factor for authentication, not a way of identifying fraudulent sites to the user? Okay, but you also said:

>Entering 'secret information' that isn't user credentials just plain isn't part of it.

Which is it?

>Entering 'secret information' that isn't user credentials just plain isn't part of it. Though wouldn't anyone phished by e.g. FakeGmail already get suspicious if they don't see any of their actual emails that they remember from the last time they logged in to Gmail?

You would think, but people have definitely entered information in similar circumstances. Also, there's always "sorry, server problem showing your emails, feel free to send in the meantime".


What contradiction? It just plain isn't part of the threat model. Was that not clear?

Although, actually reading the spec, it can double as a bit of extra authentication of the website. Any site has to first request registration (the client uploads a new, unique, opaque key handle/'credential id' to the server, along with its matching public key) before it can request authentication (the server provides the credential id and a challenge, and the client signs the challenge).

A credential ID is a unique device-generated nonce plus a MAC over it. The real site will already have a registered credential ID, which the device takes, verifies, and then uses (via the nonce) to HMAC the private key back into existence.

A phishing site you've never visited before will have no credential ID. Any fake ones it tries to generate will be rejected since the MAC would be invalid. One from the real website won't be accepted either, because the MAC incorporates the website's domain, too. They'd have to get user consent to create a new key pair entirely, which a user could notice is completely different from what's normally requested at login. Then they'd have to consent again to actually authenticate.

https://developers.yubico.com/U2F/Protocol_details/Key_gener...

https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid...
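A loose sketch of that credential-ID round trip (assumptions: a random per-registration nonce and an HMAC tag binding the domain; the Yubico page linked above describes the real construction):

    import hashlib
    import hmac
    import os

    DEVICE_SECRET = b"device-master-secret"  # baked into the Security Key

    def make_credential_id(domain: str) -> bytes:
        nonce = os.urandom(32)
        tag = hmac.new(DEVICE_SECRET, nonce + domain.encode(), hashlib.sha256).digest()
        return nonce + tag  # stored by the server; opaque to everyone else

    def accept_credential_id(cred_id: bytes, domain: str) -> bool:
        nonce, tag = cred_id[:32], cred_id[32:]
        expected = hmac.new(DEVICE_SECRET, nonce + domain.encode(), hashlib.sha256).digest()
        # Rejects forged IDs, and real IDs replayed against the wrong domain.
        return hmac.compare_digest(tag, expected)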


>A phishing site you've never visited before will have no credential ID. Any fake ones it tries to generate will be rejected since the MAC would be invalid.

That was my original question: presumably there has to be some way for new websites to be registered on the system. Does it just categorically reject anything not on a predefined list? I mean, there are legit reasons to visit not-github! And new sites need to be added.

In order to say something is “fake”, that has to be defined relative to some “real” site you intend to visit, and I don’t see how this system even represents that concept of “the real” site. Phishing works on the basis that the site looks like the trusted one except that I’ve never been to it.

Put simply: I click on a link that takes me to a site I’ve never been to. Where does this system intervene to keep me from trusting it with information that I intend to give to different site, given that the new site looks like the trusted one, and my computer sees it as “just another new site”?

>What contradiction? It just plain isn't part of the threat model. Was that not clear?

Not at all. You said it “stops phishing sites from using your credentials to log in”. That implies some other secret that’s necessary to log in (making the original credentials worthless in isolation), and yet the next quote rejected that.

If you were just repeating a generic statement of the system's goals, which wasn't intended to explain how it accomplishes them, then I apologize for misunderstanding, but then I'm not sure how that was supposed to clarify anything.

Late edit: as in the other thread, I think I'm just being thrown off by this being mislabeled as a phishing countermeasure, when it's just another factor in authentication that also makes phished credentials harder to use. Not the same as direct “detection of fake sites”.


There doesn't technically have to be a way to register new sites. There is, but theoretically there never actually had to be, given keys are generated deterministically on-demand, using the website's domain name effectively as a salt. There's no system with a list of websites.

The signed challenge-response you give to the phishing site cannot be forwarded to the real site and accepted, because you used the domain name as part of your response, and as part of key generation, so it doesn't match. That's all that meant. 'Credentials' included the use of a public/private key, not just the typed password.


No! Registration is important. What you've described would be subject to an enumeration attack.

During Registration the Relying Party (web site) picks some random bytes, and sends those to the Client (browser). The Client takes those bytes, and it hashes them with the site's domain name, producing a bunch of bytes that are sent to the Security Key for the registration.

The Security Key mixes those bytes with random bytes of its own and produces an entirely random private elliptic curve key; let's call this A. The Security Key won't remember what it is for very long. It uses that private key to make a public key B, and then it _encrypts_ key A with its own secret key (call that S, an unchangeable AES secret key baked inside the Security Key) to produce E.

The Security Key sends back E and B to the Client, which relays them to the Relying Party, which keeps them both. Neither the Client nor the Relying Party knows it, but they actually have the private key; it's just encrypted with a secret key they don't know, and so it's useless to them.

When signing in, E is sent back to the Client, which gives it to the Security Key, which decrypts it to find out what A was again and then uses A to sign proof of control with yet more random bytes from the Relying Party.

This arrangement means if Sally, Rupert and Tom all use the same Security Key to sign into Facebook, Facebook have no visibility of this fact, and although Rupert could use that Key to get into Sally's account, the only practical way to "steal" the continued ability to do so would be to steal the physical Security Key, none of the data getting sent back and forth in his own browser can be used to mimic that.
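A simplified sketch of that wrap/unwrap round trip (AES-GCM is chosen here for concreteness; the actual cipher mode, nonce handling, and layout are vendor implementation details):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    S = AESGCM(os.urandom(32))  # stands in for the baked-in device secret

    def wrap_key(a_private_key: bytes, domain: str) -> bytes:
        nonce = os.urandom(12)
        # Binding the domain as associated data means E only unwraps for
        # the site it was registered on.
        return nonce + S.encrypt(nonce, a_private_key, domain.encode())

    def unwrap_key(e: bytes, domain: str) -> bytes:
        nonce, ciphertext = e[:12], e[12:]
        # Raises InvalidTag if E was tampered with or replayed on the
        # wrong domain.
        return S.decrypt(nonce, ciphertext, domain.encode())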


Right, there's a good security reason they have the registration step. Though I don't think what you described is quite how it works. The FIDO and CTAP protocols don't let the Relying Party provide any entropy to the authenticator. The only input is the domain name (plus a user handle in CTAP). The authenticator has to create the key with its own entropy. It doesn't need the server's entropy to have multiple keys per domain name.

(In Yubikeys' case, E is actually a Yubikey-generated random nonce that's used to generate the private key by HMAC-ing it with S and the domain name, not an encrypted private key, but that's all opaque implementation details. E can be anything as long as it reconstructs the key.)


No, the Relying Party absolutely does provide entropy here. Specifically the "challenge" field which you probably think of as just being for subsequent authentication is _also_ present in the registration and is important.

This challenge field, as well as the origin (determined by the client and thus protected from phishing) are turned into JSON in a specified way by the client. Then it calculates SHA-256(json) and it sends this to the Security Key along with a second parameter that depends on exactly what we're doing (U2F, WebAuthn, etcetera)

You can see this discussed at the low level in FIDO's protocol documentation: https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid... and you can see the Javascript end discussed in WebAuthn: https://www.w3.org/TR/webauthn/#createCredential

The Security Key doesn't get told the origin separately, it just gets the SHA256 output, this allows a Security Key to be simpler (it needn't be able to parse JSON for example) and so the entropy from the Relying Party has been stirred in with the origin before the Security Key gets involved.

As well as the values B and E, a Security Key also delivers a Signature over the SHA-256 hash it was sent, which can be verified using B. The Client sends this back to the Relying Party, along with the JSON, and the Relying Party can check:

That this JSON is as expected (has the challenge chosen by the Relying Party and the Relying Party's origin)

AND

That SHA256(json) gives the value indicated

AND

That public key B has indeed signed the SHA256(json)

The reason they go to this extra effort with "challenge" and confirming signatures during registration is that it lets a Relying Party confirm freshness. Without this effort the Relying Party has no assurance that the "new" Registration it just did actually happened in response to its request; I could have recorded this Registration last week, or last year, and (without the "challenge" nonce from the Relying Party) it would have no way to know that.
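Schematically, the Relying Party's three checks look something like this (a sketch only: the real registration signature covers additional bytes defined by the spec, and the ECDSA verify from the `cryptography` package is used just for illustration):

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, utils

    def verify_registration(issued_challenge, expected_origin,
                            client_data_json, signature, public_key_b):
        client_data = json.loads(client_data_json)

        # 1. The JSON carries our fresh challenge (freshness) and our origin.
        if client_data["challenge"] != issued_challenge:
            raise ValueError("stale or replayed registration")
        if client_data["origin"] != expected_origin:
            raise ValueError("origin mismatch")

        # 2 + 3. Public key B must have signed SHA256(json).
        digest = hashlib.sha256(client_data_json.encode()).digest()
        try:
            public_key_b.verify(signature, digest,
                                ec.ECDSA(utils.Prehashed(hashes.SHA256())))
        except InvalidSignature:
            raise ValueError("signature does not verify under B")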

Thanks for correcting me on how Yubico have approached the problem of choosing E such that they don't need to remember A anywhere.

[edited: minor layout tweaks/ typos]


>So it's an additional factor for authentication, not a way of identifying fraudulent sites to the user?

It doesn't identify fraudulent sites (TLS is the tool for that), but it won't give a properly signed login response for gmail.com to a request from the site fakegmail.com.

That's a poorly worded answer to your question, but here are some slides I made to explain the specification: https://docs.google.com/presentation/d/1AkcTHahME5xY-FExm6vN...


It could still be susceptible to a user mistaking fakegithub.com for github.com, but a pairing with github.com will never work with a request from a server at fakegithub.com. Likewise, github.com cannot request the user to sign an auth challenge for fakegithub.com. The requesting server is directly tied to the signature response.


Okay, but then that doesn’t sound like phishing protection, but obviation of theft of secrets ... aka regular multi factor authentication.


> For each website, e.g., GitHub, it creates a HMAC-SHA256 of the domain (www.github.com) and the secret key, and uses this to generate a public/private keypair. This is used to authenticate.

Can one usb device work on two separate accounts for a given domain, (e.g. work gmail and personal gmail), or do you need two of them?


One device can work on two separate accounts, no problem. For the same reason you can use the same password for two different accounts (although there are other reasons why you wouldn't want to do that).


So is the main difference between FIDO U2F and regular TOTP simply the addition of the HMAC-SHA256 of the domain on the server side?

Is there a requirement that FIDO be implemented on a hardware device?


Do you have a recommended device? Ideally it would work reasonably well with iphone as well as macbooks (unfortunately both usb-A and a courage's worth of usb-C).

thank you


I hope Firefox will implement it, because Chrome isn't a trusted browser anymore


Firefox has implemented it, although you need to go to about:config and enable "security.webauth.u2f" for it to work.


This is disabled by default because it doesn't entirely work. WebAuthn is fully implemented in Firefox, and on by default, but U2F is so far still much more common, and the U2F enabled by this feature switch is only kinda-sorta compatible.

Sites need to move to WebAuthn, which works with the same tokens and browsers (well, Chrome, Firefox, Edge) have either shipped or demo'd with a promise to ship. But right now today U2F works in a lot of places if you have Chrome whereas WebAuthn is mostly tech demos. The most notable website that has WebAuthn working today is Dropbox, seamless in Chrome or Firefox, any mainstream platform (including Linux) and all the tokens I have work. That's what everybody needs to be doing.


How much implicit trust do users place in the manufacturers of their security keys?


U2F is fantastic. I wish Apple supported it in Safari (hoping!).

Also, YubiKey 4 is a great device. Set it up with GnuPG and you have "pretty good privacy" — with convenience. I recommend this guide for setting things up: https://github.com/drduh/YubiKey-Guide

The great thing about YubiKeys is that apart from U2F, you also use them for OpenPGP, and the same OpenPGP subkey can be used for SSH. It's an all-in-one solution.


WebAuthn is coming to WebKit (it's already available in Firefox, Chrome and Edge). Once that is supported we should be able to have U2F everywhere.


Nice! I'm also looking at getting a YubiKey 4 or a Nitrokey after reading about it being used by all developers with commit access to kernel.org.

https://www.linux.com/blog/2018/3/nitrokey-digital-tokens-li...


Yea, except YubiKey got compromised.

https://www.yubico.com/support/security-advisories/ysa-2017-...

And, if you lose your fob or your backup fob you're boned.


That vuln only affected RSA keys generated for specific niche functionality and not most uses of the YubiKey.

> The issue weakens the strength of on-chip RSA key generation and affects some use cases for the Personal Identity Verification (PIV) smart card and OpenPGP functionality of the YubiKey 4 platform. Other functions of the YubiKey 4, including PIV Smart Cards with ECC keys, FIDO U2F, Yubico OTP, and OATH functions, are not affected. YubiKey NEO and FIDO U2F Security Key are not impacted.


That didn't stop me getting about 15 calls from RSA declaring Yubikey will never recover. The annoying thing with this non-issue is the FUD around it.


Hm, I suppose, though that is the functionality the poster I was replying to was discussing. One has to wonder, though, what other flaws are lurking below the surface of that chip. It isn't flawless. Once there is another major issue, it is going to be an abandon-ship type of situation. What are the alternatives, if any? Move to a new key that doesn't have the problem, or look into an alternative means, etc.


I think this is a revocation and provisioning problem: when the device is compromised, how hard is it to revoke that device and provision a new one for yourself?

Structurally, actually making these tokens should be commoditized anyway. So on the software side, it needs to be not absolutely painful to rotate credentials. Something like a one-time-pad that you can use in "in case of fire break glass" situations.


If you've ever used GitHub's SSH keys provisioning, any halfway decent U2F or WebAuthn implementation (including GitHub's) works a lot like that.

You can register as many keys as you like within reason, you can give them names like "Yubico" or "Keyfob" or "USB Dildo" and any of them works to sign in.

Once signed in you can remove any you've lost or stopped using, and add any new ones.

The keys themselves have no idea where you used them (at least, affordably priced ones; you could definitely build a fancy device that obeys FIDO but actually knows what's going on rather than being as dumb as a rock) and there's no reason for software like a browser to record it. Crypto magic means that even though neither browser nor key remembers where, if anywhere, you've registered, when you visit a site and say "I'm munchbunny, my password is XYZZY" it can say "You're supposed to have one of these Security Keys: prove you still do" and it'll all just work.


Thanks for the explanation. It all makes sense, and the public/private key system is awesome for that.

The point I was getting at was "if your one Yubikey is stolen, what do you do?" If you fall back on password authentication, then your Yubikey based system was only as secure as the password mechanism protecting your account recovery mechanism.

The answer might be "provision two keys and stick one in a bank deposit box", etc. Regardless, there's an inherent problem that you want your recovery mechanism to be as hard to crack as your primary authentication mechanism, but you need it to not be an absolute pain.


Most sites require you to set up another form of 2FA along with U2F (for example, TOTP using Google Authenticator). There are also recovery codes that you print and store on paper.

I don't consider losing a Yubikey to be a serious problem, though it's important not to use it to generate RSA keys, as then you will not be able to make any backups. Generate your keys in GnuPG and load them onto the key, keeping backups in secure offline locations.


Several of the sites offering 2FA begin by telling you a bunch of arbitrary one-use passwords for such emergencies. They suggest you write _those_ down and stash them somewhere.

They also tend to propose you provision several other 2FA mechanisms, such as SMS or TOTP OTP. But yes, I always begin by enrolling two Security Keys, and then one of them goes back in my desk drawer of emergencies.


Potentially difficult if you were relying on a unique product like YubiKey, which doesn't have a one-to-one competitor in the industry at the moment.


There are many makers of FIDO U2F compliant hardware devices these days.


The original poster was discussing the OpenPGP feature. The U2F feature of YubiKey wasn't compromised by the vulnerability.

The vulnerability is real and still exists. There was even someone in this HN thread that was planning to use an old key fob Arstechnica sent him, specifically for the OpenPGP feature.

I should have split my backup and vulnerability comments into two, because they've sparked two unrelated debates. It started out as such a simple comment! :)


Yes, but with OpenPGP you can just rotate your subkeys. For encryption subkeys it's advised to back them up somewhere either way.

Or maybe you're talking about the U2F applet of the Yubikey? Then it's not affected by the bug you posted. And you should have backup codes enabled.


The use case I gave is: You lost your backups and your main, now what? You're done. Firesale on your life or business. Backups are something everyone has to contend with in any situation, but it isn't one that has been completely solved in the security industry yet in a way that is acceptable or uniform in any way. The average user just doesn't have a clear system for providing a high level of protection for both their security and ensuring they have redundancy in their life or livelihood.

There are lots of different ways to skin a cat, but no one has established a definitive solution or made it easy or obvious. Something like a YubiKey is only one part of a solution, and without something more you are at risk. Or perhaps there's a way to create an encryption scheme with redundancies built in, so you're never in that situation to begin with. What if the concept of a backup was built into the key exchange, and losing your original didn't necessarily lock you out?


Is this really a part of the standard? There isn't a "I lost my token" process like there is an "I forgot my password" process on every website now?


None of this affects me — I generated my keys using GnuPG and I do have backups (offline, of course).


I'd like to mention that I've been testing the Krypton app (iOS only for now) for U2F. You install Krypton on your iOS device, and it creates keys that exist only on the device. You then install the extension for Chrome. When U2F is requested, the extension sends the challenge to the iOS device, which calculates the response and sends it back to the extension. The app can be configured to require approval or to always send the response.

The app also supports SSH keys.

Works very well for me and the service is free. https://krypt.co/


Good to hear you're liking U2F on Krypton. Android support was released last week, and Firefox/Safari support is coming soon!


I wish you just had the workstation download on the homepage again. I had to find your homebrew bottle GitHub repo to figure out how to install Krypton on my new MacBook.


Agree. The new page seems phishy. I double checked the domain and certificate before trusting the page at all. Other than that.. great product


Sorry for the inconvenience. You can also find the install instructions on the help screen of the app.


When I started using Krypton for ssh and code-signing last year, the first thing I did was ask the Krypton team on twitter if they were going to add U2F. Glad to hear it’s in beta! It’s rarer these days to subsume another device into our phones’ functionality, but it’s still a good feeling.


Am I the only one who is disappointed in the seeming stall of traction for U2F? Google, Github, and Facebook supported U2F 2 years ago - so all I can see is that Twitter, Dropbox, and niche security news like KrebsOnSecurity.com have added support since then? Sure, it's something, but in 2 years I would have expected more. Who am I missing? Without more websites, the consumer mass market has little incentive to adopt - and without users, websites have little incentive to support U2F - thereby furthering the stalling.


Well, maybe I'm over-reaching, but I think that most banking "security" sucks.

Last month I tried to make an e-banking account in South Europe. In 2018.

- They required "6-12 characters as a password, and no special characters". You can't hash special chars?

- Apparently it's okay, because "2FA". Which is a "changeable via a call" 4-digit code, of which the bank employee knows "only" two digits.

I'd be far more inclined to trust Twitter or GitHub than my bank with my data.


In my country, many banks force people to install "security modules", including a driver that monitors your network traffic. There is no privacy policy.


I needed a new bank and thought surely there will be one that offers U2F.. days of searching later, and I still have yet to find one that does. It seems like the vast majority of online banks don't even support any kind of 2FA except email/text. Really really sad.

For regular guys like me, I can't think of any online service more important to protect than my bank account.


From https://twofactorauth.org/#banking, the only American or Canadian bank that supports a Hardware Token is Wells Fargo - which only seems to support RSA SecurID: https://www.wellsfargo.com/privacy-security/advanced-access


Banks seem very slow to adapt to technology. For years after the release of the first iPhone, my credit union still used a Flash login, although they did have a mobile login link you could get from them by asking.


FWIW, in Poland some banks started using 2FA (many different types) several years before Google or any other site I know of.



Yet only Chrome is supported -- and this does not include chromium-browser on Linux.


It sounds like they force you to use a phone/email code when you log in from a new device? Or am I reading that wrong.


> Am I only the one who is disappointed in the seemingly stalling of traction for U2F?

The problem is that all of these things are a PITA to administer.

I wanted a VPN between our two offices. Cool. I'll buy some YubiKeys, type some command line magic on Linux and I'll be good to go ...

Psych!

This stuff is fine if you have 100+ people and the resources to administer it.

If you simply want to manually distribute stuff to <10 people, it's a nightmare.

Until I can set up something easily at the 10-person level and scale it gradually to 100+, this stuff is going to remain tractionless.


U2F was never fully supported in browsers, making it hard for sites to deploy it everywhere. The new WebAuthn standard is going to be supported everywhere, which makes it more likely that sites will actually use it.


Something like U2F is never going to find mass success in a consumer application. Every enterprise auth provider supports it, which is its major use case for now.


I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?

The way the article is written, it makes it sound like the physical key is a replacement for 2FA instead of just a hardware device that handles the second factor (while leaving the password component in place).


The key replaces the one-time code of 2FA - the password still has to be used.

You can already use the same process on your GMail if you have a compatible U2F key.


OK thanks, this clarifies the part that says "it began requiring all employees to use physical Security Keys in place of passwords and one-time codes," which I found super confusing.


Strange sentence, but I believe they mean replacing "password + one-time code" with "password + U2F".


Actually, U2F can be used to devise several different schemes: token + password, token alone, or even just token without a username. Of course, each of these has various advantages and disadvantages.


Some logins are cookie + security key (basically if I've already logged in today) which basically feels like "tap my security key and I'm logged in".

Of course, more sensitive stuff (access to production, access to pay stubs, access to $cloud_erp) requires re-entering password plus the security key.


The password can be replaced by something simpler like a PIN, which is why you'll read about U2F replacing passwords and one-time codes.

Sometimes 'replacing passwords' is used to mean 'replacing the traditional username and password login' as well.


> I guess this is a dumb question, but is it still "multi factor authentication" if you only use a single physical device to complete the login process?

This is a common misconception. The threat model of 2FA is not "I lost my device, and it is now in the hands of someone who knows the password".

The threat model of 2FA is one of:

1) "An attacker has gained remote access to my computer, but not physical access"

2) "I have been targeted by a sophisticated phishing attack, and I trust the machine that I am currently using"

TOTP (and even SMS) protects against (1) in most cases, though U2F is still preferable. U2F is the only method that protects against (2).


> U2F is the only method that protects against (2)

A bit of clarification: U2F protects against phishing attacks by automatically detecting the domain mismatch when a link from a phishing email sends you to g00gle.com rather than google.com, which is something that a human might overlook while they're typing in both their password and the second factor they've been sent via SMS. However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, then that would also alert them to the fact that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com. So this isn't exactly the only method that protects against the second scenario above... though I will concede that using a password manager in this way would sort of change 2FA from "something you know and something you have" to "these two somethings you have" (your PC with your saved passwords and your USB authenticator), which is something that might be worth considering.

Regardless, these physical authenticators are a huge step up from SMS and I'm very happy that an open standard for them is being popularized and implemented in both sites and browsers.


> However, if someone were to use a password manager and exclusively rely on autocomplete to fill in their passwords, then that would also alert them to the fact that something was fishy when their browser/password manager refuses to autocomplete their Google password on g00gle.com.

Lots of websites do weird modal overlays, domain changes and redirects, redesigns, or other tricks that break password autocompletion. I've never seen a secure password manager that's robust enough against all of these that it would eliminate the human factors causing the phishing opportunity here.

Apparently Google hasn't either, because that was their motivation behind developing these schemes.


> U2F is the only method that protects against (2)

Would you be able to elaborate on this? I'm not understanding the difference between TOTP and the physical key from the article for this scenario.


With TOTP, a sufficiently clever phish may convince you to enter the one time code.

With U2F, there is communication between the browser and the device, requesting authentication for a specific origin hostname -- that can't (shouldn't) be fooled by a phish hosted at Google.com-super-secure-phishing.net


Where do password managers fit in here? If a phisher convinces me to try to login to google.com-super-secure-phishing.net using my google account I'm going to notice something is wrong when my password manager refuses to fill in the login form.


This is where it comes down to user behavior. One of the security engineers from Stripe gave a talk about this at Black Hat last year -- she ran phishing campaigns in which users ignored that autofill didn't work and manually copied/pasted their password manager credentials into the phishing sites.

https://www.youtube.com/watch?v=Z20XNp-luNA


> If a phisher convinces me to try to login to google.com-super-secure-phishing.net using my google account I'm going to notice something is wrong when my password manager refuses to fill in the login form.

You say that, but the overwhelming body of evidence from real-life phishing attacks and red-team exercises demonstrates that even very technologically-literate engineers will not consistently notice.


Google and Apple both have mobile (non-SMS) based two factor prompts that seem equally immune to phishing?


> Google and Apple both have mobile (non-SMS) based two factor prompts that seem equally immune to phishing?

Any "type in a code" or "approve this login (yes/no)?" authentication factor is technically vulnerable. All the phishing site needs to do is proxy the authentication to the actual site in real time.

These guys put together a great overview of the approach: https://www.wandera.com/bypassing-2fa/


The current domain is sent to the device and used to derive the private key used to authenticate. If it's a phishing domain, the device will produce a signature from a key that won't verify on the real domain.


That's interesting, thanks.

I always thought the 2FA threat model was "Someone acquired my password" or else "someone has access to my email account and may try to do password resets by email."


First paragraph answers it:

> Google has not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical Security Keys in place of passwords and one-time codes, the company told KrebsOnSecurity.


If they "use (physical security keys) in place of (passwords and one-time codes)", that would no longer be MFA: they're authing strictly with "something they have".

A more in-depth quote is later in the article: "Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their password at that site (unless they try to access the same account from a different device, in which case it will ask the user to insert their key)."

The parenthetical seems to imply that they're doing initial auth (and thus cookie generation) with password + U2F, and then re-validating U2F for specific actions / periodically without re-prompting for the password, similarly to how GitHub prompts for "sudo mode" when you do specific account actions.


Correct me if I am wrong, but I think the YubiKeys are PIN based. So in order to use one to authenticate you have to enter a PIN, and three wrong attempts result in it locking. The PIN itself would be the "something you know," and the YubiKey is the "something you have."


Depends how you use the yubikeys.

They support x509 certs, which use PINs. Whether it needs the PIN once-per-session, once-per-15seconds, or once-per-use is configurable. The number of failures before it locks is also configurable. More details can be found here: https://developers.yubico.com/PIV/Introduction/

They also support TOTP/HOTP, where the computer asks the device for a code based on a secret key that the device knows. This can require a touch of the button.

EDIT: TOTP/HOTP modes do support a password, as cimnine pointed out. I'd forgotten about that setting.

Yubico OTP is similar to TOTP/HOTP, and is the mode where the yubikey emits a gibberish string when pressed. The string gets validated by the server against Yubico's servers. This does not require a PIN.

The U2F mode does challenge/response via integration with Chrome and other apps. The app provides info to the USB device about the site that's requesting auth, then you press the device and it sends a token back. This is critical to the phishing protection: barring any vulns in the local app (which have happened before), you can't create a phishing site that will ask my Yubikey for a challenge that you can then replay against the real site. This mode requires physical touch but no PIN.


YubiKeys support PINs for protecting TOTP/HOTP.


Right you are, my mistake. I've added an edit to my comment correcting that.

Thanks!


They do have a PIN but you don't enter it every time you authenticate.

The PIN is used to configure the YubiKey itself.


This isn't correct, they only have a single small button.


Poorly worded (or possibly misunderstood) — it was password + OTP → password + U2F. (In practice the OTP was also usually supplied by a dedicated USB stick, so the change was mostly transparent.)


Now the real question becomes: how often were they getting phished before the new policies? Knowing Google, there's no way they will answer THAT before another decade.


Why? What does it matter? Presumably it must have happened at least once for them to bring in this policy.


Well, for one, it would put the non negligible costs in perspective. Second, it would be an additional data point. More data is usually better than less of it. In this case, though, it's sensitive data, for a number of reasons, which is why I don't see it happening for years.


Huh? The article literally is about 2FA, as originally conceived. It isn't replacement for 2FA -- it is 2FA.

The key (sic) thing about U2F isn't that it is new and special (it isn't -- it's plain old 2FA as used for more than a decade) but rather that it is practical to deploy for smaller organizations. You don't need to buy large quantities of keys. You don't need a special server. You don't need staff with special skills to deploy it. It works with "cloud" providers like Google and Facebook, out of the box (the same key as you use for your internal services).


Not quite. A 6-digit code can be phished out of users pretty easily. They'll enter it anywhere it's asked, similar to a password.

However, the U2F and FIDO specs require a cryptographic assertion (with all that replay-attack mitigation stuff like nonces) that makes it so that an attacker can't reuse a token touch. I'd probably encourage a glance over this https://fidoalliance.org/specs/fido-u2f-v1.0-ps-20141009/fid...
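For instance, each assertion carries a signature counter; a typical server-side replay/clone check is roughly this (a sketch of the usual advice around the spec's counter field, not any particular implementation):

    def check_sign_counter(stored_counter: int, reported_counter: int) -> int:
        # Each touch increments the device's counter, and the server keeps
        # the highest value it has seen. A cloned key (or a replayed
        # assertion) will eventually report a counter <= the stored one.
        if reported_counter <= stored_counter:
            raise ValueError("counter regression: possible cloned authenticator")
        return reported_counter  # persist as the new high-water mark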

Sadly, the Wikipedia article doesn't have a good layman's explanation yet, but I'm sure it will soon.

Yes, at a high level it's still 2FA, but like most options in any factor of auth, it can be improved upon. (For a simple case, take fingerprint readers and look at the advances in liveness checks and how many unique points they require.)


When I say "2FA" I mean proper 2FA with a hard token. As used for 20 years or so in government, large companies.


No, the key thing about U2F is that it can't be phished.

Any other 2FA method can.


How do you phish a smart card?


The best explanation is really: https://fidoalliance.org/

Short version: the keys are matched directly from the device to the site, making it virtually impossible to phish unless you control the site itself.
