CloudFlare partners with Authy to implement two-factor authentication (thenextweb.com)
52 points by danielpal 1580 days ago | 46 comments



Here's a blog post on the decision process we went through to select Authy and, in particular, why we didn't use the Google Authenticator app:

http://blog.cloudflare.com/choosing-a-two-factor-authenticat...


Really happy you guys decided to finally implement two-factor auth! I had suggested this after the compromise you guys had a while back: http://news.ycombinator.com/item?id=4067190

I was kind of rooting for DUO Security (http://www.duosecurity.com/) to be the vendor you chose, but I guess you can't win them all. Do you think maybe you could work on login accounting and some alerting next? :-)


It's not very straightforward, but it does seem you can revoke the app via this page -

https://accounts.google.com/b/0/SmsAuthConfig


This is something Google specifically does. And it's not something you can really do if you lose your phone (i.e. you need two-factor authentication to get in to disable two-factor authentication). Plus it's not centralized: if you have X accounts you have to go to all X accounts and disable them, and then you need to reconfigure all X accounts.

Honestly, that's a lot of work. I've been using Google Authenticator for 2 years now with 7 accounts. Every time I change phones (twice now) it's been a nightmare. Also, since I travel, half the time they don't work.

I am going to build Google Auth support into Authy and it will be 10 times better than the Google Authenticator app... how I wish Google would do it, but they abandoned Google Authenticator a long time ago (they didn't even bother to support retina displays).


> And it's not something you can really do if you lose your phone (i.e. you need two-factor authentication to get in to disable two-factor authentication).

Backup codes. It's not the cleanest approach in the world, but it's still an actual second authentication factor.

How can you safely disable Authy if you lose your phone without risking someone else having the same ability to do so? This is a solved problem.

I agree that the process of switching phones sucks, but it almost needs to in order to keep the MFA keys difficult to clone.


Not against 2-factor, but now I have to install yet another application of questionable origin to trust my access to. Why not just use Google Authenticator?


"Google Authenticator" is actually any RFC 4226/6238-compatible HOTP/TOTP client, including hardware dongles, phone apps, or whatever.

I'm not sure if Authy is using the same protocol.
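For reference, the whole standard fits in a few lines. Here's a minimal sketch of RFC 4226 HOTP and RFC 6238 TOTP in Python (checked against the test vectors in the RFCs' appendices):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA-1 over the 8-byte big-endian counter,
    # then "dynamic truncation" down to a short decimal code.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, now=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: TOTP is just HOTP with floor(unix_time / step) as the counter.
    t = time.time() if now is None else now
    return hotp(secret, int(t // step), digits)
```

With the RFC test secret (the ASCII bytes "12345678901234567890"), counter 0 yields "755224" and counter 1 yields "287082", matching RFC 4226 Appendix D. Anything that implements this, hardware or software, interoperates with "Google Authenticator" accounts.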

What really surprises me is that Twitter doesn't support any OTP solution; Twitter accounts getting hacked is fairly common (@mat), and there's basically no solution to it now. Facebook doesn't use OTP but uses the information they have to do probably the best knowledge-based authentication on the Internet (plus, it shows IPs in use and does geo-IP-based fraud prevention), but Twitter doesn't really know anything secret about you. I've bugged Twitter people about this several times.


Authy is based on the same standards.


So I can install just the Google app and enter my Authy codes there, and vice versa?


My understanding is that Authy uses a key that is twice the length of the Google Authenticator key, so they are not compatible. Also, there are other things that Authy does way better than Google Authenticator:

1. If you change phones you have to reconfigure all your accounts that are using Google Authenticator. With Authy it just works when you install the Authy app on the new phone.

2. Authy fixes time sync problems between your phone and UTC in the background so you do not have to worry.

3. And if you lose your phone with Google Authenticator there's no simple way to revoke access to all your accounts. With Authy there is.

4. Authy has a way to revoke tokens across all devices. If (like what happened to RSA) the private information were stolen from Authy they can invalidate all the tokens and securely reissue new ones.


#1 is the biggest problem with Google Authenticator today (as well as inconsistent use of Authenticator across various Google properties, but that wouldn't matter to a third party site).

AWS supports the same system in a less brain-dead way, treating the authentication as a separate feature per account profile and letting you re-enroll a new token and delete the old one. I'm pretty sure this is a Google Account problem, not a Google Authenticator problem. (AWS IAM is actually a pretty awesome product, but not very well documented, and from what I've seen, not used to even 5% of its potential by most users.)

It's obviously more secure to make changes to the authentication system harder than just "any single valid login from any device lets me reset authentication requirements for future logins", though. The real problem is being able to get a one-time authenticated login, use that session to add a new "always let me log in" credential, and leave everything else as-is. The legitimate owner then has no reason to be suspicious, and you can continue to subversively use the account. It's almost better when the legit owner is forced out of his account on a change like that, as he can then contact Google's excellent customer support (...) to resolve the issue.

Of course, Google exposes themselves to THAT with application specific passwords, which are neither application-specific nor otherwise limited, and can be created easily once you log in.


I'd rather deal with resetting OATH 2-factor tokens, than rely on proprietary apps like duo or authy, or rely on the security of mobile networks (for SMS). That complication is an unavoidable consequence of not relying on trusted third parties (like authy or duo), right? Apps like those also encourage fracturing of the 2-factor ecosystem, rather than relying on a single standard.

I realize that corporations with lots of employees to support see things differently, and centralized 2-factor proxy in that environment makes more sense.

The only problem I have with Google Authenticator is that it's barely maintained. Even obvious features, like rearranging the order of accounts, haven't been implemented. http://code.google.com/p/google-authenticator/issues/list?ca...

Though if they added export functionality, that would be a problem, now that I think about it, because anyone with even brief access to your phone could export all the OATH seeds to their phone.


> The only problem I have with Google Authenticator is that it's barely maintained. Even obvious features, like rearranging the order of accounts, haven't been implemented. http://code.google.com/p/google-authenticator/issues/list?ca....

There is also a weird bug in Google Authenticator (on iOS) where the edit button in the lower left doesn't work until you hit the legal information button, then go back. Very weird.


Actually Google seems to have fixed the one issue I really cared about; it's possible to create a new device (and then manually use the new code to put the app on both iPhone and iPad) without turning off 2-factor-auth entirely.

But they don't actually check that you have the 2 factor device when you do this, so all you need to do is get logged in once and you're clear to lock the original user out, change auth credentials, etc.

It's totally fine for the "protect a personal gmail account" threat model, but as far as I can tell, no one does 2fa in a way as secure as I'd want it for an enterprise/IT deployment.


> Actually Google seems to have fixed the one issue I really cared about; it's possible to create a new device (and then manually use the new code to put the app on both iPhone and iPad) without turning off 2-factor-auth entirely.

Interesting. I have been using 2fa for a while now, and hadn't noticed this yet (guess I don't view that page often). It seems they rely on someone needing to know the password and the generated pin to get to that point (or the pre-generated backup codes -- which are also not invalidated with the new 2fa replacement).

> But they don't actually check that you have the 2 factor device when you do this, so all you need to do is get logged in once and you're clear to lock the original user out, change auth credentials, etc.

I don't see how this is different than it was before. If someone gets your password and a valid 2fa pin or pre-generated code (e.g. they can log in), they could just change your password, turn off 2fa, etc. You aren't getting in then, either.


If you have a computer saved as "trusted", it doesn't require the dongle. I'm pretty sure people usually save their primary computer as trusted, so all this takes is an unlocked browser with potentially saved password for Google (not unreasonable, given how often people log into google services). There has to be some happy medium between 100% trusted computer (including admin pages) and untrusted computer (PIN every 30 days or every new Google service login...). I'd probably start by always requiring PIN for the admin page, particularly for features related to removing or changing the authenticator, even if the user is on a trusted computer.

What I want is a system where only momentary access to all login credentials normally used for a login (password (saved in browser?) and 2fa device) is enough to log in once, potentially lock out the authorized user, but not to have permanent access.

With a "real" token, at least in most enterprise systems, a normal user can't remove the token from his account. So you'd be able to log in once, maybe reset the password, but couldn't change the token or remove the need for a token.

I'd be fine with a system where removing 2fa was a big procedure, same as the lost password recovery should be (i.e. more than just one email). Maybe just requiring a 72h wait (after emailing you and a trusted designee?) where anyone could complain to stop it and escalate to humans?

There should be a "low trust path" way to lock someone out of an account (with minimal authentication), so if you lose your token it can be locked, which sucks, but being unable to check your email for 12-72h in the interim might be an acceptable compromise. All of this should be configured when you open the account, or as you rely more on the account, in advance of an actual breach.

I guess what I really want is an Internet login system which assigns different levels of trust and trust requirement to different actions, not the current basically binary system. Something more similar to credit (less checking for small amounts, repeated transactions, etc. than for a new mortgage).

Amazon sort of does this (they have at least 4 levels: not logged in, sort of logged in on a machine without 1-click, logged in enough to make purchases to existing addresses, logged in within the last 5 minutes or so to edit details). However, Amazon is light years ahead of everyone on a lot of things related to ecommerce, and it's not trivial to figure out how this should work.


Good point on the "trust for 30 days" thing. I generally use IMAP and only login to the web panel for administrative/settings purposes, and thus never check the 'verify this machine for XXX days' box. Google should indeed require PIN for changing any account settings, if you have 2fa enabled.


re: key length - Google Authenticator supports any length of key; Google happens to provide 80-bit keys (below the 128-bit minimum, per the HOTP spec!) for their own services. However, it does NOT support output longer than 6 characters, nor choosing the HMAC algorithm, nor (for TOTP) varying the step or offset.

re: 1 - By putting your MFA codes in a centralized (and synchronizable) location, you're losing much of the effectiveness of the second factor. So "when you install the Authy app on the new phone" now there are two (or more) devices with access to the second factor. So when someone hacks your Authy account, all of your MFA tokens are compromised.

re: 2 - time sync problems are supposed to be handled server-side, per the spec. As an implementor it's nice to not have to solve that problem, but you should be using a library for it regardless. Also, please find me a phone (at least one with GPS) that doesn't keep remarkably accurate time. I'd be more worried about server clock drift.

re: 4 - Any company dealing with MFA secret keys should have a way to revoke them. It's conceptually as easy as nulling out a cell in a database. By putting everything with this service provider, you're increasing the amount of damage a single breach can do.

There are a lot of things wrong with Google's Authenticator app, but the fact that it doesn't sync with a remote service is a HUGE benefit to security. I like the idea of a "revoke everything from this device" button, but I don't like having all my security eggs in one basket.


> There are a lot of things wrong with Google's Authenticator app, but the fact that it doesn't sync with a remote service is a HUGE benefit to security. I like the idea of a "revoke everything from this device" button, but I don't like having all my security eggs in one basket.

I strongly agree.


According to https://www.authy.com/help/faq, it looks like they're using HOTP (http://tools.ietf.org/html/rfc4226) with some version of SHA-2 and a 256-bit seed.

The suggested default for HOTP/TOTP is SHA-1 with a 160-bit seed, and that's what most systems seem to use.
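The hash and digit count are just parameters of the same algorithm, which is presumably how such a variant would work. A hedged sketch (the SHA-256 choice and 7-digit output here are assumptions prompted by the FAQ claim above, not Authy's published implementation):

```python
import hashlib
import hmac
import os
import struct

def hotp(secret: bytes, counter: int, digits: int = 7, algo=hashlib.sha256) -> str:
    # RFC 4226's dynamic truncation doesn't care which hash backs the HMAC,
    # so an SHA-2 variant is the same algorithm with a different hash plugged in.
    digest = hmac.new(secret, struct.pack(">Q", counter), algo).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

seed = os.urandom(32)  # a 256-bit seed is just 32 random bytes
```

The catch, as noted upthread, is that the stock Google Authenticator app can't consume non-default parameters like these, which would explain the incompatibility.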


Ah, that's my problem... if only Authy could integrate and let us add what we already have on Google Authenticator, I would dump Google Auth.


You will be very pleasantly surprised very soon. We will support Google Authenticator and it'll be awesome... I can't wait to share. I wish Apple was quicker approving apps. If you want a version sooner I can share via TestFlight; that way you can also help us make it better. E-mail me at d@authy.com and I'll make it happen.


Does this mean you have Google Authenticator support in the Android app already, since there aren't any approval times? Or are you specifically delaying updating the Android app to wait for Apple? That's not fun. :(


I'm a little puzzled by the way authy works.

It seems to me the Authy app refreshes to a new code whenever I turn the screen off and on. I don't know how it syncs with the authentication server. The app must not be based only on time ticks, like other 2-factor apps.

One way I thought of is to notify the authentication server with background network requests. But I guess this isn't the case, because it wouldn't work if you don't have a network signal (you can try it in airplane mode).

The other way I thought of is that the code must be self-verifiable. That means half of the code is a random number, and the other half is the real code computed from the random number and some credentials. If that is really the case, the security strength is only half the length of the code, i.e. 3.5 digits. That's not very safe to me, especially since the Authy website itself only needs one code to log in (without any `first-factor` password).

Any thoughts?


It is based solely on time. The reason we refresh the code as you turn the screen off and on is so that you always have 20 seconds (I hate it when I open Google Authenticator and I have to wait for it to refresh).

We do some complex things to make sure time is always synced in the background. If you are in airplane mode, obviously we can't contact the server. But if you do have a connection, we contact our server in the background and sync. We also store the delta of how much your time was desynchronized, so that next time you are in airplane mode we can make a good guess of how far your time might be off, and use that. We don't expose this to the user; it should just "WORK".


Thanks for your explanations. I didn't get all your details, but I did more experiments. You can tell me if I'm wrong.

1. If my phone has an Internet connection, whenever the screen is turned on, the code is refreshed.

If the code is based solely on time, the app may sync its time tick with the server every time it's turned on.

If the app sends its time to the server, the synced time value should also be encrypted and authenticated. The only pre-shared credential is the 6-digit pin sent by SMS, which might be a little short.

If the app gets the time from the server, it might have a slight time delay before the code is computed and shown due to the network latency.

2. If my phone doesn't have an Internet connection, the code is only refreshed every ~5s.

So I guess every 5s the app will generate a new code for the next 0~25s.

When the server wants to verify the received code it will check whether it matches with the codes generated in 0s, 5s, 10s, 15s and 20s.

So at least 5 different codes should work at the same time. This may be a nice way.


1. There is a window of tokens valid at any given time; it's called the look-ahead synchronization window size (http://www.ietf.org/rfc/rfc4226.txt).

2. The synced time value should also be encrypted and authenticated.

Not exactly what you are thinking. Time is transferred using HTTPS, so it's encrypted by that protocol, not our own.

3. Yes there is a slight delay, but see look-ahead synchronization window.

4. > If my phone doesn't have an Internet connection, the code is only refreshed every ~5s.

No. The code is refreshed the same way as if it had an Internet connection. It's just that we use previous information about how your device clock behaves to make educated guesses about synchronization.

5. When the server wants to verify the received code it will check whether it matches with the codes generated in 0s, 5s, 10s, 15s and 20s.

Yeah something very similar to this.

Read the RFC, we follow it very closely:

http://www.ietf.org/rfc/rfc4226.txt
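The look-ahead/look-back window from RFC 4226 that the reply keeps pointing at is simple to sketch: the server tries the submitted code against the current time step and a few adjacent steps (window size and step are the server operator's choice):

```python
import hashlib
import hmac
import struct

def totp_at(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # Standard RFC 6238 code for the 30-second window containing unix_time.
    digest = hmac.new(secret, struct.pack(">Q", unix_time // step), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, submitted: str, now: int, window: int = 1, step: int = 30) -> bool:
    # Accept the code for the current step plus `window` steps on either side;
    # this absorbs modest clock drift between client and server.
    return any(hmac.compare_digest(totp_at(secret, now + i * step), submitted)
               for i in range(-window, window + 1))
```

With `window = 1` and the RFC test secret, at t = 59 the server accepts both the current window's code and the previous one, which matches the observation above that several codes are valid simultaneously.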


Thanks



I don't think it is solely based on time. Did you read my comment carefully?


I have implemented HOTP/TOTP. HOTP is based on the secret seed and a counter value (e.g., button presses). TOTP substitutes time for the counter value.

I read your comment carefully and it didn't make any sense to me.

> The other way I thought of is the code must be self-verifiable.

No, code verification requires knowledge of the secret seed value.


Please read my comment to @danielpal.

By self-verifiable I mean that half of the code is a random number, and the other half is computed from the secret seed and the random number.

I guess another way without frequent syncing is to generate a new code every second and have the server check whether it's one of the 25 codes from the last 25 seconds. But this might be unnecessary and inefficient.


> By self-verifiable I mean that half of the code is a random number, and the other half is computed from the secret seed and the random number.

Yeah, TOTP doesn't do that.

> I guess another way without frequent syncing is to generate a new code every second and have the server check whether it's one of the 25 codes from the last 25 seconds. But this might be unnecessary and inefficient.

Here's how it actually works:

The auth code is generated entirely by:

1. The algorithm documented in RFC 4226 and 6238,

2. static preconfigured parameters of that implementation (e.g., SHA-1, 30s, 6 digits),

3. the secret seed for that token, and

4. the current time.

For a given token, (4) is the only thing that changes to generate new codes.
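The four inputs above make the observable behavior easy to check directly: any two instants inside the same 30-second window produce the identical code, so refreshing the display (e.g. by toggling the screen) cannot change the code mid-window. A small demonstration using the RFC test secret:

```python
import hashlib
import hmac
import struct

SECRET = b"12345678901234567890"  # illustrative seed (the RFC 4226 test secret)

def totp_at(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # (1)-(3) are fixed: the RFC algorithm, SHA-1/30s/6 digits, and the seed.
    # (4), the time, is the only moving input.
    digest = hmac.new(secret, struct.pack(">Q", unix_time // step), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# t=30 and t=59 fall in the same window (counter 1); t=60 starts the next one.
same = totp_at(SECRET, 30) == totp_at(SECRET, 59)
different = totp_at(SECRET, 59) != totp_at(SECRET, 60)
```

So a screen-off/on "refresh" only restarts the countdown display; the code itself changes only when the 30-second window rolls over.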


You didn't get my point...

I know what it is based on, except the implementation details.

My question was what the mechanism is to verify the code that's refreshed every time the screen is turned on (which is a little different from other authenticators).


So if you turned the screen off and on every 1 second, you'd get a different code every single time?

I don't think there's a reasonable way to do that within the TOTP specification. There's (eventually) got to be some rate-limiting or it will reduce the security.


Details on how this works for CloudFlare customers: http://blog.cloudflare.com/2-factor-authentication-now-avail...

Also, Authy is a YC company with a nice solution.


It's nice to see them start giving users a balance between security and user experience by implementing 2FA, which allows us to telesign into our accounts. I know some will claim this makes things more complicated, but the slight inconvenience each time you log in is worth the confidence of knowing your info is secure. I'm hoping that more companies start to offer this awesome functionality. This should be a prerequisite for any system that wants to promote itself as being secure.


When sent an SMS from Authy (in the UK), the sender is listed as "BulkSMS"; it would be nice if it was "Authy" or something. I didn't recognise why I would have an SMS from that sender until I opened it.

Looking at the pricing, is there any explanation of what "Users" and "Auths" mean exactly? Is that people who can authenticate with the application, or people who do every month? Not entirely sure.


1. We wish. We use around 14 different cellphone #s to bypass spam filtering etc. on SMS. Most people don't understand how hard it is to reliably send SMS internationally (Twilio makes it look easy in the US :))

2. Auths are times you call the verify/token API. Since it's an API, you can decide exactly when to call it. Most of our clients authenticate tokens every 14-30 days. So people can authenticate to your app via username + password or cookies, and then sometimes via username + password + token. We only count an auth when they do username + password + token.


I believe Twilio supports the UK.

Thanks for the clarification. Final question regarding the "large" plan: it lists "100,000+ Users" and "25,000+ Auths/month"; does the + mean there's no upper limit (with some degree of fair usage)?


The + means that at that plan you can buy more auths granularly. So you can buy 5,000 more auths if you need, or more users too. Prices depend on volume.


Great, thanks!


Ironic how the homepage shows a red lock icon (it loads resources over HTTP) and their demo is completely on http://


We don't transfer any resources over HTTP on www.authy.com, we even use HSTS on www.authy.com.

The demo is over HTTP because it's not meant to be a secure site, only something you use to see how it works. The database is destroyed daily, but I agree we should change it to HTTPS.


This Authy service really misses the point: the numeric code is redundant when you have a separate secure device with an Internet connection (i.e. a phone).

When the user logs in, issue a push notification which, when tapped, sends the code back automatically.

The numeric code can be used as a fallback if the network connection is down.


We tried push notifications, but they were unreliable (people disabled them, they were delayed, they don't work with bad reception).

We're working on something better.



