
Sysadmin at a school: we use GMail for our students and faculty, and we got hit by this hard right before the holiday break. Three employees and a handful of students all got hit by the attack within a two hour period. It's the most sophisticated attack I've seen. The attackers log in to your account immediately once they get the credentials, and they use one of your actual attachments, along with one of your actual subject lines, and send it to people in your contact list.

For example, they went into one student's account, pulled an attachment with an athletic team practice schedule, took a screenshot of it, and then paired that with a tangentially related subject line, and emailed it to the other members of the athletic team.

They were using bit.ly to obscure the address (in Russia). We had to take our whole mail system down for a few hours while we cleaned it up.




Requiring 2-factor auth would prevent this from being exploitable, right? Probably impossible in a school environment but in an enterprise situation, more palatable perhaps.


2FA would make it harder to exploit, but phishing attacks are getting fancier. They capture the 2FA code you enter and immediately start a session elsewhere with your password and 2FA code. Hardware 2FA with a security key (such as a YubiKey) is the only likely way to prevent phishing (excluding targets of state actors): https://support.google.com/accounts/answer/6103523?hl=en


> They capture the 2FA code

How can that be done? That's between my phone and Google, so how can they "listen in" on that?


The phishing site will ask you for your 2FA code and then enter it on the real Google login page.


Hmm, but that gets us back to "stage one": For that to work, you have to ignore your URL-bar...


Why would a yubikey prevent this? They can still send the 2FA code to Google to start your session...


No, they cannot with the U2F protocol (as implemented by yubikey).

The simplified version is, Google sends the browser a one-time key, which the browser forwards to the HW token to sign with its private key. Then the browser sends this back to the web server to verify, using its copy of the HW token's public key.

This would be vulnerable to MITM attacks, as you say.

So what the protocol actually does is concatenate the nonce sent by the web server with the origin of the web page as seen by the browser and have the HW token sign that. This way the server can verify that the HW token signed the right nonce for the right origin.

See https://docs.google.com/document/d/1SjCwdrFbVPG1tYavO5RsSD1Q..., search for "origin".
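
To make the origin-binding concrete, here is a rough Python sketch of the idea. It is not the real U2F wire format or crypto: real U2F uses a per-site ECDSA key pair where the server only ever holds the public key, and here an HMAC over a shared secret just stands in for the token's signature. All names are illustrative.

    import hmac, hashlib, os

    # Illustration only: real U2F keeps a private key inside the token and gives
    # the server a public key; a shared HMAC secret is used here for brevity.
    TOKEN_SECRET = os.urandom(32)

    def token_sign(origin_seen_by_browser: str, nonce: bytes) -> bytes:
        # The HW token signs the origin the browser reports plus the server's nonce.
        return hmac.new(TOKEN_SECRET, origin_seen_by_browser.encode() + nonce,
                        hashlib.sha256).digest()

    def server_verify(expected_origin: str, nonce: bytes, signature: bytes) -> bool:
        # The server recomputes over *its own* origin and the nonce it issued.
        expected = hmac.new(TOKEN_SECRET, expected_origin.encode() + nonce,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    nonce = os.urandom(16)  # challenge issued by accounts.google.com
    # Browser on the real site: origin matches, verification succeeds.
    assert server_verify("https://accounts.google.com", nonce,
                         token_sign("https://accounts.google.com", nonce))
    # Browser on a phishing proxy: the token signs the wrong origin, so the
    # relayed response fails verification even though the nonce is correct.
    assert not server_verify("https://accounts.google.com", nonce,
                             token_sign("https://accounts-google.evil.example", nonce))

The important part is that the origin comes from the browser, not from anything the phishing page can control.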


Oh I think I've never used this feature with my Yubikey - it's just been essentially an external keyboard that types rather quickly.


It's only available on newer yubikeys.


It's a different protocol. Not an expert, but as I understand it U2F isn't totally out of band: the browser communicates the URL, so the token you give wouldn't be accepted by Google when it is replayed.


@extrapickles describes it better further down: https://news.ycombinator.com/item?id=13376402


Google can prompt you to confirm the login via your phone. It appears to work well: there's a time-out, and this time-out is also triggered if a second login attempt is made in parallel (and reaches the confirmation stage).

So… whichever login attempt gets to confirmation stage last wins (not relevant in this situation), and the confirmation screen on (at least) my phone does not indicate anything regarding location (which is highly relevant).

This looks a little weaker than TOTP (you're basically trading a little security for the convenience of not entering a code while keeping the second factor) and a lot weaker than U2F.


> Hardware 2FA, a security key, (such as a Yubikey) is the only likely way to prevent phishing

For now.


Or manual challenge-response, like some internet banking tokens have.


My school is actually rolling out optional 2-factor auth. I'm not a fan of the system they use^, but it's neat that a University is taking advantage of some security best practices.

^Instead of using "standard" 2-factor that generates a code on-the-fly within an app like GAuth or Authy, users receive a text message with 10 codes. The first digit of every code increases sequentially (0972,1042,2512,etc), must be used in that order (0 code on first login, 1 code on second, etc.), and the page informs the user which number they're on.


Sorry to hear about your experience, Jarwain!

Duo offers a choice of authentication methods, depending on the usability and security requirements of your application or organization.

Duo Push is actually one of the easiest (and most secure) authentication methods, as one of the commenters pointed out:

https://www.youtube.com/watch?v=tPLxe9HUDjY

It might be worth pinging your IT/security dept to ask about enabling Duo Push as an option or to change the policy for SMS passcodes (eg. you can just have one passcode sent instead of ten).

- Jon Oberheide, Co-Founder & CTO @ Duo


Duo does work as advertised, and my uni uses it, but the privacy policy allows for a lot of personal data collection.

tldr: "Duo Security does not sell, rent, or trade and, except as described in this Privacy Policy, does not share any Personal Information with third parties for their promotional purposes." But Duo still collects A LOT of data on you.

From the policy: "Device-Specific Information: We also collect device-specific information (e.g. mobile and desktop) from you in order to provide the Services. Device-specific information includes:

attributes (e.g. hardware model, operating system, web browser version, as well as unique device identifiers and characteristics (such as, whether your device is “jailbroken,” whether you have a screen lock in place and whether your device has full disk encryption enabled)); connection information (e.g. name of your mobile operator or ISP, browser type, language and time zone, and mobile phone number); and device locations (e.g. internet protocol addresses and Wi-Fi). We may need to associate your device-specific information with your Personal Information on a periodic basis in order to confirm you as a user and to check the security on your device."

The policy continues to state that Duo may use this data for analytic/advertising purposes (although only in-house) as well as to comply with legal requests, subpoenas, NSLs etc.

Duo isn't collecting your data for nefarious purposes or to sell it to other companies, but they still are collecting A LOT of it. Other two-factor methods, like the ones used by Google and Facebook, allow clients to install their own code generators that don't collect personal data or even need access to the internet. Of course, these methods don't have push requests that you can just approve rather than typing in the code.


Also, if it's a US company and it ever goes bankrupt or sells its assets, third-party buyers aren't bound by any privacy policy whatsoever. Yes, this is crazy, and it means US privacy policies are basically meaningless; best just don't give them your data, but what can you do. Personally, I believe that collecting the data and pretending a privacy policy makes it okay is nefarious by itself already.


I think that's a fair read. The primary use of that data is for security use cases. Eg. if you're coming from an out-of-date browser or have risky Java/Flash plugin versions, we can notify you to update/remediate.

Another way to look at it: We collect security-relevant information on your device, but not your _personal_ data. In other words, we don't collect your email, photos, contacts, user-generated data, etc.


I'm at a large research university, and we use Duo across the institution. It really does work as advertised. The Duo Push feature combined with my iPhone's TouchID is very convenient (Duo Push also works on other devices).

Most importantly to me, though, the system has thus far been completely reliable. I haven't yet heard of a single case where somebody couldn't log in because of Duo. I'm not sure what our enterprise agreement is / how much this all costs, but it's a very good system for us.


cc: @jonoberheide

My Duo hardware token (the code generator with the button and the LCD) tends to "desynchronize" after long periods where you don't use it. The internal clock gets off, so it drifts in what token it returns vs what the server thinks it should be returning, and then it stops working.

Normally, if you log in on a regular basis the server corrects for this drift. There is probably a sliding window of N valid keys (say 10) and using one of them tells the server what the internal clock state is. But if you don't use it for a long time (more than 30 days in my experience), the clock drifts, you start going outside the window and it refuses to let you log in.

If your IT desk is open, they can "resync" it by typing in a couple numbers in a row, which lets the server scan the key sequence and find where your token is.

Use-case: We don't have Duo tokens rolled out system-wide, they are only issued for admin tasks and we have separate admin accounts for these with the Duo attached. I'm an "occasional sysadmin" who administrates several stable servers that mostly don't need to be touched.

As I don't need to use it day-to-day, my key desynchronizes quite often; I have had it happen at least three times. It would be bad if I had an after-hours emergency that depended on my Duo token, so I do not trust it. The hardware tokens are not reliable, in my book.

edit: The fix for me would be for the token to automatically resynchronize on the fly. Just like the IT guys can do, but over-the-wire. If the server sees (f.ex) three sequential login attempts with valid-but-stale keys, with the proper order and timing pattern, then it accepts them and resynchronizes the key window.

To prevent replay attacks, you would also need to add a constraint that the keys be newer than the ones last used for a successful login, but it should be doable. You would also want to avoid causing an account lockout as you type in the invalid keys.
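
Roughly, here is a speculative sketch of what that on-the-fly resync could look like for a counter-based (HOTP-style) token. The window sizes, names, and details are made up and are not how Duo actually implements it:

    import hmac, hashlib, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # Standard RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def resync(secret: bytes, server_counter: int, codes: list, scan_window: int = 1000):
        # Accept several consecutive codes that are ahead of the stored counter and,
        # if they line up, move the counter forward past them (never backward, which
        # also rules out replaying previously used codes).
        for c in range(server_counter + 1, server_counter + scan_window):
            if all(hotp(secret, c + i) == codes[i] for i in range(len(codes))):
                return c + len(codes)
        return None  # no match: reject, but don't count it toward account lockout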


Hi Paul! I believe your token should automatically resync if you enter three consecutive correct passcodes that are outside (but forward of) the current valid window.


Huh, I never would've expected to hear from the CTO just from making this post.

Thanks for the reply! I'll definitely get in contact with the school's OIT to figure out alternate options for authentication


No prob! I can't claim to be a HN veteran (/me glares at @tqbf), but if I hear people are having issues, happy to help.


So it turns out I can still use the Duo Mobile app. I have to re-add the device. Not the most intuitive but then again I figured it out on my own -shrugs-


I was a huge fan and evangelist of Duo up until the Duo Mobile 3.15.0 update on December 13, 2016, which disabled the ability to approve Duo Push from a locked phone (lock screen, Android Wear). That change was horribly communicated and has been inadequately defended when challenged, and it has shaken my faith in Duo.


Can Duo be used with Google Authenticator, or do you have to use "Duo Push"?


Cannot use Google Authenticator. Can't even use Duo Push; this is the SMS functionality.

I do plan on getting in contact with the school's OIT about enabling alternatives


I hadn't heard of Duo. Just looked briefly at the site. Does anyone have a TL;DR on that? Why would one use that rather than the native 2FA?


They can send push requests that you can just approve on your mobile device, no typing in those codes. They also have backup methods that work w/o needing internet access on your phone.

I think institutions also use Duo because Duo takes care of the whole thing, whereas traditional 2FA isn't trivial to implement for the institution (generating tokens and all of that). At least that's what I was told by my institution when they made us start using Duo.


> takes care of the whole thing

But I would have assumed that there is considerable work necessary on the backend for a web server to integrate with Duo.


Oh my god that's awful, what's the point of making it so counterintuitive?? I'll never understand the motivation of companies that roll their own 2FA instead of just using TOTP or Authy.


It was probably the worst way they could have implemented 2FA; we're still vulnerable to a MITM attack.

One of the more annoying things is that the codes are sent from a random 386 number. Out of the 7+ texts I've received thus far, only 2 were from the same number.

Apparently the company they're using is named https://duo.com/


That's odd, we use duo at work and it's great. Every user is configured to get a push notification directly to the device which bypasses the issues with SMS.


That requires the user to use the Duo app though, right?

I don't recall whether I had the option to use the app when I enabled MFA initially. However, after the fact, and as far as I can find, I cannot go back and enable the app.


That's correct, of course without having the app installed there is no option other than SMS or a hardware token.

I remember that configuring this is tricky, but I did eventually get user self enrollment configured with push being the default. Happy to dig more into my config, if you're curious: gabe@untapt.com


Totally curious, unfortunately it'd probably go in one eye and out the other since I'm not involved in the Uni's implementation.


Huh, I've heard good things about Duo. They're not a nobody at any rate.


We have security experts developing 2FA techniques, and then we have these sorts of people.


I assume some misguided soul got told that they needed to reduce the number of texts sent to save costs, but that's horrible.

SMS for 2fa is poor to begin with. I wish people would at least implement the standard TOTP/HOTP option as well if they are going to pull stuff like that.


My university uses the same system as yours, and what's worse is that in order to install their custom 2FA app on Android you have to configure your phone to allow apps from unknown sources. So I have to choose between using SMS codes that can be intercepted or letting an entirely unvetted app run amok on my phone.


This is the most unintuitive approach to 2FA I've ever seen


That's a pretty awful method of securing anything.

Like, that would prevent me from using 2FA.

Whatever happened to standards?


I guess someone wanted to make a new standard (https://xkcd.com/927/)


No. A man-in-the-middle phishing attack can ask you for your second factor and pass it through to Gmail.


This is definitely true of TOTP but U2F was designed to prevent phishing attacks by incorporating the hostname in the protocol[1], which means the attacker needs to successfully compromise SSL as well.

1. https://security.stackexchange.com/questions/71316/how-secur...


Very clever, thanks for sharing.

However I wouldn't want my second-factor to be attached to my browser. Seems way too volatile for me. Personally I'd rather keep TOTP and be vulnerable to time-of-use phishing.

Maybe if the browser had an OS API that a YubiKey could query...


Just to extend on what jon-wood said, that's definitely the other way around: U2F is an open standard and the intelligence lives on the USB/NFC device. Any browser which implements that standard[1] can login and, especially nice for security, the browser never gets access to the keys or, with devices like the YubiKeys which require you to touch a button for each request, even the ability to authenticate without user approval.

1. Currently Chrome has this, Firefox is close (50.1 shipped it but it only works in the e10s mode), and there are extensions for Safari and older versions of Firefox.


sigh read my response to the other guy


It's actually the other way round, the YubiKey (or other U2F token) has an API the browser queries, generally triggering the token requesting some sort of physical interaction.


I know how it is now, but that's not what I'm talking about. Currently the URL is not included in the hash, that's my point. It could be by having those two talk to each other. Who's the server and who's the client is beside my point.


Huh? What are you even talking about? This comment makes no sense to me in the context of what jon-wood said.

> the URL is not included in the hash

What hash? Nobody even mentioned a hash. The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.

> It could be by having those two talk to each other.

Who's "those two"? And what's "it"? I'm very confused.


> What hash? Nobody even mentioned a hash.

I mentioned a hash. The secret is hashed together with the time. _That_ hash.

> The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.

I know the secret is domain-specific. What I was describing is taking the secret, the time, AND THE DOMAIN and using them to produce the hash. This would break MITM. One of the comments above me mentioned this and I ran with it. But you're talking to me like you didn't read anything above....

> Who's "those two"?

Those two are the yubikey and the browser.


> I mentioned a hash.

I think you're confused. You have not mentioned the word "hash" even once in this thread prior to the previous comment I replied to.

Anyway, I think you're confusing U2F with TOTP. U2F does not rely on the time at all AFAIK; it uses public key cryptography, and authenticates by signing a data structure containing the domain name of the site and a server-provided nonce (among other things).

> What I was describing is taking the secret, and the time AND THE DOMAIN and use them to produce the hash.

I think there's still some sort of disconnect here, because up until this comment you've described nothing of the sort in this thread. Could you link the comment you're referring to where you explained all this?

> One of the comments above me mentioned this and I run with it.

If you're referring to acdha's comment about U2F, as acdha and others in this thread have explained, U2F (aka Universal 2nd Factor) is an entirely different protocol from TOTP (aka Time-based One Time Password). U2F does not use hashing or the system time in the way you seem to be envisioning, but it is also not vulnerable to phishing like TOTP is.

U2F interfaces with your browser, and uses a set of public and private keys (that is stored on the U2F device, not in your browser) to authenticate to sites in a way which can't be phished. It's not theoretical; it exists and can be used today with many popular sites, including Google, GitHub, Dropbox, and more. You just need a USB device which supports U2F (YubiKey is one, but there are many others).


I think most of us are having trouble understanding exactly the question which you're trying to ask – could you try to state it clearly and precisely?


That's assuming the attacker could log in with it before it expired, isn't it?


Yes, but as per the standard TOTP codes are valid for a window of 1 minute.

TOTP barely protects against phishing. What you want is a U2F key as the second factor. It's not like they are expensive anyway (usually 7-15 Euro), and quite a few large services already support U2F tokens (Google, Dropbox, GitHub, Fastmail, etc.).


Thanks!

Is the 1 minute window always the case? In the authenticator app, it seems like codes expire after ~30 seconds. If I wait till the last few seconds before using the code, does that make me any safer?


If the codes are time-based (TOTP), they are typically generated with a rolling window of 30 seconds (as you saw in Google Authenticator). The 30s rolling window is the recommended (and widely implemented) default value from the TOTP RFC [0].

It is common but not universal for sites to accept, at a given time: 1) the current TOTP code, 2) the code from the previous window, and 3) the code for the next window. This is done as a partial mitigation for potential clock skew issues on the client that's generating the TOTP codes (e.g. your phone). In practice this means every code is valid for 1m30s, although sites may customize this (with or without changing the window size, which is typically not done because that parameter must be consistent system-wide).
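
For illustration, here's a minimal sketch of that acceptance window (plain RFC 6238 TOTP over HMAC-SHA1; the helper names are mine, not from any particular library):

    import hmac, hashlib, struct, time

    def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
        counter = int(t // step)  # which 30-second window we're in
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def verify(secret: bytes, code: str, step: int = 30) -> bool:
        now = time.time()
        # Accept the previous, current, and next windows: ~90s effective validity.
        return any(hmac.compare_digest(totp(secret, now + d * step), code)
                   for d in (-1, 0, 1))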

> If I wait till the last few seconds before using the code, does that make me any safer?

Maybe, but this is not practicable security advice. The latency of a MITM attack on a 2-factor TOTP login depends on the attack infrastructure and design, but can easily be made to be on the order of tens or hundreds of milliseconds. Reducing the window seems like it might help your security, but it can never be perfect, and there is a direct tradeoff with usability because users need time to look up the codes on one device and enter them on another.

Folks often say "enable 2FA" in response to news of new and sophisticated phishing attack campaigns, but it's critical to note that most commonly deployed 2FA (TOTP, HOTP, SMS) is trivially phish-able. 2FA is not an automatic defense against phishing, although some newer designs achieve this and were created specifically with this goal in mind: U2F is a good example.

[0]: https://tools.ietf.org/html/rfc6238


The RFC recommends a time step of 30 seconds + permitting at most one previous time step for handling out of sync clocks and slow/late entry:

The validation system should compare OTPs not only with the receiving timestamp but also the past timestamps that are within the transmission delay. A larger acceptable delay window would expose a larger window for attacks. We RECOMMEND that at most one time step is allowed as the network delay.

[...]

We RECOMMEND a default time-step size of 30 seconds. This default value of 30 seconds is selected as a balance between security and usability.

Since the client's clock could be behind or ahead of the server's clock, I have to correct myself: the window would be 90 seconds.

One could be a bit stricter and, for example, accept the previous time step only until halfway through the current time step, which would bring the window back to 60 seconds.

At any rate, all these timeframes are far too large to avoid real-time phishing attacks.


In some cases the server is configured to accept multiple codes (prev, current, next) to handle timesync issues between server and client (where the app is running).


No. How long does it take from you entering the code to the code reaching Google? A few tens of milliseconds? With a phisher in the middle it's a couple of extra milliseconds.


I don't think so, I'm not sure how it could.

One of the tweets points out that something like LastPass would help with this, as it wouldn't allow you to autofill your password (since it's not on the Google domain), but then you could get it manually from there anyway.


Well, when the attacker attempts to log in via the stolen credentials, they would get the 2FA check, and you would get an SMS.

Normally this would alert you to the fact that someone is logging in to your account, and would stop the attacker since they lack the 2FA one time pass. In this case though, since you've already fallen for the "I'm trying to log in to Google again", the attacker will probably fake the 2FA screen as well, and you'll merrily type it in.


1. You visit the attacker's page and give them your username and password.

2. The attacker immediately tries them, triggering an SMS to you and an 'enter SMS code' page for them.

3. The attacker shows the 'enter SMS code' page to you, and you enter the code from the SMS you just received, giving it to the attacker.

4. The attacker completes their login using the SMS code.

5. The attacker shows the user some believable error message (implying an error on Google's end, or a typo in the SMS code) then forwards the user to the legitimate Google login page.


Yep, that's what I'm saying too. If you've fallen for providing 1FA, you'll fall for 2FA too, since you think it's legit.


Apple's 2FA for iCloud will likely avoid this if you're careful. They do a GeoIP lookup of where the request is coming from and show the approximate location of the login attempt before they show you the 2FA code. For example, when logging in legitimately from home, it'll say that there's a login attempt from the city where I live. In the likely case where the phisher's server isn't in this area, it'll show something else, and I'll know what's up.

Obviously this isn't perfect because it depends on people actually paying attention to that, and on not having too many false positives due to GeoIP failures, but it seems like a nice improvement.

Apple has a nice UI on it (no surprise, I'm sure) where they show a map centered on the location in question, but even SMS-based solutions could include a quick "Login attempt from City" along with the code.
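
Something like this would be enough on the SMS side (a hypothetical sketch; lookup_city is a stand-in for a real GeoIP lookup, and the message format is made up):

    def lookup_city(ip: str):
        # Hypothetical: resolve the login attempt's IP to an approximate city,
        # e.g. via a GeoIP database. Returns None if the lookup fails.
        ...

    def build_2fa_sms(code: str, login_ip: str) -> str:
        city = lookup_city(login_ip) or "an unknown location"
        return f"Your verification code is {code}. Login attempt from {city}."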


Apple's 2FA is good, but their geo-location needs some work. I constantly get notifications that someone located 3000km away from me is trying to log in whenever I perform a 2FA sign on.

It's enough to concern me on the odd occasion that someone is trying a MITM attack.

I am guessing it is because in Australia, quite often the central server allocating IP addresses for our major ISPs can be in a completely different city?!?


That's too bad. Do other services get it right?


I was thinking this would be done automatically. You enter your username and password, they send them to Google and get a 2FA request. They show you the same screen and ask for your 2FA pass, which they then send on, and they're in.

Someone else mentioned U2F would work though as that's tied to the domain, but I don't really know much about that.


Autofill should usually pull the user out of their tunnel vision and focus them on the site and what they are doing.

Not perfect, but at least they're not blindly typing in passwords.


U2F would prevent this from being exploitable, but one-time password schemes like TOTP would not.


Why would TOTP not suffice to prevent this exploit?


They can use the TOTP token to auth themselves, whereas U2F will not work if you are the man in the middle.

U2F basically[0] signs the current URI and HTTPS key and sends it back. If there is a man in the middle, the signatures will not match and the auth will fail.

[0]: https://developers.yubico.com/U2F/Protocol_details/Overview....


Probably not; the fake page can also prompt for the second factor and then quickly do the real authentication using that.


This is why having a warning for non-HTTPS sites is so important: http://boingboing.net/2016/11/05/chrome-is-about-to-start-wa....


Yup, hopefully that would make a difference, though if we keep getting news like today's GoDaddy validation bug, this'll gradually lose value. :(


I don't see how that would help here. Couldn't the phishing site have HTTPS?


I feel like all these kinds of extra security burdens aren't worth it. If you could quantify and add up all the inconvenience caused by extra security past simple password logins, affecting all users always, it would surely be more than what would have been caused by the attacks prevented, temporarily affecting a few users.


What if they're able to compromise a person who works in HR, who probably has copies of passports, social security numbers, and other highly sensitive PII in their email inbox? The fact is, people send around all kinds of sensitive information via email, including IT/engineering, who probably discuss various security holes they're working on patching.


Not really. The phisher can just ask for the second factor the same way they ask for the password.


U2F knocks this on the head - a MITM site won't have the secret required to generate the token.


Holy crap. That is some serious ingenuity and skill being applied to the cause of evil.


>They were using bit.ly to obscure the address (in Russia).

Clicking on links from email is such an edge case that it's bewildering we allow any link to be routable from an email client. I'd love to see my email client block this stuff by default. There's no case for me where an email should lead me to Russia, be it via a shortener or not. Or to an IP address that is on any honeypot list or has a suspicious rating.

I think we need to rethink what is allowed to route out of emails. I can see a whitelist of legitimate and vetted companies with large warnings for anything else. A little AI would go a long way here. Maybe visit the domain, verify the site has SSL, verify it's not in another country, verify it's not trying to impersonate sites, check reputation lists, etc. A handful of predictive rules put into a browser or email client would greatly help here.

It's clear we can't spot phishing attempts well, but we may be able to make actually visiting the phishing site as difficult as possible. Links in emails should be seen as extremely hostile by default.
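
As a sketch of what a few of those rules could look like inside an email client (purely illustrative; the domain lists here are hard-coded stand-ins for the maintained reputation feeds a real filter would use):

    from urllib.parse import urlparse

    KNOWN_SHORTENERS = {"bit.ly", "goo.gl", "t.co", "tinyurl.com"}   # illustrative
    TRUSTED_SENDER_DOMAINS = {"google.com", "github.com"}            # per-org whitelist

    def link_warnings(url: str) -> list:
        warnings = []
        parsed = urlparse(url)
        host = (parsed.hostname or "").lower()

        if parsed.scheme != "https":
            warnings.append("link is not HTTPS")
        if host in KNOWN_SHORTENERS:
            warnings.append("URL shortener hides the real destination")
        if any(label.startswith("xn--") for label in host.split(".")):
            warnings.append("punycode hostname: possible lookalike domain")
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_SENDER_DOMAINS):
            warnings.append("domain is not on the organization's whitelist")
        return warnings

    # e.g. link_warnings("http://bit.ly/example") flags three problems before the
    # client even follows the redirect.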



