For example, they went into one student's account, pulled an attachment with an athletic team practice schedule, generated the screenshot, and then paired that with a subject line that was tangentially related, and emailed it to the other members of the athletic team.
They were using bit.ly to obscure the address (in Russia). We had to take our whole mail system down for a few hours while we cleaned it up.
How can that be done? That's between my phone and Google, so how can they "listen in" on that?
The simplified version is, Google sends the browser a one-time key, which the browser forwards to the HW token to sign with its private key. Then the browser sends this back to the web server to verify, using its copy of the HW token's public key.
This would be vulnerable to MITM attacks, as you say.
So what the protocol actually does is concatenate the nonce sent by the web server with the origin of the web page as seen by the browser and have the HW token sign that. This way the server can verify that the HW token signed the right nonce for the right origin.
See https://docs.google.com/document/d/1SjCwdrFbVPG1tYavO5RsSD1Q..., search for "origin".
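The origin binding described above can be sketched in a few lines. This is a toy model: real U2F uses ECDSA signatures over a binary structure defined by the FIDO spec, but an HMAC stands in for "sign" here so the sketch runs with only the standard library.

```python
import hmac, hashlib, os

def token_sign(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    # The token signs the server's challenge concatenated with the
    # origin reported by the browser -- the key anti-MITM step.
    payload = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes,
                  expected_origin: str, sig: bytes) -> bool:
    expected = token_sign(device_key, challenge, expected_origin)
    return hmac.compare_digest(expected, sig)

key = os.urandom(32)
nonce = os.urandom(16)

# Legitimate flow: browser reports the real origin.
sig = token_sign(key, nonce, "https://accounts.google.com")
assert server_verify(key, nonce, "https://accounts.google.com", sig)

# MITM flow: the phishing page's origin is baked into the signature,
# so verification against the real origin fails.
phished = token_sign(key, nonce, "https://evil.example")
assert not server_verify(key, nonce, "https://accounts.google.com", phished)
```

Because the browser, not the page, supplies the origin, a phishing site cannot ask the token to sign for accounts.google.com.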
So… whichever login attempt gets to confirmation stage last wins (not relevant in this situation), and the confirmation screen on (at least) my phone does not indicate anything regarding location (which is highly relevant).
This looks a little weaker than TOTP (you're basically trading a little security for the convenience of not entering a code while keeping the second factor) and a lot weaker than U2F.
Instead of using "standard" 2-factor that generates a code on-the-fly within an app like GAuth or Authy, users receive a text message with 10 codes. The first digit of every code increases sequentially (0972, 1042, 2512, etc.), the codes must be used in that order (the 0 code on the first login, the 1 code on the second, etc.), and the page informs the user which number they're on.
Duo offers a choice of authentication methods, depending on the usability and security requirements of your application or organization.
Duo Push is actually one of the easiest (and most secure) authentication methods, as one of the commenters pointed out:
It might be worth pinging your IT/security dept to ask about enabling Duo Push as an option or to change the policy for SMS passcodes (eg. you can just have one passcode sent instead of ten).
- Jon Oberheide, Co-Founder & CTO @ Duo
From the policy:
"Device-Specific Information: We also collect device-specific information (e.g. mobile and desktop) from you in order to provide the Services. Device-specific information includes:
attributes (e.g. hardware model, operating system, web browser version, as well as unique device identifiers and characteristics (such as, whether your device is “jailbroken,” whether you have a screen lock in place and whether your device has full disk encryption enabled));
connection information (e.g. name of your mobile operator or ISP, browser type, language and time zone, and mobile phone number); and
device locations (e.g. internet protocol addresses and Wi-Fi).
We may need to associate your device-specific information with your Personal Information on a periodic basis in order to confirm you as a user and to check the security on your device."
The policy continues to state that Duo may use this data for analytic/advertising purposes (although only in-house) as well as to comply with legal requests, subpoenas, NSLs etc.
Duo isn't collecting your data for nefarious purposes or to sell it to other companies, but they are still collecting A LOT of it. Other two-factor methods, like the ones used by Google and Facebook, allow clients to install their own code generators that don't collect personal data or even need access to the internet. Of course these methods don't have push requests that you can just approve rather than typing in the code.
Another way to look at it: We collect security-relevant information on your device, but not your _personal_ data. In other words, we don't collect your email, photos, contacts, user-generated data, etc.
Most importantly to me, though, the system has thus far been completely reliable. I haven't yet heard of a single case where somebody couldn't log in because of Duo.
I'm not sure what our enterprise agreement is / how much this all costs, but it's a very good system for us.
My Duo hardware token (the code generator with the button and the LCD) tends to "desynchronize" after long periods where you don't use it. The internal clock gets off, so it drifts in what token it returns vs what the server thinks it should be returning, and then it stops working.
Normally, if you log in on a regular basis the server corrects for this drift. There is probably a sliding window of N valid keys (say 10) and using one of them tells the server what the internal clock state is. But if you don't use it for a long time (more than 30 days in my experience), the clock drifts, you start going outside the window and it refuses to let you log in.
If your IT desk is open, they can "resync" it by typing in a couple numbers in a row, which lets the server scan the key sequence and find where your token is.
Use-case: We don't have Duo tokens rolled out system-wide, they are only issued for admin tasks and we have separate admin accounts for these with the Duo attached. I'm an "occasional sysadmin" who administrates several stable servers that mostly don't need to be touched.
As I don't need to use it day-to-day, my key desynchronizes quite often; it has happened at least 3 times. It would be bad if I had an after-hours emergency and my Duo token failed, so I do not trust it. The hardware tokens are not reliable, in my book.
edit: The fix for me would be for the token to automatically resynchronize on the fly. Just like the IT guys can do, but over-the-wire. If the server sees (f.ex) three sequential login attempts with valid-but-stale keys, with the proper order and timing pattern, then it accepts them and resynchronizes the key window.
To prevent replay attacks, you would also need to add a constraint that the keys be newer than the ones last used for a successful login, but it should be doable. You would also want to avoid causing an account lockout as you type in the invalid keys.
Thanks for the reply! I'll definitely get in contact with the school's OIT to figure out alternate options for authentication.
I do plan on getting in contact with the school's OIT about enabling alternatives.
I think institutions also use Duo because Duo takes care of the whole thing, whereas traditional 2FA isn't trivial for the institution to implement (generating tokens and all of that). At least that's what I was told by my institution when they made us start using Duo.
One of the more annoying things is that the codes are sent from a random 386 number. Out of the 7+ texts I've received thus far, only 2 were from the same number.
Apparently the company they're using is named https://duo.com/
I don't recall whether I had the option to use the app when I enabled MFA initially. However, after the fact, and as far as I can find, I cannot go back and enable the app.
I remember that configuring this is tricky, but I did eventually get user self enrollment configured with push being the default. Happy to dig more into my config, if you're curious: firstname.lastname@example.org
SMS for 2fa is poor to begin with. I wish people would at least implement the standard TOTP/HOTP option as well if they are going to pull stuff like that.
Like, that would prevent me from using 2FA.
Whatever happened to standards?
However I wouldn't want my second-factor to be attached to my browser. Seems way too volatile for me. Personally I'd rather keep TOTP and be vulnerable to time-of-use phishing.
Maybe if the browser had an OS API that a YubiKey could query...
1. Currently Chrome has this, Firefox is close (50.1 shipped it but it only works in the e10s mode), and there are extensions for Safari and older versions of Firefox.
> the URL is not included in the hash
What hash? Nobody even mentioned a hash. The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.
> It could be by having those two talk to each other.
Who's "those two"? And what's "it"? I'm very confused.
I mentioned a hash. The secret is hashed together with the time. _That_ hash.
> The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.
I know the secret is domain-specific. What I was describing is taking the secret, and the time AND THE DOMAIN, and using them to produce the hash. This would break MITM. One of the comments above me mentioned this and I ran with it. But you're talking to me like you didn't read anything above...
> Who's "those two"?
Those two are the yubikey and the browser.
I think you're confused. You have not mentioned the word "hash" even once in this thread prior to the previous comment I replied to.
Anyway, I think you're confusing U2F with TOTP. U2F does not rely on the time at all AFAIK; it uses public key cryptography, and authenticates by signing a data structure containing the domain name of the site and a server-provided nonce (among other things).
> What I was describing is taking the secret, and the time AND THE DOMAIN and use them to produce the hash.
I think there's still some sort of disconnect here, because up until this comment you've described nothing of the sort in this thread. Could you link the comment you're referring to where you explained all this?
> One of the comments above me mentioned this and I run with it.
If you're referring to acdha's comment about U2F, as acdha and others in this thread have explained, U2F (aka Universal 2nd Factor) is an entirely different protocol from TOTP (aka Time-based One Time Password). U2F does not use hashing or the system time in the way you seem to be envisioning, but it is also not vulnerable to phishing like TOTP is.
U2F interfaces with your browser, and uses a set of public and private keys (that is stored on the U2F device, not in your browser) to authenticate to sites in a way which can't be phished. It's not theoretical; it exists and can be used today with many popular sites, including Google, GitHub, Dropbox, and more. You just need a USB device which supports U2F (YubiKey is one, but there are many others).
TOTP barely protects against phishing. What you want is an U2F key as the second factor. It's not like they are expensive anyway (usually 7-15 Euro) and quite some large services support U2F tokens already (Google, Dropbox, GitHub, Fastmail, etc.).
Is the 1 minute window always the case? In the authenticator app, it seems like codes expire after ~30 seconds. If I wait till the last few seconds before using the code, does that make me any safer?
It is common but not universal for sites to accept, at a given time, 1) the current TOTP code, 2) the code from the previous window, 3) the code for the next window. This is done as a partial mitigation for potential clock skew issues on the client that's generating the TOTP codes (e.g. your phone). In practice this means every code is valid for 1m30s, although sites may customize this (with or without changing the window size, which is typically not done because that parameter must be consistent system-wide).
> If I wait till the last few seconds before using the code, does that make me any safer?
Maybe, but this is not practicable security advice. The latency of a MITM attack on a 2-factor TOTP login depends on the attack infrastructure and design, but can easily be made to be on the order of tens or hundreds of milliseconds. Reducing the window seems like it might help your security, but it can never be perfect, and there is a direct tradeoff with usability because users need time to look up the codes on one device and enter them on another.
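The window arithmetic above can be made concrete with a minimal TOTP verifier (RFC 6238). With a 30-second step and a skew allowance of one step in each direction, a code is accepted for up to roughly 90 seconds:

```python
import hmac, hashlib, struct

def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    # TOTP is HOTP keyed on the current time step (RFC 6238).
    counter = int(t // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, code: str, now: float,
           step: int = 30, skew: int = 1) -> bool:
    # Accept the previous, current, and next time steps.
    return any(hmac.compare_digest(totp(secret, now + i * step), code)
               for i in range(-skew, skew + 1))

secret = b"12345678901234567890"
now = 1_000_000_000
code = totp(secret, now)
assert verify(secret, code, now)            # current window
assert verify(secret, code, now + 30)       # one step late: still accepted
assert not verify(secret, code, now + 120)  # well outside the window
```

Even with skew=0 the attacker still gets up to 30 seconds, which is ample for an automated relay, so shrinking the window does not defeat real-time phishing.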
Folks often say "enable 2FA" in response to news of new and sophisticated phishing attack campaigns, but it's critical to note that most commonly deployed 2FA (TOTP, HOTP, SMS) is trivially phish-able. 2FA is not an automatic defense against phishing, although some newer designs achieve this and were created specifically with this goal in mind: U2F is a good example.
The validation system should compare OTPs not only with the receiving timestamp but also the past timestamps that are within the transmission delay. A larger acceptable delay window would expose a larger window for attacks. We RECOMMEND that at most one time step is allowed as the network delay.
We RECOMMEND a default time-step size of 30 seconds. This default value of 30 seconds is selected as a balance between security and usability.
Since the client's clock could be behind or ahead of the server's clock, I have to correct myself: the window would be 90 seconds.
One could be a bit stricter and e.g. accept the previous time step only until halfway through the current one, which would bring the window back to 60 seconds.
At any rate, all these timeframes are far too large to avoid real-time phishing attacks.
One of the tweets points out that something like LastPass would help with this, as it wouldn't allow you to autofill your password (since it's not on the Google domain), but then you could get it manually from there anyway.
Normally this would alert you to the fact that someone is logging in to your account, and would stop the attacker since they lack the 2FA one time pass. In this case though, since you've already fallen for the "I'm trying to log in to Google again", the attacker will probably fake the 2FA screen as well, and you'll merrily type it in.
2. The attacker immediately tries them, triggering an SMS to you and an 'enter SMS code' page for them.
3. The attacker shows the 'enter SMS code' page to you, and you enter the code from the SMS you just received, giving it to the attacker.
4. The attacker completes their login using the SMS code.
5. The attacker shows the user some believable error message (implying an error on Google's end, or a typo in the SMS code) then forwards the user to the legitimate Google login page.
Obviously this isn't perfect because it depends on people actually paying attention to that, and on not having too many false positives due to GeoIP failures, but it seems like a nice improvement.
Apple has a nice UI on it (no surprise, I'm sure) where they show a map centered on the location in question, but even SMS-based solutions could include a quick "Login attempt from City" along with the code.
It's enough to concern me on the odd occasion that someone is trying a MITM attack.
I am guessing it is because in Australia, quite often the central server allocating IP addresses for our major ISPs can be in a completely different city?!?
Someone else mentioned U2F would work though as that's tied to the domain, but I don't really know much about that.
Not perfect, but at least they're not blindly typing in passwords.
U2F basically signs the current URI and HTTPS key and sends it back. If there is a man-in-middle then the signatures will not match and the auth will fail.
Clicking on links from email is such an edge case it's bewildering we allow any link to be routable from an email client. I'd love to see my email client block this stuff by default. There's no case for me where an email should lead me to Russia, be it via a shortener or not. Or to an IP address that is on any honeypot list or has a suspicious rating.
I think we need to rethink what is allowed to route out of emails. I can see a whitelist of legitimate and vetted companies with large warnings for anything else. A little AI would go a long way here. Maybe visit the domain, verify the site has SSL, verify it's not in another country, verify it's not trying to impersonate sites, check reputation lists, etc. A handful of predictive rules put into a browser or email client would greatly help here.
It's clear we can't spot phishing attempts well, but we may be able to make actually visiting the phishing site as difficult as possible. Links in emails should be seen as extremely hostile by default.
Analysed the whole attack here: https://gist.github.com/timruffles/5c76d2b61c88188e77f6
This was the response I got:
> The address bar remains one of the few trusted UI components of the browsers and is the only one that can be relied upon as to what origin are the users currently visiting. If the users pay no attention to the address bar, phishing and spoofing attack are - obviously - trivial. Unfortunately that's how the web works, and any fix that would to try to e.g. detect phishing pages based on their look would be easily bypassable in hundreds of ways. The data: URL part here is not that important as you could have a phishing on any http[s] page just as well.
How many people really know that you can put a whole webpage in the URL?
I can't even imagine what legitimate use there is to placing an entire HTML document into the URL. Just seems like a hack someone came up with as a solution to a problem, not the right solution, but a solution nonetheless.
It allows you to embed data in a URL, meaning you can link to documents that aren't necessarily stored anywhere, such as generated images/text.
I suppose you could make an argument that it shouldn't be shown as a regular URL.
I agree that blocking the rendering of data:text/html (and any other MIME type that could be used maliciously) from the address bar is a good idea. I can't think of a valid use case for that scenario. It seems like similar attack vectors have been known for some time (https://nakedsecurity.sophos.com/2012/08/31/phishing-without...).
At any rate, if you allow a URI scheme that embeds the data in the URI itself, it'd be very odd to arbitrarily restrict the valid MIME types. It'd be like forbidding an HTTP URL from linking to a JPEG.
Well it wouldn't really be arbitrary, it'd be specifically HTML and/or JS, for security related purposes.
And how about the domain with a character that looks more like 'o' than '0'? There was something on HN recently about that. The example given would have completely fooled me, since it looked the same as the real domain.
And also, why not do something like this even. Let the browser save screen shots of some user selected sites. Like mail login page, online banking login page etc etc and have them map to a trusted url.
After loading a page, the browser would screenshot it and use some ML magic to compare it to the stored screenshots (I mean, there are things today that can call out the names of things in an image and even tell what they are doing, right?). When one of them matches and the URL of the current page differs from the trusted URL, the user should be alerted. Something like "Hey user, this page suspiciously looks like this page that we stored, but the URL is completely different. Are you sure about this?"
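A toy version of this idea is a perceptual "average hash": reduce an image to a small bitstring so visually similar pages hash alike, then compare hashes by Hamming distance. Real implementations (e.g. pHash) work on downscaled screenshots; plain nested lists of grayscale values stand in for images in this sketch.

```python
def average_hash(pixels):
    # One bit per pixel: brighter than the image's mean or not.
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    # Number of differing bits; small distance = visually similar.
    return sum(x != y for x, y in zip(a, b))

stored = [[200, 200], [10, 10]]     # stored login-page "screenshot"
similar = [[190, 210], [20, 5]]     # same layout, slightly different pixels
different = [[10, 200], [200, 10]]  # unrelated page

assert hamming(average_hash(stored), average_hash(similar)) == 0
assert hamming(average_hash(stored), average_hash(different)) == 2
```

If the hash matches a stored trusted page but the URL differs, that's exactly the alert condition described above.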
The best approach I can come up with after five seconds thought is disabling links on non-text elements.
And then they go make an anchor that is whitespace over top of a background image... so we'd also need to disable links on large expanses of empty whitespace in text when it's embedded in a mail.
I should think that can likely be worked around too, however. Got any more ideas?
More seriously, the expectation that emails will consist only of plain text is simply untenable. From a security standpoint this is obviously not ideal, but security and usability are opposed, and if your security scheme does not allow users to send documents with some form of markup, it will not be widely used.
Conceptually I like the idea of an ultra security mode for certain use cases, but ultimately it ends up making the whole web look like a bunch of plain text emails -- no JS, probably no images (unless they are somehow sandboxed and displayed from a safe local store), links are fully visible, etc.
If it matches, flag it with the usual warnings.
It feels like there's at least the potential to explore options.
Pragmatically I think browsers disabling the rendering of data:text/html is a better approach. The breakage is minimal and it would catch more phishing attacks than just the ones that originate from emails with images embedded.
These attacks are a numbers game. There's a low cost to sending the emails and a much larger payoff.
Education helps, but it's still possible to catch people off guard, tired, new users etc.
Anything that can be done to flag these emails as spam, or increase the cost to the attacker helps.
Since they rely on attachments and subject lines that are drawn from an individual user's gmail account, they have to propagate through a network, and they can't be just mass-emailed. Anything that can get the ratio of people falling for this lower than 1/<avg addressbook size> will completely eliminate the issue.
Amazing thing was I KNEW the email was phishing. I was asked to look at it by someone internally who was suspicious. I forwarded it to a Gmail account I use for dodgy items. I fired up a VM and logged in to the Gmail account. I looked at the email. I briefly examined the raw message (too briefly). Then I clicked on what I still thought was a Google Drive attachment.
My first thought was "oh I've been logged out of Gmail for some reason". I was just about to login again when I decided to double check the URL and finally saw what was going on.
I think most normal users would be very vulnerable to this. It's very subtle. Luckily the guy in accounts is paranoid.
Pretty nasty phishing attempt, way more subtle than past attacks.
I was in a hurry, and frustrated and was a millisecond away from clicking the link when some gut feeling told me that something was not right. Closest I've come to date, and it worried me.
EDIT: Sorry, I meant to respond to @soneca below, as this relates to phishing emails arriving with impeccable timing...
I believe this may have been before eBay took phishing seriously by including your real name in the emails etc.
Depending on how observant I'd be at the moment, I might check the URL bar and see something fishy. But I could fall for this, which is worrying.
It's still a good idea to have an analog backup of really important passwords. Like if you use Gmail and it is the password reset email for everything else, print out the generated password and put it somewhere safe. Just in case your password manager becomes insolvent one day.
I agree on some levels, but password managers can and have had vulnerabilities that can allow the gmail password to be populated despite the wrong domain. Given that the autofill adds legitimacy and reduces friction, it could make this particular scenario go from bad to worse.
Does this just prevent the display of images which require fetching from a remote URL, or does it also include images which are embedded in the email as attachments?
It's a nice extra security check, in addition to the primary benefits of using a password manager.
1. The download link didn't show any hover effect when I moused over it
2. Google is asking me to sign in even though I was obviously already authenticated
3. Even if at this point I didn't think to glance at the URL bar and actually entered my password into the phishing page, U2F would save me from being fully compromised
<a href="data:text/html,valid_looking_url <script src=data:text/html;base64,YWxlcnQoMTIzKQ==></script>">clickme</a>
// Same trick done programmatically; appending the element makes it runnable from a console.
var a = document.createElement('a');
a.href = 'data:text/html,valid_looking_url <script src=data:text/html;base64,YWxlcnQoMTIzKQ==></script>';
a.textContent = 'clickme';
a.style.position = 'fixed';
a.style.left = '0';
a.style.top = '0';
a.style.zIndex = '9999';
document.body.appendChild(a);
1. Reform the browser address bar. Safari does this right. Chrome, IMHO shamefully, does not. The address bar is completely ignored by a large fraction (I've read it's about 25%) of users because it's full of meaningless technobabble. These users navigate entirely by sight. Weak sauce changes like making some of it light grey instead of black makes no difference. The usability nuclear holocaust that is the browser address bar is in my view THE leading cause of phishing because it's rendered users unable to identify who they are talking to when they submit data via the web. The address bar should show the domain name only, or the EV identity when that's present, and the browser industry should adopt practices to push usage of EV SSL everywhere. Only EV SSL is a feasible approach to get the actual, legal, verified identity of a server operator on the users screen in a reliable and scalable way.
2. The big networks need to lead by example and adopt EV SSL, see above.
3. Kill re-authentications dead. Google was talking about this internally around the time I was working on the account system there, but I don't recall if they ever did it. For as long as web sites routinely ask users to re-authenticate at seemingly random times users will type their password into any page that looks right without thinking. Only by making authentication a very rare event can you start to convince users to take more care over checking the site origin. I think Facebook has got this right: I don't think I'm ever asked to sign in to Facebook unless I'm using a new device, but lots of websites don't.
4. Teach UI/UX designers about the dangers of designing user interfaces where attacker controlled content isn't strongly visually separated from system controlled content. In this era of personalisation and theming there's really no reason why things like the Gmail attachment icon needs to be placed right next to the content of an email with the same generic white background as attacker controlled content. Give it a semi-transparent background and set users up with a wallpaper-esque theme by default and it gets a lot harder to put things in a message that look like UI widgets.
Chrome on Android does this. And it's extremely annoying. Since mobile browsers (and desktop browsers with tabs) usually don't show the title of pages, the address bar is the only place to tell e.g. what Wikipedia page you're currently reading.
You are probably correct, that it's a win for security, but I wish it could be turned off.
There are some tricks now to make sure the domain is visible and highlighted, but IMHO not enough to be safe, especially with the address bar scrolling off screen on phones.
In practice, the sort of users who complain about such things are in my experience the sort who also have dozens of tabs open, which smushes the title down to just a few characters. Heck even when there's space in the tab bar Chrome won't allocate more than a few cm of space on screen to showing the title. HTML titles are pretty much a dying thing anyway, so given the ongoing pain caused by phishing I wouldn't hesitate to pull the plug on them.
1. Distinguish clearly between authenticating to the correct server and entering form data.
2. Not send the actual password to the server but instead use some form of challenge-response.
3. Store the authentication token securely i.e. not as a cookie.
4. Enable other forms of authentication e.g. with keys.
5. Decrease the use of passwords overall (though better password authentication would still be a win).
This would make it much harder to perform a range of attacks, from phishing to session hijacking. It would also potentially increase privacy, since you could more easily disable things like tracking. The reason you don't see the improvements you mention is to some extent because the engineers in question would have to reconcile themselves with the idea that they are the ones responsible. It's much easier to hold the position that it's other entities, or users, that don't understand how things work.
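Item 2 (challenge-response instead of sending the password) can be sketched as below. Real protocols such as SCRAM (RFC 5802) add per-user salts, channel binding, and mutual authentication; this just shows why the cleartext password never crosses the wire.

```python
import hmac, hashlib, os

def derive_key(password: str) -> bytes:
    # Both sides hold a derived key; the raw password is never sent.
    # The salt here is a placeholder -- real schemes use a per-user salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100_000)

def client_response(password: str, challenge: bytes) -> bytes:
    return hmac.new(derive_key(password), challenge, hashlib.sha256).digest()

def server_check(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

stored = derive_key("hunter2")
challenge = os.urandom(16)  # fresh per login attempt, so responses can't be replayed
assert server_check(stored, challenge, client_response("hunter2", challenge))
assert not server_check(stored, challenge, client_response("wrong", challenge))
```

A phishing site that harvests the response gets only a one-time answer to its own challenge, not a reusable credential.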
Then I would forget my password, like I always forget my GitHub password and have to reset it every leap year when I log out for some reason, but I guess that's a small price to pay.
Case in point:
I also use Authy with backup, but I don't store my backup password in the password manager, because that would be a potential single point of security failure.
The app kindly asks for a backup password occasionally. It's not for access, but for reminder. In fact if I don't remember my password, I can reset it right there in the app. I find that feature very useful.
EV certs have their place, but I'm not sure they're better than a URL that you're familiar with. For example, Natwest uses an EV cert which displays as "The Royal Bank Of Scotland Group Plc" because it's part of a larger group, but the actual legal name of the firm is "National Westminster Bank Plc".
Additionally, what happens when we get companies in different sectors with similar names? If there's an "RBS Applications Ltd" that gets an EV cert which was later compromised and used for phishing I wouldn't suspect it was wrong.
> Kill re-authentications dead. Google was talking about this internally around the time I was working on the account system there, but I don't recall if they ever did it. For as long as web sites routinely ask users to re-authenticate at seemingly random times users will type their password into any page that looks right without thinking. Only by making authentication a very rare event can you start to convince users to take more care over checking the site origin. I think Facebook has got this right: I don't think I'm ever asked to sign in to Facebook unless I'm using a new device, but lots of websites don't.
And what do we do with the problem of users leaving their computers open and exposed for short periods? I Like GitHub's sudo feature, it helps ensure that sensitive actions (adding SSH keys, adding access tokens etc) require a confirmation.
An alternative could be to require a 2FA-only confirmation rather than a password check.
> Teach UI/UX designers about the dangers of designing user interfaces where attacker controlled content isn't strongly visually separated from system controlled content. In this era of personalisation and theming there's really no reason why things like the Gmail attachment icon needs to be placed right next to the content of an email with the same generic white background as attacker controlled content. Give it a semi-transparent background and set users up with a wallpaper-esque theme by default and it gets a lot harder to put things in a message that look like UI widgets.
Completely agreed on this. The rollover animations and other features that seem to be declining in use with the advent of flat design are also a great help here, because you can't achieve that sort of interactivity with an image.
Worth noting that I suspect there would've been some tells anyway with this sort of attack. The cursor would've been wrong over the entire image (hand not just over the button) and any subtle click animations wouldn't have worked.
The point of an EV cert is only half to give user meaningful names. The other half is that there's a meaningful level of verification done on the ownership of the name. If you're creating fake companies for the purposes of getting phishy EV cert names, it should be a lot easier to track down who you are. The standards around them are much more carefully spelled out than for DV certs.
Leaving your computer exposed is what lock screens are for.
If you ever enter your Google password on any domain other than accounts.google.com, it will immediately alert you and give you a link to change your password.
It can handle multiple passwords too if you have multiple google accounts.
Still useful, I guess, because it lets you know immediately what's up, so you can send out emergency emails to your contacts.
But even if you assume at that moment the attacker has your password... I had seen gmail takeovers live and Google's authentication system allows you to recover an account even after it was taken over as long as you still have the old methods of authentication and you are within an unspecified timeframe.
Of course you will have to spend the day cleaning up your email filters and apologizing to all your contacts, but at least you will have your account back.
I'm not advocating against web-based mail readers, simply because it's not always possible or practical to use external ones. But it seems security is harder to implement because everything is "made of the same parts", i.e. a web-based mail displayed in a web-based application, opening links in the same (browser) window.
I get it that the browser people will say only their chrome is trusted, but when someone is using your app, your app's internal UI affordances receive that same level of trust in your users' minds.
I use 1password, which will only fill in the password associated with the current domain.
1. Watermark all images on the in-email preview.
2. You should be able to design a mail scanner which would detect images that look too much like gmail elements and flag them.
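One way such a scanner could work is perceptual hashing: compare each inline image's hash against the hashes of known Gmail UI elements and flag near-matches. A minimal Python sketch of just the flagging step (the 64-bit hash values and the threshold below are made up for illustration; a real scanner would compute them with something like aHash over the rendered images):

```python
# Hypothetical hash of a known Gmail UI element (e.g. the attachment chip).
KNOWN_UI_HASHES = {0xF0F0F0F0AA55AA55}

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two 64-bit perceptual hashes.
    return bin(a ^ b).count("1")

def looks_like_gmail_ui(img_hash: int, threshold: int = 8) -> bool:
    # Flag the image if it is within `threshold` bits of any known UI element.
    return any(hamming(img_hash, h) <= threshold for h in KNOWN_UI_HASHES)

assert looks_like_gmail_ui(0xF0F0F0F0AA55AA54)        # one bit off: flagged
assert not looks_like_gmail_ui(0x0123456789ABCDEF)    # unrelated image: passes
```

Perceptual hashes are deliberately tolerant of small changes, which matters here because an attacker would tweak the fake attachment chip just enough to dodge an exact-match filter.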
The whole world is, basically, using one email client. The lack of diversity means a well written scam like this spreads easily.
I can say for certain I'd never fall for this scam -- because it looks like crap in Pine. I know I'm special, but the same applies to Thunderbird, or whatever.
There's probably a parallel to biology here. Let's get diversity back in our internet culture, and with it resistance; scams like this will be less convincing and much less likely to spread. Hopefully removing some of the incentive, too.
Absolutely, it can happen to anyone. I'm sick of people here or on other forums who do some victim blaming, calling phishing victims "idiots". It's not going to solve the problem. And often Gmail or Chrome teams dismiss these kind of issues.
I had to revert to the html version of Gmail because I was sick of all the phishing attempts and disable images in the client.
The corresponding private key is stored on the token indexed in part by the requesting domain, which is supplied by the browser during an auth request. It is because of browser participation that a MITM domain would not be able to ask the token to answer the challenge with the correct key handle.
The actual implementation can differ from what's described above, see Yubico's description of their key wrapping scheme if you want more detail:
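As a rough illustration of the origin-binding idea (not Yubico's actual key-wrapping scheme, and using an HMAC over a shared secret as a stand-in for the token's ECDSA signature), here is a Python sketch. The point is that the token signs the challenge together with the origin the browser saw, so a signature produced for a phishing origin fails verification at the real server:

```python
import hashlib
import hmac
import os

# Stand-in for the token's private key. Real U2F uses per-origin ECDSA
# key pairs; HMAC with a shared secret is used here only so the sketch
# runs with the standard library.
DEVICE_SECRET = os.urandom(32)

def token_sign(challenge: bytes, origin: str) -> bytes:
    # The token signs the server's challenge *concatenated with the
    # origin reported by the browser*, not the challenge alone.
    data = hashlib.sha256(challenge).digest() + hashlib.sha256(origin.encode()).digest()
    return hmac.new(DEVICE_SECRET, data, hashlib.sha256).digest()

def server_verify(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    # The server checks the signature against the origin it *expects*.
    data = hashlib.sha256(challenge).digest() + hashlib.sha256(expected_origin.encode()).digest()
    expected = hmac.new(DEVICE_SECRET, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = os.urandom(16)
sig = token_sign(challenge, "https://accounts.google.com")
assert server_verify(challenge, "https://accounts.google.com", sig)

# A MITM/phishing page gets a signature bound to its own origin,
# which the legitimate server rejects.
phished = token_sign(challenge, "https://evil.example")
assert not server_verify(challenge, "https://accounts.google.com", phished)
```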
2FA is a great prompt to stop and look at all the data before deciding whether or not to hand over the token. For instance, I always double-check the URL when I'm about to hand out a 2FA code.
How is your experience?
I understood that I can register specific machines not to use 2-factor, so if I lose my phone I can still log in. Anything else to consider?
But you can generate backup codes that you can print out or store somewhere safe for that emergency.
Are you suggesting that all email/webmail clients stop rendering HTML?
The long-term goal would be to get rid of HTML email entirely, at least in my utopian mind, though it will never happen in reality.
It converts the n-websites-n-passwords situation into one where passwords become random tokens unlocked by a single client-side secret.
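To illustrate that one-secret-to-many-tokens idea, here is a hypothetical sketch that derives a distinct random-looking password per domain from a single master secret. (Note this is for illustration only: 1Password actually stores independently generated random passwords encrypted under the master secret, rather than deriving them like this.)

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 20) -> str:
    # Stretch the master secret with the domain as salt, so every site
    # gets its own token and the master secret never leaves the client.
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), domain.encode(), 200_000)
    return base64.urlsafe_b64encode(key).decode()[:length]

a = site_password("correct horse battery staple", "accounts.google.com")
b = site_password("correct horse battery staple", "evil.example")
assert a != b                 # each domain gets a different token
assert a == site_password("correct horse battery staple", "accounts.google.com")
```

Because the derivation is keyed by domain, a credential phished on one site is useless everywhere else, which is exactly the property the comment above describes.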
We need to make U2F more widespread.
It does point out a major problem. Email used to be text only. Then we added attachments that needed to be saved as a file and read with whatever app. Then we went to automatically displaying attached images and having live HTML links. All of these things we do for convenience make this sort of attack more possible.
I can't tell you why, but I'm pretty sure it happens - I have a recollection of having to reauthenticate every few weeks or so when opening a Google Drive attachment from my Inbox window. So I would not be surprised if I saw a login screen after clicking on such an "attachment".
After 15 years alone in space he was "in good spirits" but wanted to come home and would share his overtime flight pay of $15M with me.
Seriously, where do they find these stories?
The worst thing is I don't know how to help my less technical friends not fall for it. They are unlikely to use 2FA, I think.
Or did they manage to embed the JS to simulate these actions with the attack?
There were no image downloads, it was embedded within the message itself.
HTTPS alone only provides encryption. Google doesn't use EV anywhere, but I feel it should on login pages especially, given that they are a high phishing target.
HTTPS is meant for preventing MITM attacks, but it isn't meant to validate the identity of the entity you're speaking to; even though some people try doing that, it's just a game of pretend.
As many other commenters here, I mostly rely on password autocompletion. If autocompletion doesn't recognize the site, then I'm extra careful. The point is that this is rare enough so that it is actually feasible for me to be careful on those occasions.
ETA: another parent comment talks about the same thing.
1. You input your username.
2. Google sends back a message/picture which you saved with Google at your last login; you confirm it, then proceed to the next step.
3. Google asks you to input the auth code.
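A minimal sketch of the security-image step in that flow (usernames and images below are made up). The key property is that the personalized image is shown *before* the user enters any secret, so a phishing page that doesn't know which image you saved gives itself away:

```python
# Hypothetical server-side lookup for the security-image step.
# The image was chosen by the user at a previous login.
SAVED_IMAGES = {"alice": "sailboat.png"}

def image_for(username: str):
    # Shown before the password/auth-code prompt; the user should only
    # continue if this matches the image they originally picked.
    return SAVED_IMAGES.get(username)

assert image_for("alice") == "sailboat.png"
assert image_for("mallory") is None   # a phisher can't show the right image
```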
And while clicking on an attachment shouldn't sign out a user, being automatically signed out has happened enough to most people that it seems like a fairly innocuous event, especially since this is supposed to be an attachment, not a link, and you just need to sign back in. So one does.
Seems like at this point the perps are just harvesting credentials.