Gitlab phished its own work-from-home staff, and 1 in 5 fell for it (theregister.co.uk)
382 points by samizdis 11 days ago | 254 comments





It's important to note the nature of the failure.

Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.

Even clicking a hyperlink in a phishing email isn't too bad - web browsers are designed to be able to load untrusted content from the internet safely.

It's only entering credentials by hand into a phishing website, or downloading and executing something from a phishing site that is a real failure.

IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.


> It's important to note the nature of the failure.

Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out that the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.

“You just got phished!” eye roll

I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.


The article says 17 employees opened the link, and 10 of those typed in their credentials. The 20% the headline is talking about are those 10, not the 7 who clicked but entered nothing.

They did a test like this at a company I worked at. I ended up entering fake credentials because the thing seemed so shady, I was curious what its deal was.

I opened the email and I forwarded the email to abuse at corporate domain just like the corporate website says and my manager still got an email saying I failed the test.

Maybe because the tracking pixel remote image loaded? I remember reading an article where people sent an email to Apple and it got passed around within Apple and iirc either Steve Jobs or someone who reports directly to Steve Jobs opened the email not knowing that they were sending out a makeshift read receipt every time they opened the email.
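
The tracking-pixel mechanics are simple. Here's a minimal sketch in Python (the port, path scheme, and token format are made up for illustration): each recipient's copy of the mail embeds a unique image URL, and any fetch of it gets logged as an open, along with the IP and user agent:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # a 1x1 transparent GIF - the classic tracking-pixel payload
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
             b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
             b"\x00\x02\x02D\x01\x00;")

    class PixelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # a path like /px/<recipient-token>.gif identifies who opened the mail
            token = self.path.rsplit("/", 1)[-1].split(".")[0]
            print(f"open: token={token} ip={self.client_address[0]} "
                  f"ua={self.headers.get('User-Agent')}")
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL)))
            self.end_headers()
            self.wfile.write(PIXEL)

    HTTPServer(("", 8080), PixelHandler).serve_forever()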


Hi Gitlab! I'm available for hire if you need a replacement.

> I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.

"Curious" might get you to opening the web page, but actually entering credentials moves you into "phished" no matter how curious you are.


Not if every component is intentionally faked.

I'm not even going to get to the point of wondering whether every component is faked or not, since my thought process will stop at "I'm not going to ever enter credentials into a site I got to from a random link in an email". Which seems to me to be a far better policy than trying to figure out whether a particular site I got to from a random link in an email is faked or not.

Nobody is demanding you do. But if you go around claiming people "got phished", then you should be sure.

I've also entered fake credentials into a clearly faked login form to see what'd happen. Would it redirect me to the right site? Just claim the information was wrong? Send me to a mock up of the intranet I was trying to access? You can call it bad policy if you want (although you don't know about my precautions), but it doesn't mean I was phished.


What it does mean, though, is the person who sent the email now knows, at the minimum:

1. Someone receives and reads the email sent to this email address.

2. That person is willing to enter data into a form.

This is 2 pieces of information the person didn’t have before, and it can be used in further phishing attempts in a variety of ways.

It doesn’t mean you were fooled, but that’s only half the story.


I did the same thing at a previous job. Got signed up for an 8 hour security training because of it - somehow I refused to go and they didn't fire me.

VM escape exploits are actually used in the wild, so yes, if that was on your work machine, you failed the test.

If your security model requires people to never open an untrusted link in their browser, you just cannot allow open Internet access

Isn't this fairly common? I've now worked at several organizations where sensitive information was stored on air-gapped networks. Software updates or data were moved in and out using pre-approved external drives.

I tend to think this is good software dev practice anyway. You ought to be able to test everything on your testing servers, and if this doesn't adequately reproduce the production environment, it's a problem with your test system.


> I've now worked at several organizations where sensitive information was stored on air-gapped networks.

Then you won't be processing email on machines on those networks.


No that is not common.

It is common in the sense that it's done frequently enough that we don't need to reinvent it. Most orgs don't want that level of security & inconvenience. FWIW I personally have never encountered it.

This is kinda ridiculous. You first need the email client to have a bug which enables some kind of cross-site scripting just rendering an email, then a sandbox bug for a webpage to leak into the underlying system, and THEN a bug for the VM to escape to the parent OS.

At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.

If you think a hacker is going to manage all that, you might as well assume that the hacker can trick gmail in to opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.


> airgapped...local network exploits.

I'm curious what definition of airgap you're using?


Like other words whose meaning has expanded in scope (e.g., serverless, drone), airgap can simply mean a segregated network, not just one that is completely unplugged.

AWS uses it this way: https://aws.amazon.com/blogs/publicsector/announcing-the-new...


1. Nothing about that post says it's just network-layer segmentation. C2S is its own region, with multiple AZs (data centers). Why would you believe those are colocated with commercial AWS and not, as they write, air-gapped?

2. Please don't contribute to giving marketing license to remove what little meaning words still have.


The wrong one, I suspect. "Airgapped" is a term reserved for a machine never connected to the internet, hence the gap. Usually for extreme security concerns like managing a paper crypto wallet or grid infrastructure.

Yeah, I should have just said standalone in this case.

You are confusing executing untrusted code in a VM with opening something in a browser in a VM - it would really need to be a double VM escape.

Clearly your threat model adversary is Mossad.

It is a paranoid stance. But if you are a developer in a large company, think about how likely it is that your computer has (direct or not) access to data/funds worth more than $100k to someone, and what kind of exploits that money can buy.

Anyone can get phished. On an off day, when you're tired or distracted by personal issues, your guard is down, and you happen to receive a phishing attempt that also pattern-matches something you're kind of expecting - either because it's a targeted attempt or just randomly, for a wide-net attempt. That's my model of how phishing works: they make lots of attempts and know they will get lucky some small percentage of the time.

With that as my model: the email getting to your inbox is of course the first failure and increases the chance of getting phished from zero to not zero. Opening the email is another failure that raises the chance. Clicking the link is another.

All of the steps leading up to entering credentials or downloading and executing something from a phishing site are real failures, in that each increases the chances of becoming compromised.

That's even true if you're suspicious the whole way through. If you know it's a phishing attempt and are investigating, fine. But if you are suspicious, that means you can still go either way. You can also get distracted and end up with the phishing link in some tab waiting for you to return to it with all the contextual clues missing.


Someone once posted a link on hackernews titled "new phishing attack uses google domain to look legit"

I opened it in a new tab along with several other links to read, I was expecting a nice blog post explaining an exploit.

After about 20min of reading the other tabs I came across that tab again. I had forgotten the title of what I had clicked, I'm not sure I even remembered it was a hackernews link that got me to that page.

"Oh, looks like Google has randomly logged me out, that doesn't happen often" I think as I instinctively enter my email and password and hit enter.

Followed half a second later by "oh shit, that wasn't a legitimate google login prompt."

I raced off to quickly change my password, kick off any unknown IPs and make sure nothing had changed in my email configuration.

I'm lucky I came to my senses quickly. I think it was the redirect to generic google home page that made me click, along with the memory of the phishing related link I had clicked 20min ago.

But yeah, it can happen to anyone on a bad day.


There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input. Or if not prevents, at least a scary warning click through like an unsigned certificate creates, which at least shows the true full domain name.

Whenever I read about phishing it seems insane that we have a system that requires human judgement for this task. If there isn't a deterministic strategy to detect it, how could the user ever reliably succeed? And if there is such a strategy, it should be done by the mail server, mail client, and browser.

Even an extension doing this might work in a corporate context. That makes me wonder if companies do their own extensions to enhance the browser for their needs. If all your employees are using web browsers for multiple hours per day it might really be worth it.


Any new constraint on password inputs will result in attackers creating a fake password input without any constraint, via CSS / JS.

But an AI in the browser could detect this and warn the user.

So we are throwing machine learning at the problem because we can't come up with the heuristics for this ourselves?

Yes?

That's exactly what it's for: finding patterns that are too hard or too complex for humans to find. Enumerating every edge case of "enter a password" is not possible for a human, and whatever edge cases we humans miss _will_ be exploited by someone to compromise someone else.

It's also a matter of volume. How many pages can you evaluate and categorize in an hour versus how many can a ML system do in the same? I once saw a demo where a firewall/virus scanner app could detect malware heuristics dynamically by comparing to a baseline system, and could do so in 10 seconds or less per item. It would take a human more than 10 seconds just to read the report to generate a rule, and humans don't scale nearly well enough.

There are lots of complaints to be had about ML and privacy / fairness / ethics / effectiveness, but this shouldn't be one of them.


Machine learning is being used in spam filters, so why not use it for this problem too?

>There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input

I was going to say that couldn't be done, but thinking about it: the way the OS currently works you can't know whether a link came from an email, but you can know it came from an application other than the browser (although that would of course require the browser to keep track of where a tab came from, which I assume it already does). But then links opened from a web-based email client would not get this scary warning click-through.


The problem is passwords.

They were created 60 years ago as an additional layer on top of on-site physical access, in a world with compute and network capacity billions of times less than today's.


That's a good point, it might be more productive to focus on U2F type solutions since they protect against this attack and others, where this is only a bandaid with a convenience cost.

Do you have an alternative to authentication?

If I did, I'd be rich.

The problem is clearly pretty deep. One possibility is that it's inherently inconsistent with a deep, high-speed, long-range, high-bandwidth data regime. We live in a universe where all of us are ventriloquists, or may be ventriloquists' dummies.

There's the question of what identity is, and its distinction from identifiers or assertions of identity. There is the matter of when you do or do not need to assert or verify a specific long-term identity, of when identifiers require a close 1:1 mapping and when they don't, and of what the threat models and failure modes of strong vs. weak authentication schemes are.

And ultimately of why we find ourselves (individually, collectively, playing specific roles, aligned or opposed with convention, the majority, or other interests) desiring either strongly identified or pseudonymous / anonymous interactions.

Easy or facile mechanisms have fared poorly. Abuses and dysfunctions emerge unexpectedly.

It's complicated.


I like the "tainted" tab idea. Maybe warn the user if the site attempts any non-GET HTTP request. "Are you sure this site is legitimate? It could be a phishing attempt."

This is why an auto-filling password manager is an essential security tool for every internet user. If your password manager doesn't autofill/offer to fill your passwords, the domain isn't legitimate.

Password managers are great for security and super convenient. It continues to shock me how many people surf the web while continuing to type the same password into dozens of sites, and then they wonder why they fall for phishing.


Autofill matching breaks in many ways on the same website, so you have to keep on doing it manually. Ex: Chase has about 5 different ways / pages you can enter your login credentials.

That sounds awful, but all you need to do is add all the legitimate domains to your Chase login record, and then you are phish-proof.

Obviously autofill itself can break on complex page layouts, and that's fine. The security comes from the password manager doing domain matching and offering to fill the password when you click on its addon menu.
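
The security property is really just strict domain matching. A minimal sketch of the check in Python (vault format and names are hypothetical):

    from urllib.parse import urlsplit

    vault = {
        "chase.com": ("alice", "hunter2"),
        # extra legitimate domains added to the same record:
        "chaseonline.chase.com": ("alice", "hunter2"),
    }

    def credentials_for(url):
        host = urlsplit(url).hostname or ""
        for domain, creds in vault.items():
            # exact host, or a subdomain of a stored domain
            if host == domain or host.endswith("." + domain):
                return creds
        return None  # no offer to fill - possibly a phishing domain

    print(credentials_for("https://chase.com/login"))      # fills
    print(credentials_for("https://chase.evil.example/"))  # None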


That's the breeding ground of phishing. If they have 5 login pages, a sixth won't raise any red flags.

> Chase has about 5 different ways / pages you can enter your login credentials

If they had 5 different ways, that'd be one thing. Lately, I've been seeing different domains. For example the marketing department registers a domain such as AcmeExclusives.com.


No, this is why FIDO/U2F is essential. Password managers are good but people regularly search and autofill across domains because most companies, especially in industries like finance and HR, have spent years training users to expect random vanity domains and renaming every time someone in marketing wants to mark their territory. People phish TOTP similarly.

In contrast, the FIDO design cannot be used across domains no matter how successfully you fool the human.


Sounds great, but U2F keys are super expensive and U2F is supported nowhere. Password managers are free and work everywhere already.

U2F keys start at $15, so there’s a barrier but it’s hardly “super expensive”, and they’re supported by a fair fraction of major sites (Facebook, Google, Twitter, GitHub, Gitlab, login.gov, etc.).

> Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever

It shouldn't matter how tired or distracted you are: you should never enter credentials into any place you get to from anything you receive in an email--or indeed by any channel that you did not initiate yourself. If you get an email that claims there is some work IT issue you need to resolve, you call your work IT department yourself to ask them what's going on; you don't enter credentials into a website you got to from a link in the email.

It's the same rule you should apply to attempted phone call scams: never give any information to someone who calls you; instead, end the call and initiate another call yourself to a number you know to see if there is actually a legitimate issue you need to deal with.

Rules like this should be ingrained in you to the point where you follow them even when you're tired or distracted, like muscle memory.


I just realized that this might happen to me. On my home PC my alarm bells would definitely go off when Firefox stops suggesting credentials for a supposedly known domain, but on my work computer we're a bit higher security and a password manager integrated into the browser (even with master password and quickly installing patches and whatnot) is just not up to scratch. So what I realized is that I may not notice a lookalike domain because I need to grab the creds from another program anyway.

Is there an add-on for Firefox that warns when you enter credentials on a new domain? Or puts a warning triangle in a password field when today is the first day you've visited the domain, or something? Firefox already tracks the latter - you can see it in the page info screen - so both should be easy to make, but I'm not sure anyone has thought of making this before.
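
For what it's worth, the "first visited" half is easy to prototype outside the browser, since Firefox already records it. A rough sketch in Python against a copy of places.sqlite (assuming the standard schema; an add-on would do the same lookup through the history API):

    import sqlite3, time

    def first_visit_days_ago(places_db, host):
        # Firefox stores hosts reversed with a trailing dot, e.g. 'moc.elpmaxe.'
        con = sqlite3.connect(places_db)
        row = con.execute(
            "SELECT MIN(v.visit_date) FROM moz_historyvisits v "
            "JOIN moz_places p ON p.id = v.place_id "
            "WHERE p.rev_host = ?",
            (host[::-1] + ".",),
        ).fetchone()
        if row is None or row[0] is None:
            return None  # never visited: definitely warn before typing a password
        return (time.time() - row[0] / 1_000_000) / 86400  # visit_date is in microseconds

    age = first_visit_days_ago("places.sqlite", "example.com")
    print("warn!" if age is None or age < 1 else f"known for {age:.0f} days")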


This is exactly why security keys should become standard. They are essentially unphishable.

I really don't understand why every laptop/computer/keyboard/smartwatch doesn't come with NFC for exactly this purpose.

Clicking a hyperlink is certainly bad.

Browsers have vulnerabilities, and you're broadcasting valuable information about yourself to the attacker, including the fact that you're receiving, reading, and clicking on links in their mails.

Also, the article states clearly that 1 in 5 fully entered their credentials.


> Clicking a hyperlink is certainly bad.

HN must be a boring place if you are not prepared to click on external links.


There’s a fundamental difference between HN links and links in targeted emails. I cannot start phishing GitLab employees using HN posts, the threat model is just different.

I’m not so sure about that. With enough dedication and time I think you could target a specific company from HN. Start writing a few good blog posts that would appeal to your audience, and only run the attack when some attribute matches that company (i.e. their corporate IP addresses).

You could even combine the two: post the blog to Hacker News, then send a phishing email pointing to the HN post. That is a trusted link. The user will then likely click the source link on HN.

Obviously, a lot harder and lower chance of success, but not impossible.


> [...] only run the attack when some attribute matches that company (i.e. their corporate IP addresses). [...] Obviously, a lot harder and lower chance of success, but not impossible.

In general, maybe; in this particular case it's going to be challenging, however, as GitLab is a remote company, so most employees will log on from residential IPs.


It's not impossible to determine which of your visitors has login cookies to other sites, such as internal.gitlab.com, and provide different content to them.

I would imagine they would be using some sort of company vpn to access the files they need to use.

Most companies I’ve encountered have moved towards split-tunneled VPNs, so an employee clicking on a phish page would traverse the employee's gateway, not corporate's.

My experience is the opposite: Part of the justification for moving away from standards-based VPNs is to prevent split-tunneling.

My present employer's VPN client goes a step further and mangles the routing table to deny access to my own LAN while connected.


I can’t decide if I hate that more or less than what I’ve seen: client-side blocking of DNS resolution and driving all queries through Cisco Umbrella or friends.

I guess they both suck pretty hard.


Interesting, I heard that some employers did set the default route to go through their VPN; haven't had that experience myself either, though.

It was always only 10.0.0.0/8 and some /24 ranges from 192.168.0.0/16 at my current job.


Liberty Mutual, the largest insurance provider, is in the process of moving from a default route on the VPN to no VPN at all and zero-trust networks for their apps.

Or just buy ads with suitable targeting.

> I cannot start phishing GitLab employees using HN posts

You definitely could perform a watering-hole attack if you compromised a site that always gets on the front page of HN. If I were an evil hacker and I wanted to compromise HN, I would instead attack a site like rachelbythebay.com or some other popular blogger, then just wait for HN’ers to click the link.


Go for medium.com

Just make a post about rust. Everyone clicks on them. Everyone.

(Myself included)


Especially if it has a controversial title,

"Why rust is not a real programming language"

"It's a complete waste of time to learn C++ in 2020"

"Rust is 2x as fast as C++"


“Rust is a complete waste of time”

And then just point to an article about Rust the game.

Jokes aside, I love the name, the pun is nice, but man, it makes searching a pain. I’ve ended up too many times on pages related to the game or to actual rust (as in iron).


Emotion - the perfect bait.

Reflections on trusting rust.


The point is to recognise the email/situation as phishing or otherwise malicious before deciding to click the link. The chance of clicking a malicious link on HN is pretty low if you stick to the front page.

Ok, so you close a tiny window while leaving the entire web open as a giant door beside it.

And you do it by really invasive means that will ensure everybody who knows what they're doing but is curious enough to safely inspect further gets marked as clueless - leading to false positive and negative errors larger than the signal, yet you still expect to get useful data from it.


Usually I mouse over and see where the link would take me. If it's something like micr0soft.co, it raises some red flags. For something like a targeted phishing email, it's even more reasonable to be concerned about things like browser 0-days.

Eh; I'm 95% here for the comments.

Emails and HN are different.

Then someone will point out watering hole attacks, where adversaries find where targets hang out socially, and attack that.

And then I'll point out that the inherent risk in HN links vs. unfamiliar emails is very different.


Some people never do ;)

In theory, sure. In practice everyone is clicking on links all day. If someone has a 0-day, employees manually checking domain names on emails is not going to stop them.

Yeah good luck defending your company against a Chrome 0-day

It's not about defending against something specific.

It's using strategies like teaching people to check links before clicking them that can prevent a number of different things (phishing, malware, etc.)

If you've already clicked a link, attackers know exactly what browser you are using, and that you're probably also willing to click on the next link they send you, allowing them to go from a blanket attack to a targeted one.


I disagree that clicking a hyperlink is not bad. If you have a determined attacker with some 0-days up their sleeves, simply opening a hyperlink may result in arbitrary code execution.

See an example from last summer: https://blog.coinbase.com/responding-to-firefox-0-days-in-th...


My understanding of the text is that 10 of 50 actually entered credentials. So the 1/5th is really the number of people from whom a phisher would've stolen credentials (although they say later that they use 2FA, which would've prevented a real attack - but it's still bad enough, as you can expect these people use other accounts which may not even support 2FA).

2FA (assuming TOTP, not hardware keys) prevents attacks using credentials leaked from side channels, but does not help against phishing attacks using a fake login form. The attacker just needs to relay the TOTP you entered into the real login form, and on average they have a bit more than 15 seconds to do so, which is more than enough.
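
The timing math is just the TOTP step size. A quick sketch in Python (RFC 6238's default 30-second step; many servers also accept one step of clock drift, which only widens the window):

    import time

    STEP = 30  # RFC 6238 default time step, in seconds

    def seconds_left(now=None):
        # how long the code the victim just typed remains valid
        now = time.time() if now is None else now
        return STEP - (now % STEP)

    print(f"phished TOTP code still valid for ~{seconds_left():.0f}s")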

This is what makes security keys so great: you can't steal a token from one domain and use it on another. They completely remove this type of attack, which no amount of training will ever fully protect you from. You can't put the onus on the employee; you have to make it impossible for them to do the wrong thing in this case.

Any password manager with browser integration can make sure you don’t fill in credentials on the wrong domain. No need for additional hardware.

That just stops you from automatically entering the password. A security key will literally not authenticate in that situation.

Defense in depth is just as much of a thing for personal security as network security.


Something I'm curious about with 2FA and security keys: why do we enter login, then password, then the 2FA step, instead of login, then 2FA, then password?

It seems it would add a layer of protection to the weak link, which is the password.

Any idea?


An idea:

At most sites, certainly consumer sites, that offer WebAuthn, it's very optional. So doing it the current way just adds a step after the password step: you need a (perhaps stolen) password to even find out there's a next step and you're not in after all.

But if we swap it, now we're telling bad guys if this account is protected up front. "This one is WebAuthn, forget it, same for the next one, aha, this one asks for a password, let's target that".

The people with WebAuthn are no worse off than before - maybe even arguably better off in terms of password re-use - but everybody else gives away that they aren't protected.


When I worked somewhere large enough to have an IT dept. running these tests, it was obvious they were from IT, and people would open them for amusement.

So yeah, definitely some interaction should be required to consider it a failure, but the test email should also be as convincing and high-quality a phish as possible.

Not just because it makes for a better test, but because it's more likely to be a valuable lesson for more people, people who thought they wouldn't fall for it.


> The email client is specifically designed to be able to display untrusted mail.

Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.

Some email clients try not to do this, but that's actually somewhat recent, and I wouldn't say they're 'specifically designed to be able to display untrusted mail', rather 'they try to avoid common exploits when they become known'.


What can be done with this information?

Most companies have e-mail addresses that are completely predictable, so you can pretty much assume that a given e-mail address exists. If this really was a security risk, shouldn't you have UUID emails for everyone?

Also, how do you as an attacker know that it was a user and not an e-mail server checking those images?


> What can be done with this information?

It will reveal if they're working right now, what time they work otherwise, their IP address, their approximate physical location, their internet provider. A lot you can do with that.

> Most companies have e-mail addresses that are completely predictable

That's the point. Predict an email address, send it, find out if such a person works there.

If I email unusual.name@sis.gov.uk and they open it then guess what I've worked out?

> Also how do you as an attacker know that it was user not a e-mail server checking those images?

Agent signatures.


I mean, you can just get employees from LinkedIn, already know their e-mail addresses with high certainty, and know when they work from their timezones. If this information were abusable, why is it so easy to guess in the first place, and why is it not actionable then?

It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.

I would assume a company like Gitlab would have such measures if this info was indeed abusable.


> I mean you can just get employees from LinkedIn and already know their e-mail addresses with high certainty and know when they work by the timezones.

Do you put your IP number on LinkedIn?

When you travel do you put the hotel you're staying in on LinkedIn?

Also, not everyone is on LinkedIn in the first place.

> It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.

The word 'arbitrary' doesn't make any sense to me in this context so not sure what you mean sorry.

In general, I don't know what you're trying to say - that there are ways to try to defend against these attacks? Yeah I know. I'm not sure what point of mine you're refuting or replying to anymore.

You asked 'What can be done with this information?' - this is the list of things you can do with that information. Can you defend against some of it? Yes to some extent. But it still leaks for many people.


> Do you put your IP number on LinkedIn?

Which companies own which IP address blocks is public information.

> When you travel do you put the hotel you're staying in on LinkedIn?

Conferences are announced; advertised, even.

> Also, not everyone is on LinkedIn in the first place.

That's OK, companies do a fine job publishing employee information all on their own.

> You asked 'What can be done with this information?' - this is the list of things you can do with that information.

You've moved from Step A, getting the information to Step B, correlating the information, but you've left off Step C, which is profiting from the information. What is a benefit you can gain from knowing someone at some IP address opened your email? Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?


> Which companies own which IP address blocks is public information.

People are working from home! That's the entire context of this thread! They aren't using corporate IP addresses! And they don't do it when travelling either!

> Conferences are announced; advertised, even.

People travel for other things beside conferences. For example to a meeting or client site.

> That's OK, companies do a fine job publishing employee information all on their own.

Many don't do this.

> What is a benefit you can gain from knowing someone at some IP address opened your email?

I've already listed all these things.

> Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?

Yes, people not listed in a phone book or the company website.

You're listing exceptions, but they don't apply to everyone. If they don't apply to everyone then you can catch some people.

Try this to help yourself understand - people do in fact use tracking images. Therefore, do you think that maybe there's a benefit to doing this? Otherwise why do you think they do it?


What I am trying to say is that someone opening an e-mail should not be considered a failure. You can't expect people not to do this. All of this can be avoided if you just use some service to proxy the images: the IP would not be leaked, because the proxy server is fetching the image, and it could easily be doing that no matter what - even if it determines the message to be spam and the user never sees the e-mail.
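
The rewriting itself is trivial. A minimal sketch in Python (the proxy hostname and endpoint are made up; real mail providers do essentially this, plus caching and header scrubbing):

    import re
    from urllib.parse import quote

    PROXY = "https://mailproxy.example.com/fetch?url="  # hypothetical endpoint

    def rewrite_images(html):
        # point every <img src="..."> at the proxy, so the sender only ever
        # sees the proxy's IP and user agent, never the reader's
        return re.sub(
            r'(<img\b[^>]*\bsrc=")([^"]+)(")',
            lambda m: m.group(1) + PROXY + quote(m.group(2), safe="") + m.group(3),
            html,
            flags=re.IGNORECASE,
        )

    print(rewrite_images('<img src="https://attacker.example/px.gif?id=42">'))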

>> Also how do you as an attacker know that it was user not a e-mail server checking those images?

> Agent signatures.

Can you expand? Googling isn't helping me understand what this means / how it works.


Also called agent fingerprinting. You can look at exactly how the agent is responding and make educated guesses at what agent it is. You'd think one HTTP request looks like any other, but there are enough little bits of information here and there to leak info.

The user agent is the simplest example. That can be spoofed, but there are more subtle traces as well, all the way down the stack: https://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting.
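
At its crudest, it's just string matching on the User-Agent header - mail providers' image prefetchers tend to announce themselves. A toy sketch in Python (the list here is illustrative, not exhaustive; real fingerprinting also looks at TLS, TCP options, header order, etc.):

    PREFETCHERS = ("GoogleImageProxy", "YahooMailProxy", "FeedFetcher")

    def looks_like_mail_server(user_agent):
        # a hit means a server fetched the image, not necessarily a human
        ua = (user_agent or "").lower()
        return any(p.lower() in ua for p in PREFETCHERS)

    print(looks_like_mail_server("Mozilla/5.0 (via ggpht.com GoogleImageProxy)"))  # True
    print(looks_like_mail_server("Mozilla/5.0 (X11; Linux x86_64) Firefox/76.0"))  # False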


Thanks!

The "user agent string" of the email client is different from that of the user's browser.

Thunderbird blocks remote content from non-contact email. Is that not standard behavior? It prevents someone from knowing when you've opened their email.

> What can be done with this information?

Now you know who's curious enough to open a shady-looking email, and perhaps click a link out of curiosity. It means your list for the next round of attacks is much smaller and more targeted, making it easier to evade detection.


> Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.

That makes it less than ideal, but describing it as a ‘failure’ isn’t going to help any users pay more attention to phishing mails, because they get tons of legitimate emails with images in.


This is one thing I like about Outlook. It doesn't load embedded images unless you click on a button at the top. All email clients should do this. Not only is it safer, but it discourages people from putting a ton of images in emails which is just annoying in general anyway.

Gmail used to do this, but it seems they've phased it out.

Thunderbird has always done this: prompting before loading images, before sending read-receipt notifications, you name it.

Email clients started out without embedded images. Images came after the initial email implementation. So one could say that displaying images in email clients is rather new. Also, most if not all email clients have the option of disabling Inline images.

Email clients, just like browsers, are made specifically to handle untrusted user content. That then some people/clients allow information leak, is another thing. Just like websockets in modern browsers.


Sure, let's pretend images in email are a new development and should be stopped.

Meanwhile, in the real world, some of us have actual users. Pretending we should stop using widely used and useful technology while flailing your arms and shouting "but security!" is not going to help anyone.


> Sure, let's pretend images in email are a new development and should be stopped.

What? No. No one is arguing that...

The only thing I'm refuting in my previous comment is "Some email clients try not to do this, but that's actually somewhat recent", which seems to indicate chrisseaton thinks that email clients that don't load images are a new thing. So the idea is that first we had email clients, then the email clients added the option to hide images.

When in reality, email clients started out without images, then they added images.

Way to reply to a comment without reading the context and subsequently completely miss the point.


Discriminating between different failure modes is important. However, every situation you've described is still some form of failure mode.

1. A user opening a phishing email means the email made it into their inbox (a spam-filtering failure, unless whitelisted for the sake of a test) and the user was moved to click the email based on the subject line. This in itself is the least risky of the failure modes described here, but some risk still exists, considering e.g. malware has spread through the simple opening of emails before.

2. Clicking a link in a phishing email is much higher risk and, regardless of how the phishing test was crafted, is with absolute certainty a failure mode of any phishing test or event, for three reasons: the user has definitively disclosed their presence within a company (email clients today may block trackers from loading, but clicking a link gives it away), the user has disclosed their receptivity to the message, and in a real-world attack, merely landing on the page may trigger an event such as the delivery of a malware payload via a working exploit against the browser and the underlying operating system.

3. Entering credentials is probably the most obvious one.

---

Rather than a "password alert" control that just tells a user their account was signed into, what would be more helpful is a second factor; a bare minimum would be a prompt on the user's phone indicating that a login attempt was detected and requesting confirmation before that attempt can succeed. This at least helps a user preempt an attack against their own account (assuming they're trained on how this works) even if they never figure out that they've entered their credentials into a phishing site. And if the second-factor challenge is never met, an automatic alert could get the security team to triage the risky login.

Pardon typos. Voice to text.


What can be done with the info that a user has read and opened the email and clicked through to the website? Our company, for example, has completely predictable e-mail addresses: first letter of first name, then last name, @ company.com. You would have this knowledge even without sending any e-mails. I assume GitLab is similar.

Also I assume Gitlab already has 2fa.


> What can be done with the info that user has read, opened and clicked on the website?

Follow it up with a much more tailored spear phishing attack - https://www.knowbe4.com/spear-phishing/

Reworked to sales terms: it's the difference between a cold lead and a hot lead. A user who's clicked through has proven themselves to at least be warm or receptive to phishing campaigns in general.

As an adversary, I'd probably couple unique links (for tracking clicks) with heatmapping and other front-end tracking technologies to see what exactly the user is doing and how far they've gone before backing out, which helps me refine the attack. Most attackers probably wouldn't go that far (spear phishing the people who clicked would probably be the extent of it), but if someone is after something of particular value at your firm, there's no reason why they wouldn't put more effort into sharpening the attack.


It depends on what your threat model is. If you are a high value target, clicking on a link could definitely get your hacked through a browser vulnerability, e.g. https://blog.coinbase.com/responding-to-firefox-0-days-in-th... and https://www.theregister.co.uk/2016/10/20/alleged_dnc_hackers... .

Most people are probably not worth the effort of this, but I could imagine a source code hosting company could be, as a step to try to compromise some other software...


> Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.

I'll go further:

It is impossible to never open a phishing email.

From addresses can be spoofed. Path information... well, it isn't available at all unless you open the email, is it? Also, it can be spoofed right up to the point it enters your company's email system. The Subject can be made appropriate and innocuous, or it can be made just as "OPEN THIS EMAIL IF YOU WANT TO KEEP YOUR JOB!" as the sender desires, and there isn't a person on Earth who has to respond to emails who will be able to divine the inner intent of the sender from just the Subject line.

Should corporate email systems prevent address spoofing? Argue amongst yourselves. My point is, they don't, or at least they haven't anywhere I've worked.


> IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.

I can hear the developers raising Hell at just the suggestion that they don't have local root and free rein with brew, docker, and npm. PMs and marketing can be relied upon to react similarly to being told that they have to use SSO-equipped tools that have been through procurement, and not someone's random free shared Retrium or whatever. That SSO tends to add a zero or two to the cost makes them even more skittish, on top of the chance that the procurement process says no.

Trying to enforce SSO use can be challenging.


Agreed that entering credentials is the most serious security failure here. It is worth noting that credentials alone are never sufficient to access a GitLab employee's account. GitLab employees are required to use MFA on all accounts, including GitLab.com. https://about.gitlab.com/handbook/security/#security-process.... A Yubikey/hardware token or TOTP (time-based one-time password) from an authenticator app is necessary to access employee accounts. OTP via SMS or email is strongly discouraged and not an option for employees.

"use a password alert to prevent a password being typed into a webpage. "

Out of curiosity, would you mind explaining what this means and how one would achieve it?


Generally I agree. Interacting with the email is counted as "phished" because it makes the security team look better.

The counter-argument is that if an attacker sees you interacted with the phishing attempt they may try again with a more targeted attack in the future.


Mostly agree, but you also "fail" when opening a link leads to browser exploitation. Not all phishing is credential phishing. A good example is the Coinbase attack in 2019 (or was it 2018?).

> The email client is specifically designed to be able to display untrusted mail.

No, they weren’t, which means chances are you are using one that wasn’t.

> web browsers are designed to be able to load untrusted content from the internet safely

Yes, but they aren’t perfect.


You cannot reasonably expect a person to refuse to even look at a suspicious email. Let's say you support product X at your workplace. From: person you don't know. Subject: Bug report.

No one would delete that email without reading it, just like the finance department when they see something claiming to be a bill. Now what if the email said "I have a sample website that demonstrates this bug"? Again, there's no reason for you not to click that. The only thing you can reasonably expect a person to "fail" on is getting there and downloading a .exe or providing a set of credentials.


I have not claimed otherwise.

Please read my message again.


Yeah, the article says 10 out of 50 entered their credentials, which matches the headline.

I'm a web developer with a focus on security and I nearly got phished multiple times. Once was a legitimate-looking email from Linode, which I opened and was fooled by (I didn't check the domain because I trusted my spam filter too much to consider that it might be fake). I was saved by my password manager not auto-filling the credentials because the domain didn't match, which made me look and see that I was on the wrong domain.

The second time, someone was about to steal $30k worth of cryptocurrency from me with a very convincing page on śtellar.org, where I nearly entered my wallet seed (did you notice the accent over the s? I didn't), and was saved by the fact that I keep my cryptocurrency in a hardware wallet, so I had no seed to enter.

Both times, what saved me from being phished wasn't that I'm trained or that I'm more observant (which my parents have no hope of ever being), but that I had used best practices so I didn't have to rely on being trained or observant.

I'm hoping WebAuthn takes off, which will really kill phishing for good, but you can take steps now: Use hardware U2F keys as second factors, use a password manager, don't use SMS auth. Make long, random passwords, etc.


I was honestly almost phished a couple of times until one of my professors said something I had never thought about before.

"If you have a password manager, use the password manager's 'take me to site' function instead of anything on the email. Just open the site from your password manager instead"


Except a good number of emails aren't directing you to their site in general, but to a specific page on their site that might be very hard or impossible to find any other way.

Right, but if you login to wellsfargo.com and click on a link to a specific page on wellsfargo.com, you will be logged in already...

Hilariously untrue of Chase, where you can go from Chase Travel (using your CC points) to your Chase account page and back, and find yourself confronted by a login wall at the Chase Travel page.

Two years ago I was fooled by "colnbase.com" (L instead of i) to the point that I was annoyed that 1Password "wasn't working". Of course, 1Password didn't have a username/password for a phishing site. I almost opened it to copy the password in manually when I spotted the L. It's sobering.

As for WebAuthn and U2F, unfortunately they chose every trade-off possible away from practical usability. They're doomed. Go look up the impl/ux flow for WebAuthn right now for example.

We need less of that and more good ideas that people would actually implement and use.


Really? What do you think is impractical about it? I just tap my USB key and I'm logged in.

Hell, it even supports a mode where you don't have to have a username or password at all (e.g. log in and try adding a key on https://pastery.net, you can then just log in with the key with no username/password at all).


Note that to do the latter ("Usernameless login") you need a FIDO2 key. A relatively modern Yubico product can do FIDO2, but cheaper alternatives mostly don't offer this.

The reason it's a cost upgrade? Those credentials have to live somewhere, and that means they're using flash storage baked inside the FIDO2 key; ordinary FIDO keys don't have close to enough storage.

Next you might wonder: Wait, how does a FIDO key log me into Google if it isn't storing the keys?

Magic. Well, cryptography. When you registered the key it minted a key pair (elliptic-curve, most likely) and obviously gave Google the public key, but it also provides Google a large, random-looking "identifier" which Google must give back each time you authenticate. That identifier could, per the specification, just be some sort of hidden "serial number", but in reality what everybody does is encrypt the private key, or its moral equivalent, with an AEAD scheme using a device-specific secret key, and use that as the identifier. So when Google gives back the "identifier", the FIDO device decrypts it to recover its own private key for the site, which it then uses to log you in. The FIDO dongle doesn't actually even know you have a Google account, yet it works anyway. Magic!

FIDO2 is a much less clever trick, and that flash storage is too expensive to use it everywhere - but the UX is so seamless it makes username plus password look like they asked you to undergo a cavity search by comparison.
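
For the curious, here is a toy sketch of that key-wrapping trick in Python, using the pyca/cryptography library. It's deliberately simplified - a real key derives and wraps an EC private key and binds the relying-party ID into the wrapping - but the shape of the idea is this:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    device_secret = AESGCM.generate_key(bit_length=256)  # never leaves the dongle

    def register(site_private_key: bytes) -> bytes:
        # the site stores this opaque blob as the credential "identifier"
        nonce = os.urandom(12)
        return nonce + AESGCM(device_secret).encrypt(nonce, site_private_key, None)

    def authenticate(credential_id: bytes) -> bytes:
        # recover the per-site private key; a tampered or foreign ID fails to decrypt
        nonce, blob = credential_id[:12], credential_id[12:]
        return AESGCM(device_secret).decrypt(nonce, blob, None)

    cred_id = register(b"per-site private key bytes")
    assert authenticate(cred_id) == b"per-site private key bytes"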


Why is this downvoted? It's 100% correct, except the distinction is not FIDO vs FIDO2, it's "resident key mode" vs not (FIDO2 supports both, and does non-resident keys in the way described above).

Yes, the fact that you need flash storage for FIDO2 resident credentials is unfortunate, but that's why I'm excited about the new SoloKeys, which I heard will have enough flash space for thousands of keys. In comparison, the Yubikey has 25, which makes it useless for what I want, and they don't even advertise that limitation anywhere.

Logging in with this usernameless mode is just amazing, you can go to an untrusted computer, plug the key in, tap a button and you're logged in with no possibility of any credential theft anywhere (just make sure to log out afterwards).


I don't think you need a FIDO2 key for usernameless.

try https://www.passwordless.dev/custom#heroFoot with the latest Firefox on a recent Android.

You can register and login with just a PIN code (or gesture pattern) from your Android


That's fair, there probably are more people with a suitable Android phone than with a FIDO or FIDO2 dongle, and you're correct that the phone (having more than enough storage) offers this feature and unlike a dongle I think you can be comfortable the phone won't "run out" of space if you sign up for frivolous nonsense this way.

On impl, which I take to mean implementation:

I finally have direct implementation experience (thanks COVID-19 I guess?) of WebAuthn now so I can speak confidently to this consideration.

I built a toy implementation on my vanity site and am gradually integrating it to a site friends built back when we all lived in the same city at the turn of the century. That site is old PHP (actually parts of it are terrifying Perl CGI code that looks like it was written before HTTP/1.1 existed) so my WebAuthn implementation is also PHP at the backend. This is neither the simplest, nor most capable technology, I have no doubt it can be done faster and better in your preferred language (it certainly can in mine).

I wrote <1 KLOC, no frameworks, no libraries beyond standard components, there's a little corner cutting in my PHP CBOR implementation but nothing likely to break in the real world for this purpose (we can treat all "I don't understand" cases as "Probably bogus, refuse entry" and be fine).

The JS is a little bit of Promises and some JSON processing, nothing every browser (that can do WebAuthn) doesn't offer already and I included it in my < 1 KLOC total.

Now you aren't going to get this done by thinking it's something else. Trying to do all the work on the client? Not going to make that happen. Hoping to hide all the WebAuthn credentials in a 64 character "password" field your database already has for each user? Not going to be like that.

But if a team has one person who understands in principle what this looks like, I'd say it's maybe a week for a backend person, a week for a frontend person and a week for a tester to spin up on what's going on and learn it. And that's the first time. And that's going to be markedly less for people who aren't learning the components (Web Crypto, public key crypto) as they go.

The pay off is huge. When you store passwords, that's a liability you've got there, it's like toxic waste you're storing. If somebody gets those passwords you can face fines, somebody might sue you, even at best you'll need a PR firm to help try to sell how sorry you are about it. But stored WebAuthn credentials aren't even secret. They make your preferred sock colour look like the crown jewels of PII by comparison, yet they're far stronger than a password as login credentials.


>a very convincing page on śtellar.org

If you rarely use IDNs, toggling `network.IDN_show_punycode` in about:config can help with that - you would have seen `xn--tellar-2ib.org`.
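
You can see the same conversion with Python's built-in IDNA codec (each label is run through nameprep and punycode):

    print("śtellar.org".encode("idna"))  # b'xn--tellar-2ib.org'
    print("stellar.org".encode("idna"))  # b'stellar.org' - plain ASCII passes through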


Thanks, I had originally typed up the URL in the comment with https:// and HN did convert it to punycode, foiling the attack. I never use IDNs, even though I'm in Greece, so I've set that option. Thank you.

Haha, at first I thought the accent on the s in śtellar.org was just a dust particle on my monitor.

I THOUGHT THE EXACT SAME THING! That's why I didn't notice it at the time :(

That's one of the reasons it is so effective :-)

This is similar to a correlation problem. I had complained multiple times to a company; finally they called me back. They had this elaborate explanation and needed me to reset my password.

Then they asked for my password. I was pretty confused, but almost gave it to them. It was just a coincidence that some con artist had called to phish me while I had been trying to reach the company.

Those assumptions that you know it's real are dangerous, because they can make you ignore red flags.


Any recommendations on hardware U2F keys? I’ve looked at the Yubikey a couple of times but didn’t go through with it.

The most important element is definitely finding a device that suits your needs in terms of connecting it: USB connectors, NFC, and so on. The whole idea is that these things are trivial to either plug in and leave in a machine that's with you everywhere, or carry on a keyring to use quickly; if it's a whole performance to use your key, then you just won't.

I can vouch without hesitation for the Yubico Security Key (the newer version has a "2" printed on it clearly; it also does the FIDO2 protocol with resident credentials). It is a relatively expensive option for the purpose, but it's robust (lots of people put these on key rings and carry them everywhere), it's simple, and the people who built it know what they're doing. But it's a USB-A device; if you need Bluetooth or USB-C or whatever, don't buy one hoping to like it.

That product skips all the fancy Yubico features other than being a Security Key, saving a big fraction of the cost - but there are much cheaper options that work if budget is tight, if you're just playing around, or for testing a potential deployment. I also have a "KEY-ID FIDO U2F Security Key", again USB-A, and it works nicely, but many people don't love the bright green LED (on all the time, not just when authenticating). It also clearly feels cheaper than the Yubico product; this is not an heirloom product.


I have a Yubikey 5C, but it might be a bit of a waste of money, since all I use it for is FIDO2/U2F, especially now that SSH supports that.

I'm excited about the new version of the SoloKeys (https://solokeys.com/) coming out next month, they aren't using secure elements like the Yubikeys are but I'm not really worried about someone stealing the key from me to extract the credentials with physical attacks, so they might be a good alternative.

Other than that, I eventually see password managers having built-in software FIDO2 implementations, so you just open your password manager and it automatically intercepts U2F requests and authenticates them, but that's a different thing.

Basically, anything you get that's U2F/FIDO2 compatible is fine, and much better than the second best thing (TOTP or whatever). Get something that's cheap enough for you to get two of, have one with you and the other at home as a backup, and that's it.


Use nextdns.io to block phishing and new domains.

Maybe this article came about because of my tweet: https://twitter.com/sytses/status/1263216521175642112?s=20 “I'm grateful for the red team at GitLab doing an amazingly realistic phishing attack https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-tea... with custom domains and realistic web pages. The outcome was that 20% of team-members gave credentials and 12% reported the attack.”

I think it is amazing that our red team made https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-tea... public so other companies can learn from it, and that they were comfortable with sharing the results.


I’ve seen this a lot in my work where companies hesitate to conduct phishing exercises that are “too convincing” (or, put another way, too realistic) because they fear documenting poor results. Of course that means the exercise and the learning opportunities are much less impactful. I’ll concede it’s a little different with financial institutions because regulators and auditors will usually see the results at some point but I really admire Gitlab’s commitment to transparency.

I try to emphasize to clients that it’s not a test but a phishing exercise akin to a fire drill. You don’t pass or fail a fire drill - you use it assess how prepared you are for a fire. And if you find that you’re totally unprepared, well wouldn’t you prefer to figure that out before anything is actually on fire?


I love the lure, and I respect the GitLab team for making it public, but this is a tough read - it puts way too much responsibility on the end user. For example, I'm a huge fan of security teams using email headers to analyze suspicious messages, but I think it's a step too far to expect a user to ever look at an email header, no? We can hardly get regular end users to hover over a link; encouraging them to open up email headers to see what service the mail was sent from, or to understand what a "Received" header vs. an X-Originating-IP header means, is counter-productive. Headers are hard to understand even for a security analyst; asking HR or Recruitment or Sales to analyze and understand them feels like the red team underestimating how little time everyone has and overestimating how technical most employees are!

I'm intrigued. Why limit the experiment to 50 employees? Why not everyone except the Red Team?

My company regularly runs internal phishing tests like this, using an outside organization. We apparently have a near-constant 7% failure rate. Personally, I cheat: Long ago I discovered that the outside org puts some identifying headers into the email, so I wrote an email rule that adds "[PHISHME]" to the subject line.
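
Roughly, the rule is nothing more than this (sketched in Python as a stdin/stdout mail filter; the header name below is made up - the real one is whatever identifying header your phishing-test vendor leaks):

    import email
    import sys

    msg = email.message_from_file(sys.stdin)
    if msg.get("X-PhishTest-Vendor"):  # hypothetical identifying header
        subject = msg.get("Subject", "")
        del msg["Subject"]
        msg["Subject"] = "[PHISHME] " + subject
    sys.stdout.write(msg.as_string())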

The phishing emails are sometimes very good. They appear to be from senior management and address projects or other internal events everyone knows about. Some emails are very easy to spot, in the Nigerian prince category. It is very interesting that we have that 7% failure rate no matter how good or bad the phishing email is.

In general, I think internal phishing tests are a great way to educate the workforce.


> My company regularly runs internal phishing tests like this... I think internal phishing tests are a great way to educate the workforce

Yes and no. I used to report phishing attempts to IT. Then we started running tests like every month, so I'd just delete suspicious messages and move on. Of course, that's when we got a real phishing message.

Frequent company-wide tests are, in my opinion, overboard. Once-a-year company-wide tests, followed up by more frequent tests for sensitive groups and/or those who failed previous tests, make more sense.


That's the thing, reporting a phishing email in my org excludes you from one month's worth of test emails... then two months... then four months... I spoke to the guy in charge and he checked (my account is set to not receive them for 2 years)

Our tests seem to be somewhat staggered. We may see phishing email tests twice in a month, then nothing for several months. Typically there is a two-month lag between the tests.

I should note that phishing tests are just one component of many company-wide education programs regarding physical, computer, data, and network security. My company deals with very sensitive data, so information security is a Big Deal.

The problem with targeting these tests is that new employees are constantly coming in and need to be educated/trained. Also, the persistent failures do not seem to be confined to only certain work groups; they're spread around the company fairly randomly, and they move.

Exactly how phishing tests are run probably depends quite a bit on what kind of company you have and what kind of employees work there. A workforce full of programmers would -- I would hope! -- be much less susceptible to phishing scams. The sales force, possibly more susceptible. That may be stereotyping, though.


Just curious: Are there repercussions if you don't "pass" the phishing test (that would be seriously stupid), do you dislike them or simply "cheat" because it saves you time?

I'm not a huge fan of these phishing-test exercises. I run the service at https://urlscan.io which a lot of folks use regularly to check out suspicious links in mails / chat messages. I've been approached by some of these phishing-test companies asking me to prevent scanning their domains/IPs. They flat-out told me that they weren't happy about users using my service to check the link, which I always found odd, and I never got an explanation for it. Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...

> Probably less spectacular findings for these companies if users can figure out a phishing test by themselves...

It's the same issue as "ad companies"... if you don't cook the numbers that show your expensive service is worth it, then people will switch to the service that looks worse (this one has 7% fail rate but this one has 50% fail rate)


Perhaps they should look at offering an integration that shows how often urlscan.io catches the phishing-test companies' campaigns?

What are the legitimate cases for excluding the domains from your scanning service?

Not many, I usually only do it when the domain or URL pattern in question is almost exclusively used for sessions/invites/sharing-links and basically every URL submitted leaks either a customer name and/or invite-token and/or PII. zoom.us is a good example, certain DocuSign URL patterns, the sort of thing where knowing the URL gets you a sensitive document, etc.

When I worked at Google, orange teams weren't allowed to use phishing tactics because they worked so reliably every single time that they provided no new information about the security of internal systems.

The reality is that humans are hard to secure, so defense in depth generally involves preventing compromised accounts from causing lots of damage, detecting them as early as possible, and controls for shutting them down.


I don’t understand how working from home is relevant to this?

Do people working in offices have IT staff come by to update their laptops? Would people in an office not open this email if they’d do so at home?

When I worked in an office nobody touched my laptop but me.


While in the office you're connected to the internal network, supposedly within the internal domain, and the IT dept. would have direct access to push updates automatically. When outside you're supposed to connect via a VPN (best case) or communicate via something encrypted (email, FTP, etc.), but you'll need to enter your credentials somewhere.

Also, please remember, it's not your laptop, it's the company's laptop, merely given to you to do your work on. Anybody within the company with the correct credentials has the right to touch that laptop.


> While in office you're connected to internal network

Not all companies do it this way. Many use a clear network and make services encrypted.

> Also, please remember, it's not your laptop

It is if you work for a bring-your-own-device company.


Bring your own device is bad for companies. Any of them using this approach are just begging to have their talent pool drained. If I do work for a company on my own device, there is absolutely no difference between my personal research and the company research, and in the eyes of the law these companies will always lose if they try to enforce some "secret sauce" to not go to their competition. Ever wonder why FAANG companies never did this, companies that pinch every penny they can? Exactly because they know too well they'd lose badly. Just look at that guy that got bankrupted by Google after he went to Uber - HN had an article a few weeks back.

I wasn't saying it was good or bad, just that some companies do it.

Shouldn't that exactly be appealing to the talent, not having to worry about the company claiming their side projects as their own?

I very often work on my side projects, and it is quite an annoyance having to move around with 2 laptops or paranoidly erasing my personal work from the company computer.

Also, from my experience working at a FAANG-like company, they definitely don't seem to pinch every penny. We have company laptops for security reasons, but phones are bring-your-own, which they pay for. They also pay for WFH office equipment as long as you can reason that it makes you more productive or is good for your health. Basically anything that makes you more productive or sustainable, they will pay for.


Use a VPN to work on your own server/computer from the company-issued device. This way there is no need to keep anything of yours on theirs.

> Any of them using this approach are just begging to have their talent pool drained

citation needed

> FAANG companies never did this

Actually it's allowed in 3 FAANGs that I know of.


> Also, please remember, it's not your laptop, it's company's laptop

Correct.

> Anybody within the company with correct credential would have the right to touch that laptop.

That is only partially correct. In many European countries people enjoy quite a lot of protection in work life too. So in order not to do anything illegal, the employer has to carefully control access rights to your PC. And the ones who have access rights cannot do whatever they like. Reading emails is typically illegal, yes, emails on the work account! (Just to mention the legal concepts; of course in today's architecture emails are rarely stored on your PC.)

I understand that in the US employees enjoy little protection while at work. I could guess video surveillance in the toilets would still be unacceptable. Just to make the point: even if the location, paper, and water are paid for by the employer, and more importantly the time is paid, it shouldn't be the case that the employer controls everything. (Although there have been reports that Amazon warehouse workers in the UK use bottles for their needs, because the employer does not provide for more humane arrangements in practice. Some employers are always worse than others, and that's why I have stopped ordering from that company.)


Most companies will have a firewall on their corp network, so new domains or malicious-categorized websites will usually be blocked, which offers additional protection over working from home. You can obviously use an always-on VPN for WFH companies, or tools like Cisco Umbrella, Zscaler or Netskope, but many companies haven't done that yet.

Someone at my work (before lockdown) recently avoided a phishing attempt because they turned to their colleague and asked, "Why would the high-rank-officer email me?"

GitLab is a remote-only company; I don't know why this article chooses to highlight that fact so much, though.

I think there's some sort of anti-work-from-home agenda going on here. It's completely irrelevant to the story. If you were in an office you'd get exactly the same email and presumably respond to it in exactly the same way.

It's relevant to the story because so many people are currently in their first months of WFH so a headline that mentions WFH will be more interesting to them than one that doesn't. Another way to put it would be "WFH pioneer gitlab phished its own staff", nothing wrong with that.

How is it different from the same attack done while you are in the office?

In offices you have the ability to monitor and filter the network connection, so it's plausible to detect and/or prevent the malicious connection after the phish succeeds.

Our company informed us 2 years ago that they will be attempting to phish us continuously (no frequency specified).

If you fail, the last page is corporate training on the topic.

I was so inspired to not have to do corporate training, that I assume everything is a scam now.


> If you fail, the last page is corporate training on the topic.

In my work, the policy is 3 strikes and you are gone. The first two fails are trainings with tests, and the third fail is an instant fireable event. As we work with clients and their data, this is strictly enforced too.


I've never gotten in trouble for missing a phishing test, but everywhere I've worked there are real emails that have all the hallmarks of a phishing one: misspellings, weird domains, etc. So I don't think it's reasonable to punish people, nor is it sufficient to raise awareness. The security people don't address the issue of real emails that look fake and condition people to click on similar things, because obviously it's outside their area of responsibility and control.

Also, what do you do if you have a draconian policy and someone important clicks on one?


I guess that depends if failure is visiting the unique URL they've sent you or actually inputting credentials.

I got curious about an obvious internal phishing test and decided to copy the link to another machine to see how convincing it was... I hadn't clicked, it wasn't my work machine, and I didn't enter any details - but I instantly received an email informing me I'd failed.

Yeah right, I obviously haven't done the associated failure training and I will forever refuse to do so out of principle.


Sounds like a hellhole. That policy is perfectly tailored for corruption and paranoia.

That’s the cost of client-enforced security policy. I have not known or heard of anyone personally fired for this, but people definitely get warnings and/or get their roles reassigned.

Concur. I do hope that the "well meaning" security team that thought this up is diligent in investigating and accounting for false positives: "Oh, I clicked the link in the phishing email IN A VM to see what the F* it was" and "I entered 'fakeceo' and 'mrpassword123'".

People have different methods of exploring and learning to decide if something is legit or not. Nor should any "security policy" be a three-strikes, zero-tolerance policy. Everything needs context.

P.S. I'm pretty sure that the mental and behavioral damage done by this 3 strikes policy can easily be weaponized.

Shame.


> That policy is perfectly tailored for corruption

Elaborate?


Christ, what a nightmare.

> Hunt said GitLab has implemented multi-factor authentication and that would have protected employees had the attack not been a simulation.

"Protected employees" is a weird way to put it to say the least. It's not about protecting employees, it's about protecting gitlab company and their customers. And the protection would have failed. The attacker would have needed to use the credentials (including the one-time credential) in real-time. That makes the attack-site logic a bit more difficult, but it would have allowed to break in. I doubt gitlab employees have to reauthenticate very often during a working day.

Well, unless they really use a challenge-response system. At least what I use as a GitLab customer is not; it's just standard OTP. I would provide a valid one-time password to a phishing site, should I fall for it.

(Edit: reworded. Commenting on the phone is never a good idea...)


Most challenge response systems don't help either, the attacker gets to forward the challenge to you, and then your response back to the real site. It's some extra work but you can get ready-made software to help perform this attack.

WebAuthn (and the older U2F) works, because it's recruiting the browser (which knows perfectly well which site this is) to mint site-specific credentials every time.

An attacker with a phishing site https://fake-gitlab.example/ has a few options, none of which work out for them:

* Just don't do WebAuthn, now they don't have a second factor and can't get in

* Ask the browser for legitimate WebAuthn credentials for fake-gitlab.example. But, of course GitLab won't accept those credentials, any more than it'd accept a made-up username so they're useless.

* Show the browser the "cookie" GitLab offered for GitLab WebAuthn credentials, the browser will cheerfully give a user's FIDO dongle this cookie and the fake-gitlab.example name, and the dongle will explain that it doesn't recognise the combination, maybe use a different dongle? No joy.

* Show the browser that cookie and tell it this is gitlab.com. But this is fake-gitlab.example not gitlab.com, so the browser will just raise a DOMException SecurityError in the fake site's JS code. The code can hide that easily, but it doesn't get any credentials.
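To make the origin binding concrete, the browser-side login call looks roughly like this (a sketch; the challenge and credential ID come from the real server, and the fields are the standard navigator.credentials.get options):

    // Sketch of a WebAuthn assertion request. The browser enforces that
    // rpId matches the page's origin: a phishing page on
    // fake-gitlab.example asking for rpId "gitlab.com" gets a
    // SecurityError before the authenticator is ever consulted.
    async function signIn(challenge: Uint8Array, credentialId: Uint8Array) {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge,                  // random bytes issued by the real site
          rpId: "gitlab.com",         // must match the caller's origin
          allowCredentials: [{ type: "public-key", id: credentialId }],
          userVerification: "preferred",
        },
      });
      return assertion; // sent to the server for signature verification
    }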


Thanks for mentioning WebAuthn (https://en.m.wikipedia.org/wiki/WebAuthn). According to Wikipedia, Dropbox supports it. Any other widely used adopters? I need to check whether GitLab supports it when I am at my computer. So it might well be that they even mandate it for their employees. But the statement, or at least the part of it that made it to the article, was not that specific.

My understanding is that Google mandates U2F (the de facto predecessor to WebAuthn) for employee systems, certainly the Google employees I know have FIDO keys. One interesting thing is that some of them don't really understand how those keys work - and the U2F/WebAuthn design means that doesn't matter at all. I believe way more firms should do this and I've tried to gently encourage it at places I've worked.

Older sites tend to support U2F rather than WebAuthn. If you're on a greenfield install, you should just do WebAuthn, but it can be complicated in some scenarios to migrate from U2F especially if you're huge so it's understandable that not all have. In at least Chrome and Firefox the UX is identical anyway.

So, not differentiating them:

Facebook, GitHub and Google are three popular examples

You can also authenticate for some US Federal Government business on Login.gov (even if you aren't a US citizen)

And the UK's "Gov.uk verify" authentication can use Digidentity's offering which in turn relies on WebAuthn or U2F.

Edited to add:

AWS can do it, but, for some crazy reason they won't let you register more than one FIDO dongle. So I would not advise securing an "admin" AWS account this way, only users who can go to someone with admin privs to reset if they lose the dongle, but it's good for a team of developers I guess.

Not allowing multiple dongles goes against the intended security design, ignores a SHOULD in the WebAuthn standard, and also makes a bunch of the fairly complicated design pointless, I can't tell if Amazon are incompetent or had some particular weird reason to do it.


> Need to check whether gitlab supports when I am at my computer.

They support U2F, of course completely opt-in for users/customers.

The question that remains is do they mandate it for employees.


Google, GitHub, GitLab all support it, at least. Azure AD, notably, does not.

Azure AD does, and was one of the first to adopt the new WebAuthN standard.

https://docs.microsoft.com/en-us/azure/active-directory/auth...

You can also use it on a personal Microsoft account.


I guess it might be a premium feature then? It certainly doesn't show up as an option for me..

Gitlab.com has used U2F/WebAuthn for years (not sure which, but they're both isolated by origin anyway).

I work at GitLab and just stumbled across this. We use U2F, but we have an MR to add WebAuthn support: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26692

Right, according to https://en.m.wikipedia.org/wiki/Universal_2nd_Factor it's U2F. So I would not be surprised if GitLab requires their employees to use the dongle instead of the simple OTP which they allow for customers/users. A shortcoming of the article not to mention whether that's the case or not.

At a place I worked at they did something similar with the most obviously fake email possible.

Seemed like a pointless box ticking exercise.

Funny enough, IT sent out an email about a Windows update rolling out (an upgrade to a new version like 1709) that looked even dodgier than their fake email. That had people reporting it as phishing.


> Seemed like a pointless box ticking exercise.

Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.

Had the same at a previous company. If you use GMail, IT needs to manually approve the mail to avoid it going into the spam folder. A huge warning saying “this message has been excluded from your spam filter by your IT department” shows up at the top. People still click through...


That might actually make it seem like the email has been explicitly sanctioned by IT. "Huh, this email is a bit weird, but IT says it's okay." click

> Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.

For frauds that require the attacker to spend time with the victim, sure. For a fully automated phishing attack? There is no reason to lose out on people early on.

And for a targeted attack against a company? Makes even less sense to make it obvious.


It could be a strategy to make people less careful: send one or two "obvious" fake phishing emails, and then the real one a little later, when they are confident they can avoid phishing.

My company has been sending phishing emails every few weeks for like the past 5 years.

You click it or open an attachment, you are automatically enrolled in training you must complete.

Very few people click anything remotely obscure, and they ask their manager whether an email is from a legitimate company we are dealing with.

Ex: I got a signup confirmation email from a legitimate website, asked our director about it. He looked into it and confirmed with IT we had been signed up and infosec was fine with it.

We then relayed to the whole team that it is legitimate email.

I would say it's been highly successful.


A better approach is to implement anti-phishing measures way up in the chain -- at the MTA level itself. Simple ideas like stripping URLs from mail, stripping attachments if the email originates outside the organization, converting HTML email to plain text, or disallowing HTML email entirely yield substantial benefit in stopping phishing.
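As a toy illustration of the HTML-to-text and URL-stripping ideas (a sketch, not production MTA code; a real deployment would run something like this inside a milter or content filter, and regex-based HTML stripping is a deliberate simplification):

    // Flatten an HTML email body to plain text and neutralize links.
    function sanitizeBody(html: string): string {
      return html
        .replace(/<style[\s\S]*?<\/style>/gi, "")      // drop embedded CSS
        .replace(/<script[\s\S]*?<\/script>/gi, "")    // drop scripts outright
        .replace(/<[^>]+>/g, " ")                      // strip remaining tags
        .replace(/https?:\/\/\S+/gi, "[link removed]") // neutralize URLs
        .replace(/\s{2,}/g, " ")                       // collapse whitespace
        .trim();
    }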

Basically, don't try to solve a problem by humans when it can be solved more efficiently by technology!

Phishing exercises are absolutely pointless in my experience and contribute zero to increasing awareness. Shaming does not address the underlying human weaknesses that make us fall for phishing; it simply makes the IT guys look cooler and increases CISOs' and Red Teams' budgets. :-(


The best security is multi-layered. The human layer is the weakest part of any security system, and both technical and human measures must be taken to achieve defense in depth.

Some technical measures used here were requiring 2FA for all internal services, and scoping keys/POLP to limit the damage from one compromised key.

The purpose of exercises like these is not to shame someone who "fell for it", but to educate workers about phishing attacks and strengthen the human security layer.


Two decades of experience suggests that "strengthening human security by training" ain't happening, no matter how hard/smart you try. The technical controls have to be beefed up to a point where that human-weak-link is eliminated.

These tests are nothing but CISOs (and Red Teams, and the whole industry around them) justifying their existence, and potentially doing a song and dance about it at the quarterly all-hands. Nothing more, nothing less. We can come back to this thread in another year/two years/five years/decade, and I can bet dollars to doughnuts the industry will still be training humans, and claiming these pointless statistics about phishing. ;-)

On this note, see #6 "Educating Users", in Marcus Ranum's excellent article "The Six Dumbest Ideas in Computer Security": https://www.ranum.com/security/computer_security/editorials/...


We do this a lot where I work and it’s fun.

There’s a button in the email client for “report phishing link” so I’m always on the lookout.

If you report a test evil message you get immediate feedback that you passed the test.

If you report a real one, the security team immediately looks at it and lets you know if it’s legitimate or not.

I think it’s a good system.


Time to send every employee a FIDO compatible security key, implement WebAuthn, and make it mandatory for employee login.

Is that meaningfully more secure than something like Auth0 with Duo MFA? (Which doesn’t require a dongle hanging off my USB-C port and works seamlessly on virtual machines.)

The problem, it seems to me, is that companies and orgs want to send emails whenever it is convenient for them to do so (paystub ready, benefits enrollment open, click here, etc.) but distribute the cognitive load of figuring out which emails are trustworthy to their employees/customers. You eventually get trained to click on links in emails as a form of legitimate interaction.

https://www.cl.cam.ac.uk/~rja14/book.html


If you look at the logs of phishing exercises you can often see employees messing with the red team, like entering invalid creds for CISO or CEO and stuff.

I think phishing exercises should provide much more details, e.g. the following metrics:

(1) # targets opened email

(2) # clicked link

(3) # who entered valid username (must match some identifier in email - to prevent trolling)

(4) # who entered password

(5) # entered valid(!) password

(6) # entered MFA/code or did Push

(7) # auth cookies stolen (full compromise)

Otherwise it's difficult to compare any of these tests and understand the actual risks and success rates.
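As a sketch of how such a funnel could be tallied from exercise logs (the event names and log record shape here are assumptions, not any particular product's schema):

    // Count distinct users reaching each stage of the phishing funnel,
    // so one troll re-submitting fake credentials only counts once.
    type Stage =
      | "opened_email" | "clicked_link" | "entered_username"
      | "entered_password" | "entered_valid_password"
      | "entered_mfa" | "cookie_stolen";

    interface LogEvent { user: string; stage: Stage; }

    function funnelReport(events: LogEvent[]): Record<string, number> {
      const byStage = new Map<Stage, Set<string>>();
      for (const { user, stage } of events) {
        if (!byStage.has(stage)) byStage.set(stage, new Set());
        byStage.get(stage)!.add(user);
      }
      const report: Record<string, number> = {};
      for (const [stage, users] of byStage) report[stage] = users.size;
      return report;
    }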


The company I work at handles a significant amount of PII and regularly phishes our own staff. It’s usually between 1 in 5 and 1 in 4 that will click on the link. Despite all of the education and quarterly repeated phishes, those numbers really aren’t improving much. I think at some point you have to accept that end users will click on things, and add additional protections to help mitigate the risk.

I regularly perform tests like these. Overall there's a flat 10% 'critical failure' rate across organizations. You send a phishing e-mail pretending to be from the IT department, with some instructions to install the 'anti-virus scanner' or whatever, and 1 out of 10 people will open the e-mail, click the link, give their credentials, follow all instructions, click through all warnings and infect their machines.

If your organization is above a certain size, remote code execution in your network is a given. There are several technical measures you can take to make these attacks _much_ harder to perform on Windows in general:

* Disable unsigned Office macro execution (if on windows with office)

* Disable mshta.exe or remove the .hta file association

If you can get away with it, productivity wise, enable whitelisting for all software.

Attackers can oftentimes still find weak points in your organization. It's not always the marketing or HR department on Windows that gets phished. I once observed a colleague phish a webdev on a MacBook with a recruitment 'challenge'.


100% of people will fall for a good spear-phish. When you fail to accept that, you start doing things like punishing people who fail. The point of these tests is to raise awareness and train people so that successful phishing attacks will need that much more targeting precision in addition to accuracy.

It's like combat training: the goal isn't to train your army so they all become elite fighters and martial artists, the goal is to improve their fighting skills so that they stand a fair chance at victory against similar-ranking enemy troops.

So, if your people fall for an Emotet phish, that's bad. If they fell for a pentester's phish where he did background research on his subjects and spoofed email header fields, that's normal, just like a Navy SEAL beating up an Air Force sergeant would be normal.


All companies should be doing internal penetration/security testing. If you don't do it, someone in China or Russia will do it for you, you just won't know. I hope GitHub is doing this too. Google, for example, has an entire team whose task it is to exploit such attack vectors and close the holes in all sorts of products and processes, often with stunning results. I'm not sure if the rest of FAANG does this, although I'd be surprised if Facebook doesn't do essentially the same. I would not be surprised if Amazon or Apple don't do it, at least not to the extent you'd see at Google (no holds barred, the red team gets to pwn everything). Netflix, I'm not sure, they probably have something. Microsoft probably doesn't do it, since it'd make people look bad, and in their back-stabbing corporate culture people can't afford to look bad.

Is this newsworthy? My company does this very regularly, and the phishes are well crafted and convincing.

20% seems low if they're reasonably well put together emails. In the wild there's plenty of badly made, easy to spot phishing campaigns but one would hope any decent Red Team could put together a good one.


I support this action and wish more companies did it. It would tremendously improve security in every organization. The people that "bought" the fake login link feel ashamed, I'm sure, and they'd think twice before logging in next time. Kudos to GitLab.

Not really.

Someone told me they did the same thing at his company: send out phishing emails to see who fell for it. Those who did (management was disproportionately represented) had to attend some training lessons.

They sent another phishing email a few months later. Most people who fell for it the first time fell again, despite the training.


I don't think additional training is needed, at least in an IT company. The fake-phishing success should be enough to make everyone who fell curious enough to at least research the subject.

What the company has to communicate clearly is that failure in the fake phishing test will not affect the employee's status in the company at all, whereas eventual failure in a real phishing event would have at least some consequences.

For non-IT companies the training should begin and end with the message above, and everything in between should be short and concise, with ideas for how and where to learn more about the subject.


I appreciate the article, GitLab, and El Reg commenter "Spencer" for pointing out that GitLab publishes their security handbook: https://about.gitlab.com/handbook/security/

As I read through these comments and the linked handbook, it kind of makes me want to work for a company like that. As important as security is, even the security handbook has an appropriate tone vs. treating people (CS-talented or not) as idiots who cannot be trusted. Good job, GitLab.


This is especially concerning considering that GitLab is a technology company consisting of mostly technical staff.

For a crafted spear-phish like the one used in this test, I wonder what the failure rate would be in larger, non-tech organisations?


I take the point but I also take it with a degree of realism.

I've been at companies where they did this and I usually 'fail the test'.

I received the email, but given the highly targeted nature (it wasn't very generic) I got curious. When you can tell it's an internal test, it's fun to see if you can trace it back to a particular person or department. So I created a VM on a secondary clean laptop and opened it.

So based on the test I failed because they detected I followed a link.

I don't for one second believe that 1 in 5 Gitlab employees also did this, but I'm certainly distrustful of test numbers like this.


My company does this often. It sends legitimate-looking emails, and I finally fell for one recently.

I thought about it, then I understood why. My company uses a lot of SaaS products - for submitting expenses, for giving appreciations, etc. These SaaS products regularly send emails, and they come from other domains.

When my company used all home-grown or on-premise web apps, I never ever opened any emails coming from a different domain, or opened them very cautiously.

And now I think these SaaS emails have probably taught my brain to trust emails from other domains.

I am not sure.


I worked at a place where they sometimes sent phishing emails to see what people did. They also had mandatory annual training on e-risks, which wasn't in fact too painful.

The fun arose when the company employed third-party service providers that required employees to respond to an external email (infrequent but it did happen). Inevitably there had to be a certain amount of internal comms to let people know that this external email was in fact safe to respond to.


That's hilarious but it also highlights that ultimately, there are no 100% inherently safe communications channels. A sufficiently motivated actor can go through extreme lengths to compromise your IT even if it means faking email, voice, letters, physical interaction.

This is common in many workplaces, and while a little strange, I think it's a good exercise, especially for less sophisticated folks who get a lot of external mail.

It's not that strange. In a talk at last year's CCC, someone explained that it's a good learning experience when you educate the people that clicked on the phish right in/after the phishing process. He also found that the learning effect only applies to the method the people failed at - so learning from phishing doesn't teach anything about passwords.

While sad, I think it's important to acknowledge this and not be too harsh on people who fail the first attempt... also because my biggest learnings came from really embarrassing moments or failures too.


Man I hate these, and I hate that companies get paid serious real US American Dollars to stage these for other companies.

Every time I see a colleague laid off, and then see one of these stupid phishing tests land in my inbox, I think about losing my job during a pandemic in order to ensure the security team still had the budget to pull this stupid crap.

It doesn't help that our own customers send us stupider looking emails that are actually legitimate.


Company runs phishing simulation (they were already remote)...is that news?

One in five isn't bad. As you target them, based on content and recipients, the results can get much worse. And when non-tech companies run these, the results are...scary.

It's no wonder the most sought after entry point into a network, the most reliable and probably the cheapest, is phishing. All it takes is one out of 50,000 to fall for it.


But the little green padlock was there! It must have been OK.

My company has just decided to enable 2FA in order to combat phishing. I'm not sure how this would help. What amazes me is that we allow HTML email at all. That alone would greatly reduce successful phishing attempts. Requiring all emails to have valid signatures doesn't even seem too difficult for an organisation.


"While an attacker would be able to easily capture both the username and password entered into the fake site, the Red Team determined that only capturing email addresses or login names was necessary for this exercise."

It says in the article that they never asked for passwords.

I wonder if the statistics would have been different if they did? You usually think twice before entering a password.


I'd just note that Google documented that U2F keys were the only tech they'd tried that was capable of reducing employee credential theft via phishing to zero. Maybe we need more of that going around.

I also don't understand why they keep mentioning that their staff is all-remote. I don't see what difference that makes.


There have been studies which suggest that phishing your own staff has significant negative effects. I have trouble finding the study names, but the NCSC website has a good article about it: https://www.ncsc.gov.uk/blog-post/trouble-phishing

This article is more of an opinion piece. Would be interested in evidence.

It seems logical to me that self phishing is a good way to educate on how to spot phishing/unusual emails, and to realize they are a target


> This article is more of an opinion piece

Wouldn't that be legally required to end with something about selfishly self-phishing shellfish?

It does read a bit like SEO copy for a training consultancy that offers an alternative to the intuitive self-phish/reprimand cycle, but it brings up some interesting ideas.


Reading this article brought this one to mind: https://krebsonsecurity.com/2018/07/google-security-keys-neu... (about Google using security keys to deal with phishing)

Buying a phishing-as-a-service trainer is the single best bang for your buck in the realm of all security. Obviously, all computer security is relative to your use case and threat model, so your mileage will definitely vary. If all your servers are publicly routable with no firewall or antivirus, emails are the least of your worries.

However, spam is not a solved problem. Phishing is hard to stop, and spearphishing is basically impossible to stop. Professionals you know get compromised, upstream toolchains get compromised, etc. The attack effort and risk vs. reward are wildly skewed in their favor. It has been the vector of compromise for many high-profile breaches.

Find a reputable company, pay them, and whitelist them in your spam filters. They will generate incredible phishing emails (using your domain and corporate info, since you let them) and give you a way to train your users that is irreplaceable.


Seems pretty typical for the results of phishing campaigns - although they targeted 50 people, which is not very representative for overall numbers and stats on different disciplines in a larger organization.

Results are largely driven by the kind of phish that is sent and whether it's click-worthy.

Some companies do these exercises every month.


We did a similar phishing attempt at my previous company, which had a bit more of a technical background than GitLab.

Of the 200 people, only one gave up his credentials - from marketing, as expected. We don't let them near anything important anyway.


20% is low for a typical test.

A clever insider can get that to 100%, say with "Benefit Plan Updates." lol


We had one earlier this year about staff raises

"Cookies in the break room!"

(I know, I know, this is serious... )


Or anything about PTO.

My company (big Valley corp doing robotics) does exactly the same, and it's very good at it: if you get phished, you'll automatically get signed up for a long and tedious training.

The most brutal phishing I've seen an enterprise use: "[SPOT BONUS] Your hard work and dedicated efforts are being rewarded!"

At work, failing by clicking on a test phishing email can result in dismissal if you do it too many times in a year.

Several have noted email rules set up to flag phishing simulation emails. Anyone care to share one of those rules?
