Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.
Even clicking a hyperlink in a phishing email isn't too bad - web browsers are designed to be able to load untrusted content from the internet safely.
It's only entering credentials by hand into a phishing website, or downloading and executing something from a phishing site that is a real failure.
IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
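For reference, a "password alert" control can work without ever storing the password in the clear. Here's a toy sketch (my own illustration, not any real product's internals; all names are made up): keep a salted hash of the corporate password, then compare a sliding window of recently typed characters on non-SSO pages against it.

```python
import hashlib
import hmac
import os

SALT = os.urandom(16)  # generated once at enrollment

def fingerprint(password: str, salt: bytes = SALT) -> bytes:
    """Salted hash of the corporate password, stored at enrollment.
    The password itself is never stored."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

def typed_corporate_password(keystrokes: str, stored: bytes,
                             pw_len: int, salt: bytes = SALT) -> bool:
    """Hash every window of pw_len recent keystrokes and compare against
    the stored fingerprint. (Storing pw_len leaks the length; a real
    implementation would pad or bucket it.)"""
    for i in range(len(keystrokes) - pw_len + 1):
        window = keystrokes[i:i + pw_len]
        candidate = hashlib.pbkdf2_hmac("sha256", window.encode(), salt, 10_000)
        if hmac.compare_digest(candidate, stored):
            return True
    return False

stored = fingerprint("hunter2secret")
# User types their corporate password into some non-SSO page:
assert typed_corporate_password("xyhunter2secretqq", stored, len("hunter2secret"))
assert not typed_corporate_password("totally different", stored, len("hunter2secret"))
```

The point of the sliding window is that the check can run on every keystroke without the monitoring component ever holding the real password.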
Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out that the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.
“You just got phished!” eye roll
I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.
Maybe because the tracking pixel (remote image) loaded? I remember reading an article where people sent an email to Apple and it got passed around internally, and IIRC either Steve Jobs or one of his direct reports opened it, not realizing they were sending out a makeshift read receipt every time they opened the email.
"Curious" might get you to open the web page, but actually entering credentials moves you into "phished" no matter how curious you are.
I've also entered fake credentials into a clearly faked login form to see what'd happen. Would it redirect me to the right site? Just claim the information was wrong? Send me to a mock up of the intranet I was trying to access? You can call it bad policy if you want (although you don't know about my precautions), but it doesn't mean I was phished.
1. Someone receives and reads the email sent to this email address.
2. That person is willing to enter data into a form.
That's two pieces of information the person didn’t have before, and they can be used in further phishing attempts in a variety of ways.
It doesn’t mean you were fooled, but that’s only half the story.
I tend to think this is good software dev practice anyway. You ought to be able to test everything on your testing servers, and if this doesn't adequately reproduce the production environment, it's a problem with your test system.
Then you won't be processing email on machines on those networks.
At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.
If you think a hacker is going to manage all that, you might as well assume the hacker can trick Gmail into opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.
I'm curious what definition of airgap you're using?
AWS uses it this way: https://aws.amazon.com/blogs/publicsector/announcing-the-new...
2. Please don't contribute to giving marketing license to remove what little meaning words still have.
With that as my model: the email getting to your inbox is of course the first failure and increases the chance of getting phished from zero to not zero. Opening the email is another failure that raises the chance. Clicking the link is another.
Every step leading up to entering credentials, or downloading and executing something from a phishing site, is a real failure in that it increases the chance of becoming compromised.
That's even true if you're suspicious the whole way through. If you know it's a phishing attempt and are investigating, fine. But if you are suspicious, that means you can still go either way. You can also get distracted and end up with the phishing link in some tab waiting for you to return to it with all the contextual clues missing.
I opened it in a new tab along with several other links to read, I was expecting a nice blog post explaining an exploit.
After about 20min of reading the other tabs I came across that tab again. I had forgotten the title of what I had clicked, I'm not sure I even remembered it was a hackernews link that got me to that page.
"Oh, looks like Google has randomly logged me out, that doesn't happen often" I think as I instinctively enter my email and password and hit enter.
Followed half a second later by "oh shit, that wasn't a legitimate google login prompt."
I raced off to quickly change my password, kick off any unknown IPs and make sure nothing had changed in my email configuration.
I'm lucky I came to my senses quickly. I think it was the redirect to generic google home page that made me click, along with the memory of the phishing related link I had clicked 20min ago.
But yeah, it can happen to anyone on a bad day.
Whenever I read about phishing, it seems insane that we have a system that requires human judgement for this task. If there isn't a deterministic strategy to detect it, how could the user ever reliably succeed? And if there is such a strategy, it should be implemented by the mail server, mail client, and browser.
Even an extension doing this might work in a corporate context. That makes me wonder whether companies build their own extensions to enhance the browser for their needs. If all your employees are using web browsers for multiple hours per day, it might really be worth it.
That's exactly what it's for: finding patterns that are too hard or too complex for humans to find. Enumerating every edge case of "enter a password" is not possible for a human, and whatever edge cases we humans miss _will_ be exploited by someone to compromise someone else.
It's also a matter of volume. How many pages can you evaluate and categorize in an hour, versus how many can an ML system do in the same time? I once saw a demo where a firewall/virus-scanner app could detect malware heuristically by comparing to a baseline system, and could do so in 10 seconds or less per item. It would take a human more than 10 seconds just to read the report to generate a rule, and humans don't scale nearly well enough.
There are lots of complaints to be had about ML and privacy / fairness / ethics / effectiveness, but this shouldn't be one of them.
I was going to say that couldn't be done, but thinking about it: the way OSes currently work, you can't know a link came from an email, but you can know it came from an application other than the browser (which would require the browser to keep track of where a tab came from, which I assume it already does). But then links opened from a web-based email client would not get this scare-warning click-through.
They were created 60 years ago as an additional layer on top of on-site physical access, in a world with compute and network capacity billions of times less than today.
The problem is clearly pretty deep. One possibility is that it's inherently inconsistent with a deep, high-speed, long-range, high-bandwidth data regime. We live in a universe where all of us are ventriloquists, or may be ventriloquists' dummies.
There's the questions of what identity is, and its distinction from identifiers or assertions of identity.
There is the matter of when you do or don't need to assert or verify a specific long-term identity; of when identifiers require a close 1:1 mapping, and when they don't; and of what the threat models and failure modes of strong vs. weak authentication schemes are.
And ultimately of why we find ourselves (individually, collectively, playing specific roles, aligned or opposed with convention, the majority, or other interests) desiring either strongly identified or pseudonymous / anonymous interactions.
Easy or facile mechanisms have fared poorly. Abuses and dysfunctions emerge unexpectedly.
Password managers are great for security and super convenient. It continues to shock me how many people surf the web while continuing to type the same password into dozens of sites, and then they wonder why they fall for phishing.
Obviously autofill itself can break on complex page layouts, and that's fine. The security comes from the password manager doing domain matching and offering to fill the password when you click on its addon menu.
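That domain matching is the whole trick, and it fits in a few lines. A minimal sketch (my own simplification; real password managers compare registrable domains via the Public Suffix List, which this exact-host/subdomain check does not):

```python
from urllib.parse import urlsplit

def manager_offers_fill(saved_url: str, current_url: str) -> bool:
    """Offer autofill only when the current host equals the saved host
    or is a subdomain of it (dot-boundary suffix match). A phishing
    lookalike domain never matches, no matter how convincing the page."""
    saved = urlsplit(saved_url).hostname or ""
    current = urlsplit(current_url).hostname or ""
    return current == saved or current.endswith("." + saved)

assert manager_offers_fill("https://gitlab.com/login", "https://gitlab.com/users/sign_in")
assert not manager_offers_fill("https://gitlab.com", "https://gitlab.com.evil.example")
```

The human sees a pixel-perfect clone; the string comparison doesn't care.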
If they had 5 different ways, that'd be one thing. Lately, I've been seeing different domains. For example the marketing department registers a domain such as AcmeExclusives.com.
In contrast, the FIDO design cannot be used across domains no matter how successfully you fool the human.
It shouldn't matter how tired or distracted you are: you should never enter credentials into any place you get to from anything you receive in an email--or indeed by any channel that you did not initiate yourself. If you get an email that claims there is some work IT issue you need to resolve, you call your work IT department yourself to ask them what's going on; you don't enter credentials into a website you got to from a link in the email.
It's the same rule you should apply to attempted phone call scams: never give any information to someone who calls you; instead, end the call and initiate another call yourself to a number you know to see if there is actually a legitimate issue you need to deal with.
Rules like this should be ingrained in you to the point where you follow them even when you're tired or distracted, like muscle memory.
Is there an add-on for Firefox that warns when you enter credentials on a new domain? Or puts a warning triangle in a password field when today is the first day you visited the domain or something? Firefox already tracks the latter, you can see it in the page info screen, so both should be easy to make but I'm not sure anyone thought of making this before.
Browsers have vulnerabilities, and you're broadcasting valuable information about yourself to the attacker, including the fact that you're receiving, reading, and clicking on links in their mails.
Also, the article states clearly that 1 in 5 fully entered their credentials.
HN must be a boring place if you are not prepared to click on external links.
You could even combine the two. Post the blog to hacker news, then send phishing email pointing to HN post. That is a trusted link. Then the user will likely click the source link in HN.
Obviously, a lot harder and lower chance of success, but not impossible.
In general maybe; in this particular case it's going to be challenging, however, as GitLab is a remote company, so most employees will log on from residential IPs.
My present employer's VPN client goes a step further and mangles the routing table to deny access to my own LAN while connected.
I guess they both suck pretty hard.
it was always only the 10.0.0.0/8 and some /24 ranges from 192.168.0.0/16 at my current job
You definitely could perform a watering hole attack if you compromised a site that always gets on the front page of HN. If I were an evil hacker and I wanted to compromise HN I would instead attack a site like rachelbythebay.com or some other popular blogger then just wait for HN’ers to click the link.
"Why rust is not a real programming language"
"It's a complete waste of time to learn C++ in 2020"
"Rust is 2x as fast as C++"
And then just point to an article about Rust the game.
Jokes aside, I love the name, the pun is nice, but man it makes searching a pain. I’ve ended up too many times in pages related to the game or to actual rust (as in iron).
Not a phishing attempt, I swear!
And you do it by a really invasive means that ensures everybody who knows what they're doing, but is curious enough to safely inspect further, gets marked as clueless. That leads to false positive and negative errors larger than the signal, yet you still expect to get useful data from it.
Then someone will point out watering hole attacks, where adversaries find where targets hang out socially, and attack that.
And then I'll point out that the inherent risk in HN links vs. unfamiliar emails are very different.
Strategies like teaching people to check links before clicking can prevent a number of different things (phishing, malware, etc.).
If you've already clicked a link, attackers know exactly what browser you're using, and that you're probably also willing to click on the next link they send you, allowing them to go from a blanket attack to a targeted one.
See an example from last summer: https://blog.coinbase.com/responding-to-firefox-0-days-in-th...
Defense in depth is just as much of a thing for personal security as network security.
It seems it would add a layer of protection to the weak link, which is the password.
On most sites that offer WebAuthn, certainly consumer sites, it's very much optional. So doing it the current way just adds a step after the password step: you need a (perhaps stolen) password to even find out there's a next step and you're not in after all.
But if we swap it, now we're telling bad guys if this account is protected up front. "This one is WebAuthn, forget it, same for the next one, aha, this one asks for a password, let's target that".
The people with WebAuthn are no worse off than before, maybe even arguably better in terms of password reuse, but everybody else gives away that they aren't protected.
So yeah, definitely some interaction should be required to consider it a failure, but the test email should also be as convincing and high-quality as possible.
Not just because it makes for a better test, but because it's more likely to be a valuable lesson for more people, people who thought they wouldn't fall for it.
Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.
Some email clients try not to do this, but that's actually somewhat recent, and I wouldn't say they're 'specifically designed to be able to display untrusted mail', rather 'they try to avoid common exploits when they become known'.
Most companies have e-mail addresses that are completely predictable, so you can pretty much assume that the address exists. If this really was a security risk, shouldn't you have UUID emails for everyone?
Also, how do you as an attacker know that it was a user and not an e-mail server checking those images?
It will reveal if they're working right now, what time they work otherwise, their IP address, their approximate physical location, their internet provider. A lot you can do with that.
> Most companies have e-mail addresses that are completely predictable
That's the point. Predict an email address, send it, find out if such a person works there.
If I email email@example.com and they open it then guess what I've worked out?
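To make the mechanics concrete, here's a toy sketch (my own illustration; attacker.example and all the addresses are placeholders) of how per-recipient tracking pixels turn an opened email into a confirmed live address:

```python
import hashlib
import hmac

SECRET = b"attacker-side secret"  # stays on the attacker's server

def pixel_url(address: str) -> str:
    """Per-recipient tracking pixel: the token identifies the guessed
    address, and the HMAC stops anyone from forging valid tokens."""
    token = hmac.new(SECRET, address.encode(), hashlib.sha256).hexdigest()[:16]
    return f"https://attacker.example/px/{token}.gif"

# Guess predictable addresses and embed a unique pixel in each mail...
guesses = [f"{first}.{last}@example.com"
           for first in ("jane", "john") for last in ("doe", "smith")]
lookup = {pixel_url(a): a for a in guesses}

# ...then any request the pixel server logs confirms that the address
# exists and that something rendered the mail.
hit = pixel_url("jane.doe@example.com")
assert lookup[hit] == "jane.doe@example.com"
```

The attacker's web server log does the rest: timestamp, IP, and user agent arrive for free with every pixel fetch.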
> Also how do you as an attacker know that it was user not a e-mail server checking those images?
It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.
I would assume a company like Gitlab would have such measures if this info was indeed abusable.
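For what it's worth, the server-side rewriting described above can be sketched in a few lines (a naive regex illustration of my own; mailproxy.example is a placeholder, and a real mail gateway would properly parse the HTML and log/scan the fetches):

```python
import re
from urllib.parse import quote

PROXY = "https://mailproxy.example/fetch?url="  # hypothetical proxy endpoint

def proxy_images(html: str) -> str:
    """Rewrite every img src to go through the company proxy, so the
    sender only ever sees the proxy's IP and user agent, never the
    reader's."""
    return re.sub(
        r'(<img\b[^>]*\bsrc=")([^"]+)(")',
        lambda m: m.group(1) + PROXY + quote(m.group(2), safe="") + m.group(3),
        html,
    )

out = proxy_images('<p>hi</p><img src="https://attacker.example/px/abc.gif">')
assert out == ('<p>hi</p><img src="https://mailproxy.example/fetch?url='
               'https%3A%2F%2Fattacker.example%2Fpx%2Fabc.gif">')
```

With this in place the tracking pixel still "fires", but it only ever tells the attacker about the proxy.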
Do you put your IP number on LinkedIn?
When you travel do you put the hotel you're staying in on LinkedIn?
Also, not everyone is on LinkedIn in the first place.
> It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.
The word 'arbitrary' doesn't make any sense to me in this context so not sure what you mean sorry.
In general, I don't know what you're trying to say - that there are ways to try to defend against these attacks? Yeah I know. I'm not sure what point of mine you're refuting or replying to anymore.
You asked 'What can be done with this information?' - this is the list of things you can do with that information. Can you defend against some of it? Yes to some extent. But it still leaks for many people.
Which companies own which IP address blocks is public information.
> When you travel do you put the hotel you're staying in on LinkedIn?
Conferences are announced; advertised, even.
> Also, not everyone is on LinkedIn in the first place.
That's OK, companies do a fine job publishing employee information all on their own.
> You asked 'What can be done with this information?' - this is the list of things you can do with that information.
You've moved from Step A, getting the information, to Step B, correlating the information, but you've left off Step C, which is profiting from the information. What is a benefit you can gain from knowing someone at some IP address opened your email? Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?
People are working from home! That's the entire context of this thread! They aren't using corporate IP addresses! And they don't do it when travelling either!
> Conferences are announced; advertised, even.
People travel for other things beside conferences. For example to a meeting or client site.
> That's OK, companies do a fine job publishing employee information all on their own.
Many don't do this.
> What is a benefit you can gain from knowing someone at some IP address opened your email?
I've already listed all these things.
> Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?
Not for people who aren't listed in a phone book or on the company's website.
You're listing exceptions, but they don't apply to everyone. If they don't apply to everyone then you can catch some people.
Try this to help yourself understand - people do in fact use tracking images. Therefore, do you think that maybe there's a benefit to doing this? Otherwise why do you think they do it?
> Agent signatures.
Can you expand? Googling isn't helping me understand what this means or how it works.
The user agent is the simplest example. That can be spoofed, but there are more subtle traces as well, all the way down the stack: https://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting
Now you know who's curious enough to open a shady-looking email, and perhaps click a link out of curiosity. It means your list for the next round of attacks is much smaller and more targeted, making it easier to evade detection.
That makes it less than ideal, but describing it as a ‘failure’ isn’t going to help any users pay more attention to phishing mails, because they get tons of legitimate emails with images in.
Email clients, just like browsers, are made specifically to handle untrusted content. That some clients then leak information is another matter. Just like WebSockets in modern browsers.
Meanwhile, in the real world, some of us have actual users. Pretending we should stop using widely used and useful technology while flailing your arms and shouting "but security!" is not going to help anyone.
What? No. No one is arguing that...
The only thing I'm refuting in my previous comment is "Some email clients try not to do this, but that's actually somewhat recent", which seems to indicate chrisseaton thinks email clients that don't load images are a new thing: that first we had email clients, then they added the option to hide images.
When in reality, email clients started out without images, and then added them.
Way to reply to a comment without reading the context and subsequently completely miss the point.
1. A user opening a phishing email means the email made it into their inbox (a spam-filtering failure, unless whitelisted for the sake of a test) and the user was moved to click on it based on the subject line. This in itself is the lowest-risk of the failure modes we're about to describe, but some risk still exists, considering that malware has spread through the simple opening of emails before.
2. Clicking a link in a phishing email is much higher risk and, regardless of how the phishing test was crafted, is considered with absolute certainty to be a failure mode of any phishing test or event, for three reasons: the user has definitively disclosed their presence within a company (email clients today may block trackers from loading, but clicking a link gives it away), the user has disclosed their receptivity to the message, and in a real-world attack, merely landing on the page may trigger an event such as the delivery of a malware payload via a working exploit against the browser and the underlying operating system.
3. Entering credentials is probably the most obvious one.
Rather than a "password alert" control that just alerts a user that their account was signed into, what would be more helpful is a second factor; a bare minimum would be a prompt on the user's phone indicating that a login attempt was detected and requesting confirmation before that attempt can succeed. This at least helps a user potentially preempt an attack against their own account (assuming they're trained on how this works) even if they never figure out that they've entered their credentials into a phishing site. And if the second-factor challenge is never met, an alert could automatically prompt the security team to triage the risky login.
Pardon typos. Voice to text.
Also I assume Gitlab already has 2fa.
Follow it up with a much more tailored spear phishing attack - https://www.knowbe4.com/spear-phishing/
Reworked to sales terms: it's the difference between a cold lead and a hot lead. A user who's clicked through has proven themselves to at least be warm or receptive to phishing campaigns in general.
As an adversary, I'd probably couple unique links (for tracking clicks) with heatmapping and other front-end tracking technologies to see what exactly the user is doing and how far they've gone before backing out, which helps me refine the attack. Most attackers probably wouldn't go that far (spear phishing the people who clicked would probably be the extent of it), but if someone is after something of particular value at your firm, there's no reason why they wouldn't put more effort into sharpening the attack.
Most people are probably not worth the effort of this, but I could imagine a source code hosting company could be, as a step to try to compromise some other software...
I'll go further:
It is impossible to never open a phishing email.
From addresses can be spoofed. Path information... well, it isn't available at all unless you open the email, is it? Also, it can be spoofed right up to the point it enters your company's email system. The Subject can be made appropriate and innocuous, or it can be made just as "OPEN THIS EMAIL IF YOU WANT TO KEEP YOUR JOB!" as the sender desires, and there isn't a person on Earth who has to respond to emails who will be able to divine the inner intent of the sender from just the Subject line.
Should corporate email systems prevent address spoofing? Argue amongst yourselves. My point is, they don't, or at least they haven't anywhere I've worked.
I can hear the developers raising Hell at just the suggestion that they don't have local root and free rein with brew, docker, and npm. PMs and marketing can be relied upon to react similarly to being told that they have to use SSO-equipped tools that have been through procurement, and not someone's random free shared Retrium or whatever. That SSO tends to add a zero or two to the cost makes them even more skittish, on top of the chance that the procurement process says no.
Trying to enforce SSO use can be challenging.
Out of curiosity, would you mind explaining what this means and how one would achieve it?
The counter-argument is that if an attacker sees you interacted with the phishing attempt they may try again with a more targeted attack in the future.
No, they weren’t, which means chances are you are using one that wasn’t.
> web browsers are designed to be able to load untrusted content from the internet safely
Yes, but they aren’t perfect.
No one would delete that email without reading it, just like the finance department when they see something claiming to be a bill. Now what if your email said "I have a sample website that demonstrates this bug"? Again, there's no reason for you not to click that. The only thing you can reasonably expect a person to "fail" on is getting there and then downloading a .exe or providing a set of credentials.
Please read my message again.
The second time, someone was about to steal $30k worth of cryptocurrency from me with a very convincing page on śtellar.org, where I nearly entered my wallet seed (did you notice the accent over the s? I didn't), and was saved by the fact that I keep my cryptocurrency in a hardware wallet, so I had no seed to enter.
Both times, what saved me from being phished wasn't that I'm trained or that I'm more observant (which my parents have no hope of ever being), but that I had used best practices so I didn't have to rely on being trained or observant.
I'm hoping WebAuthn takes off, which will really kill phishing for good, but you can take steps now: Use hardware U2F keys as second factors, use a password manager, don't use SMS auth. Make long, random passwords, etc.
"If you have a password manager, use the password manager's 'take me to site' function instead of anything on the email. Just open the site from your password manager instead"
As for WebAuthn and U2F, unfortunately they chose every trade-off possible away from practical usability. They're doomed. Go look up the impl/ux flow for WebAuthn right now for example.
We need less of that and more good ideas that people would actually implement and use.
Hell, it even supports a mode where you don't have to have a username or password at all (e.g. log in and try adding a key on https://pastery.net, you can then just log in with the key with no username/password at all).
The reason it's a cost upgrade? Those credentials have to live somewhere, and that means they're using Flash storage baked inside the FIDO2 key, ordinary FIDO keys don't have close to enough storage.
Next you might wonder: Wait, how does a FIDO key log me into Google if it isn't storing the keys?
Magic. Well, cryptography. When you registered, the key minted a key pair (elliptic-curve, most likely) and obviously gave Google the public key, but it also gave Google a large random-looking "identifier" which Google must hand back each time you authenticate. That identifier could, per the specification, just be some sort of hidden "serial number", but in reality what everybody does is encrypt the private key, or its moral equivalent, with an AEAD scheme using a device-specific secret key, and use that ciphertext as the identifier. So when Google gives back the "identifier", the FIDO device decrypts it to recover its own private key for the site, which it can use to log you in. The FIDO dongle doesn't actually even know you have a Google account, yet it works anyway. Magic!
FIDO2 is a much less clever trick, and that flash storage is too expensive to use it everywhere - but the UX is so seamless it makes username plus password look like they asked you to undergo a cavity search by comparison.
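The stateless trick can be sketched with a key-derivation variant (stdlib-only Python, my own toy illustration; real keys wrap an actual EC private key with an AEAD, and the "public key" here is just a stand-in hash):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # burned into the token at manufacture

def derive_private_key(credential_id: bytes, rp_id: str) -> bytes:
    """Stateless per-site key: nothing is stored on the token, the key
    is re-derived from the device secret, the credential ID, and the
    relying-party ID every time."""
    return hmac.new(DEVICE_SECRET, credential_id + rp_id.encode(),
                    hashlib.sha256).digest()

def register(rp_id: str):
    """Mint a credential: the site stores a random credential ID plus
    the 'public key' (stand-in: a hash of the private key)."""
    credential_id = os.urandom(16)
    priv = derive_private_key(credential_id, rp_id)
    public_key = hashlib.sha256(priv).digest()
    return credential_id, public_key

# The site stores (credential_id, public_key); the token stores nothing.
cred_id, pub = register("google.com")

# At login the site sends the credential ID back; the token re-derives
# the same private key and could sign the challenge with it.
priv_again = derive_private_key(cred_id, "google.com")
assert hashlib.sha256(priv_again).digest() == pub
# A phishing site with a different rp_id gets a different key entirely,
# which is why FIDO credentials can't be phished across domains.
assert derive_private_key(cred_id, "evil.example") != priv_again
```

Same property as the AEAD-wrap scheme: unlimited sites, zero storage on the token.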
Yes, the fact that you need flash storage for FIDO2 resident credentials is unfortunate, but that's why I'm excited about the new SoloKeys, which I heard will have enough flash space for thousands of keys. In comparison, the Yubikey has 25, which makes it useless for what I want it for, and they don't even advertise that limitation anywhere.
Logging in with this usernameless mode is just amazing, you can go to an untrusted computer, plug the key in, tap a button and you're logged in with no possibility of any credential theft anywhere (just make sure to log out afterwards).
try https://www.passwordless.dev/custom#heroFoot with the latest Firefox on a recent Android.
You can register and login with just a PIN code (or gesture pattern) from your Android
I finally have direct implementation experience (thanks COVID-19 I guess?) of WebAuthn now so I can speak confidently to this consideration.
I built a toy implementation on my vanity site and am gradually integrating it to a site friends built back when we all lived in the same city at the turn of the century. That site is old PHP (actually parts of it are terrifying Perl CGI code that looks like it was written before HTTP/1.1 existed) so my WebAuthn implementation is also PHP at the backend. This is neither the simplest, nor most capable technology, I have no doubt it can be done faster and better in your preferred language (it certainly can in mine).
I wrote <1 KLOC, no frameworks, no libraries beyond standard components, there's a little corner cutting in my PHP CBOR implementation but nothing likely to break in the real world for this purpose (we can treat all "I don't understand" cases as "Probably bogus, refuse entry" and be fine).
The JS is a little bit of Promises and some JSON processing, nothing every browser (that can do WebAuthn) doesn't offer already and I included it in my < 1 KLOC total.
Now you aren't going to get this done by thinking it's something else. Trying to do all the work on the client? Not going to make that happen. Hoping to hide all the WebAuthn credentials in a 64 character "password" field your database already has for each user? Not going to be like that.
But if a team has one person who understands in principle what this looks like, I'd say it's maybe a week for a backend person, a week for a frontend person and a week for a tester to spin up on what's going on and learn it. And that's the first time. And that's going to be markedly less for people who aren't learning the components (Web Crypto, public key crypto) as they go.
The pay off is huge. When you store passwords, that's a liability you've got there, it's like toxic waste you're storing. If somebody gets those passwords you can face fines, somebody might sue you, even at best you'll need a PR firm to help try to sell how sorry you are about it. But stored WebAuthn credentials aren't even secret. They make your preferred sock colour look like the crown jewels of PII by comparison, yet they're far stronger than a password as login credentials.
If you rarely use IDNs, toggling `network.IDN_show_punycode` in about:config can help with that - you would have seen `xn--tellar-2ib.org`.
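For the curious, Python's built-in idna codec (IDNA 2003; the third-party idna package implements the newer IDNA 2008 rules) shows the same thing programmatically:

```python
def reveal_punycode(domain: str) -> str:
    """Encode each label with the IDNA codec: any non-ASCII character
    forces an xn-- form, which makes lookalike domains jump out."""
    return domain.encode("idna").decode("ascii")

# The homoglyph from the parent comment becomes obvious:
assert reveal_punycode("śtellar.org") == "xn--tellar-2ib.org"
assert reveal_punycode("stellar.org") == "stellar.org"

def looks_suspicious(domain: str) -> bool:
    """Crude check a mail filter might use: flag any IDN label."""
    return "xn--" in reveal_punycode(domain)

assert looks_suspicious("śtellar.org")
assert not looks_suspicious("stellar.org")
```

Flagging every IDN obviously produces false positives for legitimately internationalized domains, which is exactly the trade-off the about:config toggle makes too.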
Then they asked for my password. I was pretty confused but almost gave it to them. It was just a coincidence that some con-artist had called me to try to phish me when I had been trying to reach out to the company.
Those assumptions where you know it’s real are dangerous because it can make you ignore red flags.
I can vouch without hesitation for the Yubico Security Key (the newer version has a "2" printed clearly on it and also does the FIDO2 protocol with resident credentials). It's a relatively expensive option for the purpose, but it's robust (lots of people put these on key rings and carry them everywhere) and simple, and the people who built it know what they're doing. But it's a USB-A device; if you need Bluetooth or USB-C or whatever, don't buy one hoping to like it.
That product skips all the fancy Yubico features other than being a Security Key, thus saving a big fraction of the cost - but there are much cheaper options that work if budget is tight, if you're just playing around, or to do testing for a potential deployment: I also have a "KEY-ID FIDO U2F Security Key" again USB A and it works nicely, but many people don't love the bright green LED (all the time, not just when authenticating, it's on all the time). However it clearly also feels cheaper than the Yubico product, this is not an heirloom product.
I'm excited about the new version of the SoloKeys (https://solokeys.com/) coming out next month, they aren't using secure elements like the Yubikeys are but I'm not really worried about someone stealing the key from me to extract the credentials with physical attacks, so they might be a good alternative.
Other than that, I eventually see password managers having built-in software FIDO2 implementations, so you just open your password manager and it automatically intercepts U2F requests and authenticates them, but that's a different thing.
Basically, anything you get that's U2F/FIDO2 compatible is fine, and much better than the second best thing (TOTP or whatever). Get something that's cheap enough for you to get two of, have one with you and the other at home as a backup, and that's it.
I think it is amazing that our red team made https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-tea... public so other companies can learn from it, and that they were comfortable sharing the results.
I try to emphasize to clients that it’s not a test but a phishing exercise akin to a fire drill. You don’t pass or fail a fire drill - you use it to assess how prepared you are for a fire. And if you find that you’re totally unprepared, well, wouldn’t you prefer to figure that out before anything is actually on fire?
The phishing emails are sometimes very good. They appear to be from senior management and address projects or other internal events everyone knows about. Some emails are very easy to spot, in the Nigerian prince category. It is very interesting that we have that 7% failure rate no matter how good or bad the phishing email is.
In general, I think internal phishing tests are a great way to educate the workforce.
Yes and no. I used to report phishing attempts to IT. Then we started running tests like every month, so I'd just delete suspicious messages and move on. Of course, that's when we got a real phishing message.
Frequent company-wide tests are, in my opinion, overboard. Once a year company-wide tests, followed up by more-frequent tests for sensitive groups and/or those who failed previous tests, makes more sense.
I should note that phishing tests are just one component of many company-wide education programs regarding physical, computer, data, and network security. My company deals with very sensitive data, so information security is a Big Deal.
The problem with targeting these tests is that new employees are constantly coming in and need to be educated/trained. Also, the persistent failures do not seem to be confined to only certain work groups; they're spread around the company fairly randomly, and they move.
Exactly how phishing tests are run probably depends quite a bit on what kind of company you have and what kind of employees work there. A workforce full of programmers would -- I would hope! -- be much less susceptible to phishing scams. The sales force, possibly more susceptible. That may be stereotyping, though.
It's the same issue as "ad companies"... if you don't cook the numbers that show your expensive service is worth it, then people will switch to the service that looks worse (this one has 7% fail rate but this one has 50% fail rate)
The reality is that humans are hard to secure, so defense in depth generally involves preventing compromised accounts from causing lots of damage, detecting them as early as possible, and controls for shutting them down.
Do people working in offices have IT staff come by to update their laptops? Would people in an office not open this email if they’d do so at home?
When I worked in an office nobody touched my laptop but me.
Also, please remember: it's not your laptop, it's the company's laptop, merely given to you to do your work on. Anybody within the company with the correct credentials would have the right to touch that laptop.
Not all companies do it this way. Many use a clear network and make services encrypted.
> Also, please remember, it's not your laptop
It is if you work for a bring-your-own-device company.
I very often work on my side projects and it is quite an annoyance having to move around with 2 laptops or paranoidly erasing my personal work from company computer.
Also, from my experience working at a FAANG-like company, they definitely don't seem to pinch every penny. We have company laptops for security reasons, but phones are bring-your-own, which they pay for. They also pay for WFH office equipment as long as you can argue it makes you more productive or is good for your health. Basically, anything that makes you more productive or sustainable, they will pay for.
> FAANG companies never did this
Actually it's allowed in 3 FAANGs that I know of.
> Anybody within the company with correct credential would have the right to touch that laptop.
That is only partially correct. In many European countries, people enjoy quite a bit of protection in work life as well. So in order not to do anything illegal, the employer has to carefully control access rights to your PC, and the ones who have access rights cannot do whatever they like. Reading emails is typically illegal - yes, emails on the work account! (Just to mention the legal concepts; of course, in today's architecture, emails are rarely stored on your PC.)
I understand that in the US employees enjoy little protection while at work. I would guess video surveillance in the toilets would still be unacceptable. Just to make the point: even if the location, paper, and water are paid for by the employer - and, more importantly, the time is paid - it shouldn't follow that the employer controls everything. (Although there have been reports that Amazon warehouse workers in the UK use bottles for their needs, because the employer does not provide more humane arrangements in practice. Some employers are always worse than others, and that's why I have stopped ordering from that company.)
If you fail, the last page is corporate training on the topic.
I was so inspired to not have to do corporate training, that I assume everything is a scam now.
In my work, the policy is three strikes and you are gone. The first two fails mean training with tests, and a third fail is an instant fireable event. As we work with clients and their data, this is strictly enforced too.
Also, what do you do if you have a draconian policy and someone important clicks on one?
I got curious about an obvious internal phishing test and decided to copy the link to another machine to see how convincing it was... I hadn't clicked, it wasn't my work machine, and I didn't enter any details - but I instantly received an email informing me I'd failed.
Yeah right, I obviously haven't done the associated failure training and I will forever refuse to do so out of principle.
People have different methods of exploring and learning to decide if something is legit or not. Nor should any "security policy" be a three-strikes, zero-tolerance policy. Everything needs context.
P.S. I'm pretty sure that the mental and behavioral damage done by this 3 strikes policy can easily be weaponized.
"Protected employees" is a weird way to put it, to say the least. It's not about protecting employees; it's about protecting GitLab the company and its customers. And the protection would have failed: the attacker would have needed to use the credentials (including the one-time credential) in real time. That makes the attack-site logic a bit more difficult, but it would have allowed them to break in. I doubt GitLab employees have to reauthenticate very often during a working day.
Well, unless they really use a challenge-response system. At least what I use as a GitLab customer is not - it's just standard OTP. I would provide a valid one-time password to a phishing site if I fell for it.
(Edit: reworded. Commenting on the phone is never a good idea...)
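The point about standard OTP being relayable is easy to see from the algorithm itself. A minimal RFC 6238 sketch: the six-digit code is just a function of a shared secret and the clock, so a phishing page that proxies it to the real site in real time gets a perfectly valid code.

```python
import base64, hmac, struct, time

def totp(secret_b32, now=None, digits=6, step=30):
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current 30-second counter.
    Nothing here binds the code to a website, which is why it can be phished
    and relayed."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t = 59s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59, digits=8))  # -> 94287082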
WebAuthn (and the older U2F) works, because it's recruiting the browser (which knows perfectly well which site this is) to mint site-specific credentials every time.
An attacker with a phishing site https://fake-gitlab.example/ has a few options, none of which work out for them:
* Just don't do WebAuthn, now they don't have a second factor and can't get in
* Ask the browser for legitimate WebAuthn credentials for fake-gitlab.example. But of course GitLab won't accept those credentials, any more than it'd accept a made-up username, so they're useless.
* Show the browser the "cookie" GitLab offered for GitLab WebAuthn credentials, the browser will cheerfully give a user's FIDO dongle this cookie and the fake-gitlab.example name, and the dongle will explain that it doesn't recognise the combination, maybe use a different dongle? No joy.
* Show the browser that cookie and tell it this is gitlab.com. But this is fake-gitlab.example not gitlab.com, so the browser will just raise a DOMException SecurityError in the fake site's JS code. The code can hide that easily, but it doesn't get any credentials.
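The origin binding that defeats all four options can be sketched server-side. When verifying a WebAuthn assertion, the relying party parses the clientDataJSON the browser produced and rejects anything minted for a different origin. (The field names follow the WebAuthn spec; the rest is a simplified stand-in for a full verifier, which would also check the signature.)

```python
import json

EXPECTED_ORIGIN = "https://gitlab.com"   # the relying party's real origin

def check_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """One step of WebAuthn assertion verification: the browser fills in the
    origin itself, so an assertion minted on a phishing page carries the
    phishing origin and is rejected here."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == EXPECTED_ORIGIN)

# The browser on the real site produces something like:
real = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                   "origin": "https://gitlab.com"}).encode()
# A phishing page only ever gets assertions stamped with its own origin:
fake = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                   "origin": "https://fake-gitlab.example"}).encode()
print(check_client_data(real, "abc123"))  # -> True
print(check_client_data(fake, "abc123"))  # -> False
```

The user never sees any of this; the browser does the honest bookkeeping on their behalf, which is exactly why the attack fails even against a careless user.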
Older sites tend to support U2F rather than WebAuthn. If you're on a greenfield install, you should just do WebAuthn, but it can be complicated in some scenarios to migrate from U2F especially if you're huge so it's understandable that not all have. In at least Chrome and Firefox the UX is identical anyway.
So, not differentiating them:
Facebook, GitHub and Google are three popular examples
You can also authenticate for some US Federal Government business on Login.gov (even if you aren't a US citizen)
And the UK's "Gov.uk verify" authentication can use Digidentity's offering which in turn relies on WebAuthn or U2F.
Edited to add:
AWS can do it, but, for some crazy reason they won't let you register more than one FIDO dongle. So I would not advise securing an "admin" AWS account this way, only users who can go to someone with admin privs to reset if they lose the dongle, but it's good for a team of developers I guess.
Not allowing multiple dongles goes against the intended security design, ignores a SHOULD in the WebAuthn standard, and also makes a bunch of the fairly complicated design pointless, I can't tell if Amazon are incompetent or had some particular weird reason to do it.
They support U2F, of course completely opt-in for users/customers.
The question that remains is do they mandate it for employees.
You can also use it on a personal Microsoft account.
Seemed like a pointless box ticking exercise.
Funnily enough, IT sent out an email about a Windows update rolling out (an upgrade to a new version like 1709) that looked even dodgier than their fake email. That had people reporting it as phishing.
Phishing emails often look pretty obvious - that’s part of the program! It filters out people you can’t trick and leaves you only with the most gullible ones.
Had the same at a previous company. If you use Gmail, IT needs to manually approve the mail to keep it out of the spam folder. A huge warning saying “this message has been excluded from your spam filter by your IT department” shows up at the top. People still click through...
For frauds that requires the attacker to spend time with the victim, sure. For a fully automated phishing attack? There is no reason to lose out on people early on.
And for a targeted attack against a company? Makes even less sense to make it obvious.
If you click it or open the attachment, you are automatically enrolled in training you must complete.
Very few people click anything remotely obscure, and most ask their manager whether the email is from a legitimate company we are dealing with.
Ex: I got a signup confirmation email from a legitimate website, asked our director about it.
He looked into it and confirmed with IT we had been signed up and infosec was fine with it.
We then relayed to the whole team that it is legitimate email.
I would say it's highly successful.
Basically, don't try to solve a problem by humans when it can be solved more efficiently by technology!
Phishing exercises are absolutely pointless in my experience and contribute zero to increasing the awareness. Shaming does not address the underlying human weaknesses that make us fall for phishing, they simply make the IT Guys look cooler, and increase CISOs' and Red Team budget. :-(
Some technical measures used here were requiring 2FA for all internal services, and scoping keys/POLP to limit the damage from one compromised key.
The purpose of exercises like these is not to shame someone who "fell for it", but to educate workers about phishing attacks and strengthen the human security layer.
These tests are nothing but CISOs (and Red Teams, and the whole industry around them) justifying their existence, and potentially doing a song-and-dance about it at the quarterly all-hands. Nothing more, nothing less. We can come back to this thread in another year/two years/five years/decade, and I can bet dollars-to-doughnuts the industry will still be training humans and claiming these pointless statistics about phishing. ;-)
On this note, see #6 "Educating Users", in Marcus Ranum's excellent article "The Six Dumbest Ideas in Computer Security": https://www.ranum.com/security/computer_security/editorials/...
There’s a button in the email client for “report phishing link” so I’m always on the lookout.
If you report a test evil message you get immediate feedback that you passed the test.
If you report a real one the security team immediately looks at it and lets you know if it’s legitimate or not.
I think it’s a good system.
I think phishing exercises should provide much more details, e.g. the following metrics:
(1) # targets who opened the email
(2) # who clicked the link
(3) # who entered a valid username (must match some identifier in the email - to prevent trolling)
(4) # who entered a password
(5) # who entered a valid(!) password
(6) # who entered an MFA code or approved a push
(7) # auth cookies stolen (full compromise)
Otherwise it's difficult to compare any of these tests and understand the actual risks and success rates.
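As a sketch, turning those stages into a comparable funnel report is straightforward. Every count below is invented purely for illustration:

```python
# Hypothetical phishing-exercise funnel; all counts are made up for
# illustration, matching the stages suggested above.
funnel = [
    ("opened email", 412),
    ("clicked link", 97),
    ("entered valid username", 51),
    ("entered a password", 34),
    ("entered a valid password", 29),
    ("completed MFA/push", 6),
    ("auth cookie stolen", 6),
]

targets = 500  # total addresses targeted
for stage, n in funnel:
    print(f"{stage:28s} {n:4d}  ({100 * n / targets:.1f}% of targets)")
```

Reporting per-stage rates like this makes two exercises comparable even when the lures differ in quality, since you can see where in the funnel people actually stopped.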
If your organization is above a certain size, remote code execution in your network is a given. There are several technical measures you can take to make these attacks _much_ harder to perform on Windows in general:
* Disable unsigned Office macro execution (if on windows with office)
* Disable mshta.exe or remove the .hta file association
If you can get away with it, productivity wise, enable whitelisting for all software.
Attackers can oftentimes still find weak points in your organization. It's not always the marketing or HR department on Windows that gets phished. I once watched a colleague phish a webdev on a MacBook with a recruitment 'challenge'.
It's like combat training: the goal isn't to train your army so they all become elite fighters and martial artists, the goal is to improve their fighting skills so that they stand a good chance of victory against similarly ranked enemy troops.
So, if your people fall for an Emotet phish, that's bad. If they fell for a pentester's phish where he did background research on his subjects and spoofed email header fields, that's normal - just like a Navy SEAL beating up an Air Force sergeant would be normal.
20% seems low if they're reasonably well put together emails. In the wild there's plenty of badly made, easy to spot phishing campaigns but one would hope any decent Red Team could put together a good one.
Someone told me they did the same thing at his company: sending out phishing emails to see who fell for it.
Those who did (management was disproportionately represented) had to attend some training lessons.
They sent another phishing email a few months later.
Most people who fell for it the first time, fell again, despite the training.
What the company has to communicate clearly is that failing the fake phishing test will not affect the employee's status in the company at all - but failure in a real phishing event would have at least some consequences.
For non-IT companies the training should begin and end with the message above and in between should be short and concise with ideas how and where to learn more about the subject.
As I read through these comments and the linked handbook, it kinda makes me want to work for a company like that. As important as security is, even the security handbook has an appropriate tone vs. treating people (CS-talented or not) as idiots who cannot be trusted. Good job, GitLab.
For a crafted spear-phish like the one used in this test, I wonder what the failure rate would be in larger, non-tech organisations?
I've been at companies where they did this and I usually 'fail the test'.
I received the email, but given the highly targeted nature (it wasn't very generic) I got curious. When you can tell it's an internal test, it's fun to see if you can trace it back to a particular person or department. So I created a VM on a secondary clean laptop and opened it there.
So based on the test I failed because they detected I followed a link.
I don't for one second believe that 1 in 5 Gitlab employees also did this, but I'm certainly distrustful of test numbers like this.
I thought about it, then I understood why. My company uses a lot of SaaS products - for submitting expenses, for giving appreciations, etc. These SaaS products regularly send emails, and they come from other domains.
When my company used all home grown or on premise web apps I never ever opened any emails coming from a different domain or open them very cautiously.
And now I think these saas emails have probably taught my brain to trust emails from other domains.
I am not sure.
The fun arose when the company employed third-party service providers that required employees to respond to an external email (infrequent but it did happen). Inevitably there had to be a certain amount of internal comms to let people know that this external email was in fact safe to respond to.
While sad, I think it's important to acknowledge this and not be too harsh on people who fail the first attempt...
... also because my biggest learnings came from really embarrassing moments and failures too.
Every time I see a colleague laid off, and then see one of these stupid phishing tests land in my inbox, I think about losing my job during a pandemic in order to ensure the security team still had the budget to pull this stupid crap.
It doesn't help that our own customers send us stupider looking emails that are actually legitimate.
One in five isn't bad. As you target them, based on content and recipients, the results can get much worse. And when non-tech companies run these, the results are...scary.
It's no wonder the most sought after entry point into a network, the most reliable and probably the cheapest, is phishing. All it takes is one out of 50,000 to fall for it.
My company has just decided to enable 2FA in order to combat phishing. I'm not sure how this would help. What amazes me is that we allow HTML email at all. That alone would greatly reduce successful phishing attempts. Requiring all emails to have valid signatures doesn't even seem too difficult for an organisation.
It says in the article that they never asked for passwords.
I wonder if the statistics would have been different if they did? You usually think twice before entering a password.
I also don't understand why they keep mentioning that their staff is all-remote. I don't see what difference that makes.
It seems logical to me that self phishing is a good way to educate on how to spot phishing/unusual emails, and to realize they are a target
Wouldn't that be legally required to end with something about selfishly self-phishing shellfish?
It does read a bit like SEO copy for a training consultancy that offers an alternative to the intuitive self-phish/reprimand cycle, but it brings up some interesting ideas.
However, spam is not a solved problem. Phishing is hard to stop, and spearphishing is basically impossible. Professionals you know get compromised, upstream toolchains get compromised, etc. The attack effort and risk vs. reward is wildly skewed in their favor. It has been the vector of compromise for many high-profile breaches.
Find a reputable company, pay them, and whitelist them in your spam filters. They will generate incredible phishing emails (using your domain and corporate info, since you let them) and give you a way to train your users that is irreplaceable.
Results are largely driven by the kind of phish that is sent and whether it's click-worthy.
Some companies do these exercises every month.
Of the 200 people, only one gave up his credentials. From marketing, as expected. We don't let them near anything important anyway.
A clever insider can get that to 100%, say with "Benefit Plan Updates." lol
(I know, I know, this is serious... )