
Gitlab phished its own work-from-home staff, and 1 in 5 fell for it - samizdis
https://www.theregister.co.uk/2020/05/21/gitlab_phishing_pentest/
======
londons_explore
It's important to note the nature of the failure.

Opening a phishing email should not be considered a failure. The email client
is specifically designed to be able to display untrusted mail.

Even clicking a hyperlink in a phishing email isn't too bad - web browsers are
designed to be able to load untrusted content from the internet safely.

It's only entering credentials by hand into a phishing website, or downloading
and executing something from a phishing site that is a _real_ failure.

IT departments should probably enforce single sign-on and use a password alert
to detect a corporate password being typed into an unrecognized webpage. They
should also prevent downloads of executable files from non-whitelisted origins
for most staff.
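The "password alert" idea can be sketched in a few lines: store only a salted hash of the corporate password, and compare anything submitted in a password field against it before it leaves for a non-SSO domain. A minimal Python sketch - the domain names and password below are made up for illustration:

```python
import hashlib
import hmac

SSO_DOMAINS = {"sso.example.com"}   # hypothetical corporate SSO origin
SALT = b"per-install-random-salt"   # generated once at install time

def fingerprint(password):
    """Keep only a salted, slow hash of the corporate password - never the
    password itself."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

CORPORATE_FP = fingerprint("correct horse battery staple")  # illustrative

def check_submission(domain, typed_password):
    """Called when a password form is submitted; warn if the corporate
    password is headed anywhere other than the SSO domain."""
    if hmac.compare_digest(fingerprint(typed_password), CORPORATE_FP) \
            and domain not in SSO_DOMAINS:
        return "ALERT: corporate password entered on untrusted site"
    return "ok"
```

Google's Password Alert extension is built on roughly this idea; the key property is that only a hash is ever compared, so the checker itself can't leak the password.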

~~~
chrisseaton
> The email client is specifically designed to be able to display untrusted
> mail.

Email clients often do things like load images, which can tell the sender
you've read the email, which is an information leak.

Some email clients try not to do this, but that's actually somewhat recent,
and I wouldn't say they're 'specifically designed to be able to display
untrusted mail', rather 'they try to avoid common exploits when they become
known'.

~~~
mewpmewp2
What can be done with this information?

Most companies have e-mail addresses that are completely predictable, so you
can pretty much assume that this e-mail address exists. If this really was a
security risk shouldn't you have UUID emails for everyone?

Also, how do you as an attacker know that it was a user and not an e-mail
server checking those images?

~~~
chrisseaton
> What can be done with this information?

It will reveal if they're working right now, what time they work otherwise,
their IP address, their approximate physical location, their internet
provider. A lot you can do with that.

> Most companies have e-mail addresses that are completely predictable

That's the point. Predict an email address, send it, find out if such a person
works there.

If I email unusual.name@sis.gov.uk and they open it then guess what I've
worked out?

> Also, how do you as an attacker know that it was a user and not an e-mail
> server checking those images?

Agent signatures.

~~~
pbowyer
>> Also, how do you as an attacker know that it was a user and not an e-mail
server checking those images?

> Agent signatures.

Can you expand? Googling isn't helping me understand what this means/how it
works.

~~~
chrisseaton
Also called agent fingerprinting. You can look at exactly how the agent is
responding and make educated guesses at what agent it is. You think one HTTP
request looks like any other, but there's enough little bits of information
here and there to leak info.

The user agent is the simplest example. That can be spoofed, but there are
more subtle traces as well, all the way down the stack:
[https://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting](https://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting).
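As a toy illustration, a tracking-pixel endpoint can make a rough guess from header signals alone. The specific heuristics below are illustrative, not a complete fingerprinting scheme:

```python
def classify_agent(headers):
    """Guess whether a tracking-pixel request came from a human's mail
    client or from a server-side scanner, using a few header signals.
    Pure heuristics; thresholds and signatures are illustrative only."""
    ua = headers.get("User-Agent", "").lower()
    # Known proxy/scanner signatures. GoogleImageProxy, for example,
    # fetches images server-side, masking the reader's own client.
    if "googleimageproxy" in ua or "bot" in ua or ua == "":
        return "server"
    # Real browsers and mail clients send rich Accept-Language headers;
    # bare library fetches (curl, python-requests) usually do not.
    if "accept-language" not in {k.lower() for k in headers}:
        return "likely-server"
    return "likely-human"
```

Real fingerprinting goes much deeper (TLS handshake order, TCP options, timing), but even this header-level pass separates most automated scanners from human opens.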

~~~
pbowyer
Thanks!

------
StavrosK
I'm a web developer with a focus on security and _I_ nearly got phished
multiple times. Once it was a legitimate-looking email from Linode, which I
opened and was fooled by (I didn't check the domain because I trusted my spam
filter too much to consider that it might be fake). I was saved by my password
manager not auto-filling the credentials because the domain didn't match,
which made me look and see that I was on the wrong domain.

The second time, someone was about to steal $30k worth of cryptocurrency from
me with a very convincing page on śtellar.org, where I nearly entered my
wallet seed (did you notice the accent over the s? I didn't), and was saved by
the fact that I keep my cryptocurrency in a hardware wallet, so I had no seed
to enter.
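A password manager's exact-domain match defeats this homoglyph trick automatically, but you can also flag such domains directly: any internationalized domain converts to a different ASCII ("punycode") form than what it displays. A small sketch using Python's stdlib IDNA codec:

```python
def flag_lookalike(domain, trusted):
    """Flag domains whose displayed form hides non-ASCII lookalike
    characters: IDN domains encode to a distinct ASCII 'xn--' form."""
    if domain in trusted:
        return "trusted"
    try:
        ascii_form = domain.encode("idna").decode("ascii")
    except UnicodeError:
        return "invalid"
    if ascii_form != domain.lower():
        return f"IDN domain, really {ascii_form}"
    return "unknown"
```

Browsers apply similar logic when deciding whether to display a domain in Unicode or fall back to the raw `xn--` form.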

Both times, what saved me from being phished wasn't that I'm trained or that
I'm more observant (which my parents have no hope of ever being), but that I
had used best practices so I didn't _have_ to rely on being trained or
observant.

I'm hoping WebAuthn takes off, which will really kill phishing for good, but
you can take steps now: Use hardware U2F keys as second factors, use a
password manager, don't use SMS auth. Make long, random passwords, etc.

~~~
hombre_fatal
Two years ago I was fooled by "colnbase.com" (L instead of i) to the point
that I was annoyed that 1Password "wasn't working". Of course, 1Password
didn't have a uname/password for a phishing site. I almost opened it to copy
the password in manually when I spotted the L. It's sobering.

As for WebAuthn and U2F, unfortunately they chose every trade-off possible
away from practical usability. They're doomed. Go look up the impl/ux flow for
WebAuthn right now for example.

We need less of that and more good ideas that people would actually implement
and use.

~~~
StavrosK
Really? What do you think is impractical about it? I just tap my USB key and
I'm logged in.

Hell, it even supports a mode where you don't have to have a username or
password at all (e.g. log in and try adding a key on
[https://pastery.net](https://pastery.net), you can then just log in with the
key with no username/password at all).

~~~
tialaramex
Note that to do the latter ("Usernameless login") you need a FIDO2 key. A
relatively modern Yubico product can do FIDO2, but cheaper alternatives mostly
don't offer this.

The reason it's a cost upgrade? Those credentials have to live somewhere, and
that means they're using flash storage baked inside the FIDO2 key; ordinary
FIDO keys don't have close to enough storage.

Next you might wonder: Wait, how does a FIDO key log me into Google if it
isn't storing the keys?

Magic. Well, cryptography. When you registered the key it minted a key pair
(Elliptic curve most likely) and obviously it gave Google the public key, but
it also provides Google a large random-looking "Identifier" which Google must
give back each time you authenticate. That identifier could, by the
specification, just be some sort of hidden "serial number", but in reality
what everybody does is encrypt the _private_ key - or its moral equivalent -
with an AEAD scheme using a device-specific secret key, and then use that as the
identifier. So when Google gives you back the "identifier" the FIDO device
decrypts it to discover its own private key for the site which it can use to
log you in. The FIDO dongle doesn't actually even _know_ you have a Google
account, yet it works anyway. Magic!
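The same trick can be sketched with nothing but the standard library if we use the derivation variant instead of AEAD wrapping - this is Yubico's documented U2F scheme, where the "identifier" is a random nonce and the per-site private key is re-derived as an HMAC of the app ID and nonce under the device secret. All names below are illustrative:

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # burned into the dongle at manufacture

def register(app_id):
    """Mint a credential for a site. The dongle stores nothing: the nonce
    (handed to the site as the 'key handle'/identifier) plus the device
    secret are enough to re-derive the private key later."""
    nonce = os.urandom(32)
    private_key = hmac.new(DEVICE_SECRET, app_id.encode() + nonce,
                           hashlib.sha256).digest()
    # The site stores the nonce and the matching public key.
    return nonce, private_key

def authenticate(app_id, key_handle):
    """Re-derive the same per-site private key from the handle the site
    gives back - no per-site storage on the device."""
    return hmac.new(DEVICE_SECRET, app_id.encode() + key_handle,
                    hashlib.sha256).digest()
```

Binding the app ID into the derivation is also what makes the scheme phishing-resistant: a lookalike domain derives a different key and the signature check fails.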

FIDO2 is a much less clever trick, and that flash storage is too expensive to
use it everywhere - but the UX is so seamless it makes username plus password
look like they asked you to undergo a cavity search by comparison.

~~~
antpls
I don't think you need a FIDO2 key for usernameless.

Try
[https://www.passwordless.dev/custom#heroFoot](https://www.passwordless.dev/custom#heroFoot)
with the latest Firefox on a recent Android.

You can register and log in with just a PIN code (or gesture pattern) from
your Android phone.

~~~
tialaramex
That's fair, there probably are more people with a suitable Android phone than
with a FIDO or FIDO2 dongle. You're correct that the phone (having more than
enough storage) offers this feature, and unlike a dongle I think you can be
comfortable the phone won't "run out" of space if you sign up for frivolous
nonsense this way.

------
sytse
Maybe this article came about because of my tweet:
[https://twitter.com/sytses/status/1263216521175642112?s=20](https://twitter.com/sytses/status/1263216521175642112?s=20)
“ I'm grateful for the red team at GitLab doing an amazingly realistic
phishing attack [https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-
tea...](https://gitlab.com/gitlab-com/gl-security/gl-redteam/red-team-tech-
notes/-/tree/master/RT-011%20-%20Phishing%20Campaign) with custom domains and
realistic web pages. The outcome was that 20% of team-members gave credentials
and 12% reported the attack.”

I think it is amazing that our red team made [https://gitlab.com/gitlab-
com/gl-security/gl-redteam/red-tea...](https://gitlab.com/gitlab-com/gl-
security/gl-redteam/red-team-tech-
notes/-/tree/master/RT-011%20-%20Phishing%20Campaign) public so other
companies can learn from it, and that they were comfortable sharing the
results.

~~~
elliekelly
I’ve seen this a lot in my work where companies hesitate to conduct phishing
exercises that are “too convincing” (or, put another way, too realistic)
because they fear documenting poor results. Of course that means the exercise
and the learning opportunities are much less impactful. I’ll concede it’s a
little different with financial institutions because regulators and auditors
will usually see the results at some point but I really admire Gitlab’s
commitment to transparency.

I try to emphasize to clients that it’s not a test but a phishing _exercise_
akin to a fire drill. You don’t pass or fail a fire drill - you use it to assess
how prepared you are for a fire. And if you find that you’re totally
unprepared, well wouldn’t you prefer to figure that out before anything is
actually on fire?

------
LordGrey
My company regularly runs internal phishing tests like this, using an outside
organization. We apparently have a near-constant 7% failure rate. Personally,
I cheat: Long ago I discovered that the outside org puts some identifying
headers into the email, so I wrote an email rule that adds "[PHISHME]" to the
subject line.

The phishing emails are sometimes very good. They appear to be from senior
management and address projects or other internal events everyone knows about.
Some emails are very easy to spot, in the Nigerian prince category. It is very
interesting that we have that 7% failure rate no matter how good or bad the
phishing email is.

In general, I think internal phishing tests are a great way to educate the
workforce.

~~~
JumpCrisscross
> _My company regularly runs internal phishing tests like this... I think
> internal phishing tests are a great way to educate the workforce_

Yes and no. I used to report phishing attempts to IT. Then we started running
tests like every month, so I'd just delete suspicious messages and move on. Of
course, that's when we got a real phishing message.

Frequent company-wide tests are, in my opinion, overboard. Once a year
company-wide tests, followed up by more-frequent tests for sensitive groups
and/or those who failed previous tests, makes more sense.

~~~
WrtCdEvrydy
That's the thing: reporting a phishing email in my org excludes you from one
month's worth of emails... then two months... then four months... I spoke to
the guy in charge and he checked (my account is set to not receive any for 2
years).

------
heipei
I'm not a huge fan of these phishing-test exercises. I run the service at
[https://urlscan.io](https://urlscan.io) which a lot of folks use regularly to
check out suspicious links in mails / chat messages. I've been approached by
some of these phishing-test companies asking me to prevent scanning their
domains/IPs. They flat-out told me that they weren't happy about users using
my service to check the link, which I always found odd, and I never got an
explanation for it. Probably less spectacular findings for these companies if
users can figure out a phishing test by themselves...

~~~
WrtCdEvrydy
> Probably less spectacular findings for these companies if users can figure
> out a phishing test by themselves...

It's the same issue as "ad companies"... if you don't cook the numbers that
show your expensive service is worth it, then people will switch to the
service that looks worse (this one has 7% fail rate but this one has 50% fail
rate)

~~~
tikkabhuna
Perhaps they should look at building an integration that shows how often
urlscan.io is flagging the phishing-test companies' campaigns?

------
ilikebits
When I worked at Google, orange teams weren't allowed to use phishing tactics
because they worked so reliably every single time that they provided no new
information about the security of internal systems.

The reality is that humans are hard to secure, so defense in depth generally
involves preventing compromised accounts from causing lots of damage,
detecting them as early as possible, and having controls for shutting them
down.

------
chrisseaton
I don’t understand how working from home is relevant to this?

Do people working in offices have IT staff come by to update their laptops?
Would people in an office not open this email if they’d do so at home?

When I worked in an office nobody touched my laptop but me.

~~~
unnouinceput
While in the office you're connected to the internal network, supposedly
within the internal domain, and the IT dept. would have direct access to push
updates automatically. When outside, you're supposed to connect via a VPN
(best case) or communicate via something encrypted (email, ftp etc), but
you'll need to enter your credentials somewhere.

Also, please remember, it's not your laptop, it's the company's laptop, merely
given to you to do your work on. Anybody within the company with the correct
credentials would have the right to touch that laptop.

~~~
chrisseaton
> While in office you're connected to internal network

Not all companies do it this way. Many use an open network and encrypt the
services instead.

> Also, please remember, it's not your laptop

It is if you work for a bring-your-own-device company.

~~~
unnouinceput
Bring your own device is bad for companies. Any of them using this approach
are just begging to have their talent pool drained. If I do work for a company
on my own device, there is absolutely no difference between my personal
research and the company's research, and in the eyes of the law these
companies will always lose if they try to enforce some "secret sauce" not
going to their competition. Ever wonder why FAANG companies never did this,
those that will pinch every penny from whatever corner they can? Exactly
because they know too well they'd lose badly. Just look at that guy that got
bankrupted by Google after he went to Uber - HN had an article a few weeks
back.

~~~
mewpmewp2
Shouldn't that exactly be appealing to the talent, not having to worry about
the company claiming their side projects as their own?

I very often work on my side projects and it is quite an annoyance having to
move around with 2 laptops or paranoidly erasing my personal work from company
computer.

Also, from my experience working at a FAANG-like company, they definitely
don't seem to pinch every penny. We have company laptops for security reasons,
but phones are bring-your-own, which they pay for. They also pay for WFH
office equipment as long as you can reason that it makes you more productive
or is good for your health. Basically anything that makes you more productive
or sustainable, they will pay for.

~~~
unnouinceput
use a VPN to work on your own server/computer from the company issued device.
This way there is no need to keep anything of your on their.

------
mensetmanusman
Our company informed us 2 years ago that they will be attempting to phish us
continuously (no frequency specified).

If you fail, the last page is corporate training on the topic.

I was so inspired to not have to do corporate training, that I assume
everything is a scam now.

~~~
blntechie
> If you fail, the last page is corporate training on the topic.

In my work, the policy is 3 strikes and you are gone. The first two fails are
trainings with tests and the third fail is an instant fireable event. As we
work with clients and their data, this is strictly enforced too.

~~~
Igelau
Sounds like a hellhole. That policy is perfectly tailored for corruption and
paranoia.

~~~
fargle
Concur. I do hope that the "well meaning" security team that thought this up
is diligent in investigating and accounting for false positives: "Oh, I
clicked the link in the phishing email IN A VM to see what the F* it was" and
"I entered 'fakeceo' and 'mrpassword123'".

People have different methods of exploring and learning to decide if something
is legit or not. Nor should any "security policy" be a 3-strikes
zero-tolerance policy. Everything needs context.

P.S. I'm pretty sure that the mental and behavioral damage done by this 3
strikes policy can easily be weaponized.

Shame.

------
usr1106
> Hunt said GitLab has implemented multi-factor authentication and that would
> have protected employees had the attack not been a simulation.

"Protected employees" is a weird way to put it to say the least. It's not
about protecting employees, it's about protecting gitlab company and their
customers. And the protection would have failed. The attacker would have
needed to use the credentials (including the one-time credential) in real-
time. That makes the attack-site logic a bit more difficult, but it would have
allowed to break in. I doubt gitlab employees have to reauthenticate very
often during a working day.

Well, unless they really use a challenge response system. At least what I use
as a gitlab customer is not, it's just standard OTP. I would provide a valid
one time password to a phishing site, should I fall for it.

(Edit: reworded. Commenting on the phone is never a good idea...)

~~~
Nullabillity
Gitlab.com has used U2F/WebAuthn for years (not sure which, but they're both
isolated by origin anyway).

~~~
usr1106
Right, according to
[https://en.m.wikipedia.org/wiki/Universal_2nd_Factor](https://en.m.wikipedia.org/wiki/Universal_2nd_Factor)
it's U2F. So I would not be surprised if gitlab requires their employees to
use the dongle instead of the simple OTP which they allow for customers/users.
It's a shortcoming of the article not to mention whether that's the case.

------
Jonnax
At a place I worked, they did something similar with the most obviously fake
email possible.

Seemed like a pointless box ticking exercise.

Funnily enough, IT sent out an email about a Windows update rolling out (an
upgrade to a new version like 1709) that looked even dodgier than their fake
email. That had people reporting it as phishing.

~~~
jiofih
> Seemed like a pointless box ticking exercise.

Phishing emails often look pretty obvious - that’s part of the program! It
filters out people you can’t trick and leaves you only with the most gullible
ones.

Had the same at a previous company. If you use GMail, IT needs to manually
approve the mail to avoid it going into the spam folder. A huge warning saying
“this message has been excluded from your spam filter by your IT department”
shows up at the top. People still click through...

~~~
zulln
> Phishing emails often look pretty obvious - that’s part of the program! It
> filters out people you can’t trick and leaves you only with the most
> gullible ones.

For frauds that require the attacker to spend time with the victim, sure. For
a fully automated phishing attack? There is no reason to lose out on people
early on.

And for a targeted attack against a company? Makes even less sense to make it
obvious.

~~~
Rexxar
It could be a strategy to make people less careful: send one or two "obvious"
fake phishing emails and then the real one a little later, when they are
confident they can avoid phishing.

------
sergers
My company has sent phishing emails every few weeks for about the past 5
years.

You click one or open an attachment, and you are automatically enrolled in
training you must complete.

Very few people click anything remotely obscure, and people ask their manager
if an email is from a legitimate company we are dealing with.

Ex: I got a signup confirmation email from a legitimate website and asked our
director about it. He looked into it and confirmed with IT that we had been
signed up and infosec was fine with it.

We then relayed to the whole team that it was a legitimate email.

I would say it has been highly successful.

------
whydoyoucare
A better approach is to implement anti-phishing measures way up the chain --
at the MTA level itself. Simple ideas like stripping URLs from mail, stripping
attachments if the email originates outside the organization, converting HTML
email to plain text, or disallowing HTML email entirely yield substantial
benefit in stopping phishing.
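Two of these measures - stripping URLs and flattening HTML to plain text - are simple enough to sketch with the Python standard library (a toy filter, not a production MTA milter):

```python
import re
from html.parser import HTMLParser

URL_RE = re.compile(r"https?://\S+")

def strip_urls(text):
    """Replace links in the body so users can't click through."""
    return URL_RE.sub("[link removed]", text)

class TextExtractor(HTMLParser):
    """Flatten HTML mail to plain text, discarding tags (and thus hrefs)."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def html_to_text(html):
    """Convert an HTML body to plain text, then strip any remaining URLs."""
    p = TextExtractor()
    p.feed(html)
    return strip_urls("".join(p.chunks))
```

In a real deployment this logic would sit behind a milter or content-filter hook in the MTA, applied only to mail arriving from outside the organization.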

Basically, don't try to solve a problem by humans when it can be solved more
efficiently by technology!

Phishing exercises are absolutely pointless in my experience and contribute
zero to increasing awareness. Shaming does not address the underlying human
weaknesses that make us fall for phishing; it simply makes the IT guys look
cooler and increases CISOs' and red-team budgets. :-(

~~~
somebodythere
The best security is multi-layered. The human layer is the weakest part of any
security system, and both technical and human measures must be taken to
achieve defense in depth.

Some technical measures used here were requiring 2FA for all internal
services, and scoping keys/POLP to limit the damage from one compromised key.

The purpose of exercises like these is not to shame someone who "fell for it",
but to educate workers about phishing attacks and strengthen the human
security layer.

~~~
whydoyoucare
Two decades of experience suggests that "strengthening human security by
training" ain't happening, no matter how hard/smart you try. The technical
controls have to be beefed up to a point where that human-weak-link is
eliminated.

These tests are nothing but CISOs (and red teams, and the whole industry
around them) justifying their existence, and potentially doing a
song-and-dance about it at the quarterly all-hands. Nothing more, nothing
less. We can come
back to this thread in another year/two years/five years/decade, and I can bet
dollars-to-doughnuts, the industry will still be training humans, and claiming
these pointless statistics about phishing. ;-)

On this note, see #6 "Educating Users", in Marcus Ranum's excellent article
"The Six Dumbest Ideas in Computer Security":
[https://www.ranum.com/security/computer_security/editorials/...](https://www.ranum.com/security/computer_security/editorials/dumb/)

------
fossuser
We do this a lot where I work and it’s fun.

There’s a button in the email client for “report phishing link” so I’m always
on the lookout.

If you report a test evil message you get immediate feedback that you passed
the test.

If you report a real one, the security team immediately looks at it and lets
you know if it’s legitimate or not.

I think it’s a good system.

------
Rafert
Time to send every employee a FIDO compatible security key, implement
WebAuthn, and make it mandatory for employee login.

~~~
sokoloff
Is that meaningfully more secure than something like Auth0 with Duo MFA?
(Which doesn’t require a dongle hanging off my USB-C port and works seamlessly
on virtual machines.)

------
rhipitr
The problem seems to me that companies and orgs want to send emails when it is
convenient for them to do so (paystub ready, benefits enrollment open click
here, etc.) but distribute the cognitive load to their employees/customers to
figure out which emails are trustworthy and which emails are not. You
eventually get trained to click on links in emails as a form of legitimate
interaction.

[https://www.cl.cam.ac.uk/~rja14/book.html](https://www.cl.cam.ac.uk/~rja14/book.html)

------
kerng
If you look at the logs of phishing exercises you can often see employees
messing with the red team, like entering invalid creds for CISO or CEO and
stuff.

I think phishing exercises should provide much more details, e.g. the
following metrics:

(1) # targets opened email

(2) # clicked link

(3) # who entered valid username (must match some identifier in email - to
prevent trolling)

(4) # who entered password

(5) # entered valid(!) password

(6) # entered MFA/code or did Push

(7) # auth cookies stolen (full compromise)

Otherwise it's difficult to compare any of these tests and understand the
actual risks and success rates.
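Given a raw event log from the phishing harness, the funnel above reduces to counting unique targets per stage. A sketch - the stage names and log format here are hypothetical:

```python
# Funnel stages, from least to most severe.
FUNNEL = ["opened", "clicked", "entered_username", "entered_password",
          "password_valid", "mfa_completed", "cookie_stolen"]

def funnel_report(events, n_targets):
    """events: (target_id, stage) pairs logged by the phishing harness.
    Count unique targets per stage so retries aren't double-counted;
    return the percentage of targets reaching each stage."""
    per_stage = {s: set() for s in FUNNEL}
    for target, stage in events:
        if stage in per_stage:
            per_stage[stage].add(target)
    return {s: round(100 * len(per_stage[s]) / n_targets, 1) for s in FUNNEL}
```

Reporting the whole funnel (rather than a single "fell for it" number) is what makes two different exercises comparable.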

------
Trisell
The company I work at, which works with and holds a significant amount of PII,
regularly phishes our staff. It’s usually between 1 in 5 and 1 in 4 that will
click on the link. Despite all of the education and quarterly repeated
phishes, those numbers really aren’t improving much. I think at some point you
have to accept that end users will click on things, and add additional
protections in place to help mitigate the risk.

------
awd
I regularly perform tests like these. Overall there's a flat 10% 'critical
failure' rate across organizations. You send a phishing e-mail pretending to
be from the IT department, with some instructions to install the 'anti-virus
scanner' or whatever, and 1 out of 10 people will open the e-mail, click the
link, give their credentials, follow all instructions, click through all
warnings and infect their machines.

If your organization is above a certain size, remote code execution in your
network is a given. There are several technical measures you can take to make
it _much_ harder to perform these attacks on Windows in general:

* Disable unsigned Office macro execution (if on windows with office)

* Disable mshta.exe or remove the .hta file association

If you can get away with it, productivity wise, enable whitelisting for all
software.

Attackers can often still find weak points in your organization. It's not
always the marketing or HR department on Windows that gets phished. I once
observed a colleague phish a webdev on a MacBook with a recruitment
'challenge'.

------
badrabbit
100% of people will fall for a good spear-phish; when you fail to accept that,
you start doing things like punishing people who fail. The point of these
tests is to raise awareness and train people so that successful phishing
attacks will need that much more targeting precision in addition to accuracy.

It's like combat training: the goal isn't to train your army so they all
become elite fighters and martial artists, the goal is to improve their
fighting skills so that they stand a good chance at victory against similarly
ranked enemy troops.

So, if your people fall for an Emotet phish, that's bad. If they fell for a
pentester's phish where he did background research on his subjects and spoofed
email header fields, that's normal, just like a Navy SEAL beating up an Air
Force sergeant would be normal.

------
m0zg
All companies should be doing internal penetration/security testing. If you
don't do it, someone in China or Russia will do it for you, you just won't
know. I hope GitHub is doing this too. Google, for example, has an entire team
whose task it is to exploit such attack vectors and close the holes in all
sorts of products and processes, often with stunning results. I'm not sure if
the rest of FAANG does this, although I'd be surprised if Facebook doesn't do
essentially the same. I would not be surprised if Amazon or Apple don't do it,
at least not to the extent you'd see at Google (no holds barred, the red team
gets to pwn everything). Netflix, I'm not sure, they probably have something.
Microsoft probably doesn't do it, since it'd make people look bad, and in
their back-stabbing corporate culture people can't afford to look bad.

------
lucideer
Is this newsworthy? My company does this very regularly, and the phishes are
well crafted and convincing.

20% seems low if they're reasonably well-put-together emails. In the wild
there are plenty of badly made, easy-to-spot phishing campaigns, but one would
hope any decent red team could put together a good one.

------
illuminated
I support this action and wish more companies did it. It would tremendously
improve security in every organization. The people that "bought" the fake
login link feel ashamed, I'm sure, and they'd think twice before logging in
next time. Kudos to GitLab.

~~~
Leherenn
Not really.

Someone told me they did the same thing at his company: sent out phishing
emails to see who fell for it. Those who did (management was
disproportionately represented) had to attend some training lessons.

They sent another phishing email a few months later. Most people who fell for
it the first time fell again, despite the training.

~~~
illuminated
I don't think additional training is needed, at least in an IT company. The
fake-phishing success should be enough to make everyone who fell curious
enough to at least research the subject.

What the company has to make sure to communicate clearly is that failure in
the fake phishing test will not affect the employee's status in the company at
all, but eventual failure in a real phishing event would have at least some
consequences.

For non-IT companies the training should begin and end with the message above
and in between should be short and concise with ideas how and where to learn
more about the subject.

------
jamieweb
This is especially concerning considering that GitLab is a technology company
consisting of mostly technical staff.

For a crafted spear-phish like the one used in this test, I wonder what the
failure rate would be in larger, non-tech organisations?

------
fargle
I appreciate the article, GitLab, and da'reg commenter "Spencer" for pointing
out that GitLab publishes their security handbook:
[https://about.gitlab.com/handbook/security/](https://about.gitlab.com/handbook/security/)

As I read through these comments and the linked handbook, it kinda makes me
want to work for a company like that. As important as security is, even the
security handbook has an appropriate tone vs. treating people (CS-talented or
not) as idiots who cannot be trusted. Good job, GitLab.

------
donohoe
I take the point but I also take it with a degree of realism.

I've been at companies where they did this and I usually 'fail the test'.

I received the email, but given the highly targeted nature (it wasn't very
generic) I got curious. When you can tell it's an internal test, it's fun to
see if you can trace it back to a particular person or department. So I
created a VM on a secondary clean laptop and opened it.

So based on the test I failed, because they detected I followed a link.

I don't for one second believe that 1 in 5 Gitlab employees also did this, but
I'm certainly distrustful of test numbers like this.

------
unlimit
My company does this often. It sends legitimate-looking emails, and I finally
fell for one recently.

I thought about it, and then I understood why. My company uses a lot of SaaS
products - for submitting expenses, for giving appreciations, etc. These SaaS
products regularly send emails, and they come from other domains.

When my company used all home-grown or on-premise web apps, I never ever
opened any emails coming from a different domain, or opened them very
cautiously.

And now I think these SaaS emails have probably taught my brain to trust
emails from other domains.

I am not sure.

------
KineticLensman
I worked at a place where they sometimes sent phishing emails to see what
people did. They also had mandatory annual training on e-risks, which wasn't
in fact too painful.

The fun arose when the company employed third-party service providers that
required employees to respond to an external email (infrequent but it did
happen). Inevitably there had to be a certain amount of internal comms to let
people know that this external email was in fact safe to respond to.

~~~
arkitaip
That's hilarious, but it also highlights that ultimately there are no 100%
inherently safe communication channels. A sufficiently motivated actor can go
to extreme lengths to compromise your IT, even if it means faking email,
voice, letters, or physical interaction.

------
m463
This is common in many workplaces, and while a little strange, I think it's a
good exercise, especially for less sophisticated folks who get a lot of
external mail.

~~~
marvion
It's not that strange. In a talk at last year's CCC someone explained that
it's a good learning experience when you educate the people that clicked on
phishing right in/after the phishing process. He also found that the learning
effect only applies to the method the people failed at - so learning from
phishing doesn't teach anything about passwords.

While sad, I think it's important to acknowledge this and not be too harsh on
people who fail the first attempt... also because my biggest learnings came
from really embarrassing moments and failures too.

------
trynewideas
Man I hate these, and I hate that companies get paid serious real US American
Dollars to stage these for other companies.

Every time I see a colleague laid off, and then see one of these stupid
phishing tests land in my inbox, I think about losing my job during a pandemic
in order to ensure the security team still had the budget to pull this stupid
crap.

It doesn't help that our own customers send us stupider looking emails that
are actually legitimate.

------
relaunched
Company runs phishing simulation (they were already remote)...is that news?

One in five isn't bad. As you target them, based on content and recipients,
the results can get much worse. And when non-tech companies run these, the
results are...scary.

It's no wonder the most sought after entry point into a network, the most
reliable and probably the cheapest, is phishing. All it takes is one out of
50,000 to fall for it.

------
globular-toast
But the little green padlock was there! It must have been OK.

My company has just decided to enable 2FA in order to combat phishing. I'm not
sure how this would help. What amazes me is that we allow HTML email at all.
That alone would greatly reduce successful phishing attempts. Requiring all
emails to have valid signatures doesn't even seem too difficult for an
organisation.

------
benbristow
"While an attacker would be able to easily capture both the username and
password entered into the fake site, the Red Team determined that only
capturing email addresses or login names was necessary for this exercise."

It says in the article that they never asked for passwords.

I wonder if the statistics would have been different if they did? You usually
think twice before entering a password.

------
ufmace
I'd just note that Google documented that U2F keys were the only tech they'd
tried that reduced credential theft from employees via phishing to zero. Maybe
we need more of that going around.

I also don't understand why they keep mentioning that their staff is all-
remote. I don't see what difference that makes.

------
als0
There have been studies which suggest that phishing your own staff has
significant negative effects. I have trouble finding the study names but the
NCSC website has a good article about it: [https://www.ncsc.gov.uk/blog-
post/trouble-phishing](https://www.ncsc.gov.uk/blog-post/trouble-phishing)

~~~
k2xl
This article is more of an opinion piece. Would be interested in evidence.

It seems logical to me that self-phishing is a good way to educate people on
how to spot phishing/unusual emails, and to make them realize they are a
target.

~~~
usrusr
> This article is more of an opinion piece

Wouldn't that be legally required to end with something about selfishly self-
phishing shellfish?

It does read a bit like SEO copy for a training consultancy that offers an
alternative to the intuitive self-phish/reprimand cycle, but it brings up some
interesting ideas.

------
EndXA
Reading this article brought this one to mind:
[https://krebsonsecurity.com/2018/07/google-security-keys-
neu...](https://krebsonsecurity.com/2018/07/google-security-keys-neutralized-
employee-phishing/) (about Google using security keys to deal with phishing)

------
kryogen1c
buying a phishing-as-a-service trainer is the single best bang for your buck
in the realm of all security. obviously, all computer security is relative to
your use-case and threat model, so your mileage will definitely vary. if all
your servers are publicly routable with no firewall or antivirus, emails are
the least of your worries.

however, spam is not a solved problem. phishing is hard to stop, and
spearphishing is basically impossible to stop. professionals you know get
compromised, upstream toolchains get compromised, etc. the attack effort and
risk vs reward is wildly skewed in their favor. it has been the vector of
compromise in many high-profile breaches.

find a reputable company, pay them, and whitelist them in your spam filters.
they will generate incredible phishing emails (using your domain and corporate
info, since you let them) and give you a way to train your users in a way that
is irreplaceable.

------
kerng
Seems pretty typical for results of phishing campaigns - although they
targeted only 50 people, which is not a representative sample for drawing
overall numbers and stats across different disciplines in a larger
organization.

Results are largely driven by the kind of phish that is sent and whether it's
click-worthy.

Some companies do these exercises every month.

------
redis_mlc
20% is low for a typical test.

A clever insider can get that to 100%, say with "Benefit Plan Updates." lol

~~~
spartas
We had one earlier this year about staff raises

------
rurban
We did a similar phishing attempt in my previous company, which had a bit more
technical background than gitlab.

Of the 200 people, only one gave up his credentials: someone from marketing,
as expected. We don't let them near anything important anyway.

------
naetius
My company (big Valley corp doing robotics) does exactly the same, and it's
very good at it: if you get phished, you'll automatically get signed up for a
long and tedious training.

------
ramoz
The most brutal phishing I've seen an enterprise use: "[SPOT BONUS] Your hard
work and dedicated efforts are being rewarded!"

------
lightlyused
At work, clicking on a test phishing email can result in dismissal if you do
it too many times in a year.

------
7174n6
Several have noted email rules set up to flag phishing simulation emails.
Anyone care to share one of those rules?

------
michaelcampbell
My company does this but it's somewhat outsourced to phishme.com, so a simple
outlook rule finds them all.
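The equivalent of that rule can be approximated as a plain header check. A
minimal Python sketch, assuming the simulator sends from its own vendor
domain (phishme.com is named above; cofense.com is my assumption, as the
vendor's later brand name), not a description of the actual Outlook rule:

```python
from email import message_from_string

# Hypothetical list of known phishing-simulation vendor domains.
SIMULATOR_DOMAINS = ("phishme.com", "cofense.com")

def is_simulated_phish(raw_message: str) -> bool:
    """Return True if the From or Return-Path header points at a
    known phishing-simulation vendor domain."""
    msg = message_from_string(raw_message)
    for header in ("From", "Return-Path"):
        value = (msg.get(header) or "").lower()
        if any(domain in value for domain in SIMULATOR_DOMAINS):
            return True
    return False
```

Of course, this only works as long as the simulation vendor doesn't spoof
your own corporate domain, which the better services will happily do.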

------
korijn
I wonder if the other 4 out of 5 even read any _other_ (proper) e-mail.

------
xtat
Seems irrelevant that they work from home

------
tobyhinloopen
32 per cent? Is their % key broken?

