I worked for a defense contractor that had a 3 strikes policy for security violations. Failing the phishing emails was a strike. Other breaches of security policy (like getting caught letting someone tailgate you in) could be strikes too. You got fired at 3. Nobody thought this was unreasonable. Part of your job when you work in defense or finance is giving a sufficient number of fucks about things that people in other industries don't have to give many fucks about, like security. If you don't care enough about security that you click on obvious phishing emails then you're not doing your job. If you don't do your job, you get fired.
They also did have a reporting system. Presumably you wouldn't get a strike if you clicked and reported. People who reported "legitimate" phishing attempts were rewarded. Spear phishing is a totally different game and nobody in their right mind would fail people for clicking on a (well crafted) spear phishing email.
I actually like the idea of having consequences for allowing tailgating, assuming the company cares about it. Maybe not firing, at least not right away, and not if you get tricked or someone sneaks in behind you, but put some teeth in the policy and actually enforce it.
If the company just says "don't do it" there is still social pressure to be polite and not slam the door in someone's face. But if there are consequences that everyone knows about then no one is going to begrudge you if you tell them they have to swipe their own way in.
Heck, put up signs that say "allowing tailgating is a serious offense" so that visitors are aware as well.
You can wave any object at the sensor. Maybe an unauthorized tag will yield a different beep or make the light flash a different color. Maybe the person in front of you will be in a position to see the light on the reader, maybe they'll notice, and maybe they'll consider it odd. Getting that far, and then actually deciding to challenge you or report it, is a vanishingly small chance.
There is no point in badging an unlocked door, or in expecting people to do so. You have to actually close it between entries. This is a physically and socially ridiculous thing to do with traditional doors; if it's what you want, you need a turnstile.
The whole idea of 'challenging tailgating' falls apart because someone walking in after you is not performing a strange act.
You would have to actively close the door _on_ people, including your colleagues, which goes against social norms to such an extreme extent that it's just not happening.
If, as a company, you actually care about tailgating, there is no "challenge" aspect. There's a barrier that lets one person through at a time at most, and you remove any and all social aspects to it.
Right, which is why you need a more secure system of entry. I've worked in finance and defense. Finance used turnstiles; defense used some light in-person security.
We have passcarded doors and then inside we have gates like many subway stations do that are timed only long enough for one person to pass through.
So I can hold the door open for someone on the way in—especially if they have their badge out— but there's nothing I can do about those giant plexi gates once inside. They have to swipe.
The German U-Bahn has a brilliant solution to this. No turnstiles or gates, you're just expected to have a ticket. The penalty for getting caught without a ticket is considered sufficiently high to make "Schwarzfahren" statistically more expensive.
While I agree that the solution is brilliant (and obvious), I'd like to note that freeriding (Schwarzfahren) is not statistically more expensive in monetary terms (the fine is not that large, controls are not that frequent and tickets are not that cheap).
The social stigma, unpredictability and inconvenience of having to pay the fine is a big part of it (not having a ticket is just stressful). Another part is that you don't need to prevent freeriding, nor recoup all the lost ticket sales. You just need enough of a nudge to keep most people honest most of the time.
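To put rough numbers on that claim (these figures are illustrative assumptions, not actual fares or inspection rates), the expected cost of riding without a ticket only has to come close to the ticket price for the deterrent to be weak in purely monetary terms:

```python
# Back-of-the-envelope check of the claim above; all three numbers are assumed.
ticket = 3.00          # price of a single ride, in EUR
fine = 60.00           # fine for riding without a ticket, in EUR
p_inspection = 0.05    # chance of being checked on any given ride

expected_cost_of_freeriding = p_inspection * fine   # 0.05 * 60 = 3.00 EUR
print(expected_cost_of_freeriding <= ticket)        # True: freeriding is no pricier on average
```

With those made-up numbers the fine only breaks even with the fare, which is why the stress and stigma have to do the rest of the work.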
Indeed. The city I live in has this random-fare-inspection system and considered turnstiles. The study's result was surprising: the additional fare revenue from making inspection mandatory via turnstiles was much lower than their capital and operating cost. Of course, this depends on an overwhelming majority of people being honest.
Obviously "not having too many unauthorized riders" is a very different objective from "only allowing authorized access and refusing all else".
But I don't think that will work for corporate building security, where a single attacker could cause a large amount of damage, not just losing a ticket fare.
"Screw 'em hard enough for breaking the rules and they'll follow the rules out of fear" is generally not considered to be a good model for organizational policy.
For public transport it makes perfect sense. You don't care about people getting on for free, you care about ticket revenue falling because of people getting on for free. If the fines are high enough to cover all the lost revenue from people not paying, then you are doing well.
I feel you've made a huge blanket statement here. Militaries are one type of organisation where the rules have really sharp teeth, and it works well enough. In fact, military organisations have had strong selection pressure applied to them over the past few thousand years, and heavy-handed punishments are the norm.
The City does not want to be sued by the estate of someone cut in half by a turnstile gate. This limits the available force and material strength.
The specifications for those gates almost certainly include a requirement that they allow a sufficiently determined person through without breaking themselves, and probably sound an audible alert.
True for turnstiles that open and close. New York's full-height subway gates are just metal revolving doors that only revolve enough for one person per card.
BART gates handle both entry and exit, but require a card tap in both directions (there are separate emergency exits which set off alarms). So it's not possible to trick them into opening, though their current configuration does make it easy to jump over.
I haven't been on BART in a few years, but I remember many stations having a wheelchair entrance that anyone could go through if security wasn't watching.
Some revolving door systems (not the ones you mean, probably) actually improve a lot on the classic turnstiles or gates.
They allow one authorized person to pass through from one side, and sensors on both sides can easily detect if another person is trying to piggyback from the other side of the door. And there's no way over or around them. In higher-security environments where unauthorized persons getting to the other side of the door is already an unacceptable risk, they can also allow security personnel to trap someone in the door (rotate only 90 degrees).
And as far as usability and efficiency goes, they can allow traffic both ways at the same time (as long as both people are authorized).
It might be easier to try detection (and embarrassing alarms) instead of physical prevention. For example, floor sensors could detect when multiple sets of feet enter on the same activation.
Granted, they might not know the difference between one person and a handcart versus two people where one is in a wheelchair, but I doubt many would-be infiltrators would draw attention to themselves that way.
I know for a fact that people will ignore it and set off the alarm anyway even with a large sign covering the entire top half of the door. Then the alarm goes off so often that all the bystanders ignore it too, so there really is no point.
The company has a one strike policy for letting someone use your badge, or letting someone tailgate.
They also have people who go out and try to tailgate, and say they left their badge on the other side of the door and ask if you can badge them in, or let them borrow your badge to get theirs. If you help, they walk in and get security and HR and you're done.
In my building, we have a similar tailgating policy, and it's quite strict. A sign states that it's a federal offense and that police will be contacted.
At first I thought it was weird, but since being here I've noticed more and more unmarked and federal police vehicles parked downstairs, and several offices/floors that are unlabeled and used mainly by those guys.
I'm guessing there are some kind of diplomatic or similar services going on here that not everyone's privy to.
Are you prepared to pay your employees a significant premium for the requirement that they engage in fisticuffs with random strangers who may try to tailgate into the building?
Tailgating is a problem for your physical security staff, not your run of the mill white collar employee.
> Are you prepared to pay your employees a significant premium for the requirement that they engage in fisticuffs with random strangers who may try to tailgate into the building?
I have zero experience with this, but I imagine the policy would be "Don't enter the building if someone is too close behind you."
If you don't feel comfortable asking for space (fine!), turn around, go back to your car, and call building security as necessary.
The policy at our building is that, if you don't feel comfortable, let the person follow you in, but let security know.
We have the advantage that all entrances end up going through a central area, and we have ample camera coverage, such that security can reliably find people who tailgated if they are informed.
Tailgating is actually a very frequent and problematic occurrence for us, sometimes by people who will be aggressive, and this has seemed like the safest solution for us.
Good point. My brother was tailgated into his condo building in D.C. one night. They robbed the office after he went up to his condo. He didn't feel safe refusing them entry, and knew this was a risk of letting them in.
After the incident, he was contacted by the building management, who asked him what happened and warned him not to do it again.
This seems like a reasonable policy since many people would not have thought in advance what to do if a potentially threatening person tries to tailgate.
Tailgating is a problem when you must keep a log of employee entries (and possibly exits).
If you’re just trying to ensure only employees are on-site, tailgating is less of an issue (unless somebody got fired but their colleagues were never told).
That’s not true. My workplace has employee only entrances where even visitor/temporary badges don’t work. No one is standing guard and they tell everyone to not allow tailgating.
That's the point. I was in the infantry, am 6'2, and a guy. I don't have a problem with challenging folks who are tailgating. That is not the case for everyone. Do you expect disabled folks to challenge tailgaters? What about physically small people? Setting aside the office dynamics around discrimination issues, how many people actually have the confidence to challenge an unknown person who is tailgating, knowing that there are practically no repercussions for allowing it, versus the fallout of alienating coworkers, potentially senior folks who might react negatively?
It's one thing to say "don't let people tailgate"; it's quite another to actually enforce a policy that says that unless you provide proper physical security on-site. During security awareness training I always stress knowing who to notify on-site, as well as telling people that they can choose to challenge someone directly if they encounter tailgating.
We expect tiny people making minimum wage to ask thieves to pay for the cheese they’re shoplifting. This seems pretty minor by comparison.
I wouldn’t expect any physical force to be used. If asking politely doesn’t work, call security. If they threaten you into letting them in, comply, then call security.
> We expect tiny people making minimum wage to ask thieves to pay for the cheese they’re shoplifting.
We don't actually. All sane employers have them record and report the incident and not engage, because petty shoplifting isn't worth somebody getting shot and it's built into the margins anyway. If the store is big enough, they may have "loss prevention", who are people who are very much not tiny and will verbally engage the shoplifter and pretend to be scary, but they are also not allowed to engage physically, because again, it's not worth somebody getting shot, and liability is going to be a nightmare even if they were stealing.
I’ve heard of policies that cashiers are not to chase, let alone fight, but never that they’re not even supposed to ask someone to pay. Is that really true?
It might vary by chain, but everywhere I'm familiar with you're not supposed to accuse people of stealing, which has the same effect, perhaps for different reasons.
But you can report that it happened, taking silent note of their details, demeanor, direction, and description. Nobody is challenging little Stacey from accounting to take on an intruder barehanded.
You are getting really hung up on a very tiny edge case. No reasonable manager would punish you for being physically overpowered. That doesn't mean you should encourage people to ignore the security policy.
99.99% of the time, saying to the tailgater "you need to swipe" is enough. If you do work somewhere where people are physically trying to break in often, then you ought to have real security personnel.
It's not about being punished for being physically overpowered - it's about being a five-foot-three intern and having someone six-foot-one and 250 lbs, in a suit and in a hurry, behind you, tailgating.
The implications are enough to make it a shitty situation for such a person to have to turn around and say "sorry, person who looks C-suite, you can't come in with me."
This triggered a memory from my second "real" job.
We had a secure building with glass entry turnstiles. In my second week, a suited important-looking person was standing behind the gates at 8:20AM (we started at 8:30AM). It was busy and everyone was ignoring him (that seemed odd).
The suited guy picked me from the line of drones going through the turnstiles and asked if he could jump in behind me (he didn't even mention whether he worked for the company).
I was still doing the HR training program stuff (the general wear deodorant, don't plug in flash drives from outside, don't ask for TeamViewer, etc. stuff) and the last thing we had covered the day before was the tailgating policy.
I told the suited guy that I couldn't let him in due to company policy. He smiled and said "all good" and went back to the corner.
He ended up being the head of logistics. Apparently, he liked to scope out the new hires and "test" their compliance. He tried this with 5 or 6 of the new hires and only managed to get let in once. The lady that let him in wasn't fired, but she did get a warning.
Rules are rules in Australia, even if you are a 20-times Grand Slam champion and one of the most recognizable people on the planet.
Roger Federer found that out this week when he was blocked access to a locker room at the Australian Open by a security guard who took his job very seriously.
A video circulating Twitter on Saturday showed the Swiss double defending champion stalled at the entrance for lacking his tournament accreditation.
I've had it done to me. When I was hired they explained: we don't allow tailgating here. Even though they know exactly who I am, and they're my boss's boss, I still need to swipe my badge on every locked door.
I’ve done it to VP level and I’d do it to my CIO too. I’d be that guy who badged the CIO but I try to take basic security and company policy seriously. I’d like an intern who is professional enough to “challenge” someone. Not sure I would’ve at that time.
There’s no need to confront anybody. Just notify security immediately if you see someone go through a security gate without swiping their access card, or if they tailgate you.
It's also a very tiny edge case that someone is trying to gain improper or unlawful entry to a workplace. It's not my job to put myself at risk in order to stop an intruder. It's not my job to play policy police with my co-workers, either.
My employer recognizes this and uses mantraps to physically prevent tailgating at unguarded entries.
You don’t have to put yourself at any risk, but if you work in a secure facility, you are usually explicitly required by contract to “play policy police”. That is, report any security violation like someone jumping the turnstile, or not having a badge.
Then why bother? If asking is enough most of the time, why bother at all? Because occasionally a bad actor wants in, and I don't want to be the one confronting a bad actor.
The only response that’s reliable across doors, sites, and institutions is the door actually unlocking, which you may or may not be able to discern when already opened. Do you know the beep and light pattern by heart for valid credential vs. recognized but unauthorized credential vs. bus pass at every door you use? Each one in my office is a bit different, my apartment is something else entirely, in college they were uniform within buildings but different across buildings.
Electric mortise locks and strikes will click, though sometimes they are held in an unlocked state for a few seconds so you won't hear a second click, or the second click might be reverting to the locked state. It depends on hardware and configuration, and maybe what the person in front is doing with the handle, and when or whether the exit sensor trips. Different things on the door can make clicking sounds; they're a bit different from each other, but pretty close. Magnetic locks, forget it. Sliding doors, forget it.
I’m an engineer interested in security, I pay close attention to these systems, I’ve run their cabling and installed their admin panels, and I doubt I could tell even if I were actively paying attention.
I do not agree. This should also be implemented in financial institutions and any company that has access to overly sensitive information, especially that which you can not easily change or that would put your family at risk of harm.
I would add in my proposal that if a percentage of employees under a director fall for it, the director gets let go. If a number of directors are let go, the C-Level is let go and so on.
Like the sibling comment, I think it all should depend on the roles of the people as well. You need strict access controls in place to ensure that access rights are well defined, such as no/read-only access for certain data in certain environments, physical access control, etc. Someone who does client-facing retail at a financial institution should not have access to production data. As such, them getting phished won't have the same impact as a senior developer with production read access.
I completely agree. If the company has performed proper compartmentalization of access and clearly documented who has access to what and it isn't just pencil-whipping, but you can prove the access is really compartmentalized, then the risk is reduced.
I mention the pencil-whipping because I have seen financial institutions put on a really good show, but under the covers they are not doing proper management of SSH key trusts, SSH multiplexing, port forwarding, sudo, network access, or encryption keys, and they know which engineers to put in front of the auditors.
In practice there's often a way to escalate privilege. Defense in depth requires that one be good at each layer. Being good against phishing attacks is important even when the victim has apparently-inconsequential access.
At a financial institution I worked for, they had a separate entrance with separate badges and alarms for the really sensitive stuff.
I only entered that area once, as I was a low-level programmer.
It was also the only company where I never ever made a query on the production server :) (I worked for another financial institution in a more senior role and I did have scary access to the production DB).
I agree that a very strict version of this is too draconian for most workplaces, but depending on the person's role and how many times they've failed a phish test I think it's reasonable to have consequences. For positions where getting phished would be disastrous, something along the lines of a warning or training after the first and second strikes then firing after the third doesn't strike me as exceptionally draconian.
Seagate had all of its employees' W-2s phished because someone in HR messed up badly. If you have access to PII or other confidential information, phishing is a big deal.
That’s the problem with HR: the same role deals with lots of outside emails and lots of employee data.
They have to compromise between security and keeping it easy for people to apply.
The more companies that have complicated application portals, the fewer applicants they’ll have. Particularly from occasional job-seekers that already have other jobs.
> b) Does the phishing test service detect if the link is accessed via a sandboxed env?
In any company likely to be doing phishing testing internally, there are two kinds of people who might try this. One is the infosec group, which isn't going to do this because they're running the test. The other is engineers who think they're clever and are equipped to fsck around with things.
The former are professionals. The latter are dangerous and not actually an exception. The frequency with which their confidence is justified approaches zero, and in the vast majority of shops it is simply not worth the time it takes to contemplate.
I wouldn't classify the majority of "Blue teams" I've worked with as professional. I'm currently dealing with a new Infosec group at my company that thinks the CEH is a high quality cert, that doesn't understand how open relays can be a problem, and believe that everything Qualys spits out is the word of God. I feel sorry for the CSO we just hired, but he's not much better, and a classic example of why "CSO" often stands for Chief Sacrificial Officer.
Good lord, you make it sound like dealing with highly radioactive plutonium. This is a site called hacker news, if you're a web developer and you can't figure out how to pull an html page without executing the scripts involved (a TRIVIAL thing to do) you shouldn't have a job. And honestly if your network is so insecure that someone running a wget on a domain poses a risk then your network has almost assuredly already been hacked.
> Good lord, you make it sound like dealing with highly radioactive plutonium.
That sounds about right. Your average developer dealing with malware is roughly as safe and sane as playing pool with 6-kilo balls of Pu-239. Especially since at a lot of places, developers are trusted with things like access to production from their workstations.
> This is a site called hacker news, if you're a web developer and you can't figure out how to pull an html page without executing the scripts involved (a TRIVIAL thing to do) you shouldn't have a job.
You know what's interesting? Even if you can do that, you've already made a mistake and leaked information. You've demonstrated for an attacker deliverability, who is curious and amateurish enough to think they can handle it (but hasn't thought it through), and some useful information about how they believe they are protecting themselves. Fetching a malicious server's HTML safely isn't as easy as might be readily supposed - both curl and wget (https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...) have suffered remote exploits in the past. Those are almost certainly the tools a random dev would reach for and they cannot be assumed to be safe. The odds that said random dev is equipped to set up a sandbox to do so reasonably safely are not great, and the odds of them doing so much smaller.
Curiosity isn't a bad thing. It's a wonderful and powerful trait that has driven humanity relentlessly forward through the ages. Unfortunately, it can also be used against people. Being curious when playing with fire can be dangerous. Especially if you just think the fire is pretty and haven't figured out that it burns yet.
This site may be called hacker news, but it's not full of the kind of hacker that congregates at DEFCON and understands the House of Prime. It's full of the other kind.
Deliverability is pretty trivial to prove, and if they knew enough to get something in your inbox they probably already knew enough to be confident of that anyway. With regard to wget: 15 vulnerabilities in the last 20 years for such a highly used piece of software doesn't scare me that much, and I can easily run it from a sandboxed container or VM since I'm using those all the time anyway. And if I don't want to confirm deliverability, since I'm a web developer and I have a brain, I realise they probably included a unique token to know I'm the one that clicked the link, so I'll leave that token off the request.
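For what it's worth, the token-stripping idea is simple enough to sketch. This assumes the per-recipient token rides in the query string (common, but not guaranteed - it can also be baked into the path), and the URL below is made up:

```python
# Minimal sketch: drop the query string and fragment, which usually carry the
# unique per-recipient token, before fetching a suspicious link from a sandbox.
from urllib.parse import urlsplit, urlunsplit

def strip_tracking(url: str) -> str:
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

suspicious = "https://login.examp1e-payroll.com/reset?uid=abc123"  # hypothetical
print(strip_tracking(suspicious))  # https://login.examp1e-payroll.com/reset
```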
Look the problem with this kind of attitude is people take security less seriously when security experts go overboard. It’s classic boy who cried wolf. If you want people to take security seriously it starts with honest conversations where you treat people like adults and don’t immediately go to hyperbole.
OK. How should we - I - go about this differently?
Adults are perfectly capable of believing that their expertise extends further than it actually does and taking risks they do not fully understand or appreciate. I see it daily in the developers I work with. I have worked with more than one developer brimming with confidence in their ability to tackle areas beyond their expertise, who will try to engineer on-the-fly around any shortcomings pointed out in their approach (this is unrealistic, in real attacks adversaries don't give you friendly feedback iteratively).
I'm plenty willing to listen and take on board feedback here. What attitude should I take? How do I convince responsible adults, in a constructive and serious way, that they are not equipped to entertain their curiosity in this arena and should not try? How should I communicate to you, and to hundreds of developers at once, this message without going overboard or crying wolf?
It's one thing to play with malware and phishing at home, on your own hardware, on your own network, and with your own data. That's all your own risk to assume as you like. It's quite another to do so with company hardware, network, and data. That's not your risk to run and not your risk decisions to make. If you can advise me on how to communicate this to engineers who honestly and earnestly believe in their ability to safely handle things well beyond their expertise, I am absolutely all ears.
It would start by taking an honest assessment of what the other person actually knows before you make blanket statements about what they don't know. I mean, don't get me wrong, I'm not personally insulted, but in my case you have zero idea what my actual expertise is, yet you're already telling me that retrieving a link is beyond what I can safely handle -- and you literally have no idea what my experience is. (I used to work in security!) Part of my job as a developer is security; I need to know the attack vectors people use so that the sites I build aren't vulnerable. And besides that, I'm not some criminal mastermind, but you don't even know whether, when I was younger, I had fun hacking people and creating mischief. Point being, you can't come at people with this blanket statement of "you have no idea what you're dealing with!" because YOU don't know what they know. If you do that, you're going to lose their empathy and attention before you even communicate what you want to communicate.
Here's what you can do: tell them what can go wrong, with specifics, and don't make assumptions about them. Be realistic about what the likely consequences are and what the worst case is.
If we wanted perfect security we'd never connect machines to the internet and we'd superglue the USB ports shut. But in a realistic world, the level of security we choose is measured against how much risk we're willing to take. Let's say my risk profile is this: I don't work for the NSA and I'm not important enough that someone is going to try a unique zero-day exploit on me. But it would be trivial to figure out my work email based on my LinkedIn and my name, so I don't really care about them discovering deliverability. If the phishing attempt is bad, I'll probably spot it immediately and not bother to even open the email, but if it's good I'll probably at least investigate because I'll want to know if it was a legitimate email. Chances are, they're just trying to convince me to type my password into a web form (and assuming I use the same password everywhere). Of course, chances are also that, just based on my past experiences, about 90% of phishing attempts come from corporate security departments anyway. If someone really has a very clever zero-day exploit for wget, then they'll get access to a container with little sensitive data that I rebuild about a hundred times a day.
Yes it's not my hardware, but on the other hand, my company has entrusted me with local admin to get my work done and use of the internet. That's the risk profile they're comfortable with, and sometimes I get external emails and need to figure out if they are legit or not, and I don't work for a giant company with a huge security department so sometimes I need to take a glance to see if it's a legit email or not.
That's an excellent and highly empathetic approach!
My only issue with what you've described is that I cannot scale it. When I have hundreds of developers to educate, sitting down with each of them and spending hours hashing out what they do and don't know and educating them over the gaps can at times become somewhat time-consuming.
How do I deal with hundreds of developers, the vast majority of whom have no significant background in security, many of whom earnestly and honestly believe that their understanding of web development protects them? How do I collectively treat them like adults and not lose their empathy or attention in a scalable way? A highly individualized approach isn't workable in this context.
>Does the phishing test service detect if the link is accessed via a sandboxed env?
Does it really matter? I used to play these games with my org's absurdly obvious phishing trainers, but the truth is it's not my job to determine whether an apparent phishing email is genuine or not. If you know that getting phished is a fireable offense, then just don't access the links, obvious fake or not.
I forward all emails that are not directly from people in the company to the trash.
I also forward anything with the words "phishing", "test", or "audit", and anything from our IT and security department.
The other strategy I have is simply not checking email.
This is so useful: while others are taking the security tests and failing the phishing link tests, I am in the clear.
What test? I did not get the email.
You clicked what? Humm, I did not get that email.
Man you got a virus -- I run Linux and access my email via Emacs/Mu4e -- also I did not get that email.
The env it was accessed in shouldn't matter too much. Simply clicking on a link is a fairly small risk. Not zero risk, but unlikely to be an issue. The real issue is when the link you clicked on asks you for a password and you type it in. You should only fail the test for giving a password.
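A rough sketch of what that grading rule could look like on the test-server side - a hypothetical Flask endpoint, not any particular vendor's product - where a bare click is logged but only a submitted password counts as a failure:

```python
# Hypothetical phishing-test endpoint: clicks are recorded, but only credential
# submission marks the recipient as having failed.
from flask import Flask, request

app = Flask(__name__)
clicked, failed = set(), set()

@app.route("/t/<token>", methods=["GET"])
def landing(token):
    clicked.add(token)  # the user clicked; note it, but don't fail them
    return '<form method="post"><input name="password" type="password"></form>'

@app.route("/t/<token>", methods=["POST"])
def submitted(token):
    if request.form.get("password"):
        failed.add(token)  # typing a password is the actual failure condition
    return "This was a phishing test - please report it to security."
```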
* That their attack was delivered successfully to inboxes.
* What email addresses are live.
* Who is curious enough that they will investigate.
* Who believes they understand and can handle the risks.
These are not small things for an attacker to learn. Further, in a world where drive-by browser attacks are real, it's worth thinking very carefully about whether clicking a link in a phishing email should be regarded as essentially harmless.
If tailgating is that big of a deal, especially in the defense industry, then they need to install mantraps at the entrances. Make tailgating physically impossible. It's unreasonable to expect, say, a smaller female employee to stop a larger male who she only realizes is tailgating her after she's already swiped her badge. If physical security is that important, install physical barriers or have guards. Plenty of important facilities have both.
I don't think any company's policy requires every employee to physically stop the tailgater. It would be enough that she e.g. alert security to the situation.
Not immediate, but they have been tipped off by the employee who made the call, as you noted, so they can review the most recent video from just a few minutes ago for the specific corridor where that employee was tailgated.
I had to take a security training class because I failed to report a phishing attempt. Didn’t click the link and likely ignored the email altogether. My boss was confused why they contacted him. I don’t work there anymore.
I work at a financial company and we have a similar policy around phishing email. Embarrassingly, I failed this once and then created an email rule which filters out the fake Phish. No idea if it gets real Phish.
...and now we see why such policies are bad, and it's even covered in the article: while people falling for phishing are bad, what's even worse is when they fall for it and don't report. Creating a culture where the security is the enemy is _not_ good.
I mean, sure, if it's 20 times, we're getting into outrageous territory and you have reason to suspect the employee is trolling you. But other than that, the reality is that your employees _will_ get phished eventually. Reduce the risk and work on reducing the harm caused when it happens, instead of antagonising your workforce.
Edit: also, the fact that you could just "filter out" the test means that the tests were about as good as most corporate "compliance" training is. Just like the firings, it feels designed more to coddle the C-levels than to actually achieve anything.
I also incidentally found an easy filter for these. I found once that our proxy auto config has 10 misspelled domain names. Those are all used for the phishing tests.
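If your environment works the way that commenter's does - and that's a big if - the filter practically writes itself. A rough sketch, where the PAC filename and the regex are both assumptions about how such a file tends to look:

```python
# Pull the quoted hostnames out of a local copy of the proxy auto-config file;
# in the setup described above, the misspelled ones are the phishing-test domains.
import re

with open("proxy.pac") as f:  # hypothetical local copy of the PAC file
    pac = f.read()

# matches the string literals inside dnsDomainIs(...)/shExpMatch(...) style rules
domains = sorted(set(re.findall(r'"\.?([a-z0-9-]+(?:\.[a-z0-9-]+)+)"', pac)))
for d in domains:
    print(d)
```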
This can be equalized by having a perk or bonus for reporting phishing attacks to IT. Anything from casual Friday (for offices that are not relaxed wear) to a Starbucks card or whatever. IMHO the act of timely reporting the click on a phishing email should negate the email-click penalty. The idea is to make the wanted behavior pleasurable and the unwanted ones painful.
> This can be equalized by having a perk or bonus on reporting phishing attacks to IT.
...there is no way you can equalise "you're going to lose your job" with anything less than "we're going to give you enough money that you won't really need the job anymore."
And with a Starbucks card or casual Friday? I'm not even sure if you're being serious, because that sounds like a joke.
I was trying to make the case that you can give perks for natural reporting to IT (passive and active) + ensure that a user who has acted on a phishing email but reports it in a timely manner is treated as a non-fail (at least at some level).
Both of these things together leave a system in place where:
* Users are highly penalized for failing a phish test (or real-life phishing attempts).
* Users that fail the test (or a real phishing attempt), but follow it with timely notice, have less pain.
* Users that notify on apparent phishing attempts get small rewards.
> Embarrassingly, I failed this once and then created an email rule which filters out the fake Phish.
how did it get you, if you don't mind sharing? It seems if someone who works in IT (guessing you do) and is very careful fails it, this is an impossibly high standard to meet.
I recently failed a suspicious email / phishing test for the first time, and I am also one of those people who never thought it would happen to me...
The email was a newsletter I didn't care about, and the unsubscribe link was (fake) malicious. That one impressed me because it preyed on what is now a pure reflex to click the unsubscribe link.
From this and other comments in this thread it seems you have failed these phishing tests as soon as you click a link. Is the assumption here that you are completely pwned as soon as you visit an url controlled by an attacker? I can't imagine myself compromising company data/funds via a website where I ended up through a newsletter unsubscribe link so this seems quite unfair on the part of the phish-testers.
That's exactly how my organization does it as well. I had to go to an incredibly asinine training because I clicked on a fake phishing link after verifying that the domain was owned by a computer security company that sold phishing prevention services. The link just went to a static "you could have been phished" page, and a few weeks later I got an email telling me that I had to go to a phishing awareness training. There was no attempt at all to actually collect any credentials from me.
This is exactly the case. If you click on a link in the fake phishing email you've failed the test. It does not require you to install anything, open an attachment, etc.
If you think visiting a webpage in Chrome, or any other browser, even inside a VM, is totally safe, especially against a nation-state level actor, I have some bad news for you.
You would literally have to never click on any link that isn't 100% under your own control in that case. Yes 0-days exist, but if I'm in an environment where that level of security is necessary, why do I even have access to a web browser?
If you think that just clicking a link is so dangerous that it needs to be a firing offense, then you should probably lock down the computers so that the browsers cannot view anything besides approved domains.
From the original article which that page links to [0]:
The breach centered around a hacker getting hold of a Microsoft customer support worker’s login credentials; from there, the hacker could dive into the content of any non-corporate Outlook, Hotmail, or MSN account
This is a security concern for any mail that an administrator can read, although it isn't at the same level as being compromised just through parsing an email.
If you are in a high enough position that nation states are burning zero day exploits to launch targeted attacks against you, there should probably be a security professional filtering your email. (This is also a situation where disabling Javascript would be very reasonable.)
For the remaining 99.999% of the population, I really don't think opening a web page in an up-to-date browser is cause for concern. Certainly if that browser is also in a VM. People have more pressing concerns in their lives.
Did the link take you to a site that auto-ran malware? Or did it take you to some kind of page that said "login to unsubscribe"?
The latter is why password managers can be so valuable. I never type my passwords in so if my auto-fill doesn't activate I immediately become suspicious.
If it's the former, it seems like your company must be using an insecure browser or the site was running some kind of 0-day? I never think twice about clicking links.
When my organization does these, the link just goes to a static page that says "you could have been phished". The fact that in fact no serious attempt at phishing has yet taken place and that my work machine is far too insecure if they're worried about browser 0-days seems totally lost on them.
It's a valid comment, but if you're in a position where you are worried about 0-days from random web browsing then you should be using the internet on a fully segregated machine.
If Firefox or Chrome has an RCE + privilege escalation in it that can be triggered just from browsing to a page then, congrats, you got me.
The recent CPU-level vulnerabilities have exploits that can run in the browser. See https://www.zdnet.com/article/intel-cpus-impacted-by-new-zom... for pointers to video evidence. They're not zero-day attacks once they're made public, just the same as Meltdown and Spectre. Go download the PoC code, switch calc.exe to something useful, and phish away.
https://news.ycombinator.com/item?id=20028108 from earlier this week shows that just loading a page can lead to network information disclosure or other compromise / attack vectors. It's not a zero-day, it's a feature.
I don't think background noise of broad, low-effort phishing emails can be directly compared to a more focused attack. If you work somewhere with interesting data the odds of a good phishing attack leading to an exploit could be much higher because you're being specifically targeted and they're not going to send the message until they have a current exploit ready (probably hoping to get it in before your IT department's change window, too).
> If someone had a working browser exploit, wouldn't they just deliver it to their targets via an ad network?
I've heard of more people at enterprises using ad blockers for security, so I wouldn't rule that out, but in general this hits the broad vs. targeted distinction I mentioned: each time you use an exploit you're risking discovery, which will lead to it being patched and AV signatures going out. Using an ad network increases the number of people who are not your target getting the payload, not to mention any scanning the network does, and since ad networks require payment there's another trail pointing back to you which might not otherwise exist if you are hosting things on compromised servers.
I'm sure this [reporting spam rather than unsubscribing] happens all the time but it's sort of obnoxious if the email is legit and, especially, if it's a list you requested to get put on at some point.
If you got my email address from a third party, then I do not want to be marketed to.
If you got my email address because I applied for a job, then I do not want to be marketed to.
If you got my email address because I signed up for a service, then I do not want to be marketed to.
If you got my email address because I purchased something, then I do not want to be marketed to.
If you got my email address because someone else "legitimately" entered my email address into your field, then I do not want to be marketed to.
In short: your definition of "legit" likely does not meet my definition of legit. The only email that I deem to be legit is an email that:
1) is @from a domain name that I recognize (walks like junk, talks like junk: it's junk)
2) is @from the same domain name as the correspondent (no third party bulk email or proxies; eg mailchimp et al)
3) does not have a no-reply@ as the reply-to address (I must be able to talk to a human)
4) does not hyperlink to third party domains (from@domain must match hyperlinked domain text)
Any legitimate email outside of those parameters is specially treated with liberal amounts of filtering.
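Roughly, those four rules translate into something like the sketch below. It assumes the message is available as raw RFC 822 text; the "recognized" domain set, the Return-Path heuristic for rule 2, and the link regex are all simplifications of my own, not part of the original rules:

```python
# Sketch of the four "legit email" rules above; deliberately strict and simplistic.
import re
from email import message_from_string
from email.utils import parseaddr

RECOGNIZED = {"example.com"}  # rule 1: domains I actually recognize (placeholder)

def domain_of(header_value: str) -> str:
    return parseaddr(header_value)[1].rpartition("@")[2].lower()

def looks_legit(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    from_dom = domain_of(msg.get("From", ""))

    if from_dom not in RECOGNIZED:                                           # rule 1
        return False
    if domain_of(msg.get("Return-Path", msg.get("From", ""))) != from_dom:   # rule 2 (rough proxy for "no bulk mailer")
        return False
    if parseaddr(msg.get("Reply-To", msg.get("From", "")))[1].lower().startswith("no-reply@"):  # rule 3
        return False

    body = msg.get_payload() if not msg.is_multipart() else str(msg)
    for host in re.findall(r'https?://([^/\s">]+)', str(body)):              # rule 4
        if host.lower() != from_dom and not host.lower().endswith("." + from_dom):
            return False
    return True
```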
Your attitude doesn't account for the possibility that since you do business with somebody, you need or want to receive some of their emails.
This is the nature of any relationship. You can't be ruthless in eliminating aspects you don't like, if you don't want to end it entirely because it's net positive.
> Your attitude doesn't account for the possibility that since you do business with somebody, you need or want to receive some of their emails.
You are incorrect and interpreting my comments very narrowly.
Doing specific business with somebody should not give that somebody carte blanche to use my email address for whatever reason they wish. I absolutely can be ruthless in eliminating aspects I don't like, and if businesses don't like that: tough shit. My world doesn't revolve around your business. If that means that the business relationship ends right there, then I'm better off for it.
And, assuming there are clear and reasonable ways to inform them that you don't wish further communications (as there should be with any professional marketing), you should take that route. Otherwise anything is fair game. But I attended an event and got a follow-up email? Calling that spam is mostly being an asshole.
Why would a follow-up email from an event be exempt from being spam? Attending an event is not consent for marketing. Marking spam as spam is not being an asshole.
I know it sucks when an IP gets burned, but when you're acting in good faith it's a rarity - or has been in my experience.
One of the reasons you'll pry Evolution from my cold dead hands is Right Click -> Create Filter -> @domain.co.uk and done.
I have filters for almost everything, my boss goes into one folder and gets set one color, automated notifications from my internal system another (green if everything is OK, orange if there is something I really need to look at).
What I *really* want is a desktop client that exposes a nice clean Python (or similar) API so I can automate even further - I mean Python has everything I want if I want to do that from the CLI/cron but having it built in would be really nice.
Nearly all of them are tech companies trying to sell me stuff because I once bought something from them, tried their service or briefly thought about it.
As I'm the head techie (by dint of being the only techie), I'd be the one purchasing from them in the first place.
I find a filter that sends everything to Marketing/Hardware and marks as read fairer than flagging them as spam.
If I filter out all the people telling me they'll be out of the office, birthday announcements, etc., all the vendors trying to sell me stuff, and all the automated stuff (which I do automatically), I get fewer than 5 emails a day (I put my boss on Trello, it's just better for what we need), which I check once at 11 and at 10 to 5.
I’m ruthless about my time since I’m the only programmer.
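In the spirit of the "nice clean Python API" wish a few comments up, here is roughly what that vendor filter could look like as a cron-driven script using only the standard library. The server, credentials, folder name, and vendor-domain list are all placeholders:

```python
# Move unread vendor mail out of the inbox and mark it read, roughly mirroring
# the Marketing/Hardware filter described above.
import imaplib
from email.utils import parseaddr

VENDOR_DOMAINS = ("vendor-a.example", "vendor-b.example")  # placeholder list

M = imaplib.IMAP4_SSL("imap.example.com")
M.login("me@example.com", "app-password")
M.select("INBOX")

_, data = M.search(None, "UNSEEN")
for num in data[0].split():
    _, hdr = M.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM)])")
    raw_from = hdr[0][1].decode(errors="replace")
    sender = parseaddr(raw_from.split(":", 1)[-1])[1].lower()
    if sender.endswith(VENDOR_DOMAINS):
        M.copy(num, "Marketing/Hardware")             # file it for later
        M.store(num, "+FLAGS", r"(\Seen \Deleted)")   # mark read and drop from INBOX
M.expunge()
M.logout()
```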
The unsubscribe link is more effective at stopping unwanted emails.
I automatically unsubscribe people who mark email as spam, but lots of marketers don't do that, so you will still receive emails.
Most legitimate companies will respect the unsubscribe links as that is required by law and they invested more infrastructure around that functionality.
I wouldn't click on links from phishing attempts or emails from sketchy services I never signed up for. Anything semi-legit is better handled through the unsubscribe link first.
Heh, you could make a phishing attempt that looks like a really bad phishing attempt, and inside of it have a link that says "CLICK HERE TO REPORT SUSPECTED PHISHING ATTEMPT TO IT"
I nearly fell for a real phishing link once recently, due to changes that have been made by our IT department.
Firstly all external senders have the mail reformatted with a red bar at the top and some text, and secondly all hyperlinks are forced through a proxy, which makes it effectively impossible to know what the URL is from the email.
I'd received a (rare to my work account) phishing email and I was about to click on a link in there just before I started thinking. I'd been trained to trust emails with the red bar, as most external mail I get is trustworthy, and I reflexively check links before I click them, but this was just another one going through the proxy.
I'm not sure how much these changes help less technical users, but it made me less secure.
That's just it, I simply wasn't being careful. Everyone gets distracted sometimes. I don't remember it in detail, but it was a fairly standard phishing email from a fake domain and I clicked a link. Not only am I in IT, my department is somewhat security related.
A guy I worked with fell for one when he was selling his car online. He got an email from an interested buyer (a car thief) that linked to a very good replica of the site he was selling it on. The car thief turned up and scouted the location, but the giveaway was that he showed little interest in the car, at which point my colleague went back, checked the email, and discovered the phishing.
Even trained intelligent people have momentary lapses of concentration. Imagine opening a link from email on your phone then looking away for a second while it loads, but then the address bar has disappeared and you've missed the ssl indicator.
> It seems if someone who works in IT and is very careful fails it, this is an impossibly high standard to meet.
It's the same reason you automate things. People make mistakes. It's not about messing up once. It's about always being absolutely positively sure that you aren't making a mistake.
If you've ever clicked on a link from an email, you are vulnerable.
I think it's reasonable, but there should also be reasonable common-sense support for these policies.
"No tailgating" can be supported through decent vetted entrances and exits.
"No phishing" could have mailserver and mail client support to properly flag the origin of emails and/or enforce "no remote loading" or "restrict html" or "check attachments" sorts of things
Say what you want about air travel security, but the TSA has figured some stuff out. For all of the behind-the-scenes employees working in what's known as the SIDA (security identification display area), there are protocols that must be followed. Most seem like security common sense, but to really drive the point home there are stiff financial penalties for not following them. Typically the employer will pick up the bill, but ring up a couple of them and you'd better start looking for a new job, because you're definitely going to get fired.
I think some kind of strike system is completely reasonable and if you’re working on sensitive information or systems then phishing most definitely needs to count as a strike.
It is absolutely unreasonable to get fired over not putting your life in danger to stop tailgaters.
There are essentially three types of tailgaters: people who belong there, tourists, and nefarious actors.
People who belong there are just being lazy, but that's OK. Tourists you can stop (these are people who might be guests, or are just curious), but these people aren't bad people, and most likely won't do any harm. Then there are the nefarious types, the criminals... the ones who are there for a bad reason, and I'm supposed to stop them?
I like the three strikes rule. Sometimes it just slips through.
I got caught by a phishing link once because I had just gotten off the phone with a co-worker John and got an email 5 minutes later that said "hey James, it's John". Didn't even think twice about clicking it.
Rohyt Belani, CEO of Leesburg, Va.-based security firm Cofense (formerly PhishMe), said anti-phishing education campaigns that employ strongly negative consequences for employees who repeatedly fall for phishing tests usually create tension and distrust between employees and the company’s security team.
This is the key. If you think security teams aren't hated enough for making you change your password every 90 days, just wait until their "games" are the reason people are getting fired. This is a guaranteed way to get your users to not only not want to help you, but to actively work against you. And if enough people scream, the C ring will eventually listen. And I don't think the security team will win.
On the other hand, all the security team needs to do is point to the number of billion-dollar breaches that have happened due to phishing. If phishing tests are a game, then so are DR tests, so are code reviews, so is the QA department. If phishing tests are a game, then so are your yearly performance reviews, or showing up to work on time, or meeting your deadlines.
Not destroying the company through your own negligence should be basic standard practice. Repeatedly failing a phishing test even when given proper security education (like PhishMe provides) is negligence that can destroy an entire company.
I worked in security at a company where the IT security department didn't report up the IT chain but was under HR alongside the Internal Audit department. Enforcing policy and holding people accountable were fundamental expectations of our managers all the way up, no different than someone repeatedly harassing a coworker or watching porn at work.
>all the security team needs to do is point to the number of billion-dollar breaches that have happened due to phishing
A problem here is that a company whose leadership is receptive to your argument would probably already have mandated some form of security/phishing training. The ones who are likely to fall victim to these problems are the same types who do not plan for this stuff to begin with, and also would not be receptive to your hypothetical argument, imo. (e.g. "I don't have time for thought exercises, how many performance bugs have you fixed this week!?")
The path to victory is far more often along the lines of teaching your leadership to care for themselves about security, rather than trying to beat them over the head with heavy-handed hypotheticals of doom and gloom if they don't listen to you and do what you say. They need to feel it in their bones themselves. Otherwise you're never going to get cultural buy-in from the rest of the organization.
"Not destroying the company through your own negligence should be basic standard practice. Repeatedly failing a phishing test even when given proper security education (like PhishMe provides) is negligence that can destroy an entire company."
The issue is that people are trained and required to ignore warning signs most of the time, so it is impossible to crack down too harshly.
Employees by definition do not really care about destroying the company, because they do not own it and can walk away if they like. So the company does not have unlimited leverage over them.
If they do it wrong, get corrected with proper training, do it wrong again, receive training again, and continue to do it wrong, then yes. Anyone would.
When a new phishing test goes out everyone in my department announces to everyone else to watch out for it. So it's a bonding experience of the non-security people against the security people.
That is why you pay consultants. They send out the phishing test, and hopefully regular people bond with the security people in an effort to pass it.
I mean, after all, security and regular people in the company should want the same thing (company success... which implies not giving away things to phishing probes)
Yes, you don't want to create an environment in which people don't want to ask IT/opsec people for help for fear they will be getting themselves in trouble.
This doesn't seem like much of a reason to try and dissuade employees from discussing these things though.
I think there are probably a great many sysadmins, security analysts, and CISOs who can only dream of a day when run-of-the-mill employees are having casual conversations about phishing and identity security at the office.
Well, lest we forget that some companies only pursue those whom management at some level wants out. Many times negative-consequence campaigns are just used to hide the real intent.
Then throw in all the people excluded from being judged, and it can affect morale to the point where people become ambivalent about other security issues.
I'm capable of not clicking on random links in email. I'm not cognitively capable of remembering dozens of passwords and then "forgetting" them (i.e. memorizing that the password you previously memorized is not the password any more). And then throw in the fact that schemes like that also don't work.
Phish tests need to be fair to people who actually understand something about security.
"Opening an email" is not actually an issue (spearphishers that sit on drive-by 0-days in current browsers or email programs are not a threat model that most orgs can possibly defend against). Opening attachments is hard to measure and again needs context: what kind of software and sandbox was the attachment opened with? Attackers using some ancient forever-day word processor exploit is realistic. Attackers sitting on fully patched VM escapes is unrealistic. If the VM used has unimpeded network access, then the attacker needs no VM escape. If the target opens a phish link in a current browser, but then refuses to enter valid credentials (because the user is wary), then the user can be argued to have passed the phish test.
If you make failure fireable, then you need to demonstrate that the victim was actually successfully phished.
If failure requires remedial training, then you can afford a high false positive rate: Clueless victims learn not to click on links, and sophisticated "victims" get to talk with a security person about why their action was dangerous or harmless, and in accordance or in violation of policy.
This definitely needs to be considered. I open WSL and use curl on suspicious looking email links. I've been logged as doing so before. I'd hate for that log to actually go somewhere significant.
It might be worth considering carefully how safe the practice of opening essentially random email links might be. Are you opening the links with a full suite of forensic measures in place, or are you dropping curl $URL into your terminal on your workstation? It looks like WSL isn't exactly a sandbox. It does seem to already be used by some malware: https://research.checkpoint.com/beware-bashware-new-method-m...
In a world with drive-by exploits, and where merely opening a link leaks information, it could well be considered unsafe to open essentially random links from emails. I've definitely worked with developers who seem to believe that curl is magical and immunizes them against every possible attack.
Curiosity is a wonderful thing! It's just sometimes it can be dangerous to a person and to the people around them. It might not be a bad thing for people to learn a smidge of caution.
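To make the "opening a link leaks information" point concrete: even a bare curl/wget/lynx fetch hands the phisher whatever per-recipient token is baked into the URL, JavaScript or not. A minimal Python sketch (the "rid" parameter name and the URL are made up for illustration):
    from urllib.parse import urlparse, parse_qs

    def tracking_tokens(link: str) -> dict:
        """Return the query parameters a phisher logs the moment the link is fetched,
        regardless of which client was used or whether JavaScript was enabled."""
        return parse_qs(urlparse(link).query)

    print(tracking_tokens("https://evil.example/login?rid=7f3a9c&campaign=q3"))
    # {'rid': ['7f3a9c'], 'campaign': ['q3']} -- enough to mark a specific recipient as "clicked"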
One of the best policies I ever witnessed: there was a second guest network with internet access and nothing else, used by guests/consultants and for Facebook/Twitter/porn (the company just paid for internet twice). Employees had a second, crappy machine connected to this isolated guest network for that purpose.
I'm a tech professional and security is a regular part of my jobs. At one point -- while contracting for a Fortune 500 client that shall remain unnamed -- I received an email that was quite clearly phishing. Curious as to what the payload was and whether it was worth reporting, I fired up lynx and followed the link in the email from the command line.
I was promptly informed that I had failed the test and I would be receiving a formal reprimand.
Can't speak to whether a reprimand is warranted or not and I think many here will disagree, but unless your job is investigating phishing, you shouldn't do this because you ARE ultimately putting the corporate network at risk unnecessarily - what if it was a real link and happened to exploit a zero day on your box? Management wouldn't accept your reasoning for following the link I suspect.
The risk of hitting an exploit on the command line, especially with something like wget, is enough orders of magnitude lower that I think it falls under acceptable. The standard cannot be zero risk because that's impossible. Even shutting off the internet link doesn't get you all the way to zero.
The issue isn't how much risk there is in opening it. The problem is that regardless of how much or little risk there is in opening the link, it wasn't op's job to examine it. It was unnecessary risk to open the link.
I mean, it's not my job to refill the office coffee pot when I take the last cup of coffee, either, but since I'm decent to my coworkers I'd probably do it. Not the OP, but since I know how to open a malicious link safely in wget I would happily do that for similar reasons, and if I got reprimanded for it "not being my job"... I'd start looking for a new job where I'm respected for what I'm able to do.
My job is not investigating phishing but security is everyone's job. Reporting what was obviously a targeted phishing attack to the people who do investigate phishing is a basic expectation. Now before you try to tell me "well you should've forwarded the email and been done with it" those guys are going to be pissed as fuck if I pass along every spam link trying to sell me boner pills, so I've got a duty to make sure it's a credible threat. As somebody who knows how to safely investigate such a link and did exactly that it was ridiculous to be penalized. Security (albeit not for email) is a part of my job. I don't think people should be trained not to use their brains when it comes to security threats. If people used their brains more often phishing would hardly be a problem to begin with.
"Now before you try to tell me "well you should've forwarded the email and been done with it" those guys are going to be pissed as fuck if I pass along every spam link trying to sell me boner pills"
I know you say the team would be pissed, but it's actually the exact opposite! Firstly, most sophisticated companies have automated the abuse inbox management process, but even when it's not automated, I'd rather 100 easily ignorable reports about boner pills than one person not send an actual spear-phishing email. Plus we can use the generic spam reports to better train our spam filters so please do keep sending them, even the Nigerian prince stuff.
It's been over a decade since there was an RCE for Lynx. The attack surface of Lynx is several orders of magnitude smaller than that of a regular browser. No code is safe, but giving someone grief over following a link using Lynx is security theater at its worst.
Downloading and executing code is only one way a browser session can be abused. At the very least you're giving away everything your browser (even Lynx) puts in the headers of a request. That's often a heck of a lot of useful information for an attacker. Lynx supports cookies too so it would be possible to track a user between sessions. I don't know how that might benefit an attacker but I'm not an attacker[1].
I think a reasonably paranoid approach like "Hackers might think of ways to abuse this that I haven't thought of" is best. Unless your job is to take a risk and visit a phishing site, don't take the risk. Even with Lynx.
The point is that you're giving data to a known phishing site by visiting the link in a phishing email. It's true that ESPN might also be a phishing site but it's less likely.
I doubt most if not all exploits would work in lynx/links.
But your point is spot on, don't take it upon yourself to do things that aren't in your job description. Otherwise you become that person who takes it upon themselves to "fix" things and makes the problem worse for the people responsible for fixing things.
Unless you're certain how that ID is generated and/or linked to your identity, you've probably just put someone else at your company on the naughty list.
Yup. This is why people are taught not to click links in emails for things like banking. It's a dangerous world out there, even for seemingly innocuous things.
Presumably you were able to explain your case and have the reprimand expunged from your record. As long as they are reasonable in that way I don't think occasionally testing the people handling sensitive data is a bad idea.
Should it be expunged though? They've indicated they were aware it was quite clearly a phishing attempt, but they still accessed the link. If the test was to see if a user would try accessing the link, then this user failed the test. Why should that be expunged?
Curiosity shouldn't override security, and good intent shouldn't override policy if the operator acted knowingly.
This isn't to attack maxk42, but to engage the question head on.
I was hoping it was implicit in this statement, along with the other context offered, that it would be read with "information security" in mind; that's on me to communicate better next time.
If I’m curious I’ll open the link off the company network. Easiest way I can think of is just opening with a browser on my private iPhone while on a 4G connection.
Yeah - I’d be wary of doing it without changing the parameters for that reason. But obviously you can’t check a link without checking a link.
It would also be interesting to hear whether someone actually considers me to have failed anything when visiting a (faux) attacker’s link on my own device off the company network and entering no credentials.
There should be a line drawn between real security-conscious workplaces, and the kind of self-important chickenshit places that seem to delight in playing games and harassing their employees with this kind of thing.
I see this a lot as a security guy. There has to be a healthy medium. Users can be fired, sure, but this should be a last resort. It's not really fair to say "you're fired" when you don't have DKIM/SPF/DMARC, you haven't tagged external emails as such, you haven't provided awareness training, or you have provided training but not in a gradual form (i.e. from Nigerian prince emails right the way through to sophisticated attacks), you're not providing outbound filtering, educational resources or a reporting tool, you've not registered similar domain names or <my company>.otherTLD, no sandboxing, no AV...
Users know what phishing is, even the most naive of them. You need to do your darndest to make sure nothing gets into your network first. If people are repeat offenders then you have to chat with them and figure out what's going on. If they're being intentionally obtuse - clicking links to see what happens even when they know it's phishing - then look into firing them; otherwise just act like you're on the same team and provide them with education, but not overwhelming amounts. That seems to work, in my experience anyway.
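As a quick illustration of the first item on that list, here's a rough sketch of the "do you even publish SPF/DMARC?" sanity check, assuming the dnspython package is installed and using example.com as a placeholder domain:
    import dns.resolver  # third-party package: dnspython

    def txt_records(name: str) -> list[str]:
        try:
            return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print("SPF:  ", spf or "missing")
    print("DMARC:", dmarc or "missing")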
Your organization becomes more secure if people aren't afraid of revealing their mistakes.
Seriously, you should give people who fail phishing tests cupcakes and additional future phishing tests. If there is a continued failure or inability to learn then there is a problem to be fixed perhaps with firing.
>give people who fail phishing tests... additional future phishing tests
This usually happens, especially if they're using something like PhishMe. If you fail the phishing test, you're immediately told you were tricked, and scheduled for mandatory training within a few days. After you complete the training you're put on a re-targeting list.
What we're talking about isn't firing someone for making a mistake. It's firing someone for gross negligence over and over again even when given proper training and incentives. At some point it becomes clear that the employee is a danger to the company. If they're that careless with their emails even after getting caught and going through training, what else are they neglecting to do? And who might be injured/killed because they don't care?
I think it's very case by case. On first fail of a phishing test, absolutely not. They should have phishing explained to them again, maybe in a more personal setting (instead of educational video/talk).
It's definitely true that anyone can be spearphished or can fall for a sophisticated enough phishing scheme, but if someone is continually failing the most basic phishing tests (responding to random emails asking for your password for example) I think that's grounds for firing.
It's akin to locking up after you leave. Is it a fireable offence to fail to lock up the office when you leave? Probably not the first time. But if you never lock the door, at some point it becomes a liability.
Sure a professional could break in even if you lock the front door, but it's not like locking up is pointless.
Proportional to the degree of damage that can be done by the employee in question... yes, absolutely. If you have the responsibility and authority to disburse millions of dollars to a random bank account number, then you've got a high degree of responsibility not to be spear-phished, and it would be a disqualification if you are unable to resist it.
On the other hand, firing a front-line call center employee because they failed the spear-phishing tests is fairly pointless and more damaging than helpful.
Where exactly the line falls would be up to the business and like so many things, involves too many factors to be reasonable to discuss here. With the typical concentrations of power and authority in a business, it's only going to be the minority of employees that would be faced with termination for this problem, because only a minority will have the power to do significant damage to the business in general.
I think it's not hard to see that the article is mostly talking about situations where the punishment isn't proportional to the degree of damage that can be done by the employee.
A few years ago I received one of these at work, before I even knew they were a thing. I would have been very annoyed if they'd taken any action against me for following the link in it.
The email itself looked like a standard spam email, but the link was really weird, having a few tokens as part of a query string. Normally phishing emails have simple URLs in them.
So I did the obvious thing of opening the link in a fresh, zero data, locked-down VM just to see where it would take me.
I got the message that I was an idiot, and my company also was notified that I'm clueless about information security.
I can only imagine how difficult it might be to explain to someone what I had done, and why I probably shouldn't have to go on some tedious training course let alone be fired. Luckily all I saw was an increase in the number of these emails I received.
There was the general "don't follow links in unknown emails" guidance, but nothing about what to do if you're sure it's a bad email but terminally curious.
As far as I could tell nothing bad could happen (even JS was off in the browser I used to open it) when I followed the link, but is there something I should be aware of?
The security teams are correct in the training they run about these: report the suspicious email and leave the investigation to them, don't try to DIY the investigation. Note you aren't penalized for false positives (reporting a legitimate email as a phishing attempt).
Okay, but if I'm not supposed to click on unknown links, why do I even have a web browser installed on my work machine? 99% of the time I'm using it to access unknown and untrusted external websites.
I don't understand this attack: if the attacker can control CSS on the page, then they probably can also control JavaScript, which means they can extract any data from it.
That is a nifty way to steal data. Luckily I was running in a completely clean environment inside a VM so wouldn't be an attack that could have occurred this time.
I wonder how many such phishing e-mails a company gets a day.
If the volume is not that high and it's something manageable by the (proper) security team, I wonder if a company could implement a policy where the employee can report a phishing e-mail to the security team and get to sit with them to watch them investigate. If that's not possible, maybe have the security team write up about investigations into phishing e-mails from time to time and send the results to employees as internal memos.
I work at a big company with 10,000+ employees (only hundreds of software people, mainly sales people). We get phished probably once a year, no lie. You've got older folks working in HR departments, and someone gets an email like:
from: jim.bob.sales@bigcompony.com
"hey cindy it's bob your bosses boss boss. I forgot my password and have a MAJOR presentation coming up. Can you give me yours for login so i can see our powerpoint?"
Since you are making the more extraordinary claim, you need to provide evidence that your computer usage practices are 100% infallible to sophisticated attacks against you by people who know a lot about you.
How do you prove this though? It's the classic "you can't prove a negative" scenario. I have no particular desire to associate this account with my real name or I would post my work email here just to let people try.
Oh sure everyone can be vulnerable to certain cons. Fortunately all spear phishing attacks can be very easily avoided through technical means. I guess those who down voted me must be unfamiliar with the functionality available in current enterprise email systems.
That's great. Not all spearphishing comes in via email though. See Dark Caracal or Magic Hound for relevant history here.
Also fraudulent invoices are another form of spearphishing that you'll still have a pretty hard time against.
I don't know, I'm also on the operations side in a large enterprise and I help with our internal phishing efforts. Pretty sure I'm familiar with the same tools that you are and I vehemently disagree with your assessment.
I have a client in the banking industry who performed these tests. Everyone failed. I'm not sure if they ran them again but there's a point where you need to sit someone down and explain how serious the situation is. If they still don't get it, you should probably fire them or transfer them to a department that isn't vulnerable.
If Security/IT is so dense that they see value in testing before training, we've already identified a problem: either the culture, or an "our employees are too smart for this issue" attitude.
Nope. Of course not. These opportunistic campaigns use inherent human weaknesses to lure and snare suspecting and unsuspecting users.
Now, if someone is told that official policy states you must only use approved devices and services and you violate that and that introduces additional weaknesses, then yes. But that’s different.
I mean phishing experts in active campaigns get phished. So, regular Jane and Joe? ‘Course not.
Honest question: why do so many workplace penalties come with only two levels of punishment: words ("reprimand") and getting fired? This would be like normal law having only speeding tickets and the death penalty. Losing part of your bonus for the year would certainly sting enough to provide a disincentive without having to fire anyone.
I'm a reasonably well paid software engineer in Silicon Valley, but I don't get a bonus or options of any kind. I suppose my employer could take vacation days from me or not give me a raise next year, but if they did either of those things because I "failed" a phishing test (where "fail" doesn't even involve giving up any credentials) I would probably be looking for a new job anyway, so they might as well fire me.
A forgone bonus was just an example. It could be any sort of penalty or loss of perk, and my question is directed at the steady-state equilibrium of the job market, not at behavior given whatever contracts are currently signed. E.g., instead of offering someone $85k, offer them $84k plus a $1k bonus if they avoid phishing attacks. Or give a bonus vacation day to each employee who passes the test. Etc.
I'm pretty sure something like phone banks (in or out-bound) are filled with low-skill workers whose credentials would be valuable to data thieves.
I've worked low wage jobs for the Government and Private Industry where we were hit with ransomware and phishing attacks. I think you are underestimating how many workers are really in that position. I'm not sure if you're American, but it's very common in America.
I claim: Out of the fraction of jobs that fit the description you gave, fewer than half are realistic targets to phishing. Of the fraction of jobs that are realistic targets to phishing, fewer than half fit the description you gave. That's more than is necessary to make my question relevant.
Phishing is frankly an embarrassment for the mainstream security community. The temptation is to "blame the stupid users" -- but the truth is that even a script kiddie can take a real email from a mainstream brand, "Save As HTML...", change one link, and resend... and snare even sophisticated victims. This BlackHat talk (https://www.youtube.com/watch?v=Z20XNp-luNA) shows just how easy it is to phish even users who think they are too good to be fooled.
At Inky (https://inky.com) we're using a combination of computer vision, anomaly detection, and domain-specific hacks to identify zero-day phishing emails "from first principles" (as I like to say). And it works! But the pushback from the security establishment is impressive. I like to say that there are two widely-held but false beliefs about phishing: 1) phishing is solved; 2) phishing is unsolvable.
The truth is that we can already see clearly that within 3-5 years machines will be good enough at identifying phishing emails that attackers will move to another vector... but you'd never know it listening to "Security Thought Leaders."
> The truth is that we can already see clearly that within 3-5 years machines will be good enough at identifying phishing emails that attackers will move to another vector...
Claiming that a complicated problem involving a lot of humans, that is very much not solved at the moment, can expect to be fully "solved" in 3-5 years stretches my credulity.
I fully expect the next decade to look much like the past several decades, with both sides of the security arms race making incremental adjustments and improvements.
I might consider this fair if my employer gave me tools for actually looking at email headers, etc. If I have to use Outlook/Exchange, and nobody will tell me what the external SMTP server IP address is (among other information), then this is unreasonable.
I've had two different large, corporate employers do the phishing training thing. I've failed occasionally at both of them. You can make a phish as close to indistinguishable from a legit email as you want.
In my experience these "phish-your-employees" programs have 2 side effects, both possibly unwanted:
1. Reluctance to even look in Outlook for fear of getting a drive-by. I know these haven't shown up in a while, but Outlook is a strange beast. That is, I'm just not going to look for, or even open, emails.
2. Enthusiastic reporting of false positives. After getting burned by a decent phish, I reported a few legit emails, including one that had a salutation of "Dear Joe User:" or something equally generic and stupid, but was a genuine email. There's sort of a Poe's Law in the relationship between phish and real emails. This wastes security staff's time. Or maybe you want that. They tend to be a bit weird and annoying.
I used to work in a casino that sent out a notice to all employees urging them to report more suspicious activity. There was no information or training given on what specifically to look for.
After some time the initiative was deemed a great success. Although there had been zero improvement in the rate of dangerous activity stopped or prevented, there had been a giant increase in the amount of reports that turned out to be false.
We just implemented the Phishhook Outlook addon. I'm sure our Security team will love getting 9000 emails a day to sort through (3 per day per employee).
I don’t think parent means that in the absolute sense, just the relative sense. As in, not even opening emails where the sender or subject suggests spam.
I’ve almost direct-deleted an email like this, but it turned out that the sender got married and changed their name.
This is a good point, and I think there are parallels to other areas of the industry.
For instance, let's say I'm a junior developer and I'm told that merging code that fails a suite of unit tests is a serious offense.
If I one day forget to run the test suite and merge code that breaks stuff ... it might be my fault at an acute level.
But at an organizational level, someone should be saying, "If it's that important to not merge code that breaks tests ... then we should change our process so you _cannot_ merge code until all tests have passed."
And if nobody gets faulted at the organizational level, then the junior dev is really just a scapegoat.
We still issue speeding tickets instead of firing everyone at the DMV.
Educate all you want; no consequences, no behavior change. Incentives matter. Employee opsec metrics should be a part of corporate cybersecurity insurance pricing IMHO.
The Fortune 50 company I work for sends out what I must consider the stupidest phishing test emails I've seen. They are blatantly simplistic and transparent.
I have had this fantasy of trying to see if I could trick the IT people who send them with a phishing attempt. It would involve perhaps reporting that my virus scanner had reported something suspicious in an email to get them to open something.
Or maybe register mimecastprotection.com, then send out a fake email to IT as if it was a big marketing announcement from Mimecast that "We've changed our name! We are now Mimecast Protection as part of our commitment to serving you!"
My theory is that a really well crafted phishing email is going to be very hard to avoid.
It makes no sense to blame users for doing perfectly normal things like clicking on web links, reading email, opening attachments, reading a memory card, connecting to a wireless network, etc. rather than blaming hardware and software developers for designing systems where perfectly normal actions result in criminals taking over your computer.
It also makes no sense to blame users for thinking an email message is from their bank when there is no obvious, visible difference between messages from their bank and messages from criminals.
I was just talking to a coworker yesterday. At his previous job, part of his security work was to go out to the employee parking lot and drop thumb drives; if one was plugged into the corporate network, it would send a message to the security department with the terminal and user account. I actually said to him that no one would be stupid enough to do that; he told me they did this monthly and at least 2 to 3 people would get caught every time.
He said employees had training and still failed. No one got fired for it though.
It’s like people don’t have (private) computers anymore. If you get caught doing that, or watching animal porn on your company laptop or whatever, the problem isn’t poor IT training, the problem is that you should have bought your own computer! How are people so eager to look at the thumb drive that they can’t wait until they get home?
Is someone trying to apply AI and Deep Learning to phishing attacks? One of the things which PG noted in "A Plan for Spam" back in the day, was that the Bayes classifier found markers of Spam he never would have thought of.
That is such a fascinating document. Back in late 2002 or early 2003 I did an implementation of the algorithm in that document in C (because I didn't understand Lisp) with the help of a more senior programmer at my company.
Once my implementation started to work I was really amazed how such a simple algorithm could be so successful.
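For anyone curious what such a classifier boils down to, here's a toy sketch in the spirit of "A Plan for Spam": per-token spam probabilities combined under a naive independence assumption. The corpora, smoothing, and tokenizer below are made up for illustration and much simpler than Graham's actual formulas:
    import math, re
    from collections import Counter

    spam_corpus = ["win money now", "cheap pills win big", "click this link now"]
    ham_corpus  = ["meeting moved to 3pm", "please review the attached report", "lunch tomorrow?"]

    def tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    spam_counts = Counter(t for msg in spam_corpus for t in tokens(msg))
    ham_counts  = Counter(t for msg in ham_corpus  for t in tokens(msg))

    def token_spam_prob(tok, k=1.0):
        # Laplace-smoothed P(spam | token), assuming equal priors.
        s, h = spam_counts[tok] + k, ham_counts[tok] + k
        return s / (s + h)

    def spam_score(text):
        # Combine per-token probabilities in log space (naive independence assumption).
        log_spam = sum(math.log(token_spam_prob(t)) for t in tokens(text))
        log_ham  = sum(math.log(1 - token_spam_prob(t)) for t in tokens(text))
        return 1 / (1 + math.exp(log_ham - log_spam))

    print(spam_score("win big money"))      # high: spammy tokens dominate
    print(spam_score("review the report"))  # low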
Firing people for this would leave the entire company in a state of fear. Have you seen those sci-fi dystopias where a script or AI unfairly decides the fate of people...?
As the article implies, absolutely not, and obviously so. Do not make enemies of your own staff, a hostile workplace is exploitable, not to mention unpleasant and demotivating.
My god are people bad at security. Security people especially so. Actual security is not bound to the mechanics of securing things; it is bound entirely to risk. Did you just fire the best accountant your company has because they were too focused on solving your huge tax liability to notice a phishing attempt? Risk.
Everyone is fallible, including your IT security group. If phishing attacks are actually causing appreciable damage to your company, it's the security group who needs replacing. Can they report quantitatively how much more value your organization has captured with its 90-day password replacement policy, and does it account for all the passwords written on post-it notes lying around, and the productivity impact of constantly forgotten passwords?
The purpose of security is to mitigate the risk of loss, but so is insurance. Don't fixate on the machinery of security, and don't fire people for poor email filtering whose value is not filtering emails.
I think it depends on the level of trust that you are given in your position.
I got reamed on another forum for saying someone shouldn't be allowed in a certain role, after they sent $1 million to a fake bank account to someone posing as a supplier. But if your work place doesn't have controls in place to prevent that, it's part of your job to be that control and take additional steps to protect yourself and your employer.
My company uses similar tests - you get a random email and if you click on the link you're required to take some training. One of the things they emphasize is to ensure the actual URL seems legitimate, or is pointing to a company domain if the email claims to be from within the company. Ditto for the From field.
Recently there were reports of an active shooter on site. Everyone got email alerts about it. Many (most?) employees ignored the alert because the From address was an unknown external domain. Fortunately there wasn't an active shooter (although the person who was arrested was armed).
And then the company sent out an email asking us not to ignore those types of emails even if it appears to be a phishing attempt.
I think from now on, just for the heck of it, I'll click on the links but modify some of the characters in the URL. Hopefully someone else in my/some company will be notified that they need training.
Depends on what you're doing, but if your employees are dangerously gullible, of course firing them should be on the table if they (especially repeatedly) exercise that gullibility, and it is not feasible to give them tools to mitigate that risk (like PGP, though that's not perfect either).
My general approach is to create computing environments which make it generally impossible to send/receive general communications, and access sensitive information (or the web), at the same time on the same machine. The communication channels available to an agent while accessing a customer file are heavily sanitized, and the environment does not allow for opening links; images are transcoded in fresh containers on a remote machine with no general access to the database or the internet.
The real question is: do many businesses understand the risks well enough to make that determination well?
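As a rough sketch of the "fresh container, no network" transcoding step described above, assuming Docker is available; the image name, paths, and timeout are placeholders, not a hardened design:
    import pathlib, subprocess

    def transcode_untrusted(src: pathlib.Path, dst_dir: pathlib.Path) -> None:
        # Re-encode an untrusted image inside a throwaway container with no network,
        # so a malicious file never touches the host tools or the internal network.
        subprocess.run(
            ["docker", "run", "--rm", "--network=none",
             "-v", f"{src.parent}:/in:ro",
             "-v", f"{dst_dir}:/out",
             "imagemagick-sandbox",  # placeholder image with ImageMagick installed
             "convert", f"/in/{src.name}", f"/out/{src.stem}.png"],
            check=True, timeout=30)

    transcode_untrusted(pathlib.Path("/srv/quarantine/invoice.jpg"), pathlib.Path("/srv/clean"))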
We definitely do not enforce the anti-tailgating rules enough at DoD. I find it best to stand aside and let people pass before I swipe. The funny thing is that there's security ensuring there is ample space between us, but still some jackhole wants to bend the rules because they are special.
I work at a firm that creates and sends these phishing tests for our clients. Prior to doing this type of work we always assess the "tone at the top" regarding the culture of the workplace, to assess the suitability of doing these tests.
However, if there are staff that repeatedly fail these tests and receive constant training, then that's a question for the business in how willing they are to accept the risk.
Bear in mind that there are tools that can quite often successfully block these types of emails before they ever reach the end user; most often, when we are crafting these emails, we need to ask the IT teams to unblock our domain.
In my opinion I think in most cases no, however depending on the industry and the strike rate you might have a case for it at some point.
I think I'm a pretty technical guy. I'm fairly certain if my job was more dependent on email, I'd get phished eventually. If the IT department hasn't made a move towards using 2FA, it doesn't seem right to punish employees with termination.
I'm interested if the company which would fire a random employee for 3 strikes would also fire a VP-level employee. If they don't, then it's just BS, not security, given how much more access a VP level person has.
OK, if you do that, I won't open any links in any emails from itdepartment@example.com, where itdepartment@example.com is the group email of my company's IT department.
I may or may not open any emails from that address period, depending on how paranoid I'm feeling.
Or... and catch me if I'm talking crazy here... do you want to fix the email software so I can trust that only the IT department can send me emails from itdepartment@example.com that actually make it through the firewall, the email filtering software, and the internal email security policy to reach my email account?
It depends on the industry and their regulatory obligations as well as their risk tolerance. Defense and Finance should have a 3 strikes rule for specific role within their orgs that produce the greatest risk. Health care would be next up and may or may not benefit from a 3 strike rule.
I think a better question would be is Sr Leadership supporting the security and risk mgmt teams in developing proper training as well as implementing and spending the money on the proper controls to help reduce the risk to the end user of being spear phished?
One thing to understand about phishing is that it isn't necessarily from outside.
Our (large) company recently had a sort of big (we think) leak of internal source code from a GitHub Enterprise server - done by an internal person who DL'ed a bunch of code and put it outside.
Basically no security system in the world would have stopped that, as long as we think the idea of sharing source code internally is a good idea.
So yea - the guns all point out, and if anyone inside your organization ever tries to phish you, there's a good chance you'll never see it coming.
> Should Failing Phish Tests Be a Fireable Offense?
In one way, it's pretty easy to answer: if firing offenders results in real costs from successful phishing efforts decreasing more than the cost of hiring and training people and any side effects from worse morale... then yes.
But unless you're working with state/military secrets where lives could be at risk, or on the security teams of financial institutions where a mistake could lose tens of millions of dollars...
In my experience, gullibility has little to do with innate intelligence and is rather inversely correlated with trait neuroticism (i.e. distrust of other people/the world). So in a sense, if you were to fire the most gullible employees you might inadvertently be selecting for neuroticism, which you may not want (unless you're in a very security-oriented business where that could be a useful trait).
I know of no civilised country whose laws would allow such a thing.
Maybe for the military or something, but anywhere else it simply would not fly, since the whole attempt was instigated and fake.
What if they responded OK in a real situation? Also it would completely kill your workforce morale. Do you really want to run a fear based organisation?
In the real world, i.e. Evolution, if you fall for the predator’s camouflage, you help your species thrive by removing yourself from the gene pool. If your fellow herd-members see you being taken down, that has a tendency to raise their awareness. If not then they follow the same evolutionary path…
As an employee education campaign, my last Bigco employer started sending out their OWN phishing emails, and if you clicked a link in one of them, you'd be taken to a page explaining how you got tricked and what not to do. Pretty good way of targeting the message to those who need it most.
That's pretty standard. The problem is that they don't actually attempt to phish you. These emails are only good if your security model is that you can't click untrusted links (i.e. you want to defend against browser 0-days). If that's the security model, why do I even have a browser on my computer? In fact, my organization's policy says that I'm allowed to use my work computer for personal business (like reading a HN article while I'm taking a break)... If they have no problem with me browsing reasonable parts of the public internet, they have no business failing me on a phishing test that never even asks for any credentials.
Can we fire the security people at our company who test us for phishing attacks, and then send us emails (with off-company links!) to polls, etc., that are required... and all the "security" in these emails is words like "THIS IS A REAL EMAIL FROM THE COMPANY!!!"?
How hard is it to make people understand what a business email should or shouldn't include? If you're being asked for data by someone you don't know, either ask a manager or someone connected to the account in question.
That's only true if you have a shitty enterprise email system. On a proper system such spear phishing attempts are blocked before reaching end users, or at least immediately obvious to anyone paying attention.
Right, no false negatives. It's trivial to determine whether a message purporting to come from one of your corporate domains was actually sent through an authorized internal server. Plus the IM integration in Outlook makes it super obvious which messages came from legitimate internal senders and which did not.
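For illustration, here's the rough shape of that check: most corporate gateways stamp an Authentication-Results header on inbound mail, so a message claiming to be from your own domain but failing SPF/DKIM is an immediate red flag. The header contents and the corp.example domain below are placeholders; real deployments vary:
    from email import message_from_string

    raw = (
        "From: ceo@corp.example\n"
        "To: you@corp.example\n"
        "Authentication-Results: mx.corp.example; spf=fail smtp.mailfrom=evil.example; dkim=none\n"
        "Subject: urgent wire transfer\n"
        "\n"
        "Please wire $50k today.\n"
    )

    msg = message_from_string(raw)
    auth = msg.get("Authentication-Results", "")
    claims_internal = msg.get("From", "").endswith("@corp.example")
    looks_authentic = "spf=pass" in auth or "dkim=pass" in auth

    if claims_internal and not looks_authentic:
        print("Claims to be internal but failed SPF/DKIM -- treat as spoofed.")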
What about the sending and reply-to address? If the account is actually compromised at a system level, that is an IT issue. Again, are people so trusting that they don't check when asked for confidential data?
From and reply-to look just like your company's unless you catch the misspelling. An "l" versus an "I" in the domain name go to different companies, but when everything else in the email looks just like any other email from IT you aren't going to notice that small difference - you probably won't even read the from/reply lines.
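This is also something machines are better at spotting than tired humans. A toy sketch of the idea: fold a few common homoglyphs and compare against the domains you actually trust (the confusables table and corp-example.com are illustrative, not exhaustive):
    def folded(domain: str) -> str:
        # Map a few common lookalike characters before lowercasing, so
        # "corp-exampIe.com" and "corp-examp1e.com" both collapse to "corp-example.com".
        return domain.translate(str.maketrans("I1O0", "lloo")).lower()

    TRUSTED = {"corp-example.com"}

    for candidate in ["corp-example.com", "corp-exampIe.com", "corp-examp1e.com", "evil.example"]:
        hit = folded(candidate) in {folded(t) for t in TRUSTED}
        print(f"{candidate:20s} folds to a trusted domain: {hit}")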
The address doesn't matter in some of these scams. It could be a message spoofed from your boss's real email address saying, "Hey Sam, I sent you the wrong account we need to wire money to. It's actually 123456. Can you fix that ASAP? Thanks!"
But wouldn't you reply to the address it specified instead of the source of the spoofing? I thought that was why most phishing emails include a link rather than asking you to reply.
Have you ever worked for a big company? (Think several US offices, half a dozen European and Asian offices, 500m in revenue).
Someone forwards me an e-mail from our Dutch office that says, essentially, "The world is burning down, we are boarding a plane in a couple of hours to go to IFA (show), and we don't have the latest copy of product X to demo for customers."
I do builds of this product by hand because I can't get resources allocated to automate it.
I have never heard of any of these people before. I reply asking, "I am sure I can accommodate you, but, Who are you and why I haven't been told about this before now?"
To my shock, the guy replies, "I'm the European Vice President for product X, we didn't ask before now because it has never been a problem in the past. Who are you?"
I reply, "I'm the only person in the entire world with the encryption keys to provision the product, and that has to be done on one single computer in Santa Ana. It's only a fluke that I am here today-- my car wouldn't start this morning, and I had planned to take the day off to fix it, and then by some miracle an hour later it started. That's why it's important to know about things ahead of time."
That's what working at a big organization is like, you interact with people who don't know all the time. And frankly, nobody is ever allocated time for "security" in their schedule. My dance card at that company was scheduled for 8 hours of development a day, no e-mail answering, no security, no time to do the build system. No time for meetings. Nothing.
Yes, and it is not a “bad” thing outside the niche of security I think. We should all hope to live a life where we can implicitly trust other human beings.
The tests I have seen are more like a macro-enabled office document attached to an email that claims to be from an address actually on the corporate email server, with a line like "review my schedule update, a week less for test creation is fine right?". Except I don't work with the sender, and the corporate firewall has marked the message as having actually been sent from outside.
The trouble is that people have a ton of intuitive buttons to push, and being intuitive they're not usually aware of them. (In straight cons, the buttons are often emotional, which is why the term con comes from building "confidence." Phishing is more faking authenticity than appealing to emotion.)
Skepticism and rational thinking require effort, and thus as people are rushed or tired, those are the first defenses to fail. You can train to recognize patterns of phishing, but learning those patterns takes time, repetition and effort.
All that a good phish requires is finding the right buttons to push on the right person, and they have many potential victims to press them on, and little to no consequence to getting it wrong.
So, the answer is people are not particularly gullible, but the weak links are a constantly changing, largely unknown dynamic and the phishers can hammer at all of them simultaneously until they get through.
I suspect hiring practices and company culture may be selecting for gullibility, either accidentally or deliberately, in many cases (cynically, the gullible are easier to motivate cheaply). Like how "being a team player" is used to refer to willingness to work unpaid overtime as opposed to actual ability to cooperate.
If they value "responsiveness to their authority over procedure" then people will send the entire financial records to "the CEO" for fear of getting fired otherwise.
> If you're being asked for data by someone you don't know, either ask a manager or someone connected to the account in question.
Most cases I've seen of successful (or nearly successful) spearphishing would have been solved by someone picking up a phone and calling their co-worker.
"I know you sent me an email, but can you just explain again why you want me to buy $1000 in iTunes gift cards and email them to you?"
"I got that invoice you sent. Just wanted to confirm the amounts -- $50k wire transfer?"
I learned to look at reply-to back when I used Juno & Netscape Communicator. Heck, Communicator used to warn you if reply-to differed from the displayed name.
Two factor auth prevents most phishing attacks from being ultimately successful, and two factor probably is easier and better on morale to successfully implement, especially with hardware security keys nowadays.
In my office we can get into trouble for this. However, they always send a magic header in the email to get through the firewall. My solution: filter out emails with the header.
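A minimal sketch of that filter, assuming the tooling stamps a distinctive header; "X-PhishTest-Bypass" is a made-up name standing in for whatever magic header your security team's tool actually uses:
    from email import message_from_string

    def is_internal_phish_test(raw_message: str) -> bool:
        # The simulated phish has to whitelist itself past the mail filters somehow;
        # whatever header it uses for that is also a perfect fingerprint for us.
        return message_from_string(raw_message).get("X-PhishTest-Bypass") is not None

    sample = (
        "From: it-helpdesk@example.com\n"
        "X-PhishTest-Bypass: campaign-42\n"
        "Subject: Urgent: reset your password\n"
        "\n"
        "Click here.\n"
    )
    print(is_internal_phish_test(sample))  # True -> route straight to the junk folder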