> b) Does the phishing test service detect if the link is accessed via a sandboxed env?
In any company likely to be doing phishing testing internally, there are two kinds of people who might try this. One is the infosec group, which isn't going to do this because they're running the test. The other is engineers who think they're clever and are equipped to fsck around with things.
The former are professionals. The latter are dangerous, and not actually an exception: the frequency with which their confidence is justified approaches zero, and in the vast majority of shops it simply isn't worth the time it takes to contemplate.
I wouldn't classify the majority of "Blue teams" I've worked with as professional. I'm currently dealing with a new Infosec group at my company that thinks the CEH is a high-quality cert, that doesn't understand how open relays can be a problem, and that believes everything Qualys spits out is the word of God. I feel sorry for the CSO we just hired, but he's not much better, and a classic example of why "CSO" often stands for Chief Sacrificial Officer.
Good lord, you make it sound like dealing with highly radioactive plutonium. This is a site called Hacker News; if you're a web developer and you can't figure out how to pull an HTML page without executing the scripts involved (a TRIVIAL thing to do), you shouldn't have a job. And honestly, if your network is so insecure that someone running a wget on a domain poses a risk, then your network has almost assuredly already been hacked.
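For what it's worth, the "trivial" version of this really is just a plain HTTP fetch: nothing executes unless you feed the page to a JavaScript engine. A minimal sketch (the page content below is made up for illustration) that inventories the scripts a browser *would* have run, without running them:

```python
from html.parser import HTMLParser

class ScriptLister(HTMLParser):
    """Records <script> tags without ever executing them.

    A plain HTTP client (urllib, curl, wget) returns raw bytes; this
    parser just notes what would have run in a real browser."""
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            # External scripts have a src; inline ones don't.
            self.scripts.append(dict(attrs).get("src", "<inline>"))

def list_scripts(html: str) -> list:
    parser = ScriptLister()
    parser.feed(html)
    return parser.scripts

# Illustrative page; in practice the bytes would come from
# urllib.request.urlopen(url).read() inside whatever sandbox you trust.
page = ('<html><head><script src="https://evil.example/track.js"></script>'
        '</head><body><script>alert(1)</script>Hi</body></html>')
print(list_scripts(page))  # → ['https://evil.example/track.js', '<inline>']
```

Of course, this only addresses script execution; it says nothing about the fetch itself being safe, which is the point the reply below makes.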
> Good lord, you make it sound like dealing with highly radioactive plutonium.
That sounds about right. Your average developer dealing with malware is roughly as safe and sane as playing pool with 6-kilo balls of Pu-239. Especially since, at a lot of places, developers are trusted with things like access to production from their workstations.
> This is a site called hacker news, if you're a web developer and you can't figure out how to pull an html page without executing the scripts involved (a TRIVIAL thing to do) you shouldn't have a job.
You know what's interesting? Even if you can do that, you've already made a mistake and leaked information. You've demonstrated deliverability to an attacker, shown them someone curious and amateurish enough to think they can handle it (but who hasn't thought it through), and revealed something useful about how you believe you're protecting yourself. Fetching a malicious server's HTML safely isn't as easy as one might suppose - both curl and wget (https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...) have suffered remote exploits in the past. Those are almost certainly the tools a random dev would reach for, and they cannot be assumed to be safe. The odds that said random dev is equipped to set up a sandbox to do this reasonably safely are not great, and the odds of them actually doing so are much smaller.
Curiosity isn't a bad thing. It's a wonderful and powerful trait that has driven humanity relentlessly forward through the ages. Unfortunately, it can also be used against people. Being curious when playing with fire can be dangerous. Especially if you just think the fire is pretty and haven't figured out that it burns yet.
This site may be called hacker news, but it's not full of the kind of hacker that congregates at DEFCON and understands the House of Prime. It's full of the other kind.
Deliverability is pretty trivial to prove, and if they knew enough to get something into your inbox, they probably already knew enough to be confident of that anyway. With regard to wget: 15 vulnerabilities in the last 20 years for such a heavily used piece of software doesn't scare me that much, and I can easily run it from a sandboxed container or VM, since I'm using those all the time anyway. And if I don't want to confirm deliverability, since I'm a web developer and I have a brain, I realise they probably included a unique token to know I'm the one that clicked the link, so I'll leave that token off the request.
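The token-stripping step described above can be sketched with the standard library. The URL here is hypothetical, and this only covers tokens carried in the query string - real campaigns may also encode the token in the path or subdomain, which this doesn't touch:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url: str) -> str:
    """Drop the query string (where per-recipient tokens usually live)
    so the request no longer identifies who clicked."""
    parts = urlsplit(url)
    # Keep scheme, host, and path; discard query and fragment.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Hypothetical phishing link with a unique recipient token:
link = "https://totally-legit.example/reset?uid=abc123&campaign=q3"
print(strip_query(link))  # → https://totally-legit.example/reset
```

A token baked into the path (e.g. `/reset/abc123`) would survive this, which is one reason stripping the query string alone shouldn't be treated as anonymizing.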
Look, the problem with this kind of attitude is that people take security less seriously when security experts go overboard. It's the classic boy who cried wolf. If you want people to take security seriously, it starts with honest conversations where you treat people like adults and don't immediately resort to hyperbole.
OK. How should we - I - go about this differently?
Adults are perfectly capable of believing that their expertise extends further than it actually does, and of taking risks they do not fully understand or appreciate. I see it daily in the developers I work with. I have worked with more than one developer brimming with confidence in their ability to tackle areas beyond their expertise, who will try to engineer on the fly around any shortcoming pointed out in their approach (which is unrealistic - in real attacks, adversaries don't give you friendly, iterative feedback).
I'm plenty willing to listen and take on board feedback here. What attitude should I take? How do I convince responsible adults, in a constructive and serious way, that they are not equipped to entertain their curiosity in this arena and should not try? How should I communicate to you, and to hundreds of developers at once, this message without going overboard or crying wolf?
It's one thing to play with malware and phishing at home, on your own hardware, on your own network, and with your own data. That's all your own risk to assume as you like. It's quite another to do so with company hardware, network, and data. That's not your risk to run and not your risk decisions to make. If you can advise me on how to communicate this to engineers who honestly and earnestly believe in their ability to safely handle things well beyond their expertise, I am absolutely all ears.
It would start by taking an honest assessment of what the other person actually knows before you make blanket statements about what they don't know. Don't get me wrong, I'm not personally insulted, but in my case you have zero idea what my actual expertise is, yet you're already telling me that retrieving a link is beyond what I can safely handle - and you literally have no idea what my experience is. (I used to work in security!) Part of my job as a developer is security: I need to know the attack vectors people use so that the sites I build aren't vulnerable. And besides that, I'm not some criminal mastermind, but you don't even know whether, when I was younger, I had fun hacking people and creating mischief. Point being, you can't come at people with this blanket statement of "you have no idea what you're dealing with!" because YOU don't know what they know. If you do that, you'll lose their empathy and attention before you even communicate what you want to communicate.
Here's what you can do: tell them what can go wrong, with specifics, and don't make assumptions about them. Be realistic about what the likely consequences are and what the worst case is.
If we wanted perfect security we'd never connect machines to the internet and we'd superglue the USB ports shut. But in a realistic world, the level of security we choose is measured against how much risk we're willing to take. Let's say my risk profile is this: I don't work for the NSA, and I'm not important enough that someone is going to burn a unique zero-day exploit on me. But it would be trivial to figure out my work email from my LinkedIn and my name, so I don't really care about them confirming deliverability. If the phishing attempt is bad, I'll probably spot it immediately and not bother to even open the email; if it's good, I'll probably at least investigate, because I'll want to know whether it was a legitimate email. Chances are they're just trying to convince me to type my password into a web form (and assuming I use the same password everywhere). Of course, chances are also that, just based on my past experiences, about 90% of phishing attempts come from corporate security departments anyway. If someone really has a very clever zero-day exploit for wget, then they'll get access to a container with little sensitive data that I rebuild about a hundred times a day.
Yes, it's not my hardware, but on the other hand, my company has entrusted me with local admin and use of the internet to get my work done. That's the risk profile they're comfortable with. Sometimes I get external emails and need to figure out whether they're legit, and I don't work for a giant company with a huge security department, so sometimes I need to take a glance myself.
That's an excellent and highly empathetic approach!
My only issue with what you've described is that I cannot scale it. When I have hundreds of developers to educate, sitting down with each of them, spending hours hashing out what they do and don't know, and educating them over the gaps quickly becomes prohibitively time-consuming.
How do I deal with hundreds of developers, the vast majority of whom have no significant background in security, many of whom earnestly and honestly believe that their understanding of web development protects them? How do I collectively treat them like adults and not lose their empathy or attention in a scalable way? A highly individualized approach isn't workable in this context.