That disinterest is the entire raison d'être of I Am The Cavalry. From their webpage:
No one is rising to meet these challenges. The cavalry
isn’t coming. We are the domain experts and we are the
adults in the room. It falls to us. We Are The Cavalry.
I went to a two-hour presentation at $serious_security_conference on $my_product_domain. I was really excited to see what there was for us. We're not running webapps on racked pizza boxes, so there are a lot of topics in our systems that aren't really explored in public security literature. Instead, all I got was some dummy thinking he's hot shit because he found some vulnerable systems on Shodan. I got angry enough to walk out, and then went back thinking there was probably some value to be had. Three times.
Anyway, the market rejected that in favor of maximizing profit with buggy software, and consumers wanted unnecessary complexity at a rapid pace in every aspect of computing. So, I typically recommend medium-assurance solutions these days. Safe-by-default languages, interface checks (esp. Design-by-Contract), usage-based testing to keep users happy, fuzz testing to find missing input validation, static/dynamic analysis tools, good architecture, good middleware, an OS with few 0-days, and so on. These mostly cover you.
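To make the interface-checks point concrete, here's a minimal Design-by-Contract sketch in Python. The `require` decorator is illustrative, not from any particular library:

```python
def require(predicate, message):
    """Check a precondition at the interface before running the function."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                # Bad input fails loudly at the boundary instead of
                # corrupting state deeper inside the system.
                raise ValueError("precondition failed: " + message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@require(lambda amount: amount > 0, "amount must be positive")
def withdraw(amount):
    # The body can assume the contract holds.
    return "withdrew %d" % amount
```

The point is that the contract documents the interface and enforces it at the same time, which is exactly where fuzzers and attackers probe.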
What type of apps do you build on what platforms?
EDIT to add: The link below has descriptions of lots of defense techniques that aren't boring at all. What do you think of that?
To expand on this, I've also noticed that "security" in enterprise-size companies tends to be a dumping ground for helpdesk+ staff -- people who can read a CVE or parrot "best practice", but can't really grok the subject or think critically about it (e.g. jumping from one password to another every 3 months isn't "more secure"). It's been my experience that people in security teams can't implement the fixes to the things they complain about, because if you're a security-minded dev, your career path will be in dev.
>It's not as fun to be a developer that is really into security but only have that be part of your job.
I think it would be ideal to have security as a parallel track to regular dev staff -- with their placement in the ecosystem being between dev and QA. Much like how "devops" was used as a hiring filter to have sysadmins that can at least read code, I think we should have a "devsec" group that bridges security-QA and dev.
So much this.
One place I worked had a head of security who switched us to passwords that auto-expired every 30 days, had annoying requirements (a mix of upper, lower, numbers, and special characters), and couldn't be reused for a year. The increase in people having problems was handled by adding "security questions" and self-service password resets.
My insistence that "security questions" were just unexpiring, easy-to-guess passwords was met with "But this is the standard!". Pointing out that rapid turnover of hard-to-remember passwords led people to write them down generated a similar reaction.
I put a lot of effort into running environments securely, and eventually, someone always decides to get a second opinion.
The only recommendation I have ever seen come from a security audit in an enterprise is bikeshedding password lifetimes. Every time I point out "SQL injection in the logon form"-level vulnerabilities, I get directed to the "security experts", which is why I've seen password policies go from 180 days, to 90, to 30, while the mentioned issues get ignored.
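For what it's worth, that whole class of logon-form bug comes down to string concatenation versus parameterized queries. A minimal Python/sqlite3 sketch (the table and credentials are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DANGEROUS: attacker input is spliced into the SQL text itself.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Placeholders keep data out of the SQL parse; the injection
    # payload is just an ordinary string value.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# Classic bypass: comment out the password check entirely.
print(login_vulnerable("alice' --", "wrong"))  # True -- authentication bypassed
print(login_safe("alice' --", "wrong"))        # False
```

Auditors bikeshedding password lifetimes while this sits in production is exactly the problem.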
People flash a PwC business card and it's seen as a credential. Then they drag me to the comms room and spend all day auditing the serial numbers on individual patch leads going to desktops.
I don't intend to go back. Great conference, don't get me wrong, I got personal enjoyment out of much of it, but it's impossible for me to professionally justify it.
Defense isn't fun enough, for anybody; neither the one defending, nor the developer who was just told the code they spent a week developing is a gaping security hole, nor the managers being asked to spend money to prevent black swans, nor the open source developers having fun developing frameworks and libraries, and on it goes. It just isn't fun.
Posted above in response to Schneier's call for INFOSEC after the Snowden leaks. The title is both a parody of cryptography papers and a trick I used to make sure I can Google the link with a few unique words. Still works haha. Anyway, that link covers both high-assurance and medium-assurance INFOSEC methods out of the CompSci and Defense sectors. Skim through the titles and abstracts, then come back and tell me if you still think defense is the boring part.
I mean it, really, because I think people believe that only because "INFOSEC" as you see it in the mainstream is (a) bullshit or (b) boring. The real deal is constantly innovating, works more than it fails, and is quite interesting. To me at least.
I suppose I wasn't clear. I have fun with it too for the most part or I wouldn't be doing it. But I observe that I'm a rare duck that way.
And the unrelenting stream of bugs caused by the same fundamental root-cause issues over and over again can get old. How many times do I have to see an XSS brought on by using string concatenation on HTML? How many times do I have to see, well, any of several vulnerabilities based on string concatenation before the engineering community actually acknowledges it's a problem? That can grate on someone after a while.
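The string-concatenation XSS in question looks like this in miniature (the `comment_html_*` names are mine, just for illustration):

```python
import html

def comment_html_vulnerable(user_text):
    # String concatenation puts attacker-controlled text straight into markup.
    return "<div class='comment'>" + user_text + "</div>"

def comment_html_escaped(user_text):
    # Escaping at the HTML boundary turns markup characters into entities,
    # so the payload renders as inert text.
    return "<div class='comment'>" + html.escape(user_text, quote=True) + "</div>"

payload = "<script>steal(document.cookie)</script>"
print(comment_html_vulnerable(payload))  # script tag survives -- XSS
print(comment_html_escaped(payload))     # entity-encoded, harmless
```

Same root cause as the SQL injection case: data crossing into a new language's syntax without encoding for that language.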
It's due for an update in near future with resources that a few people here shared with me. Still has lots of interesting stuff, though. Might not help in your day job as you can't dictate the stuff to management or dev team. However, you could enjoy doing it in private on FOSS project exploring those tools or improving/cloning them for others.
There are also projects that try to automate prevention of known classes of bugs. If you find some that are easy to use, maybe bring them to your management's attention, showing that they make QA more efficient and lower liability while requiring no extra cost on the devs' part. Win-win.
- the client (i.e. browser), protocol, proxies, etc. are very hard to reason about. For instance, content sniffing and HTTP response splitting should not show up in a sane protocol, yet here we are. An encyclopedic knowledge of - at least - browser bugs is required to write "secure" web sites.
- type-based safety doesn't do a lot to ensure that e.g. the same-origin policy actually protects your users (i.e. that there is not some unrelated application on the same domain-ish). The public suffix list (https://publicsuffix.org/) is essential to modern web security, and there seem to be no plans to change this.
- "The Tangled Web" is very good (or at least the first half is). It makes me happy I'm not working on the web. ;-)
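To illustrate the public suffix point above: a real implementation must consume the actual list from publicsuffix.org; the tiny hard-coded `SUFFIXES` set below is only a stand-in to show why "take the last two labels" is wrong:

```python
# Hand-picked stand-in for the real public suffix list (publicsuffix.org).
SUFFIXES = {"com", "co.uk", "github.io"}

def registrable_domain(host):
    """Longest matching public suffix, plus one more label."""
    labels = host.lower().split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            if i == 0:
                return None  # the host *is* a public suffix
            return ".".join(labels[i - 1:])
    return None

# Naive "last two labels" would treat alice.github.io and bob.github.io
# as the same site; the suffix list keeps them apart.
print(registrable_domain("alice.github.io"))    # alice.github.io
print(registrable_domain("www.example.co.uk"))  # example.co.uk
```

This is the kind of out-of-band knowledge that no type system is going to hand you.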
It's true. I think you could encode a lot of that in a framework or language such as Opa or Ur/Web. What's left is a tiny checklist. Edit to say maybe not tiny but smaller.
"type-based safety doesn't do a lot to ensure that e.g. the same-origin policy actually protects your users"
There are secure browser architectures that do that in various ways. I'm not sure about the language level. Might need to be combined. Worth thinking about. The publicsuffix website is new to me, so thanks.
" It makes me happy I'm not working on the web. ;-)"
You're the second or third to recommend The Tangled Web. On my reading list for if I get back into Web development. I'm sure I'll have the same reaction. You might find it funny that a prior commenter found it shocking that I hadn't read work on secure web applications. Whereas, I originally found it shocking that people were trying to build them.
Perspective changes everything, eh? :)
That said, understanding how the red team does things is extremely useful for formulating one's own defense.
What happens if your security program is really really really effective?
You can be a security superstar, or a total idiot, and you may achieve the same result!
It often seems like a myth when you hear about that phenomenon, but then I experienced it myself in a different role and became a believer.
It could still be the Peter principle, and that you need to better communicate your unseen accomplishments. Maybe there should be a modified Peter principle, where people rise according to their ability at both the job and corporate politics.
This information asymmetry problem means that, in a job implementing supposedly good security, an aptitude for security is not an advantage. In fact, it's a disadvantage, since that aptitude inclines you to do unprofitable things like auditing code, recommending to your customers that they scrap unsalvageable codebases, and staying up late at night patching production servers instead of ironing your suit and getting a good night's sleep before that important client meeting. Total incompetents wearing better suits will be able to bill a higher hourly rate than you do.
Because there are no customers for good security, the entire infrastructure of plausibly securable software that you would need to implement good security also doesn't exist, because the market has failed to provide funding to develop it. Big technical companies (Microsoft, Google), free-software organizations (Debian, Tor), and dedicated individuals (Bernstein, Moxie) have made significant progress, but it's nowhere near being a viable option.
By contrast, I imagine that if you "get paid to pop boxes", you can demonstrate to your customer that you succeeded at it, and you probably can't convince them that you did if you didn't. This means that aptitude and expertise are an advantage in that field. Don't get me wrong: even from this distance, I can see that you're right that the field is still full of the script kiddies who "think [they're] hot shit because [they] found some vulnerable systems on Shodan." But even they are head and shoulders above many of the high-priced security consultants on the defense side from places like Keane or Wipro.
The upshot of all of this is that if you want to secure your network, the best you can do is figure out who the expert attackers are, then convince them that it's worthwhile to secure it. But that's still nowhere near enough to get to "probably secure", much less "assuredly secure".
Wow, this is an interesting claim. You do this work for a living? Are you ever concerned about getting caught? How do you get paid?
I'm assuming that covering your tracks is some sort of ever escalating cat-and-mouse-game. How do you find out about your adversaries' capabilities regarding investigation after the victim detects the problem?
EDIT: oh, oops. Double-misunderstanding. "that's what I do" meaning "harden,test,repeat" and of course some folks are legitimately paid to do pen testing.
But penetration testing is a job that exists. People are hired to do exactly that under contractual terms with the target. (And I don't mean to imply that's the only security job in the more-rewarding/fun category.)
But it is steady, decently-paying work that will likely last until the singularity.
1. People who are into security to prosecute some immortal struggle between good and evil.
2. People who are into security because of the engineering challenge.
It's the people in group (1) that I tend to have a problem with. Often, for the "good guys" security professionals, engineering facts are just a means to the end of winning the war on the "bad guys". These are the people who tell you your Scala serverside application is insecure because of some recently-released Java applet bug; they're the people who fought against DNS query randomization for a decade because we need to finally release DNSSEC; they're the ones shipping grievously broken cryptography to try to "stop the NSA".
There are jerks in both categories, and competition is a dimension orthogonal to this one, but I find I can handle jerkiness and competition when I know it's from someone who truly cares about understanding what they're talking about.
a) Have a real understanding of the needs of the people they say they are trying to help
b) Have the courage to make tough calls related to ideal vs good security.
Working in the NGO security space, we get these people all the time. Send them out in front of a bunch of African/Middle Eastern/Asian/Russian activists who have been risking their lives for years for democracy etc etc, and all they do is spend three days showing off, berating people, and then leaving. Telling everyone that TAILS/PGP/Linux/TrueCrypt etc etc is for everyone and that anything else "is gonna get people killed," so by extension shouldn't be done. There are people who have made entire careers out of that attitude, which most of the time actually leads to demoralisation and a long-term decrease in security.
It's easy to recommend the hard "cover your ass" stuff because you don't like the NSA, it's harder to say "Hmm, ok we gotta assume some risk that the NSA doesn't care what we're looking at and use a solid Google two-factor here and see how we get on...now how do we stop the security guard from selling you out?"
On a sidenote, as a digital and physical security training company for NGOs, we manage to get a look at cases from both sides of the coin. Our very, very rough guesstimate is that we see confirmed human penetration about 3 times more often than we do digital penetration. Of course, this is very rough and has soooooo many other bias factors at play (numerical, cultural, how many we see vs. don't see, etc.). But I think it is a point that we keep having to reinforce. Too often, powerful "infosec jerks" distort the focus towards Western biases because of Snowden, Facebook, SnapChat, iPhone, and this distracts time, money, energy, training and security measures from the human penetration aspect of things - which is very common in the developing world.
In the NGO contexts that I have seen, that usually means someone who legitimately works in an organisation but turns for the standard reasons. (More effective, faster, and cheaper for an adversary that way.)
To a slightly lesser extent, that means someone from the outside who has been placed on the inside. (Less effective, longer, and more expensive for an adversary that way.)
Sometimes both of these scenarios also include digital aspects, like stealing a USB drive or something but not always.
Before you ask, why do people do it in an NGO environment - fairly similar reasons as elsewhere (though I tend to order them differently based on experience):
The US method of counter-intelligence (the classic MICE framework):
- Money
- Ideology
- Compromise or Coercion
- Ego or Extortion
or these days:
A good read for more info here:
Thanks for the insights.
I mean yeh, it's cool if you can get paid loads of money for a 9-5 job to throw a ton of resources and people at protecting your Pied Piper software company in suburban USA or Europe... But now take the exact same adversary (the Chinese government, for example) and try to think up ways to minimise their threats, all while driving around with a hobbyist sysadmin (who the local gov may arrest, torture or disappear if exposed) with very little English in the middle of the night in darkest Africa/Middle East/Asia... The pay is crap or non-existent, and it can be high stress, but you get to make a real difference, which is rewarding.
We run our own. It's here www.secfirst.org - email me rory@
So, on top of the engineering challenge, there's the need to fight for a greater baseline across the board. I'm just for doing it a different way than in the OP, where they call the devs idiots. ;) More along the lines of convincing management to adopt whatever practices add great improvements to quality and security with little extra cost. Quite a few exist. The value is lower liability, more predictable schedules due to reduced debugging, a quality/uptime differentiator vs. the competition, and the possibility of charging extra for that.
Enough promotion of better-by-default in key areas will improve our baseline over time. There's actually been a lot of progress vs. where we were in the '90s. So, it's at least working a little. DARPA and NSF are also steadily funding strong stuff in a variety of areas, with legacy compatibility where possible. More potential to push strong stuff in the future.
I don't understand this aspect of the strawman. Could someone help me out? In the context of the previous two examples of how members of group 1 act, is tptacek merely implying that members of group 1 are stupid?
Also, please do not try to represent my problem with the EFF's scorecard as somehow idiosyncratic. There probably aren't too many crypto engineers who don't share my opinion of it. You're probably giving me more credit than I deserve when you call it "my war".
Immortal, maybe. I don't know about it being a struggle, so much as polarization. Think how fucked up angry memes can make things.
Security is discrimination. Keeping you out of my shit involves saying "these things here are mine" and "those things there are yours" and "I won't allow your things and my things to entangle". Someone who hacks that discrimination entangles things in a one-way direction (your stuff entangles with theirs, but not the other way around). Manning hacked for everyone, wanting the entanglement to go both ways. (I taught Manning how to drive, FWIW.)
In Zen Buddhism, discrimination is considered to bring consciousness. Something that is aware of itself completely (all knowledge) is non-discriminatory in nature. Security, in contrast, is all about keeping things separate. If you understand that, you understand consciousness.
People who "get" security on another level will struggle to find a happy medium between these two polar ends of not knowing and knowing. What is the right amount of security? What data of mine is mine to keep? How can seemingly innocuous data be used in secret to bring lower levels of consciousness to a given group? Government surveillance springs to mind.
We could stand to have more tolerance for "jerks". They may be jerks because they understand the gravity of the situation. Others may not, and need to be given the information to understand it. We are in interesting times. If we do not guide the transition carefully in the coming years, there will be much suffering...due mostly to poor security.
I think it's the #2s who are too quick to dismiss #1s that I tend to have a problem with.
I don't think jerkiness correlates with either of them even a little bit.
Your HN-karma & real-world accomplishments aside, do you honestly think you can psychologically categorize a group accurately enough that we should use your defined stereotype to improve ourselves and/or society?
(i.e. #1 Cops & Robbers -> #2 Realpolitik)
Everything is obvious to the person who has nothing to learn.
Your post seemed to be focused on the details of those you dislike. That's it. No story about how you built camaraderie or struggled to understand them. What lesson should someone learn from your post?
This particular issue is buzzing in my head this week because of the stupid grsecurity Twitter drama, which is really bugging me. I guess I'd like an "infosec community" that spends more effort thinking about engineering and less about community dynamics.
We could definitely all be kinder.
We see this all the time in software development, where excellent engineers are borne of trying to build a video game at age x, where x is a relatively low number.
Similarly, there's no reason that someone initially motivated to eradicate "evil" can't become a great security mind, and I suspect Moxie and many like him are of this ilk.
"To them, we’re Chicken Little crossed with the IRS crossed with their least favorite elementary-school teacher: always yelling about the sky falling, demanding unquestioning obedience to a laundry list of arcane, seemingly arbitrary rules (password complexity requirements, anyone?) that seem of little consequence, and condescendingly remonstrating anyone who steps out of line."
That is often true.
It might help if security knew what the hell we did. The last three "Java is broken forever!!!" issues, when the problem was with applets, nearly killed me. Or possibly if they knew their job. The last employer's security policy was defined by the latest vendor tool they bought, never mind that it couldn't possibly find any issues in the applications we were writing.
Of course there was the code review I did where I pointed out that it had a giant, monstrous hole in it. The other dev nodded and said, "yeah, that's a good point". But the app had to be in production now, so nothing happened. To escalate, I would have had to go to security, which would be like complaining about a hangnail to an axe murderer. So, nope.
> Practice active kindness. Go out of your way to do kind things for people, especially people who may not deserve it. If you wait for them to make the first move, you’ll be waiting a while — but extend a hand to someone who expects a kick in the teeth and watch as you gain a new friend. Smile.
I really like this quote. A security engineer and a developer teaming up as colleagues are more likely to be taken seriously by stakeholders. Both teams working together have a much better chance of being given the time needed to make sure their software is stable and secure.
I suspect not.
No matter how much time you have, it's more exciting to work on something new than going through testing. Time is not all equal. Even if you have an unlimited supply of time (everlasting life), you cannot somehow use a time block that occurs 1000 years from now, in order to displace the boredom you feel from what you're doing now.
If anything, unlimited time will increase procrastination. "If this isn't debugged for another 500 years, that's okay; I will live long enough to see it debugged.".
Thus, I suspect, most developers would actually love to have a vast army of other people with unlimited time to do the QA to make sure their code is secure and well-tested. :)
What we actually see is a lot of apologizing, recalls and class-action suit settlements, and nobody actually seems to go to jail.
Not all engineering is about bridges not collapsing; conversely, there is some software that is equally safety-critical and carefully developed.
There is also "everyday engineering", like in consumer products. That's a category that fails miserably. Put simply, shit breaks. Past the one year warranty? Too bad!
This is certainly true. But competitive pressures will always force quick hacks over software robustness. The only real solution is a sort of developers' guild -- whose membership includes over 90% of all worldwide professional developers -- wherein an oath is sworn to always include security robustness as a required feature during the software estimation cycle. Or perhaps an analogue to the Hippocratic oath.
Which of course will never happen.
When users start choosing the robust and secure product over the quick and insecure product, sales will pick that up, product will follow, and programmers will treat security just like any other feature.
Many developers like to hack software together. To make it secure requires discipline and more time and knowledge when developing it. Most businesses don't really care about this. They just want it finished fast.
The only way it will change is if the government starts fining companies for software that has a crazy amount of bugs or requiring developers to be certified.
I honestly don't think many developers even have the ability to write software without the obvious bugs we see popping up today. For sure, we would see fewer companies outsourcing to countries like India.
I think fixing development will be more optimal than fixing users. There is no legitimate reason that OWASP top ten or lack of buffer bounds checking should still be in the wild in 2016, whereas users will always fall for scams, phishing or otherwise.
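On the bounds-checking point: the recurring bug shape is trusting a length field without checking it against the buffer actually received (Heartbleed-style). A Python sketch of the check that so often goes missing in C parsers; the length-prefixed record format here is invented for illustration:

```python
def parse_record(buf: bytes) -> bytes:
    """Parse a record of the form: 1-byte length, then payload."""
    if len(buf) < 1:
        raise ValueError("truncated header")
    n = buf[0]
    # The bounds check C code so often omits: does the *claimed*
    # length actually fit inside the buffer we *received*?
    if len(buf) < 1 + n:
        raise ValueError("declared length exceeds buffer")
    return buf[1:1 + n]

print(parse_record(b"\x03abcXYZ"))  # b'abc'
```

In C, omitting that one comparison reads adjacent memory; in a safe-by-default language the same mistake raises an error instead, which is the parent's argument for fixing development rather than users.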
...and if the CEO doesn't care, why is he going to want to pay developers to (as you say earlier) "always include security robustness as a required feature during the software estimation cycle"?
So, unless the users _do_ care, the only way I can see this happening is if it costs no more to the CEO/users to do this than it costs not to... and that either means
* all developers swearing your security "Hippocratic oath" (which, as you say, will never happen)
* languages/tooling that mean that this becomes automatic (where there have been steps forward, but we're clearly not "there", and I doubt ever will be).
In addition, there are people who are quite literally "dangerously wrong": They talk with great authority and can influence large numbers of people to follow them, even though what they advocate is non-productive. At best they can waste immense amounts of a community's time as they cause great debates among members. (Tabs versus spaces, for example.) At worst they gain actual authority, do great damage, and then leave the community to deal with the aftermath. (Cure your diabetes through positive thinking!) So while being "nice" and "empathetic" are excellent defaults when dealing with people, it's just as important to understand when ugly ideas need to be squashed and the undecided are urged to do what needs doing.
This isn't always the case, but in some software companies the developers capture the entire organization and run roughshod over everyone. I've seen developers tell support folks that because they are in support, they are utterly worthless and their opinions and views don't count and never will, right after asking for their opinion. I've seen them obstruct QA people. Not arsehole, stick-it-to-you QA folk, but quiet, unassuming QA guys who run manual test scripts, find a failure, and send the case back to development.
Yup, having one-eyed folk around can really cause a toxic environment. Of course, I have to be careful, 6 years ago I was often that toxic one-eyed person, and I learned this the hard way.
Don't do that. Don't be me. Be nice and be willing to concede that the other person has good intentions. You'll be less angry, more likeable and probably more productive.
Organizations typically fail at security because they try to manage it as far away from the product as possible. That leads security and dev/ops teams to manage their goals in isolation: one cares about preventing incidents, the other cares about shipping products.
If Agile & DevOps have taught us anything, it's that everyone in the organization should be focused on serving the customer, be it through features, reliable operations, or data security. The only good way to do infosec is to embed security with devs and ops and make everyone share the goal of making the product better.
Presume a company with the budget to have a SOC; they're doing all the "regular" security jazz and then some. But are they auditing the network services they themselves run, or just applying patches? Auditing the products the company itself is producing, or just kicking the tyres? But they MITM all the outbound HTTPS connections, so I'm sure that more than makes up for it.
Teaching blind compliance with any (unauthenticated) request based on "security" is the one way we could make the situation even worse.
The meta joke here is that in some ways every security issue is critical, but if everyone is immune to the fear, then escalation will feel like the only choice. Either you slowly discharge that stress immunity by being sensibly mellow, or you start packing heat to 'convince' everyone to reboot NOW. (There's probably room for some amount of finesse between these two extremes.)
EDIT: There are two terrible responses that seem to come out of this. Either no one gives a damn, or no one gives a damn and just does whatever you say. The first is bad because nothing gets fixed, and the second is bad for `red_admiral's reasons (users treat anything that looks like a security rant as a EULA and just do whatever it says).
I often joke with my boss that if all tasks are high priority, all tasks are therefore of average priority.
I really like that phrasing. I've been intuitively thinking along the same line but rescale is just the perfect word. Thanks!
Got an email from my bank yesterday, "You have an urgent notification waiting on our website! We won't tell you anything about it via email for security reasons!" So I log into the bank, go to the notifications section, "Notification! You have a new IMPORTANT document." Click on documents: "Here's the monthly statement for your savings account."
Thanks guys. I hope you never need to contact me about something urgent.
This feels especially strange on the phone, when someone (who later turns out to be legitimate) opens up the conversation by insisting that I prove who I am, _when they're the one that called me!_
You cannot fix the jerks in the first kind of company. Learn from them, quietly, and look for your exit.
Some people think that there is a correlation between being really good and being a jerk. Only in the sense that the only way a jerk can survive is if they are very very good. Some other people try to pattern-match them. Mimicking the "jerk" part is easy. Mimicking the "very very good" part is not. (See also: making a name for yourself by being a jerk is a lot easier than making a name for yourself by being really good.)
There are good people who are not jerks. I've worked with a bunch of them.
The interface in question was called "lo".
I don't work there any more.
Ok that cracked me up. Happy Friday.
The vast majority of these people have never written a single line of code.
They don't understand security, because they can't understand the underlying logic in the code. They just write documentation to meet certain outside standards, and have no idea what I'm talking about when I talk about our security posture. They genuinely think that an online satellite campus degree qualifies them to manage security devs from top ten schools.
This coast's tech centers are starting to drive me batty.
For a data scientist, my normal trade, especially. So much red tape that it's a two-month project to get myself a small server for testing.
Anyone in need of a remote data scientist, Princeton University A.B., two years at a funded startup, four years work experience with a major firm, with specialties in machine learning, (real) cybersecurity, and big data experience?
And apparently the average graduation salary for these people at companies like Disney is around 120K. I want my friend to do well, but I also don't want the security industry to consist of people who have never used Linux.
Yet, he's further up the chart and makes the final decisions, when he can't understand basic variables that go into what applications (especially early stage platforms) are even supposed to do.
I'm coming to the realization that I just need to bite the bullet, and get one of those "cybersecurity" degrees. The classes are remote, self-paced, easy to earn a 4.0 GPA, and so heavily subsidized by the company as to be nearly free, so I might as well.
I'm also making my way through the various certifications they seem to value (CISSP, PMP, and CEH, the Certified Ethical Hacker).
They're very easy to pick up. I just took a CISSP practice exam for the first time and passed quite easily.
Obviously, I did SQL injection, cross-site request forgery, and cross-site scripting, and debugged malware with IDA Pro and later (when it was safe) with GDB. Did I mention the concolic execution framework Triton, which I used in combination with PIN to gather formulas for an SMT solver so we could crack the code of a binary? We did more things; I remember taint analysis with a PIN tool being one of them. Well, you get the point.
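As a toy stand-in for what that concolic pipeline does: collect path constraints from the traced program, then find inputs that satisfy them. The constraints below are invented; a real setup hands the gathered formulas to a solver like Z3 instead of searching by hand:

```python
def accepts(a, b, c):
    # Pretend these are the path constraints gathered from the traced
    # binary's check of a 3-byte code.
    return a + b == 200 and (b ^ c) == 60 and a < b

def crack():
    # Propagate the equality constraints, then verify the rest --
    # a hand-rolled miniature of what an SMT solver automates.
    for a in range(256):
        b = 200 - a          # from a + b == 200
        if not (0 <= b < 256):
            continue
        c = b ^ 60           # from b ^ c == 60
        if c < 256 and accepts(a, b, c):
            return a, b, c
    return None
```

The value of the real tooling is that it derives `accepts` from the binary automatically, which is exactly the non-trivial part.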
I don't know if we're a special snowflake. I like to think we aren't, but the way the courses were taught to me is that writing code, breaking things, and debugging (lots of it) are the norm in security. Smart conceptual thinking helps sometimes as well. So I have this (untested) view of the world that other security master's degrees also let you code in C, Python, and whatever else you need to analyze or break into a system. It would frankly be catastrophic if this isn't the case.
For me it was also a really interesting introduction to C! :) My main languages are Java, Python and Objective-C (where everything is a pointer, so I got away with not understanding it).
One criticism is that some situations seemed a bit contrived, but on the other hand that improves practice because you know what to focus on. Another is that some parts were purely theoretical and had no practical assignment (e.g. return-oriented programming).
Cool website to learn a couple of things more: https://www.win.tue.nl/~aeb/linux/hh/hh.html
I work for the National Satellite Operations Facility (NSOF.)
Neither my boss nor the three dozen or so "cybersecurity" folks with whom I've worked have ever written a line of code.
Also, everything you said in the first two paragraphs I either already knew from actual network-security courses or found trivial. SQL injection, CSRF, and dynamic executable analysis are unfortunately... not the most modern techniques... Might as well teach naked buffer overflows.
That said, it's a decent basis. And far far better than what's emphasized on this side of the pond (Mostly policy and automated scanners.)
If what you're saying is truly reflective of "cyber" degrees across the EU, here it's far worse.
It's very frustrating to explain a DNS tunnel to these people, much less how to find a zero day in Cisco IOS.
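For reference, the core trick of a DNS tunnel fits in a few lines: data is encoded into query labels for a domain whose authoritative server you control. A minimal sketch (hypothetical domain; real tunnels like iodine add framing, sequencing, and the server side):

```python
import base64

def to_dns_labels(data: bytes, domain: str = "t.example.com") -> str:
    # Base32, not base64: DNS names are case-insensitive, so mixed-case
    # encodings would be corrupted by resolvers along the way.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # Each DNS label is limited to 63 bytes.
    labels = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]
    return ".".join(labels + [domain])

def from_dns_labels(name: str, domain: str = "t.example.com") -> bytes:
    payload = name[:-(len(domain) + 1)].replace(".", "").upper()
    # Restore the base32 padding stripped during encoding.
    payload += "=" * (-len(payload) % 8)
    return base64.b32decode(payload)

name = to_dns_labels(b"exfiltrated secret")
assert from_dns_labels(name) == b"exfiltrated secret"
```

The point for defenders: because recursive resolvers forward queries for unknown names upstream, this traffic escapes networks that block everything except DNS, which is why long, high-entropy labels to one domain are a detection signal.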
Occasionally you'll get a hobbyist with some Python or C under their belts, but the vast majority of these "cyber" guys really should just read a book on Kali Linux and sing campfire songs in class for the rest of the year.
Right now these degree programs are turning out graduates that think running Nessus makes them a security god.
Things may well be different in Europe (I'm not going to lie, the Triton and PIN projects sound pretty cool), but it's a nightmare over here.
If you're ever in DC, look me up, we'll have lunch, I'll give you a tour of the facility, and I can introduce you to the cyber guys, and let you draw your own conclusions.
That's a pretty open invitation by the way. Any devs, start up folks or "cyber" people (that won't freak out my girlfriend) are welcome to crash in our guest room for a night or two, if you're in DC for a conference or something.
More than once I showed management they had a life-ending bug (Little Bobby Tables-style SQL injection) and was fired for being the messenger.
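The "drop tables" bug mentioned above comes from building SQL by string concatenation. A minimal sketch with Python's sqlite3 (hypothetical table name) showing the classic xkcd payload and the parameterized fix:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE students (name TEXT)")

evil = "Robert'); DROP TABLE students;--"

# Vulnerable pattern: building SQL by string concatenation.
# (sqlite3's execute() refuses multiple statements, but executescript()
# -- like many other drivers' query functions -- runs the injected DROP.)
db.executescript(f"INSERT INTO students (name) VALUES ('{evil}')")
tables = db.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # -> [] : the students table is gone

# Safe pattern: a parameterized query treats the input as data, not SQL.
db.execute("CREATE TABLE students (name TEXT)")
db.execute("INSERT INTO students (name) VALUES (?)", (evil,))
print(db.execute("SELECT name FROM students").fetchall())
```

With the placeholder, the hostile string is stored verbatim as a name; the driver never parses it as SQL.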
> and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it".
Maybe rather than say "Just do it" you could say "Any user can delete our entire database and steal all of our data". I think Bob would understand why this is a bit more important than his current tickets, and by not doing it when told about it any fallout would be his problem.
He might hate you for giving him the task but it would be done.
See the problem is that all/most vulnerabilities wind up with this sort of description, which can lead to Bob building up an immunity to it.
Things are fucked, but not that fucked.
For example (and I know it's still a fairly young field), simple activities like cataloging and sharing the knowledge learned so far just don't seem to get done. So the wheel gets reinvented over and over and over again. I'm actually kind of amazed at how many half-assed log parsing/visualization/analysis tools there are. Doesn't anybody in infosec centralize knowledge other than adversaries?
The feel I get from infosec conferences is more of a dick-waving contest and tribal activity than an honest information-sharing exercise.
I'd like to comment on a line that literally changed my life, from my favorite https://en.wikipedia.org/wiki/Vorlon, Kosh:
"Understanding is a three-edged sword: our side, their side, and the truth."
I think about that every single day. Speaking for myself, a lot of the other suggestions in the article naturally flow out of embracing this perspective.
I could literally go on and on about this, but I'll leave it be.
Oh, man, this is such a deeply, vitally true thing. It's something I've observed for a long time but never quite put it into words.
So often, when I see some person acting in a way that seems totally stupid to me, it turns out they were just operating with goals and constraints that weren't obvious and were different from mine.
I started off as an abuse investigator for a very large East Coast ISP, moved into firewalls, then penetration testing once I knew what I was doing. All along this path, the vast majority of guys I worked with, for, and around (on contracts) were sheer jerks of the highest order. Everyone seemed to be angry, upset, on edge. I can understand this. IT security--especially abuse investigations--necessitates seeing the seedy side of the Internet. Ditto IT security in general. You're not working for or with people who are "creatives". You're working for and with people whose sole job is to minimize threats and vulnerabilities.
The most stressful time was being a firewall engineer. Having to deal--on the fly--with impatient customers wanting a six-spoke VPN "right now!" and them being jerks about it was annoying.
I hated the calls that involved my placing a "tap" on some poor sod whose boss wanted an 8-hour tcpdump on him to see where he went when online. They wanted a gzipped log file placed on the server where they could SSH in and retrieve it.
I hated the double standards where executives were given static IP addresses on their machines and special rules created to allow them carte blanche access to the Internet at large--no filtering. They also avoided the proxy. These special executives were &^%#$@ and everyone knew it.
When you spend your life seeing nothing but evil, you take on a different view of life and the world around you. I saw this happening and I got out in favor of being a sysadmin. I'm much happier, and the IT security roles I do have are much smaller in scope and I know how to handle them quickly and quietly.
IMO, security orgs, especially within companies have this awful reputation because they borrowed the credentialing model of project managers and walk around like doctors with an alphabet soup of certifications. Talking to these folks is like crossing a circa-1998 MCSE with a lawyer.
The CPE requirements from CISSP and other certs encourage publishing and delivering talks, which is good -- if the authors/talkers are competent. The side effect within enterprises and some vendors is you develop a cadre of bullshit artists who are great at throwing spears, not so great at doing anything of value.
Help them to be excited to work with you, rather than to turn the other way when they see you in the hallway.
The mentioned topic of "Recalibrate 'urgent'" is a great message. We're rapidly approaching a desensitized feeling of security risk within organizations and in the general public. As breaches/incidents continue to rise we'll approach a "who cares?" level from the public.
What can they do? Steal my CC? Already been stolen. Steal my SSN? Already done.
What's left to fear when privacy is gone?
What really keeps me up at night is when security incidents are measured in lives. Scary...
I reported a major Comcast issue to an indirect associate of mine who's huge into security. I wanted contacts with folks who know how to disclose it properly, since I wasn't getting much help from Comcast. He pretty much shrugged me off because I don't have a traditional "infosec personality". A day later, I got hold of some pretty high-ups at Comcast who actually fixed it. Their engineers were completely blown away by the issue, and it sounds like it might do pretty well for me in the end. He wasn't the only one who shrugged me off, but if I'd gotten help from any of them, it might've gone well for them, too. The exclusionary attitude is pretty ridiculous.
I think a lot of the problem is that their threat models tend to cause them to become pretty reclusive, so that they don't really want to trust "newbs". Problem is that it now feels like a paranoid echo chamber.
I commented here a while back about working with Northwestern's security to try and protect my own HIPAA data. NW's response to my disclosure was "We're already doing scans. Report back when you find something serious." It's fucking sad that security's treated this way on many fronts. If you're not on the "in", then you're considered a script kiddie, if you're on the "enterprise in", then you're useless and completely caught in red tape, and if you're in the "in crowd", then you're part of an echo chamber.
Just yesterday, I was explaining my reasoning for wanting to make a LAN party as insecure as possible, to promote openness rather than exclusion, since any extreme security measure would prevent further appearances. The not-so-fine print would be, "Hey guys, your stuff is insecure. This is intentional. Please treat it as such." His recommendation was that for each different game (of maybe 30), you disconnect and re-request a DHCP lease from the VLAN hosting the game you want to play. Playing two games at once wouldn't be allowed. Silliness.
A quick read of an article like "Rise of the Cypherpunk" is a reminder of how many awesome people are in cybersecurity.
Coming into meetings with other teams with a list of (often unfounded) assumptions does not help anyone. I wrote about this a bit last year: https://blog.conjur.net/devops-and-security-the-five-monkeys
Also, passwords are dumb. Bob should use certs and/or SSH keys and 2FA to access anything.
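As a concrete sketch of that setup, the OpenSSH side is a few sshd_config lines (exact directives vary by OpenSSH version, and the keyboard-interactive second factor assumes a PAM module such as a TOTP authenticator is already configured):

```
# /etc/ssh/sshd_config -- no password logins, key + second factor required
PasswordAuthentication no
PubkeyAuthentication yes
# Needed so the PAM-backed second factor can prompt interactively:
ChallengeResponseAuthentication yes
# Require BOTH a key and the interactive second factor:
AuthenticationMethods publickey,keyboard-interactive
```

The `AuthenticationMethods` line is what turns "key or password" into genuine two-factor: the session only opens after both methods succeed in order.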
The point is that any security that hinges on hassling Bob is likely bad security.
me: "PR or GTFO."
The problem described is that ISOs are professional nags instead of software shippers.
If the organization has 200 of those apps, it's unlikely a single cross-cutting security team is able to write patches for each one of them.
This is The Reason (tm) :)
> Hanlon’s Razor says “Never attribute to malice that which can be adequately explained by incompetence,” but I would add, “Never attribute to incompetence that which can be explained by differing incentive structures.”
That's profound & pretty thought-provoking.
Ternus's Razor, anyone?
> If you don’t think this is a problem, you can stop reading here.
Thanks for saving me the time
Well, that's a great way to have an honest dialogue.
> You fume for several minutes, cursing all developers everywhere, but no response is forthcoming. Angrily, you stand up and march over to his cube, ready to give him a piece of your mind.
At this point, all you've done in response to finding a serious security issue is to send an email with a very poorly worded and vague title. Why are you getting upset that nobody has reacted to it within a few minutes? I'm also curious as to why Bob would think this email to be unimportant, does InfoSec just use one email subject for everything?
If something's important, people generally turn to synchronous communications, where we can verify that our audience has processed whatever it is we need to tell them. Async communication works just fine on smaller time slices, like chat/IM, but email overall tends to have relatively high latency, especially if you need to communicate a serious security issue.
> Many in the Infosec community are fond of casting the security world as “us versus them,”...
Wait. Is this a joke? Isn't "us versus them" the canonical wrong way to frame just about anything? At the very least, it's obviously the wrong way to approach InfoSec, in addition to virtually any other collaborative effort. This should be obvious if only because you need to work with other people. Conflict does not cooperation make.
> ...he gets lots of “urgent” security emails that turn out to be Windows patches, admonitions to change his password, policy reminders and so on.
That right there is entirely on InfoSec. They're not only boy-who-cried-wolfing, they're doing so with vague subject titles. But that's okay, InfoSec is going to get very upset anyway, because they've failed to properly communicate the severity of the situation, and people are acting accordingly.
> ...and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it.”
Ouch. Why does InfoSec not want to share the wonders of 0days? Seriously, one of the most fun things to do is dissect, or read others' dissections of, an 0day. This is great knowledge to share, and I would applaud any developer who takes an interest in security by wishing to understand what security issues are, especially 0days.
Additionally, rejecting someone's request for additional information about a task you've given them is almost universally a bad thing to do. If it's not feasible to grant them the information they desire, it's on you to properly communicate that, don't just rebuff their request. Transparency is great for teamwork.
> Bob... can’t deal with this now, he’s too busy, it’s not his problem (there are other devs, right?) and you should take it up with his manager.
I actually think this is valid. If a developer honestly feels too busy, going to their manager (who should be in the loop for security issues anyway) seems like a reasonable escalation. If it's actually an urgent issue, it should be valid to disrupt the manager's day with it just as much as it is to disrupt the developer's day.
It seems like it's the same response a developer would give to someone freaking out about a serious bug in their code. If it's a serious issue (developer, for whatever reasons, is unconvinced) then you should take it up with their manager. That's not to say the developer is correct in being unconvinced, but arguing that route is less timely and less likely to actually work.
> The jaundiced attitude among Infosec mentioned above...
When I first read the article, I thought the author was knowingly straw-manning, hence their opening warning about jerks. This seems to indicate otherwise, as the author is seriously referencing their story as something remotely realistic. Either the author's story was a terrible straw man, or I'm very ignorant of how unprofessional my professional compatriots are.
Regardless, the author lays out the solutions:
> Practice active kindness.
That's horoscope level advice. It is good to be nice, but it's not exactly feasible to do all the time to everyone or we'd have solved a great many problems in society a long time ago. Generally speaking, being nice requires some emotional effort, and not everybody has the same capacity for that as everyone else, and those that can afford to do it probably already do. Although I suppose I could believe that an adult capable of being nice all the time simply isn't doing so because nobody suggested to do so...
> Seek to understand and make this clear.
Always great advice, like being kind, but far more actionable and, sadly, far more applicable. Yes, communication is critical when working with others, especially when attempting to delegate tasks to others. This should be obvious, but I've found many people don't take it seriously. If you need something done, it's on you to ensure whoever you delegate the task to understands it at least as well as you. You can't fault them for your inability to properly communicate.
One of the saddest things to see is when two or more parties get upset at their own failures at communicating. For example: Bob shouts across the office "Hey Alice, do X" but Alice is listening to music and doesn't hear. Some time passes, then Bob gets upset that Alice has not done X, and Alice gets upset at Bob for being upset that Alice has not done X. Now we've got two parties, both upset over an unfortunate circumstance, with no resolution in sight. Had Bob attempted to confirm his communication, this whole situation could have been avoided. Alice could also do her part by not being upset by Bob's failure at communicating, but emotions are fickle and it's hard to defend yourself stoically when your attackers are fuming with emotions.
Additionally, had Bob confirmed his communication, in the event Alice had still not performed X, Bob can escalate to whatever authority is appropriate with the evidence of Alice's understanding of his request, increasing the odds for meaningful resolution (at least from Bob's perspective).
> Be flexible. Recalibrate “urgent.”
The boy who cried wolf.
> Create stakeholders...
I'd be a little concerned with teams arbitrarily deciding their security goals. They should 100% be involved in the process, but leaving them entirely to their own devices would incentivize them to have terrible security, as that's generally the easiest thing to do.
>... and spread security knowledge.
This is good advice, and it's sad to think it is useful advice. If you're ever in a situation where you need to communicate a task for another person to perform, you should be more than willing to share information about that task to aid the delegate's efforts. It's an obviously useful thing to do, and it's sad to imagine adults making it through life, surely having had many tasks delegated to them by this point, without understanding how valuable additional information about a task can be.
I would be highly concerned if my coworkers did not already understand this. While we all can't have our dream jobs/offices/employers, compromising on communication abilities is pretty much always going to have both bad and unpredictable consequences (you can't easily predict how someone's going to react to (mis)information from poor communication).
> Fixing Infosec’s jerk problem benefits everyone: us, the people we deal with, and ultimately the security of the system — and since that’s our long-term goal, we should actively seek to fix the problem.
I think the easy solution here is to fire those jerks. Seriously. Talk to them about things, ask them why they're doing what they're doing, but this isn't a systemic issue. This is a personal issue. People being jerks can (and will) happen anywhere people are present, and the solution isn't to group everyone in a large category together and then proclaim it a categorical issue that they all must work to resolve. Find the bad apples, deal with them as you would in any other situation. Communicate to them that what they're doing is both technically (security issues are not being fixed in a timely manner) and personally (they're failing to communicate on multiple levels and upsetting people) ineffective. Ensure they understand that their actions are not desirable and are creating a hostile workplace (I never thought I'd say that non-sarcastically...). If they keep doing what they're doing, let them go. If they harbor animosity towards being repressed, let them go. There's a wealth of wonderful people in the world, seek them out instead. Be selective, it doesn't take a large security team to be effective, especially when developers are a part of the security effort (which they should be !!!!11).
Remember: If we, developers, used modern safe programming technologies, such as safe languages and OSes built around capability-based security, 99% of security exploits wouldn't even exist.
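The capability-based-security point can at least be illustrated at the code level: in object-capability style, code receives exactly the authority it needs as an argument, rather than reaching for ambient authority. Python can't actually enforce this (any code can still `import os`), so this is only a sketch of the discipline with hypothetical names, not a real capability system:

```python
import os
import tempfile

def make_appender(path):
    """A narrow capability: append-only access to one file."""
    def append(line: str) -> None:
        with open(path, "a") as f:
            f.write(line + "\n")
    return append

def rotate_and_log(append_line):
    # This routine is handed only the appender. It was never given
    # open(), a directory handle, or even the path, so under capability
    # discipline it can write log lines and do nothing else.
    append_line("rotation complete")

log_path = os.path.join(tempfile.mkdtemp(), "app.log")
rotate_and_log(make_appender(log_path))
print(open(log_path).read())  # -> rotation complete
```

In a language or OS that enforces this discipline (seL4, Capsicum, E-style languages), whole exploit classes disappear because compromised code simply holds no handle to the resources it would need to abuse.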
Stop doing that.
Second, the #1/#2 distinction has nothing to do with the US government. Overwhelmingly, the #1's I've interacted with have cast "criminals" in the "bad guy" role, not NSA. The point isn't who the adversary is, it's that they're animated solely by the idea that they're in a real struggle with some kind of adversary in the first place.
1. Lazy people
2. Not lazy people.
Haven't you encountered those 1. types that profit by passing on security rumors and pushing papers instead of examining software?
There was a nice presentation by some 2. types that recommended using delimiters in your network protocols instead of [length]+data framing. But I can't find it at the moment.
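I can't find that talk either, but the tradeoff it's about can be sketched in a few lines (a toy framing scheme, not any particular protocol): length-prefixed framing makes the parser trust a length field, while delimiter framing has no length to trust but must escape the delimiter so it never appears inside a frame.

```python
import struct

# Length-prefixed framing: 4-byte big-endian length, then the payload.
# Simple and binary-safe, but the parser must distrust the length field
# (classic buffer-overflow territory when it doesn't).
def lp_encode(msgs):
    return b"".join(struct.pack(">I", len(m)) + m for m in msgs)

def lp_decode(buf):
    msgs, i = [], 0
    while i < len(buf):
        (n,) = struct.unpack_from(">I", buf, i)
        msgs.append(buf[i + 4:i + 4 + n])
        i += 4 + n
    return msgs

# Delimiter framing: newline-terminated records, with backslash and
# newline escaped so the delimiter can never occur inside a frame.
def dl_encode(msgs):
    return b"".join(
        m.replace(b"\\", b"\\\\").replace(b"\n", b"\\n") + b"\n"
        for m in msgs)

def dl_decode(buf):
    msgs = []
    for rec in buf.split(b"\n")[:-1]:
        out, i = bytearray(), 0
        while i < len(rec):
            if rec[i:i + 1] == b"\\":   # unescape in a single pass
                out += b"\n" if rec[i + 1:i + 2] == b"n" else b"\\"
                i += 2
            else:
                out.append(rec[i])
                i += 1
        msgs.append(bytes(out))
    return msgs

frames = [b"hello", b"embedded\nnewline", b"back\\slash", b""]
assert lp_decode(lp_encode(frames)) == frames
assert dl_decode(dl_encode(frames)) == frames
```

Note the single-pass unescaper: chaining two `replace()` calls to decode is subtly wrong when a payload contains a backslash followed by "n", which is exactly the kind of edge case that talk was presumably warning about.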
There are no doubt a lot of smart security people with great judgment in the industry. Unfortunately, my experience is that such people are FAR from the majority. (This is true for other non-security segments as well, of course.)
Yet another sql injection, but ya, Infosec's the problem.