Infosec's Jerk Problem (2013) (adversari.es)
210 points by rfreytag on Apr 29, 2016 | 130 comments



One of the root causes seems to be that everyone with the aptitude for security crowds toward jobs that don't actually involve implementing good security. It's not as fun to be a developer who is really into security when that's only part of your job. Even if it's all you do, if your days are just "analyze, document, harden, repeat," that's a lot less fun than getting paid to pop boxes. I know, because that's what I do.

That disinterest is the entire raison d'être of I Am The Cavalry. From their webpage:

    No one is rising to meet these challenges. The cavalry isn’t coming. We are the domain experts and we are the adults in the room. It falls to us. We Are The Cavalry.
The less-charitable interpretation of this is "Hey dummies! If we all want to play red-team, who the fuck do we think is left over to fix things?"

I went to a two-hour presentation at $serious_security_conference on $my_product_domain. I was really excited to see what there was for us. We're not running webapps on racked pizza boxes, so there are a lot of topics in our systems that aren't really explored in public security literature. Instead all I got was some dummy thinking he's hot shit because he found some vulnerable systems on Shodan. I got angry enough to walk out, then went back thinking there was probably some value to be had. Three times.


The funny thing is I quit hacking because it was easy, mostly unimaginative, and repetitive. The high-assurance systems and security fields were much more interesting, especially the CompSci work. These are about straight-up engineering systems or security methods so that they'll always work given a specific threat model. Lots of good stuff came out of that field, with things occasionally breaking in new ways due to the immaturity of the computing discipline. Yet certain things are well understood, to the point that 60's-80's era methods get 80-90% of the job done. They're still unknown to most mainstream INFOSEC people. So, don't worry when they act cocky. :)

Anyway, the market rejected that: vendors maximized profit with buggy software, and consumers wanted unnecessary complexity at a rapid pace in every aspect of computing. So, I typically recommend medium-assurance solutions these days. Safe-by-default languages, interface checks (esp. Design-by-Contract), usage-based testing to keep users happy, fuzz testing to find missing input validation, static/dynamic analysis tools, good architecture, good middleware, an OS with few 0-days, and so on. These mostly cover you.

What type of apps do you build on what platforms?

EDIT to add: The link below has descriptions of lots of defense techniques that aren't boring at all. What do you think of that?

https://news.ycombinator.com/item?id=11596646


>A root cause seems to be that everyone with the aptitude for security crowds toward jobs that don't actually involve implementing good security.

To expand on this, I've also noticed that "security" in enterprise-size companies tends to be a dumping ground for helpdesk+ staff -- people who can read a CVE or parrot "best practice", but can't really grok the subject or think critically about it (e.g. jumping from one password to another every 3 months isn't "more secure"). It's been my experience that people in security teams can't implement the fixes to the things they complain about, because if you're a security-minded dev, your career path will be in dev.

>It's not as fun to be a developer who is really into security when that's only part of your job.

I think it would be ideal to have security as a parallel track to regular dev staff -- with their placement in the ecosystem being between dev and QA. Much like how "devops" was used as a hiring filter to get sysadmins who can at least read code, I think we should have a "devsec" group that bridges security-QA and dev.


> people who can read a CVE or parrot "best practice", but can't really grok the subject or think critically about it (e.g. jumping from one password to another every 3 months isn't "more secure")

So much this.

One place I worked had a head of security who switched us to passwords that auto-expired every 30 days, had annoying requirements (a mix of upper, lower, numbers, and special characters), and couldn't be reused for 1 year. The increase in people having problems was handled by adding "security questions" and self-service password resets.

My insistence that "security questions" were just unexpiring, easy-to-guess passwords was met with "But this is the standard!". Pointing out that rapid turnover of hard-to-remember passwords led people to write them down generated a similar reaction.


Your statement described my hell.

I put a lot of effort into running environments securely, and eventually, someone always decides to get a second opinion.

The only recommendation I have ever seen come from a security audit in an enterprise is bikeshedding password lifetimes. Every time I point out "SQL Injection in the logon form" level vulnerabilities, I get directed to the "security experts", which is why I've seen password policies go from 180 days, to 90, to 30, while the mentioned issues get ignored.

People flash a PwC business card and it's seen as a credential. Then they drag me to the comms room and spend all day auditing the serial numbers on individual patch leads going to desktops.
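
To be concrete, the "SQL Injection in the logon form" class of bug takes about a dozen lines to demonstrate. Here's a minimal sketch in Python with sqlite3 (schema and names invented for illustration):

    # Minimal sketch of a logon-form SQL injection (hypothetical schema).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    def login_vulnerable(name, password):
        # BAD: user input is concatenated straight into the SQL string.
        query = ("SELECT 1 FROM users WHERE name = '" + name +
                 "' AND password = '" + password + "'")
        return conn.execute(query).fetchone() is not None

    def login_safe(name, password):
        # Parameters are bound as data, never parsed as SQL.
        query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
        return conn.execute(query, (name, password)).fetchone() is not None

    print(login_vulnerable("alice", "' OR '1'='1"))  # True  -- auth bypassed
    print(login_safe("alice", "' OR '1'='1"))        # False -- rejected

The parameterized version costs nothing extra to write, which is what makes the misplaced audit attention so maddening.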


You don't need to create a term; you just need to want to hire good people and actually pay for them. That said, as a security-interested dev, I actually work in devops, as it's simply the best-paying job, so maybe a term does need to be created. But if you actually deeply understood security, you'd probably be hiring a devsecops.


I'm also on the defensive side. I went to the last "DefCon", which you'd think expands to "Defense Conference". There was a grand total of one presentation that had defensive elements in it and I still don't recall it being dedicated to the topic.

I don't intend to go back. Great conference, don't get me wrong, I got personal enjoyment out of much of it, but it's impossible for me to professionally justify it.

Defense isn't fun enough, for anybody; neither the one defending, nor the developer who was just told the code they spent a week developing is a gaping security hole, nor the managers being asked to spend money to prevent black swans, nor the open source developers having fun developing frameworks and libraries, and on it goes. It just isn't fun.


Nonsense. Defense is plenty fun. Anyone can find a 0-day. The principles and some methods are the same crap Schell et al. covered in the MULTICS security evaluation they published decades ago. Same kind of stuff Burroughs was preventing in 1961. More interesting was when the old guard tried to build something that couldn't be compromised under any circumstances. Others tried to find and master key areas of the problem. The results were very interesting, although not always fun to apply, given they could be tedious. Even then, trying to automate that in tooling provided a whole new level of fun to distract one from the tedium. :)

https://www.schneier.com/blog/archives/2013/12/friday_squid_...

Posted above in response to Schneier's call to INFOSEC after the Snowden leaks. The title is both a parody of cryptography papers and a trick I used to make sure I can Google a link with a few unique words. Still works, haha. Anyway, that link covers both high-assurance and medium-assurance INFOSEC methods out of the CompSci and Defense sectors. Skim through the titles and abstracts, then come back and tell me if you still think defense is the boring part.

I mean it, really, because I think people believe that only because "INFOSEC" as you see it in the mainstream is (a) bullshit or (b) boring. The real deal is constantly innovating, works more than it fails, and is quite interesting. To me at least.


"To me at least."

I suppose I wasn't clear. I have fun with it too for the most part or I wouldn't be doing it. But I observe that I'm a rare duck that way.

And the unrelenting stream of bugs caused by the same fundamental root-cause issues over and over again can get old. How many times do I have to see an XSS brought on by using string concatenation on HTML? How many times do I have to see, well, any of several vulnerabilities based on string concatenation before the engineering community actually acknowledges it's a problem? That can grate on someone after a while.
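
For anyone who hasn't seen it, the pattern is depressingly small. A minimal sketch in Python (function names made up for illustration; any auto-escaping template engine avoids this):

    # Minimal sketch of string-concatenation XSS (hypothetical functions).
    import html

    def render_greeting_unsafe(username):
        # BAD: user input concatenated straight into HTML markup.
        return "<p>Hello, " + username + "!</p>"

    def render_greeting_safe(username):
        # Escaping at the HTML boundary turns markup into inert text.
        return "<p>Hello, " + html.escape(username) + "!</p>"

    payload = "<script>alert(1)</script>"
    print(render_greeting_unsafe(payload))  # script tag survives intact
    print(render_greeting_safe(payload))    # &lt;script&gt;... shown as text

The fix is a single escaping call at the output boundary; the grating part is how often it gets skipped.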


Ahh, you do web applications. Then you might like my summary of state of the art for that:

https://www.schneier.com/blog/archives/2014/04/the_security_...

It's due for an update in the near future with resources that a few people here shared with me. Still has lots of interesting stuff, though. It might not help in your day job, as you can't dictate the stuff to management or the dev team. However, you could enjoy doing it in private on a FOSS project, exploring those tools or improving/cloning them for others.

There are also projects that try to automate prevention of known classes of bugs. If you find some that are easy to use, maybe bring them to the attention of your management, showing they make QA more efficient & liability lower while requiring no extra cost on the devs' part. Win-win.


That's an interesting perspective, but some additions:

- the client (i.e. browser), protocol, proxies, etc. are very hard to reason about. For instance, content sniffing and HTTP response splitting should not show up in a sane protocol, yet here we are. An encyclopedic knowledge of - at least - browser bugs is required to write "secure" web sites.

- type-based safety doesn't do a lot to ensure that e.g. the same-origin policy actually protects your users (i.e. that there is not some unrelated application on the same domain-ish). The public suffix list (https://publicsuffix.org/) is essential to modern web security, and there seem to be no plans to change this.

- "The Tangled Web" is very good (or at least the first half is). It makes me happy I'm not working on the web. ;-)


"the client (i.e. browser), protocol, proxies, etc. are very hard to reason about. For instance, content sniffing and HTTP response splitting should not show up in a sane protocol, yet here we are. An encyclopedic knowledge of - at least - browsers bugs is required to write "secure" web sites."

It's true. I think you could encode a lot of that in a framework or language such as Opa or Ur/Web. What's left is a tiny checklist. Edit: maybe not tiny, but smaller.

"type-based safety doesn't do a lot to ensure that e.g. the same-origin policy actually protects your users"

There are secure browser architectures that do that in various ways. I'm not sure about the language level. They might need to be combined. Worth thinking about. The publicsuffix website is new to me, so thanks.

" It makes me happy I'm not working on the web. ;-)"

You're the second or third to recommend The Tangled Web. It's on my reading list for if I get back into web development. I'm sure I'll have the same reaction. You might find it funny that a prior commenter found it shocking that I hadn't read work on secure web applications. Whereas I originally found it shocking that people were trying to build them.

Perspective changes everything, eh? :)


Let's be honest, Summer Security Camp is far more about socializing with "our tribe" than it is about learning.

That said, understanding how the red team does things is extremely useful for formulating one's own defense.


You're right. But at times it feels like the red team has gotten so far past the blue team that it's hard to learn much from them. The red team is on unicycles juggling flaming torches while keeping three plates spinning and still successfully picking the lock while the blue team is trying to figure out how to take three steps without breaking a leg and flinging the complete contents of their wallet at the nearest brigand, which they are really good at, sometimes nailing the guy from several miles away.


Isn't DefCon supposed to be a play on words named after the defense readiness condition alert state used by the US armed forces?


Had a good friend and colleague who made an interesting point about how/why a security role can get depressing.

What happens if your security program is really really really effective?

Answer: Nothing


That's the paradox of security.

You can be a security superstar, or a total idiot, and you may achieve the same result!


I could imagine that there would be a Peter principle situation where a successful infosec professional is given progressively more difficult responsibilities as long as nothing bad happens. Is this not the case?


The opposite could come into play at some companies where people start questioning the expense of your salary since you're never having to deal with successful attacks. It's hard to talk up what didn't happen. Then you have people always putting out fires who look like they're "getting shit done."

It often seems like a myth when you hear about that phenomenon, but then I experienced it myself in a different role and became a believer.

It could still be the Peter principle, and you may just need to better communicate your unseen accomplishments. Maybe there should be a modified Peter principle, where people rise according to their ability in the job and in corporate politics.


Or promoted into management where they implode due to being introverts who can't politic well.


Same problem in IT. It's an inverse recognition problem.


Jobs that involve implementing good security don't exist. That's because there are no customers for good security, because customers can't tell the difference between good security, bad security, and no security, unless they happen to get compromised. Even then, they usually don't notice. Often, they can't even tell the difference after they notice a compromise: they can tell that some of their security was bad or absent, but they can't tell what or how; so often their incident response is useless or actively counterproductive.

This information asymmetry problem means that, in a job implementing supposedly good security, an aptitude for security is not an advantage. In fact, it's a disadvantage, since that aptitude inclines you to do unprofitable things like auditing code, recommending to your customers that they scrap unsalvageable codebases, and staying up late at night patching production servers instead of ironing your suit and getting a good night's sleep before that important client meeting. Total incompetents wearing better suits will be able to bill a higher hourly rate than you do.

Because there are no customers for good security, the entire infrastructure of plausibly securable software that you would need to implement good security also doesn't exist, because the market has failed to provide funding to develop it. Big technical companies (Microsoft, Google), free-software organizations (Debian, Tor), and dedicated individuals (Bernstein, Moxie) have made significant progress, but it's nowhere near being a viable option.

By contrast, I imagine that if you "get paid to pop boxes", you can demonstrate to your customer that you succeeded at it, and you probably can't convince them that you did if you didn't. This means that aptitude and expertise are an advantage in that field. Don't get me wrong: even from this distance, I can see that you're right that the field is still full of the script kiddies who "think [they're] hot shit because [they] found some vulnerable systems on Shodan." But even they are head and shoulders above many of the high-priced security consultants on the defense side from places like Keane or Wipro.

The upshot of all of this is that if you want to secure your network, the best you can do is figure out who the expert attackers are, then convince them that it's worthwhile to secure it. But that's still nowhere near enough to get to "probably secure", much less "assuredly secure".


> that's a lot less fun than getting paid to pop boxes. I know, because that's what I do

Wow, this is an interesting claim. You do this work for a living? Are you ever concerned about getting caught? How do you get paid?

I'm assuming that covering your tracks is some sort of ever-escalating cat-and-mouse game. How do you find out about your adversaries' capabilities regarding investigation after the victim detects the problem?

EDIT: oh, oops. Double-misunderstanding. "that's what I do" meaning "harden,test,repeat" and of course some folks are legitimately paid to do pen testing.


No, I mean to say that I'm on the boring side of the line.

But penetration testing is a job that exists. People are hired to do exactly that under contractual terms with the target. (And I don't mean to imply that's the only security job in the more-rewarding/fun category.)


Rewriting someone else's code to eliminate the potential buffer overflows is a lot less glamorous and exciting than exploiting a buffer overflow to escalate user privileges, in the same way that watching a mason re-mortar an entire brick wall is less exciting than watching a ninja scale that wall, eliminate the guards in complete silence, steal the MacGuffin, and escape without a trace.

But it is steady, decently-paying work that will likely last until the singularity.


What do you mean it's an interesting claim? It makes perfect sense to me, and I'm just a regular old web developer.


Penetration testing is legal.


Oh, yes, of course. I misunderstood!


I carve up this problem differently. To me, the field can be divided into two basic categories of people:

1. People who are into security to prosecute some immortal struggle between good and evil.

2. People who are into security because of the engineering challenge.

It's the people in group (1) that I tend to have a problem with. Often, for the "good guys" security professionals, engineering facts are just a means to the end of winning the war on the "bad guys". These are the people who tell you your Scala serverside application is insecure because of some recently-released Java applet bug; they're the people who fought against DNS query randomization for a decade because we need to finally release DNSSEC; they're the ones shipping grievously broken cryptography to try to "stop the NSA".

There are jerks in both categories, and competitiveness is a dimension orthogonal to this one, but I find I can handle jerkiness and competition when I know it's from someone who truly cares about understanding what they're talking about.


One of the worst is the cyber-security political purist. The person who has drunk the kool-aid so badly and/or likes to use their knowledge to beat others round the head with it. The person who likes to talk about things like "risk/threat modelling" but actually doesn't:

a) Have a real understanding of the needs of the people they say they are trying to help

b) Have the courage to make tough calls related to ideal vs good security.

Working in the NGO security space we get these people all the time. Send them out in front of a bunch of African/Middle Eastern/Asian/Russian activists who have been risking their lives for years for democracy etc etc and all they do is spend three days showing off, berating people and then leaving. Telling everyone that TAILS/PGP/Linux/TrueCrypt etc etc is for everyone and that anything else "is gonna get people killed," so by extension shouldn't be done. There are people who have made entire careers out of that attitude, which most of the time actually leads to demoralisation and a long-term decrease in security.

It's easy to recommend the hard "cover your ass" stuff because you don't like the NSA, it's harder to say "Hmm, ok we gotta assume some risk that the NSA doesn't care what we're looking at and use a solid Google two-factor here and see how we get on...now how do we stop the security guard from selling you out?"


This. If the Mossad/NSA/PLA thinks there is anything of value on our network, they either already have it or could trivially get it. It wouldn't be the end of the world if they had it, so I don't really care.


Yep, within reason - I don't buy into the "I have nothing to hide, so I have nothing to worry about" argument. People like Trump or UKIP should remind us of the danger of that. Though we should remember that, in a realpolitik world, the long-term interest of the NSA -> US Gov -> West etc. is in supporting democracy activists, for example. It's just a pity that the sensationalism of short-term "counter-terrorism" interests actually damages the long game.

On a sidenote, as a digital and physical security training company for NGOs we manage to get a look at cases from both sides of the coin. Our very very rough guesstimate is that we see confirmed human penetration about 3 times more often than we do digital penetration. Of course, this is very rough and has soooooo many other bias factors at play (numerical, cultural, how many we see vs. don't see, etc.). But I think it is a point that we keep having to reinforce. Too often powerful "infosec jerks" distort the focus towards Western biases because of Snowden, Facebook, SnapChat, and the iPhone, and this distracts time, money, energy, training and security measures from the human penetration aspect of things - which is very common in the developing world.


So I had to read that last paragraph twice. Could you explain what you mean by "human penetration"? And no, I'm not being dirty (though my mind did initially do a few mental flips when I first read that phrase, it's not my fault I never completely matured...) I'm genuinely asking what is meant by that. Do you mean that someone walks in and attaches a serial cable to a router and their laptop, or plugs in a USB stick into an unlocked workstation?


No, at its most basic level I mean a spy or insider threat.

In the NGO contexts that I have seen, that usually means someone who legitimately works in an organisation but turns for the standard reasons. (More effective, faster and cheaper for an adversary that way.)

To a slightly lesser extent, it means someone from the outside who has been placed on the inside. (Less effective, slower and more expensive for an adversary that way.)

Sometimes both of these scenarios also include digital aspects, like stealing a USB drive or something but not always.

Before you ask why people do it in an NGO environment - fairly similar reasons as elsewhere (though I tend to order them differently based on experience):

US Method of Counter-Intelligence:

-Money

-Ideology

-Compromise or Coercion

-Ego or Extortion

or these days:

-Reciprocation

-Authority

-Scarcity

-Commitment

-Consistency

-Liking

-Social Proof

A good read for more info here: https://www.cia.gov/library/center-for-the-study-of-intellig...


Well that's scary as all fuck! I didn't realise NGOs were as susceptible to this sort of thing as commercial enterprises. I guess I was being naive and should have known better.

Thanks for the insights.


Honestly, in many cases NGOs actually have a far higher physical and digital security threat environment than corporations. Partly that's one of the things I love about my job.

I mean, yeh, it's cool if you can get paid loads of money for a 9-5 job to throw a ton of resources and people at protecting your Pied Piper software company in suburban USA or Europe... But now take the exact same adversary (the China gov, for example) and try to think up ways to minimise their threats, all while driving around in the middle of the night in darkest Africa/Middle East/Asia with a hobbyist sysadmin (who the local gov may arrest, torture or disappear if exposed) and who has very little English... The pay is crap or non-existent and it can be high stress, but you get to make a real difference, which is rewarding.


Question for you, what company do you work for? I've really wanted to break into NGO security in particular for a lot of reasons. Spent a lot of years working in NGOs, and now do internal corporate security. Pay's better but it feels different. Could you possibly reach out to me? pdoconnell at gmail


Cool.

We run our own. It's here www.secfirst.org - email me rory@


All I know is that far too many "security guy" discussions sound like dick waving contests pronounced in the tone of The Simpsons' "Comic Book Guy".


It's sadly nothing at all like debates between people pushing different unit testing methodologies.


Also nothing at all like debates about static vs dynamic typing in programming languages!


It's an interesting categorization. I'd be in both categories, given various activities. Yet I disagree with category 1, as it seems like an accidental strawman. There are certainly those types of people, literally. There are more of us, though, who see insecurity baked into almost everything we depend on, even as a country. The state of the grid, banks, increasingly important mobile OS's, infrastructure protocols... you name it... has been shit for way too long. Unnecessarily so. You alluded to that with your DNSSEC example.

So, on top of the engineering challenge, there's the need to fight for a greater baseline across the board. I'm just for doing it a different way than in the OP, where they call the devs idiots. ;) More along the lines of convincing management to adopt whatever practices add great improvements to quality and security with little extra cost. Quite a few exist. The value is lower liability, more predictable schedules due to reduced debugging, a quality/uptime differentiator vs. the competition, and the possibility of charging extra for that.

Enough promotion of better-by-default in key areas will improve our baseline over time. There's actually been a lot of progress vs. where we were in the 90's, so it's at least working a little. DARPA and NSF are also steadily funding strong stuff in a variety of areas, with legacy compatibility where possible. More potential to push strong stuff in the future.


>they're the ones shipping grievously broken cryptography to try to "stop the NSA"

I don't understand this aspect of the strawman. Could someone help me out? In the context of the previous two examples of how members of group 1 act, is tptacek merely implying that members of group 1 are stupid?


This can probably be read as a dog-whistle to his war on the EFF Scorecard in general, CryptoCat in particular, and Nadim personally.


It's not complicated. Most of the world's really bad amateur crypto is written as a vanity exercise in saving the world. It's not like there's uncertainty about whether people have launched grievously, dangerously broken crypto software; they have. And those people do tend to be #1's.

Also, please do not try to represent my problem with the EFF's scorecard as somehow idiosyncratic. There probably aren't too many crypto engineers who don't share my opinion of it. You're probably giving me more credit than I deserve when you call it "my war".


> immortal struggle between good and evil

Immortal, maybe. I don't know about it being a struggle, so much as polarization. Think how fucked up angry memes can make things.

Security is discrimination. Keeping you out of my shit involves saying "these things here are mine" and "those things there are yours" and "I won't allow your things and my things to entangle". Someone who hacks that discrimination is entangling things in a one-way direction (your stuff entangles with theirs, but not the other way around). Manning hacked for everyone, wanting the entanglement to go both ways. (I taught Manning how to drive, FWIW.)

In Zen Buddhism, discrimination is considered to bring consciousness. Something that is aware of itself completely (all knowledge) is non-discriminatory in nature. Security, in contrast, is all about keeping things separate. If you understand that, you understand consciousness.

People who "get" security on another level will struggle to find a happy medium between these two polar ends of not knowing and knowing. What is the right amount of security? What data of mine is mine to keep? How can seemingly innocuous data be used in secret to bring lower levels of consciousness to a given group? Government surveillance springs to mind.

We could stand to have more tolerance for "jerks". They may be jerks because they understand the gravity of the situation. Others may not, and need to be given the information to understand it. We are in interesting times. If we do not guide the transition carefully in the coming years, there will be much suffering...due mostly to poor security.


This is a strange statement. Do you underestimate how many of your friends and respected peers are #1?

I think it's the #2s who are too quick to dismiss #1s that I tend to have a problem with.

I don't think jerkiness correlates with either of them even a little bit.


There are people who are both #1 and #2, but they're pretty rare. Or, at least, that's the perception I get, because I don't generally have to interrogate people to learn if they're #1s: they're all too happy to broadcast that fact.


Are you speaking from personal (statistically insignificant) experience or do you have some quantifiable evidence to share?

Your HN-karma & real-world accomplishments aside, do you honestly think you can psychologically categorize a group accurately enough that we should use your defined stereotype to improve ourselves and/or society?


I think it's pretty obvious I'm speaking from personal experience, and I'm not sure it matters to me what you choose to do with that information.


Would you consider the Wozniak/Jobs story part of the canon? The hero's journey from #1 to #2?

(i.e. #1 Cops & Robbers -> #2 Realpolitik)


>I think it's pretty obvious I'm speaking from personal experience, and I'm not sure it matters to me what you choose to do with that information

Everything is obvious to the person who has nothing to learn.


I'm sorry, I'm really not clear on what you're trying to say here. Do you disagree with the #1/#2 breakdown I presented? That's totally fine. But I don't think it's any less productive than the discussion about whether "infosec has a jerk problem".


The article showcased common social problems and then shared ways of mitigating, even understanding, the "jerks". We could all be more kind.

Your post seemed to be focused on the details of those you dislike. That's it. No story about how you built camaraderie or struggled to understand them. What lesson should someone learn from your post?


Huh. Fair enough. My comment is definitely about my issues, and probably not (now that I'm forced to reflect on it) in the original spirit of the post.

This particular issue is buzzing in my head this week because of the stupid grsecurity Twitter drama, which is really bugging me. I guess I'd like an "infosec community" that spends more effort thinking about engineering and less about community dynamics.

We could definitely all be kinder.


I can think of several people who fit firmly into both. Moxie, for instance. I fail to see how these two categories are mutually exclusive.


Hold on. The definition of my #1 category isn't a belief in good and evil; it's the idea that their primary function in the field is to save the world, not to perfect security engineering. Looking at Moxie's work, I don't think anyone can legitimately accuse him of not taking engineering seriously.


I understand that, but it's reasonable to suspect that for a significant number of #2s, #1 is the initial motivating factor that got them learning.

We see this all the time in software development, where excellent engineers are borne of trying to build a video game at age x, where x is a relatively low number.

Similarly, there's no reason that someone initially motivated to eradicate "evil" can't become a great security mind, and I suspect Moxie and many like him are of this ilk.


Again: you're rebutting me as if my argument was that anyone who believed in "good" and "evil" was a #1. That's not my definition of a #1.


Shouldn't there be a #3: people who are into security because it's important for their entity's constituents?


My #3 would be people seeking a low barrier to entry cushy corporate job. Unfortunately for me, I encounter way more #3's than #1 or #2's...


Good article (although I thought it was going to have something to do with the grsecurity thing), but it doesn't mention the other side of the problem:

"To them, we’re Chicken Little crossed with the IRS crossed with their least favorite elementary-school teacher: always yelling about the sky falling, demanding unquestioning obedience to a laundry list of arcane, seemingly arbitrary rules (password complexity requirements, anyone?) that seem of little consequence, and condescendingly remonstrating anyone who steps out of line."

That is often true.

It might help if security knew what the hell we did. The last three "Java is broken forever!!!" issues, when the problem was with applets, nearly killed me. Or possibly if they knew their job. The last employer's security policy was defined by the latest vendor tool they bought, never mind that it couldn't possibly find any issues in the applications we were writing.

Of course there was the code review I did where I pointed out that it had a giant, monstrous hole in it. The other dev nodded and said, "yeah, that's a good point". But the app had to be in production now, so nothing happened. To escalate, I would have had to go to security, which would be like complaining about a hangnail to an axe murderer. So, nope.


The (un)funny thing is, most developers would love to have the time to make sure their code is secure and well tested. Very often they lack a voice with product stakeholders to get time off feature development and make sure their software is up to date with patches.

> Practice active kindness. Go out of your way to do kind things for people, especially people who may not deserve it. If you wait for them to make the first move, you’ll be waiting a while — but extend a hand to someone who expects a kick in the teeth and watch as you gain a new friend. Smile.

I really like this quote. A security engineer and a developer teaming up as colleagues are more likely to be taken seriously by stakeholders. Both teams working together have a much better chance of being given the time needed to make sure their software is stable and secure.


> most developers would love to have the time to make sure their code is secure and well tested

I suspect not.

No matter how much time you have, it's more exciting to work on something new than going through testing. Time is not all equal. Even if you have an unlimited supply of time (everlasting life), you cannot somehow use a time block that occurs 1000 years from now, in order to displace the boredom you feel from what you're doing now.

If anything, unlimited time will increase procrastination. "If this isn't debugged for another 500 years, that's okay; I will live long enough to see it debugged."

Thus, I suspect, most developers would actually love to have a vast army of other people with unlimited time to do the QA to make sure their code is secure and well-tested. :)


Until companies start being held liable for their software deficiencies there won't be a change. This is also why I find "Software Engineering" a joke. The equivalent of what passes for Software Engineering, in any other engineering field, would put people in prison.


That's hardly the case. A lot of what passes for software engineering also passes for other engineering.

What we actually see is a lot of apologizing, recalls and class-action suit settlements, and nobody actually seems to go to jail.

Not all engineering is about bridges not collapsing; conversely, there is some software that is equally safety-critical and carefully developed.

There is also "everyday engineering", like in consumer products. That's a category that fails miserably. Put simply, shit breaks. Past the one year warranty? Too bad!


> most developers would love to have the time to make sure their code is secure and well tested

This is certainly true. But competitive pressures will always force quick hacks over software robustness. The only real solution is a sort of developers' guild -- whose membership includes over 90% of all worldwide professional developers -- wherein an oath is sworn to always include security robustness as a required feature during the software estimation cycle. Or perhaps an analogue to the Hippocratic oath.

Which of course will never happen.


The real solution, in my opinion, is educating people on security. Not just developers, but also end users, sales people, and product managers.

When users start choosing the robust and secure product over the quick and insecure product, sales will pick that up, product will follow, and programmers will treat security just like any other feature.


The problem is two-fold. The average user doesn't care about security and probably never will.

Many developers like to hack software together. Making it secure requires discipline, more time, and more knowledge during development. Most businesses don't really care about this. They just want it finished fast.

The only way it will change is if the government starts fining companies for software that has a crazy amount of bugs or requiring developers to be certified.

I honestly don't think many developers even have the ability to write software without the obvious bugs we see popping up today. For sure we would see fewer companies outsourcing to countries like India.


Respectfully disagree here. Infosec has been trying the education path for decades now. It's not working. Either something needs to change about the educational process, or acceptance that it's failed is warranted.

I think fixing development will be more effective than fixing users. There is no legitimate reason that the OWASP top ten or missing buffer bounds checking should still be in the wild in 2016, whereas users will always fall for scams, phishing or otherwise.


I think the challenge here is: if the users don't care, why should the CEO of your company?

...and if the CEO doesn't care, why is he going to want to pay developers to (as you say earlier) "always include security robustness as a required feature during the software estimation cycle"?

So, unless the users _do_ care, the only way I can see this happening is if it costs no more to the CEO/users to do this than it costs not to... and that either means all developers swearing your security "Hippocratic oath" (which, as you say, will never happen), or languages/tooling that make this automatic (where there have been steps forward, but we're clearly not "there", and I doubt we ever will be).


I would like to add that while this advice would generally work, there are some really shady characters that one has to deal with sometimes. In that case, the other person might just keep taking advantage of your kindness. So, there does have to be a give and take: do a little bit, and hope that they do a little bit as well.


This.

In addition, there are people who are quite literally "dangerously wrong": they talk with great authority and can influence large numbers of people to follow them, even though what they advocate is non-productive. At best they can waste immense amounts of a community's time as they cause great debates among members. (Tabs versus spaces, for example.) At worst they gain actual authority, do great damage, and then leave the community to deal with the aftermath. (Cure your diabetes through positive thinking!) So while being "nice" and "empathetic" are excellent defaults when dealing with people, it's just as important to understand when ugly ideas need to be squashed and the undecided urged to do what needs doing.


If you think Infosec guys are jerks, I've met a group of even bigger jerks in some organizations: developers!

This isn't always the case, but in some software companies the developers capture the entire organization and run roughshod over everyone. I've seen developers tell support folks that because they are in support, they are utterly worthless and their opinions and views don't count and never will, right after asking for their opinion. I've seen them obstruct QA people. Not arsehole, stick-it-to-you QA folk, but quiet, unassuming QA guys doing manual test scripts who find a failure and send the case back to development.

Yup, having one-eyed folk around can really cause a toxic environment. Of course, I have to be careful: 6 years ago I was often that toxic one-eyed person, and I learned this the hard way.

Don't do that. Don't be me. Be nice and be willing to concede that the other person has good intentions. You'll be less angry, more likeable and probably more productive.


I've seen this happen way too often at various organizations. My takeaway is that security is a function of the product, not some external process you bolt on top of it.

Organizations typically fail at security because they try to manage it as far away from the product as possible. That leads security and dev/ops teams to manage their goals in isolation: one cares about preventing incidents, the other cares about shipping products.

If Agile & DevOps have taught us anything, it's that everyone in the organization should be focused on serving the customer, be it through features, reliable operations, or data security. The only good way to do infosec is to embed security with devs and ops and make everyone share the goal of making the product better.


I don't think the problem is specialization, just that the incentives mean that the specialization doesn't always get applied where it's needed.

Presume a company with the budget to have a SOC; they're doing all the "regular" security jazz and then some. But are they auditing the network services they themselves run, or just applying patches? Auditing the products the company itself is producing, or just kicking the tyres? But they MITM all the outbound https connections, so I'm sure that more than makes up for it.


I read the first lines and thought immediately of all those e-mails marked IMPORTANT coming from "my bank" that request I immediately enter my username and password somewhere for "security".

Teaching blind compliance with any (unauthenticated) request based on "security" is the one way we could make the situation even worse.


It's a pretty big deal. When everything starts at urgent and gets worse from there, people will just rescale the noise to be more understandable. It's just like the joke about sitting down and assigning points to a task and finding out everything is 100 or all bugs are critical or all tasks are top priority.

The meta joke here is that in some ways every security issue is critical, but if everyone is immune to the fear then escalation will feel like the only choice. Either you slowly discharge that stress immunity by being sensibly mellow, or you start packing heat to 'convince' everyone to reboot NOW. (There's probably room for some amount of finesse between these two extremes.)

EDIT: There are two terrible responses that seem to come out of this. Either no one gives a damn, or no one gives a damn and just does whatever you say. The first is bad because nothing gets fixed, and the second is bad for red_admiral's reasons (users treat anything that looks like a security rant as a EULA and just do whatever it says).


> all tasks are top priority

I often joke with my boss that if all tasks are high priority, all tasks are therefore of average priority.


> When everything starts at urgent and gets worse from there, people will just rescale the noise to be more understandable.

I really like that phrasing. I've been intuitively thinking along the same line but rescale is just the perfect word. Thanks!


Tell me about it.

Got an email from my bank yesterday, "You have an urgent notification waiting on our website! We won't tell you anything about it via email for security reasons!" So I log into the bank, go to the notifications section, "Notification! You have a new IMPORTANT document." Click on documents: "Here's the monthly statement for your savings account."

Thanks guys. I hope you never need to contact me about something urgent.


> Teaching blind compliance with any (unauthenticated) request based on "security"

This feels especially strange on the phone, when someone (who later turns out to be legitimate) opens up the conversation by insisting that I prove who I am, _when they're the one that called me!_


Doubly so when they get irritated with you for responding with "and how do I know you are who you say you are?"


After nearly 20 years in various kinds of infosec companies, I've seen companies where jerks are not common, and companies where the jerks are very common.

You cannot fix the jerks in the latter kind of company. Learn from them, quietly, and look for your exit.

Some people think that there is a correlation between being really good and being a jerk. Only in the sense that the only way a jerk can survive is if they are very very good. Some other people try to pattern-match them. Mimicking the "jerk" part is easy. Mimicking the "very very good" part is not. (See also: making a name for yourself by being a jerk is a lot easier than making a name for yourself by being really good.)

There are good people who are not jerks. I've worked with a bunch of them.


I've worked with such a mimic. He did a security audit, then scrapped the whole system because there was a completely unfiltered network interface, and no matter what other security was in place that would never ever be good enough on a GNU/Linux system. "All interfaces must have strict filtration, only allowing traffic on previously approved ports as per system and application specifications."

The interface in question was called "lo".

I don't work there any more.


>The interface in question was called "lo".

Ok that cracked me up. Happy Friday.


What is extremely frustrating is the rise of "cyber security" Masters degrees.

The vast majority of these people have never written a single line of code.

They don't understand security, because they can't understand the underlying logic in the code. They just write documentation to meet certain outside standards, and have no idea what I'm talking about when I talk about our security posture. They genuinely think that an online satellite campus degree qualifies them to manage security devs from top ten schools.

This coast's tech centers are starting to drive me batty.

For a data scientist, my normal trade, especially. There's so much red tape that it's a two-month project to get myself a small server for testing.

Anyone in need of a remote data scientist? Princeton University A.B., two years at a funded startup, four years' work experience with a major firm, with specialties in machine learning, (real) cybersecurity, and big data.


Someone I know is getting one of those 'cyber security' Masters degrees at Mercyhurst in Erie. I respect the guy, he's really smart, he has an intuitive and pragmatic view of politics... but he hasn't written a line of code ever. He uses a Mac, but he's never opened Terminal.

And apparently the average graduation salary for these people at companies like Disney is around 120K. I want my friend to do well, but I also don't want the security industry to consist of people who have never used Linux.


I mean, I earn more than my boss, but he's still in charge. He gets the credit for the software projects I actually have to manage. So it's likely he'll rise up the ladder more quickly, and earn a multiple of my salary.

Yet, he's further up the chart and makes the final decisions, when he can't understand basic variables that go into what applications (especially early stage platforms) are even supposed to do.

I'm coming to the realization that I just need to bite the bullet, and get one of those "cybersecurity" degrees. The classes are remote, self-paced, easy to earn a 4.0 GPA, and so heavily subsidized by the company as to be nearly free, so I might as well.

I'm also making my way through the various certifications they seem to value (CISSP, PMP, CEH (Certified Ethical Hacker)).

They're very easy to pick up. I just took a CISSP practice exam for the first time and passed quite easily.


Well, if that's the case, then times are changing ;) I study at the VU University of Amsterdam and took a couple of courses from the security master track. I had to use libnet to create my own custom TCP/IP packets and libpcap to listen to incoming packets (with tcpdump for debugging).

Obviously, I did SQL injection, cross-site request forgery, and cross-site scripting, and debugged malware with IDA Pro and later (when it was safe) with GDB. Did I mention the concolic execution framework Triton, which I used in combination with PIN to gather formulas for an SMT solver so we could crack the code of a binary? We did more things; I remember taint analysis with a PIN tool being one of them. Well, you get the point.

I don't know if we're a special snowflake. I like to think we aren't, but the way the courses have been taught to me is that writing code, breaking things and debugging (lots of it) are the norm in security. Smart conceptual thinking helps sometimes as well. So I have this (untested) view of the world that other security master's degrees also let you code in C, Python and whatever else you need to analyze or break into a system. It would frankly be catastrophic if this isn't the case.

For me it was also a really interesting introduction to C! :) My main languages are Java, Python and Objective-C (where everything is a pointer, so I got away with not understanding it).

My only criticism is that some situations seemed a bit contrived, but on the other hand it improves practice because you know what to focus on. Another was that some parts were purely theoretical and had no practical assignment (e.g. return oriented programming).

Cool website to learn a couple of things more: https://www.win.tue.nl/~aeb/linux/hh/hh.html


Yes, times are changing. These universities offer far less while churning out master's degrees. No company I have ever worked for in this area had cybersecurity degree holders who could code. Ever.

I work for the National Satellite Operations Facility (NSOF).

Neither my boss nor the three dozen or so "cybersecurity" folks with whom I've worked have ever written a line of code.

Also, everything you said in the first two paragraphs I either already knew from actual networking security courses or find trivial. SQL injection, CSRF, and dynamic executable analysis are unfortunately... not the most modern techniques... Might as well teach naked buffer overflows.

That said, it's a decent basis. And far far better than what's emphasized on this side of the pond (Mostly policy and automated scanners.)

If what you're saying is truly reflective of "cyber" degrees across the EU, here it's far worse.

It's very frustrating to explain a DNS tunnel to these people, much less how to find a zero day in Cisco IOS.

Occasionally you'll get a hobbyist with some Python or C under their belts, but the vast majority of these "cyber" guys really should just read a book on Kali Linux and sing campfire songs in class for the rest of the year.

Right now these degree programs are turning out graduates that think running Nessus makes them a security god.

Things may well be different in Europe (I'm not going to lie, the Triton and PIN projects sound pretty cool), but it's a nightmare over here.

If you're ever in DC, look me up, we'll have lunch, I'll give you a tour of the facility, and I can introduce you to the cyber guys, and let you draw your own conclusions.

That's a pretty open invitation by the way. Any devs, start up folks or "cyber" people (that won't freak out my girlfriend) are welcome to crash in our guest room for a night or two, if you're in DC for a conference or something.


I have 40 years' experience in development, systems, and security. The real problem comes from the top. Upper management - the CEO, CIO, CFO - understand risk but don't understand technology. To them, security holes are just bugs in "the code" or product liabilities in products used.

More than once I showed management they had a life-ending bug (Little Johnny drop tables) and was fired for being the messenger.


Good article, I can't help but think this is the real problem though:

> and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it".

Maybe rather than saying "just do it", you could say "Any user can delete our entire database and steal all of our data". I think Bob would understand why this is a bit more important than his current tickets, and by not doing it once told about it, any fallout would be his problem.

He might hate you for giving him the task but it would be done.


> you could say "Any user can delete our entire database and steal all of our data".

See, the problem is that all/most vulnerabilities wind up with this sort of description, which can lead to Bob building up an immunity to it.


Not really though, some do and they should be patched, but only a small subset of vulnerabilities end up with a complete backend compromise.

Things are fucked, but not that fucked.


My few brushes with Infosec folks makes me think there's a general socialization problem with the field, not really just a "jerk" problem.

For example (and I know it's still a fairly young field), simple activities like cataloging and sharing knowledge learned so far seem to be something that just doesn't get done. So the wheel gets reinvented over and over and over again. I'm actually kind of amazed at how many half-assed log parsing/visualization/analysis tools there are. Doesn't anybody in infosec centralize knowledge, other than adversaries?

The feel I get from infosec conferences is more of a dick-waving contest and tribal activity than an honest information-sharing activity.


As others have said, this is a good article.

I'd like to comment on a line that literally changed my life, from my favorite https://en.wikipedia.org/wiki/Vorlon, Kosh:

"Understanding is a three-edged sword: our side, their side, and the truth."

I think about that every single day. Speaking for myself, a lot of the other suggestions in the article naturally flow out of embracing this perspective.

I could literally go on and on about this, but I'll leave it be.


> Never attribute to incompetence that which can be explained by differing incentive structures.

Oh, man, this is such a deeply, vitally true thing. It's something I've observed for a long time but never quite put it into words.

So often, when I see some person acting in a way that seems totally stupid to me, it turns out they were just operating with goals and constraints that weren't obvious and were different from mine.


I was in IT security for many years and got out largely because of the sheer attitudes of many of the people I encountered. Some of them were genuinely very nice, very smart people, but these gentlemen were few and far between. Sadly.

I started off as an abuse investigator for a very large East Coast ISP, moved into firewalls, then penetration testing once I knew what I was doing. All along this path, the vast majority of guys I worked with, for, and around (on contracts) were sheer jerks of the highest order. Everyone seemed to be angry, upset, on edge. I can understand this. IT security--especially abuse investigations--necessitates seeing the seedy side of the Internet. Ditto IT security in general. You're not working for or with people who are "creatives". You're working for and with people whose sole job is to minimize threats and vulnerabilities.

The most stressful time was being a firewall engineer. Having to deal--on the fly--with impatient customers wanting a six-spoke VPN "right now!" and them being jerks about it was annoying.

I hated the calls that involved my placing a "tap" on some poor sod whose boss wanted an 8-hour tcpdump on him to see where he went when online. They wanted a gzipped log file placed on the server where they could SSH in and retrieve it.

I hated the double standards where executives were given static IP addresses on their machines and special rules created to allow them carte blanche access to the Internet at large--no filtering. They also avoided the proxy. These special executives were &^%#$@ and everyone knew it.

When you spend your life seeing nothing but evil, you take on a different view of life and the world around you. I saw this happening and I got out in favor of being a sysadmin. I'm much happier, and the IT security roles I do have are much smaller in scope and I know how to handle them quickly and quietly.


If Bob gets in trouble with his manager for not prioritizing his features over security work, and Alice gets in trouble with her manager for letting security issues happen, then it sounds like an issue of management not working together to set aside time for security as part of the product's development cycle.


This is a big reason I stayed away from infosec, although it fascinates me in many aspects.

IMO, security orgs, especially within companies, have this awful reputation because they borrowed the credentialing model of project managers and walk around like doctors with an alphabet soup of certifications. Talking to these folks is like crossing a circa-1998 MCSE with a lawyer.

The CPE requirements from CISSP and other certs encourage publishing and delivering talks, which is good -- if the authors/talkers are competent. The side effect within enterprises and some vendors is you develop a cadre of bullshit artists who are great at throwing spears, not so great at doing anything of value.


Security professionals should practice avoiding the word "no". Instead, help explain what controls can be implemented to reduce risk. Encourage a sense of camaraderie with system owners and develop strong communication skills with everyone.

Help them to be excited to work with you, rather than to turn the other way when they see you in the hallway.

The mentioned topic of "Recalibrate 'urgent'" is a great message. Organizations and the general public are rapidly becoming desensitized to security risk. As breaches and incidents continue to mount, we'll reach a "who cares?" level of apathy from the public.

What can they do? Steal my CC? Already been stolen. Steal my SSN? Already done.

What's left to fear when privacy is gone?

What really keeps me up at night is when security incidents are measured in lives. Scary...


Yeah, they definitely are.

I reported a major issue with Comcast to an indirect associate of mine who's huge into security. I wanted contacts with folks who knew how to disclose it properly, since I wasn't getting much help from Comcast. He pretty much shrugged me off because I don't have a traditional "infosec personality". A day later, I got hold of some pretty high-ups at Comcast who actually fixed it. Their engineers were completely blown away by the issue, and it sounds like it might work out pretty well for me in the end. He wasn't the only one who shrugged me off, but if I'd gotten help from any of them, it might've gone well for them, too. The exclusionary attitude is pretty ridiculous.

I think a lot of the problem is that their threat models tend to cause them to become pretty reclusive, so that they don't really want to trust "newbs". Problem is that it now feels like a paranoid echo chamber.

I commented here a while back about working with Northwestern's security team to try and protect my own HIPAA data. NW's response to my disclosure was "We're already doing scans. Report back when you find something serious." It's fucking sad that security's treated this way on many fronts. If you're not on the "in", you're considered a script kiddie; if you're on the "enterprise in", you're useless and completely caught in red tape; and if you're in the "in crowd", you're part of an echo chamber.

Just yesterday, I was explaining my reasoning for wanting to make a LAN party as insecure as possible, to promote openness rather than exclusion, since any extreme security measure would keep people from coming back. The not-so-fine print would be, "Hey guys, your stuff is insecure. This is intentional. Please treat it as such." His recommendation was that for each different game (of maybe 30), you disconnect and re-request a DHCP lease on the VLAN hosting the game you want to play. Playing two games at once wouldn't be allowed. Silliness.


Jerks are everywhere.

A quick read of an article like "Rise of the Cypherpunk" is a reminder of how many awesome people there are in cyber security: https://news.ycombinator.com/item?id=11465203


Tribalism is really hurting the progress that security folks could be making. Development and operations are starting to collaborate and make huge gains in productivity. Rebranding security as a component of quality can help.

Coming into meetings with other teams with a list of (often unfounded) assumptions does not help anyone. I wrote about this a bit last year: https://blog.conjur.net/devops-and-security-the-five-monkeys


Technical infosec guys can fix vulnerable dependencies themselves and use the tests Bob writes (right?) to make sure they don't break anything. They only need to bug Bob when there's a breaking change, and then they prioritize it with Bob's PM.

Also, passwords are dumb - Bob should use certs and/or SSH keys, plus 2FA, to access anything.
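A minimal sshd_config sketch of that policy (stock OpenSSH directives; the second factor is assumed to come from a PAM OTP module, which is a setup detail I'm glossing over):

    # keys only, no passwords
    PasswordAuthentication no
    PubkeyAuthentication yes
    # require the key AND a PAM-driven second factor
    UsePAM yes
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive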

The point is that any security that hinges on hassling Bob is likely bad security.


The article is fairly similar to my experience: infosec uses up all its goodwill by appearing inflexible on everything before we get to any important points. One of my (admittedly minor) examples: insisting that password dialogue boxes be obscured in all situations, even in a private office, and even when it breaks assistive input features and affordances.


InfoSec: "There is a vulnerability."

me: "PR or GTFO."

The problem described is that ISOs are professional nags instead of software shippers.


That only works if your application is simple enough for the infosec folks to know how to write a patch for it.

If the organization has 200 of those apps, it's unlikely a cross-cutting security team will be able to write patches for every one of them.


Not patches: bug reports, with enough information in them that the problem can be reproduced and prioritized as part of the normal development process.
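Something like this, say (a made-up example, every detail invented):

    Title:     SQL injection in /api/search via the `q` parameter
    Severity:  High; reads arbitrary rows as the app's DB user
    Repro:     send q = ' OR 1=1 -- and observe extra rows in the response
    Expected:  malicious input returns no extra rows
    Suggested: bind `q` as a query parameter instead of concatenating it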


Whither Agile? Cross-functional teams, sitting together.

This is The Reason (tm) :)


This is an excellent article.

> Hanlon’s Razor says “Never attribute to malice that which can be adequately explained by incompetence,” but I would add, “Never attribute to incompetence that which can be explained by differing incentive structures.”

That's profound & pretty thought-provoking.

Ternus's Razor, anyone?


Until a security incident occurs, the vast majority of people, be they developers, admins, management, or senior management, view security threats as something that only happens to other people. "Why do we have to patch this? We're behind firewall X and firewall Y; we could NEVER be compromised." ... yeah.


"The Truth" is, it's probably not that big a deal. No matter how stringent you are, there are probably still vulnerabilities. And just like everything else, if there's a problem, you work through it. And likely remain unscathed.


I don't know about infosec people, but computer security hackers are the biggest trolls and assholes I've ever encountered in tech. Their whole collective community is fueled by ego. (Source: 12 years in the hacker community)


> Put bluntly: to others, we’re jerks.

> If you don’t think this is a problem, you can stop reading here.

Thanks for saving me the time


>Put bluntly: to others, we’re jerks. If you don’t think this is a problem, you can stop reading here.

Well, that's a great way to have an honest dialogue.

> You fume for several minutes, cursing all developers everywhere, but no response is forthcoming. Angrily, you stand up and march over to his cube, ready to give him a piece of your mind.

At this point, all you've done in response to finding a serious security issue is to send an email with a very poorly worded and vague title. Why are you getting upset that nobody has reacted to it within a few minutes? I'm also curious as to why Bob would think this email to be unimportant, does InfoSec just use one email subject for everything?

If something's important, people generally turn to synchronous communications, where we can verify that our audience has processed whatever it is we need to tell them. Async communication works just fine on smaller time slices, like chat/IM, but email overall tends to have relatively high latency, especially if you need to communicate a serious security issue.

> Many in the Infosec community are fond of casting the security world as “us versus them,”...

Wait. Is this a joke? Isn't "us versus them" the canonical wrong way to frame just about anything? At the very least, it's obviously the wrong way to approach InfoSec, in addition to virtually any other collaborative effort. This should be obvious if only because you need to work with other people. Conflict does not cooperation make.

> ...he gets lots of “urgent” security emails that turn out to be Windows patches, admonitions to change his password, policy reminders and so on.

That right there is entirely on InfoSec. They're not only boy-who-cried-wolfing, they're doing so with vague subject titles. But that's okay, InfoSec is going to get very upset anyway, because they've failed to properly communicate the severity of the situation, and people are acting accordingly.

> ...and Bob’s demand that you explain the vulnerability is met with your impatient demand to “just do it.”

Ouch. Why does InfoSec not want to share the wonders of 0days? Seriously, one of the most fun things to do is dissect, or read others' dissections of, an 0day. This is great knowledge to share, and I would applaud any developer who takes an interest in security by wishing to understand what security issues are, especially 0days.

Additionally, rejecting someone's request for additional information about a task you've given them is almost universally a bad thing to do. If it's not feasible to grant them the information they desire, it's on you to properly communicate that, don't just rebuff their request. Transparency is great for teamwork.

> Bob... can’t deal with this now, he’s too busy, it’s not his problem (there are other devs, right?) and you should take it up with his manager.

I actually think this is valid. If a developer honestly feels too busy, going to their manager (who should be in the loop for security issues anyway) seems like a reasonable escalation. If it's actually an urgent issue, it should be just as valid to disrupt the manager's day with it as it is to disrupt the developer's day.

It seems like it's the same response a developer would give to someone freaking out about a serious bug in their code. If it's a serious issue (developer, for whatever reasons, is unconvinced) then you should take it up with their manager. That's not to say the developer is correct in being unconvinced, but arguing that route is less timely and less likely to actually work.

> The jaundiced attitude among Infosec mentioned above...

When I first read the article, I thought the author was knowingly straw-manning, hence their opening warning about jerks. This seems to indicate otherwise, as the author is seriously referencing their story as something remotely realistic. Either the author's story was a terrible straw man, or I'm very ignorant of how unprofessional my professional compatriots are.

Regardless, the author lays out the solutions:

> Practice active kindness.

That's horoscope-level advice. It is good to be nice, but it's not exactly feasible to do all the time with everyone, or we'd have solved a great many of society's problems long ago. Generally speaking, being nice requires some emotional effort, not everybody has the same capacity for it, and those who can afford to do it probably already do. Although I suppose I could believe that an adult capable of being nice all the time simply isn't doing so because nobody ever suggested it...

> Seek to understand and make this clear.

Always great advice, like being kind, but far more actionable and, sadly, far more often needed. Yes, communication is critical when working with others, especially when attempting to delegate tasks. This should be obvious, but I've found many people don't take it seriously. If you need something done, it's on you to ensure whoever you delegate the task to understands it at least as well as you do. You can't fault them for your inability to communicate properly.

One of the saddest things to see is when two or more parties get upset at their own failures at communicating. For example: Bob shouts across the office "Hey Alice, do X" but Alice is listening to music and doesn't hear. Some time passes, then Bob gets upset that Alice has not done X, and Alice gets upset at Bob for being upset that Alice has not done X. Now we've got two parties, both upset over an unfortunate circumstance, with no resolution in sight. Had Bob attempted to confirm his communication, this whole situation could have been avoided. Alice could also do her part by not being upset by Bob's failure at communicating, but emotions are fickle and it's hard to defend yourself stoically when your attackers are fuming with emotions.

Additionally, had Bob confirmed his communication, in the event Alice had still not performed X, Bob can escalate to whatever authority is appropriate with the evidence of Alice's understanding of his request, increasing the odds for meaningful resolution (at least from Bob's perspective).

> Be flexible. Recalibrate “urgent.”

The boy who cried wolf.

> Create stakeholders...

I'd be a little concerned with teams arbitrarily deciding their security goals. They should 100% be involved in the process, but leaving them entirely to their own devices would incentivize them to have terrible security, as that's generally the easiest thing to do.

>... and spread security knowledge.

This is good advice, and it's sad to think it's needed. If you're ever in a situation where you need to communicate a task to another person, you should be more than willing to share information about that task to aid the delegate's efforts. It's an obviously useful thing to do, and it's sad to imagine adults making it through life, surely having had many tasks delegated to them by this point, still not understanding how valuable additional information about a task can be.

I would be highly concerned if my coworkers did not already understand this. While we all can't have our dream jobs/offices/employers, compromising on communication abilities is pretty much always going to have both bad and unpredictable consequences (you can't easily predict how someone's going to react to (mis)information from poor communication).

> Fixing Infosec’s jerk problem benefits everyone: us, the people we deal with, and ultimately the security of the system — and since that’s our long-term goal, we should actively seek to fix the problem.

I think the easy solution here is to fire those jerks. Seriously. Talk to them about things, ask them why they're doing what they're doing, but this isn't a systemic issue. This is a personal issue. People being jerks can (and will) happen anywhere people are present, and the solution isn't to group everyone into a large category and then proclaim it a categorical issue that they all must work to resolve.

Find the bad apples and deal with them as you would in any other situation. Communicate to them that what they're doing is ineffective both technically (security issues are not being fixed in a timely manner) and personally (they're failing to communicate on multiple levels and upsetting people). Ensure they understand that their actions are not desirable and are creating a hostile workplace (I never thought I'd say that non-sarcastically...). If they keep doing what they're doing, let them go. If they harbor animosity towards being repressed, let them go.

There's a wealth of wonderful people in the world; seek them out instead. Be selective: it doesn't take a large security team to be effective, especially when developers are a part of the security effort (which they should be !!!!11).


There is a fundamental conflict of interest between security people and developers/owners/the general public. Infosec isn't interested in curing fundamental IT insecurity by, say, using safe languages (like Rust, or something JVM-based, or maybe JavaScript, which is also a safe GCed language) for application development and using safe OSes that aren't built around 70s-era state of the art (ugh... really no production-ready examples here). Instead, the infosec community - both "blackhat" attackers and "whitehat" protectors - profits from business as usual: a never-ending stream of zero-days, CVEs, buffer overflows, and side-channel attacks. It's not in their true interest to kill their cash cow.

Remember: If we, developers, used modern safe programming technologies, such as safe languages and OSes built around capability-based security, 99% of security exploits wouldn't even exist.
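To make that concrete with a toy example (Python standing in for any bounds-checked language; nothing subtle here):

    # the same off-by-N mistake that yields a heap overflow in C is a
    # contained, diagnosable error in a safe language
    buf = bytearray(8)
    try:
        b = buf[12]          # out-of-bounds read
    except IndexError as e:
        print("caught:", e)  # no memory corruption, no exploit primitive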


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=11596004 and marked it off-topic.


I have read through every comment here, and fail to see a lack of self-awareness in any of them. Perhaps you are seeing things from a perspective I have not considered...or you are boldly trolling from a throwaway account. If it is the former, please elaborate.


I suspect it has something to do with tptacek's very strongly pro-US-state political views and his dismissive attitude to people who don't share those views, as demonstrated in the above comment.


First of all, this site is like a study in the fundamental attribution error. When I read comments like this, I'm certain the person writing them has no idea what I believe, but has somehow managed to develop an entitlement to that knowledge because we share a message board. Unable to discern what it is I actually believe, you instead try to interpolate it from random comments on HN. Spoiler: it's not working. I'm especially amused by how many different political villains I've managed to be cast as. Am I a libertarian? Am I a statist? Am I a right-winger? Am I an SJW? Depends on the thread!

Stop doing that.

Second, the #1/#2 distinction has nothing to do with the US government. Overwhelmingly, the #1's I've interacted with have cast "criminals" in the "bad guy" role, not NSA. The point isn't who the adversary is, it's that they're animated solely by the idea that they're in a real struggle with some kind of adversary in the first place.


Do go on.


If that poster responded to every comment that showed a lack of self-awareness with the comment "The lack of self awareness here is breathtaking" he'd create a bunch of threads that go on forever.


Well it's not like you divided it up like this:

1. Lazy people

2. Not lazy people.

Haven't you encountered those 1. types that profit by passing on security rumors and pushing papers instead of examining software?

There was a nice presentation by some 2. types that recommended using delimiters in your network protocols instead of [length] data. But I can't find it yet.
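The gist of the two framings, in toy Python (my sketch, not from the talk):

    import struct

    # length-prefixed framing: 4-byte big-endian length, then payload
    def frame_length_prefixed(payload: bytes) -> bytes:
        return struct.pack(">I", len(payload)) + payload

    # delimiter framing: escape the delimiter, terminate with a newline
    def frame_delimited(payload: bytes) -> bytes:
        return payload.replace(b"\\", b"\\\\").replace(b"\n", b"\\n") + b"\n"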


[flagged]


Just remember that "everyone" includes the infosec people themselves.


Exactly. Many people wouldn't react so negatively to "yet another request from security" if they hadn't experienced a prior river of over-escalated BS requests from security, or a string of purchased-and-never-used security "solutions" that were bought at high cost, installed with great effort, and then never looked at again.

There are no doubt a lot of smart security people with great judgment in the industry. Unfortunately, my experience is that such people are FAR from the majority. (This is true for other non-security segments as well, of course.)


"And I'm sure some of them are good people."

Yet another SQL injection, but ya, Infosec's the problem.


> pants on head retarded

Stop that.


[flagged]


[flagged]


[flagged]


Please stop posting unsubstantive comments here.



