
Infosec's Jerk Problem (2013) - rfreytag
http://adversari.es/blog/2013/06/19/cant-we-all-just-get-along/
======
forgottenpass
One of the root causes seems to be that everyone with the aptitude for
security crowds toward jobs that don't actually involve implementing good
security. It's not as fun to be a developer who is really into security but
only has that as part of the job. Even if it's all you do, if your days are
just "analyze, document, harden, repeat," that's a lot less fun than getting
paid to pop boxes. I know, because that's what I do.

That disinterest is the entire raison d'être of I Am The Cavalry. From their
webpage:

    
    
        No one is rising to meet these challenges. The cavalry 
        isn’t coming. We are the domain experts and we are the 
        adults in the room. It falls to us. We Are The Cavalry.
    

The less-charitable interpretation of this is "Hey dummies! If we all want to
play red-team, who the fuck do we think is left over to fix things?"

I went to a two-hour presentation at $serious_security_conference on
$my_product_domain. I was really excited to see what there was for us. We're
not running webapps on racked pizza boxes, so there are a lot of topics in our
systems that aren't really explored in public security literature. Instead, all
I got was some dummy thinking he's hot shit because he found some vulnerable
systems on Shodan. I got angry enough to walk out, and then went back thinking
there was probably some value to be had. 3 times.

~~~
mjolk
>A root cause seems to be that everyone with the aptitude for security crowds
toward jobs that don't actually involve implementing good security.

To expand on this, I've also noticed that "security" in enterprise-size
companies tends to be a dumping ground for helpdesk+ staff -- people who can
read a CVE or parrot "best practice" but can't really grok the subject or think
critically about it (e.g. jumping from one password to another every 3 months
isn't "more secure"). It's been my experience that people in security teams
can't implement the fixes to the things they complain about, because if you're
a security-minded dev, your career path will be in dev.

>It's not as fun to be a developer that is really into security but only have
that be part of your job.

I think it would be ideal to have security as a parallel track to regular dev
staff -- with their placement in the ecosystem being between dev and QA. Much
like how "devops" was used as a hiring filter to have sysadmins that can at
least read code, I think we should have a "devsec" group that bridges
security-QA and dev.

~~~
ergothus
> people that can read a CVE or parrot "best practice", but not really grok
> the subject or think critically about it (e.g. jumping from one password to
> another every 3 months isn't "more secure")

So much this.

One place I worked had a head of security who switched us to passwords that
auto-expired every 30 days, had annoying complexity requirements (a mix of
upper, lower, numbers, and special characters), and couldn't be reused for a
year. The resulting increase in people having problems was handled by adding
"security questions" and self-service password resets.

My insistence that "security questions" were just unexpiring, easy-to-guess
passwords was met with "But this is the standard!". Pointing out that rapid
turnover of hard-to-remember passwords led people to write them down generated
a similar reaction.
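The point about security questions can be put in rough numbers. A back-of-envelope sketch (the pool sizes below are illustrative assumptions, not measurements of any real policy): even a "complex" rotating password has a far larger guess space than the static, often public-record answers that can reset it.

```python
import math

# Illustrative guess-space comparison: an 8-character "complex" password
# drawn from roughly 72 symbols (upper + lower + digits + specials),
# versus typical security-question answers drawn from small pools.
password_space = 72 ** 8        # the thing that expires every 30 days
pet_names = 10_000              # assumed pool of common pet names
mothers_maiden = 100_000        # assumed pool of common surnames

# Entropy in bits = log2 of the guess space.
print(f"password:        {math.log2(password_space):5.1f} bits")
print(f"pet's name:      {math.log2(pet_names):5.1f} bits")
print(f"mother's maiden: {math.log2(mothers_maiden):5.1f} bits")
```

Whatever the exact pool sizes, the gap is tens of bits: the "reset" path is the weakest credential, and it never expires.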

------
tptacek
I carve up this problem differently. To me, the field can be divided into two
basic categories of people:

1. People who are into security to prosecute some immortal struggle between
good and evil.

2. People who are into security because of the engineering challenge.

It's the people in group (1) that I tend to have a problem with. Often, for
the "good guys" security professionals, engineering facts are just a means to
the end of winning the war on the "bad guys". These are the people who tell
you your Scala serverside application is insecure because of some recently-
released Java applet bug; they're the people who fought against DNS query
randomization for a decade because we need to finally release DNSSEC; they're
the ones shipping grievously broken cryptography to try to "stop the NSA".

There are jerks in both categories, and competitiveness is a dimension
orthogonal to this one, but I find I can handle jerkiness and competition when
I know it's from someone who truly cares about understanding what they're
talking about.

~~~
secfirstmd
One of the worst is the cyber-security political purist: the person who has
drunk deeply of the kool-aid and/or likes to use their knowledge to beat
others round the head with it. The person who likes to talk about things like
"risk/threat modelling" but doesn't actually:

a) Have a real understanding of the needs of the people they say they are
trying to help

b) Have the courage to make tough calls related to ideal vs good security.

Working in the NGO security space we get these people all the time. Send them
out in front of a bunch of African/Middle Eastern/Asian/Russian activists who
have been risking their lives for years for democracy, and all they do is
spend three days showing off, berating people, and then leaving. Telling
everyone that TAILS/PGP/Linux/TrueCrypt etc. is for everyone and that
anything else "is gonna get people killed," so by extension shouldn't be done.
There are people who have made entire careers around taking that attitude,
which most of the time actually leads to demoralisation and a long-term
decrease in security.

It's easy to recommend the hard "cover your ass" stuff because you don't like
the NSA, it's harder to say "Hmm, ok we gotta assume some risk that the NSA
doesn't care what we're looking at and use a solid Google two-factor here and
see how we get on...now how do we stop the security guard from selling you
out?"

~~~
coredog64
This. If the Mossad/NSA/PLA thinks there is anything of value on our network,
they either already have it or could trivially get it. It wouldn't be the end
of the world if they had it, so I don't really care.

~~~
secfirstmd
Yep, within reason - I don't buy into the "I have nothing to hide, so I have
nothing to worry about" argument. People like Trump or UKIP should remind us
of the danger of that. Though we should remember that, in a realpolitik world,
the long-term interest of the NSA -> US Gov -> West etc. is in supporting
democracy activists, for example. It's just a pity that the sensationalism of
short-term "counter-terrorism" interests actually damages the long game.

On a sidenote, as a digital and physical security training company for NGOs we
manage to get a look at cases from both sides of the coin. Our very very rough
guesstimate is that we see confirmed human penetration about three times more
often than digital penetration. Of course, this is very rough and has soooooo
many other bias factors at play (numerical, cultural, how many we see vs.
don't see, etc.). But I think it is a point we keep having to reinforce. Too
often powerful "infosec jerks" distort the focus towards Western biases
because of Snowden, Facebook, SnapChat, and the iPhone, and this distracts
time, money, energy, training, and security measures from the human
penetration aspect of things - which is very common in the developing world.

~~~
chris_wot
So I had to read that last paragraph twice. Could you explain what you mean by
"human penetration"? And no, I'm not being dirty (though my mind did initially
do a few mental flips when I first read that phrase, it's not my fault I never
completely matured...) I'm genuinely asking what is meant by that. Do you mean
that someone walks in and attaches a serial cable to a router and their
laptop, or plugs in a USB stick into an unlocked workstation?

~~~
secfirstmd
No, at its most basic level I mean a spy or insider threat.

In the NGO contexts that I have seen, that usually means someone who
legitimately works in an organisation but turns for the standard reasons.
(More effective, faster, and cheaper for an adversary that way.)

To a slightly lesser extent, it means someone from the outside who has been
placed on the inside. (Less effective, slower, and more expensive for an
adversary that way.)

Sometimes both of these scenarios also include digital aspects, like stealing
a USB drive or something but not always.

Before you ask why people do it in an NGO environment - fairly similar
reasons as elsewhere (though I tend to order them differently based on
experience):

The classic US counter-intelligence motives (MICE):

-Money

-Ideology

-Compromise or Coercion

-Ego or Extortion

or these days (RASCLS):

-Reciprocation

-Authority

-Scarcity

-Commitment

-Consistency

-Liking

-Social Proof

A good read for more info here: [https://www.cia.gov/library/center-for-the-study-of-intellig...](https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol.-57-no.-1-a/vol.-57-no.-1-a-pdfs/Burkett-MICE%20to%20RASCALS.pdf)

~~~
chris_wot
Well that's scary as all fuck! I didn't realise NGOs were as susceptible to
this sort of thing as commercial enterprises. I guess I was being naive and
should have known better.

Thanks for the insights.

~~~
secfirstmd
Honestly, in many cases NGOs actually have a far higher physical and digital
security threat environment than corporations. Partly that's one of the things
I love about my job.

I mean, yeah, it's cool if you can get paid loads of money for a 9-5 job to
throw a ton of resources and people at protecting your Pied Piper software
company in suburban USA or Europe... But now take the exact same adversary
(the Chinese government, for example) and try to think up ways to minimise
their threats, all while driving around in the middle of the night in darkest
Africa/Middle East/Asia with a hobbyist sysadmin (who the local government may
arrest, torture, or disappear if exposed) with very little English... The pay
is crap or non-existent and it can be high stress, but you get to make a real
difference, which is rewarding.

------
mcguire
Good article (although I thought it was going to have something to do with the
grsecurity thing), but it doesn't mention the _other side_ of the problem:

" _To them, we’re Chicken Little crossed with the IRS crossed with their least
favorite elementary-school teacher: always yelling about the sky falling,
demanding unquestioning obedience to a laundry list of arcane, seemingly
arbitrary rules (password complexity requirements, anyone?) that seem of
little consequence, and condescendingly remonstrating anyone who steps out of
line._ "

That is often true.

It might help if security knew what the hell we did. The last three "Java is
broken forever!!!" issues, when the problem was with _applets_, nearly killed
me. Or possibly if they knew _their job_. My last employer's security policy
was defined by the latest vendor tool they bought, never mind that it couldn't
possibly find any issues in the applications we were writing.

Of course there was the code review I did where I pointed out that it had a
giant, monstrous hole in it. The other dev nodded and said, "yeah, that's a
good point". But the app had to be in production now, so nothing happened. To
escalate, I would have had to go to security, which would be like complaining
about a hangnail to an axe murderer. So, nope.

------
binarymax
The (un)funny thing is, most developers would love to have the time to make
sure their code is secure and well tested. Very often they lack a voice with
product stakeholders to get time away from feature development and to make
sure their software is up to date with patches.

> _Practice active kindness. Go out of your way to do kind things for people,
> especially people who may not deserve it. If you wait for them to make the
> first move, you’ll be waiting a while — but extend a hand to someone who
> expects a kick in the teeth and watch as you gain a new friend. Smile._

I really like this quote. A security engineer and a developer teaming up as
colleagues are more likely to be taken seriously by stakeholders. Both teams
working together have a much better chance of being given the time needed to
make sure their software is stable and secure.

~~~
sbov
Until companies start being held liable for their software deficiencies there
won't be a change. This is also why I find "Software Engineering" a joke. The
equivalent of what passes for Software Engineering, in any other engineering
field, would put people in prison.

~~~
kazinator
That's hardly the case. A lot of what passes for software engineering also
passes for other engineering.

What we actually see is a lot of apologizing, recalls and class-action suit
settlements, and nobody actually seems to go to jail.

Not all engineering is about bridges not collapsing; conversely, there is some
software that is equally safety-critical and carefully developed.

There is also "everyday engineering", like in consumer products. That's a
category that fails miserably. Put simply, shit breaks. Past the one year
warranty? Too bad!

------
chris_wot
If you think Infosec guys are jerks, I've met a group of even bigger jerks in
some organizations: developers!

This isn't always the case, but in some software companies the developers
capture the entire organization and run roughshod over everyone. I've seen
developers tell support folks that because they are in support, they are
utterly worthless and their opinions and views don't count and never will,
right after asking for their opinion. I've seen them obstruct QA people. Not
arsehole, stick-it-to-you QA folk, but quiet, unassuming QA guys doing manual
test scripts who find a failure and send the case back to development.

Yup, having one-eyed folk around can really cause a toxic environment. Of
course, I have to be careful, 6 years ago _I_ was often that toxic one-eyed
person, and I learned this the hard way.

Don't do that. Don't be me. Be nice and be willing to concede that the other
person has good intentions. You'll be less angry, more likeable and probably
more productive.

------
jvehent
I've seen this happen way too often at various organizations. My takeaway is
that security is a function of the product, not some external process you
bolt on top of it.

Organizations typically fail at security because they try to manage it as far
away from the product as possible. That leads security and dev/ops teams to
pursue their goals in isolation: one cares about preventing incidents, the
other cares about shipping products.

If Agile & DevOps have taught us anything, it's that everyone in the
organization should be focused on serving the customer, be it through
features, reliable operations, or data security. The only good way to do
infosec is to embed security with devs and ops and make sure everyone shares
the goal of making the product better.

~~~
forgottenpass
I don't think the problem is specialization, just that the incentives mean
that the specialization doesn't always get applied where it's needed.

Presume a company with the budget to have a SOC: they're doing all the
"regular" security jazz and then some. But are they auditing the network
services they themselves run, or just applying patches? Auditing the products
the company itself is producing, or just kicking the tyres? But they MITM all
the outbound https connections, so I'm sure that more than makes up for it.

------
red_admiral
I read the first lines and thought immediately of all those e-mails marked
IMPORTANT coming from "my bank" that request I immediately enter my username
and password somewhere for "security".

Teaching blind compliance with any (unauthenticated) request based on
"security" is the one way we could make the situation even worse.

~~~
HCIdivision17
It's a pretty big deal. When everything starts at urgent and gets worse from
there, people will just rescale the noise to be more understandable. It's just
like the joke about sitting down and assigning points to a task and finding
out everything is 100 or all bugs are critical or all tasks are top priority.

The meta joke here is that in some ways every security issue _is_ critical,
but if everyone is immune to the fear, then escalation will feel like the only
choice. Either you slowly discharge that stress immunity by being sensibly
mellow, or you start packing heat to 'convince' everyone to reboot _NOW_.
(There's probably room for _some_ amount of finesse between these two
extremes.)

EDIT: There are two terrible responses that seem to come out of this. Either
no one gives a damn, or no one gives a damn and just does whatever you say.
The first is bad because nothing gets fixed, and the second is bad for
red_admiral's reasons (users treat anything that looks like a security rant
as a EULA and just do whatever it says).

~~~
herge
> all tasks are top priority

I often joke with my boss that if all tasks are high priority, all tasks are
therefore of average priority.

------
danielweber
After nearly 20 years in various kinds of infosec companies, I've seen
companies where jerks are not common, and companies where the jerks are very
common.

You cannot fix the jerks in the second kind of company. Learn from them,
quietly, and look for your exit.

Some people think that there is a correlation between being really good and
being a jerk. Only in the sense that the only way a jerk can survive is if
they are very very good. Some other people try to pattern-match them.
Mimicking the "jerk" part is easy. Mimicking the "very very good" part is not.
(See also: making a name for yourself by being a jerk is a lot easier than
making a name for yourself by being really good.)

There are good people who are not jerks. I've worked with a bunch of them.

~~~
fvold
I've worked with such a mimic. He did a security audit, then scrapped the
whole system because there was a completely unfiltered network interface, and
no matter what other security was in place that would never ever be good
enough on a GNU/Linux system. "All interfaces must have strict filtration,
only allowing traffic on previously approved ports as per system and
application specifications."

The interface in question was called "lo".

I don't work there any more.
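For anyone outside the joke: `lo` is the loopback interface, which carries traffic between local services over 127.0.0.1 and never touches the wire. A minimal sketch of the kind of everyday local IPC that "strict filtration, only previously approved ports" on `lo` would break:

```python
import socket
import threading

# Two local processes (here, a thread standing in for a local service)
# talking over 127.0.0.1 -- exactly the traffic the auditor wanted filtered.

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"hello from localhost")

# Bind to loopback on an ephemeral port (the kernel picks a free one).
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# The "client" side: databases, caches, web proxies, X11, DNS stub
# resolvers, and countless daemons make connections exactly like this.
with socket.create_connection(("127.0.0.1", port)) as client:
    reply = client.recv(1024)

print(reply.decode())
```

None of these packets ever leave the host, which is why standard firewall policies accept everything on loopback before filtering real interfaces.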

~~~
NovaS1X
>The interface in question was called "lo".

Ok that cracked me up. Happy Friday.

------
s_q_b
What is extremely frustrating is the rise of "cyber security" Masters degrees.

The vast majority of these people have never written a single line of code.

They don't understand security, because they can't understand the underlying
logic in the code. They just write documentation to meet certain outside
standards, and have no idea what I'm talking about when I talk about our
security posture. They genuinely think that an online satellite campus degree
qualifies them to manage security devs from top ten schools.

This coast's tech centers are starting to drive me batty.

This is especially true for a data scientist, my normal trade. So much red
tape that it's a two-month project to get myself a small server for testing.

Anyone in need of a remote data scientist, Princeton University A.B., two
years at a funded startup, four years work experience with a major firm, with
specialties in machine learning, (real) cybersecurity, and big data
experience?

~~~
txru
Someone I know is getting one of those 'cyber security' Masters degrees at
Mercyhurst in Erie. I respect the guy, he's really smart, he has an intuitive
and pragmatic view of politics... but he hasn't written a line of code ever.
He uses a Mac, but he's never opened Terminal.

And apparently the average graduation salary for these people at companies
like Disney is around 120K. I want my friend to do well, but I also don't want
the security industry to consist of people who have never used Linux.

~~~
s_q_b
I mean, I earn more than my boss, but he's still in charge. He gets the credit
for the software projects I actually have to manage. So it's likely he'll rise
up the ladder more quickly, and earn a multiple of my salary.

Yet, he's further up the chart and makes the final decisions, when he can't
understand basic variables that go into what applications (especially early
stage platforms) are even supposed to _do_.

I'm coming to the realization that I just need to bite the bullet, and get one
of those "cybersecurity" degrees. The classes are remote, self-paced, easy to
earn a 4.0 GPA, and so heavily subsidized by the company as to be nearly free,
so I might as well.

I'm also making my way through the various certifications they seem to value
(CISSP, PMP, CEH (Certified Ethical Hacker).)

They're very easy to pick up. I just took a CISSP practice exam for the first
time and passed quite easily.

------
mgrennan
I have 40 years' experience in development, systems, and security. The real
problem comes from the top. Upper management - CEO, CIO, CFO - understand
risk but don't understand technology. To them, security holes are just bugs
in "the code" or product liabilities in the products used.

More than once I showed management they had a life-ending bug (little Johnny
drop tables) and was fired for being the messenger.

------
orf
Good article, I can't help but think this is the real problem though:

> and Bob’s demand that you explain the vulnerability is met with your
> impatient demand to “just do it".

Maybe rather than saying "just do it" you could say "Any user can delete our
entire database and steal all of our data." I think Bob would understand why
this is a bit more important than his current tickets, and once he's been
told, any fallout from not doing it becomes his problem.

He might hate you for giving him the task but it would be done.

~~~
djrogers
> you could say "Any user can delete our entire database and steal all of our
> data".

See the problem is that all/most vulnerabilities wind up with this sort of
description, which can lead to Bob building up an immunity to it.

~~~
orf
Not really though, some do and they should be patched, but only a small subset
of vulnerabilities end up with a complete backend compromise.

Things are fucked, but not _that_ fucked.

------
bane
My few brushes with infosec folks make me think there's a general
socialization problem with the field, not really just a "jerk" problem.

For example (and I know it's still a fairly young field), simple activities
like cataloging and sharing knowledge learned so far seem to be something
that just doesn't get done. So the wheel gets reinvented over and over and
over again. I'm actually kind of amazed at how many half-assed log
parsing/visualization/analysis tools there are. Doesn't anybody in infosec
centralize knowledge, other than adversaries?

The feel I get from infosec conferences is more of a dick-waving contest and
tribal activity than honest information sharing.

------
Diederich
As others have said, this is a good article.

I'd like to comment on a line that literally changed my life, from my favorite
[https://en.wikipedia.org/wiki/Vorlon](https://en.wikipedia.org/wiki/Vorlon),
Kosh:

"Understanding is a three-edged sword: our side, their side, and the truth."

I think about that every single day. Speaking for myself, a lot of the other
suggestions in the article naturally flow out of embracing this perspective.

I could literally go on and on about this, but I'll leave it be.

------
munificent
> Never attribute to incompetence that which can be explained by differing
> incentive structures.

Oh, man, this is such a deeply, vitally true thing. It's something I've
observed for a long time but never quite put it into words.

So often, when I see some person acting in a way that seems totally stupid to
me, it turns out they were just operating with goals and constraints that
weren't obvious and were different from mine.

------
hiddennetwork
I was in IT security for many years and got out largely because of the sheer
attitudes of many of the people I encountered. Some of them were genuinely
very nice, very smart people, but these gentlemen were few and far between.
Sadly.

I started off as an abuse investigator for a very large East Coast ISP, moved
into firewalls, then penetration testing once I knew what I was doing. All
along this path, the vast majority of guys I worked with, for, and around (on
contracts) were sheer jerks of the highest order. Everyone seemed to be angry,
upset, on edge. I can understand this. IT security--especially abuse
investigations--necessitates seeing the seedy side of the Internet. Ditto IT
security in general. You're not working for or with people who are
"creatives". You're working for and with people whose sole job is to minimize
threats and vulnerabilities.

The most stressful time was being a firewall engineer. Having to deal--on the
fly--with impatient customers wanting a six-spoke VPN "right now!" and them
being jerks about it was annoying.

I hated the calls that involved my placing a "tap" on some poor sod whose boss
wanted an 8-hour tcpdump on him to see where he went when online. They wanted
a gzipped log file placed on the server where they could SSH in and retrieve
it.

I hated the double standards where executives were given static IP addresses
on their machines and special rules created to allow them carte blanche access
to the Internet at large--no filtering. They also avoided the proxy. These
special executives were _&_ ^%#$@ and everyone knew it.

When you spend your life seeing nothing but evil, you take on a different view
of life and the world around you. I saw this happening and I got out in favor
of being a sysadmin. I'm much happier, and the IT security roles I do have are
much smaller in scope and I know how to handle them quickly and quietly.

------
swillis16
If Bob gets in trouble with his manager for not prioritizing his features
over security work, and Alice gets in trouble with her manager for letting
security issues happen, then it sounds like an issue of management not
working together to set aside time for security as part of the product's
development cycle.

------
Spooky23
This is a big reason I stayed away from infosec, although it fascinates me in
many aspects.

IMO, security orgs, especially within companies, have this awful reputation
because they borrowed the credentialing model of project managers and walk
around like doctors with an alphabet soup of certifications. Talking to these
folks is like crossing a circa-1998 MCSE with a lawyer.

The CPE requirements from CISSP and other certs encourage publishing and
delivering talks, which is good -- if the authors/talkers are competent. The
side effect within enterprises and some vendors is you develop a cadre of
bullshit artists who are great at throwing spears, not so great at doing
anything of value.

------
icaruswings
Security professionals should practice avoiding the word "no". Instead, help
explain what controls can be implemented to reduce risk. Encourage a sense of
camaraderie with system owners and develop strong communication skills with
everyone.

Help them to be excited to work with you, rather than to turn the other way
when they see you in the hallway.

The suggestion to "Recalibrate 'urgent'" is a great message. Organizations
and the general public are rapidly becoming desensitized to security risk. As
breaches and incidents continue to rise, we'll approach a "who cares?" level
of apathy from the public.

What can they do? Steal my CC? Already been stolen. Steal my SSN? Already
done.

What's left to fear when privacy is gone?

What really keeps me up at night is when security incidents are measured in
lives. Scary...

------
bpchaps
Yeah, they definitely are.

I reported a major issue with Comcast to an indirect associate of mine who's
huge into security. I wanted contacts with folk who know how to disclose it
properly, since I wasn't getting much help from Comcast. He pretty much
shrugged me off because I don't have a traditional "infosec personality". A
day later, I got hold of some pretty high-ups at Comcast who actually fixed
it. Their engineers were completely blown away by the issue, and it sounds
like it might do pretty well for me in the end. He wasn't the only one who
shrugged me off, but if I'd gotten help from any of them, it might've gone
well for them, too. The exclusionary attitude is pretty ridiculous.

I think a lot of the problem is that their threat models tend to make them
pretty reclusive, so they don't really want to trust "newbs". The problem is
that it now feels like a paranoid echo chamber.

I commented here a while back about working with Northwestern's security to
try and protect my own HIPAA data. NW's response to my disclosure was "We're
already doing scans. Report back when you find something serious." It's
fucking sad that security's treated this way on many fronts. If you're not on
the "in", then you're considered a script kiddie, if you're on the "enterprise
in", then you're useless and completely caught in red tape, and if you're in
the "in crowd", then you're part of an echo chamber.

Just yesterday, I was explaining my reasoning for wanting to make a LAN party
as insecure as possible to promote openness rather than exclusion, since any
measure adding extreme security would prevent further appearances. The
not-so-fine print would be: "Hey guys, your stuff is insecure. This is
intentional. Please treat it as such." His recommendation was that for each
different game (of maybe 30), you disconnect and re-request a DHCP lease for
the VLAN hosting the game you want to play. Two games at once wouldn't be
allowed. Silliness.

------
nxzero
Jerks are everywhere.

A quick read of an article like "Rise of the Cypherpunk" is a reminder of how
many awesome people are in cyber security:
[https://news.ycombinator.com/item?id=11465203](https://news.ycombinator.com/item?id=11465203)

------
dustinrcollins
Tribalism is really hurting the progress that security folks could be making.
Development and operations are starting to collaborate and make huge gains in
productivity. Rebranding security as a component of quality can help.

Coming into meetings with other teams with a list of (often unfounded)
assumptions does not help anyone. I wrote about this a bit last year:
[https://blog.conjur.net/devops-and-security-the-five-
monkeys](https://blog.conjur.net/devops-and-security-the-five-monkeys)

------
siliconc0w
Technical infosec guys can fix vulnerable dependencies themselves and use the
tests Bob writes (right?) to make sure they don't break anything. You only
need to bug Bob when there is a breaking change, and then you prioritize it
with Bob's PM.

Also, passwords are dumb - Bob should use certs and/or SSH keys plus 2FA to
access anything.

The point is that any security that hinges on hassling Bob is likely bad
security.
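That workflow can be sketched in a few lines. This is a hypothetical illustration (the package name, versions, and advisory data are all invented, and real tooling would consume an actual advisory feed): security proposes the dependency bump, and Bob's existing test suite is what validates it.

```python
# Pretend advisory feed: package -> (vulnerable version, fixed version).
# A real implementation would pull this from a vulnerability database.
KNOWN_VULNERABLE = {
    "examplelib": ("1.2.0", "1.2.3"),
}

def propose_bumps(pinned: dict[str, str]) -> list[str]:
    """Return upgrade-and-test commands for pinned deps with advisories."""
    cmds = []
    for pkg, version in pinned.items():
        if pkg in KNOWN_VULNERABLE and version == KNOWN_VULNERABLE[pkg][0]:
            fixed = KNOWN_VULNERABLE[pkg][1]
            # The dev team's test suite is the safety net for the bump.
            cmds.append(f"pip install {pkg}=={fixed} && pytest")
    return cmds

# Only the dependency with a known advisory gets a proposed bump.
print(propose_bumps({"examplelib": "1.2.0", "otherlib": "2.0.0"}))
```

If the tests pass, the patch goes up as a normal PR; Bob only gets pulled in when they don't.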

------
jmount
The article is fairly similar to my experience - infosec uses up all their
goodwill by appearing inflexible on everything before we get to any important
points. One of my (admittedly minor) examples: insisting that password
dialogue boxes be obscured in all situations, even in a private office and
even when it breaks input abilities and affordances for disabled users.

------
FlyingCoconut
InfoSec: "There is a vulnerability."

me: "PR or GTFO."

The problem described is that ISOs are professional nags instead of software
shippers.

~~~
jvehent
That only works if your application is simple enough for the infosec folks to
know how to write a patch for it.

If the organization has 200 of those apps, it's unlikely a transversal
security team is able to write patches for each one of them.

~~~
yarrel
Not patches, bug reports. With information in. So the problem can be
reproduced and prioritized. As part of the normal development process.

------
ggchappell
This is an excellent article.

> Hanlon’s Razor says “Never attribute to malice that which can be adequately
> explained by incompetence,” but I would add, “Never attribute to
> incompetence that which can be explained by differing incentive structures.”

That's profound & pretty thought-provoking.

 _Ternus's Razor_, anyone?

------
CIPHERSTONE
Until a security incident occurs, the vast majority of people, be they
developers, admins, managers, or senior managers, view security threats as
something that only happens to other people. Why do we have to patch this?
We're behind firewall x and y, we could NEVER be compromised... yeah.

------
pbreit
"The Truth" is, it's probably not that big a deal. No matter how stringent you
are, there are probably still vulnerabilities. And just like everything else,
if there's a problem, you work through it. And likely remain unscathed.

------
peterwwillis
I don't know about infosec people, but computer security hackers are the
biggest trolls and assholes I've ever encountered in tech. Their whole
collective community is fueled by ego. (Source: 12 years in the hacker
community)

------
enthdegree
> Put bluntly: to others, we’re jerks.

> If you don’t think this is a problem, you can stop reading here.

Thanks for saving me the time

------
tetrep
>Put bluntly: to others, we’re jerks. If you don’t think this is a problem,
you can stop reading here.

Well, that's a great way to have an honest dialogue.

> You fume for several minutes, cursing all developers everywhere, but no
> response is forthcoming. Angrily, you stand up and march over to his cube,
> ready to give him a piece of your mind.

At this point, all you've done in response to finding a serious security issue
is send an email with a vague, poorly worded subject. Why are you getting
upset that nobody has reacted to it within a few minutes? I'm also curious
why Bob would think this email unimportant; does InfoSec just use one email
subject for everything?

If something's important, people generally turn to _synchronous_
communications, where we can verify that our audience has processed whatever
it is we need to tell them. Async communication works just fine on smaller
time slices, like chat/IM, but email overall tends to have relatively high
latency, especially if you need to communicate a serious security issue.

> Many in the Infosec community are fond of casting the security world as “us
> versus them,”...

Wait. Is this a joke? Isn't "us versus them" the canonical _wrong_ way to
frame just about anything? At the very least, it's obviously the wrong way to
approach InfoSec, in addition to virtually any other collaborative effort.
This should be obvious if only because you need to work with other people.
Conflict does not cooperation make.

> ...he gets lots of “urgent” security emails that turn out to be Windows
> patches, admonitions to change his password, policy reminders and so on.

That right there is entirely on InfoSec. They're not only crying wolf, they're
doing so with vague subject lines. But that's okay, InfoSec is going to get
very upset anyway, because they've failed to properly communicate the severity
of the situation and people are acting accordingly.

> ...and Bob’s demand that you explain the vulnerability is met with your
> impatient demand to “just do it.”

Ouch. Why does InfoSec not want to share the wonders of 0days? Seriously, one
of the most fun things to do is dissect, or read others' dissections of, an
0day. This is great knowledge to share, and I would applaud any developer who
takes an interest in security by wishing to understand what security issues
are, especially 0days.

Additionally, rejecting someone's request for additional information about a
task you've given them is almost universally a bad thing to do. If it's not
feasible to grant them the information they desire, it's on you to properly
communicate that, don't just rebuff their request. Transparency is great for
teamwork.

> Bob... can’t deal with this now, he’s too busy, it’s not his problem (there
> are other devs, right?) and you should take it up with his manager.

I actually think this is valid. If a developer honestly feels too busy, going
to their manager (who should be in the loop for security issues anyway) seems
like a reasonable escalation. If it's actually an urgent issue, it should be
as valid to disrupt the manager's day with it as it is to disrupt the
developer's day.

It seems like it's the same response a developer would give to someone
freaking out about a serious bug in their code. If it's a serious issue
(developer, for whatever reasons, is unconvinced) then you should take it up
with their manager. That's not to say the developer is correct in being
unconvinced, but arguing that route is less timely and less likely to actually
work.

> The jaundiced attitude among Infosec mentioned above...

When I first read the article, I thought the author was knowingly
straw-manning, hence their opening warning about jerks. This seems to indicate
otherwise, as the author is seriously presenting their story as something
remotely realistic. Either the author's story was a terrible straw man, or I'm
very ignorant of how unprofessional my professional compatriots are.

Regardless, the author lays out the solutions:

> Practice active kindness.

That's horoscope-level advice. It is good to be nice, but it's not exactly
feasible to be nice all the time to everyone, or we'd have solved a great many
problems in society long ago. Generally speaking, being nice requires some
emotional effort, not everybody has the same capacity for it, and those who
can afford to do it probably already do. Although I suppose I could believe
that an adult capable of being nice all the time simply isn't doing so because
nobody suggested it...

> Seek to understand and make this clear.

Always great advice, like being kind, but far more actionable and, sadly, far
more applicable. Yes, communication is critical when working with others,
especially when attempting to delegate tasks. This _should_ be obvious, but
I've found many people don't take it seriously. If you need something done,
it's on you to ensure whoever you delegate the task to understands it at least
as well as you do. You can't fault them for your own inability to properly
communicate.

One of the saddest things to see is when two or more parties get upset at
their own failures at communicating. For example: Bob shouts across the office
"Hey Alice, do X" but Alice is listening to music and doesn't hear. Some time
passes, then Bob gets upset that Alice has not done X, and Alice gets upset at
Bob for being upset that Alice has not done X. Now we've got two parties, both
upset over an unfortunate circumstance, with no resolution in sight. Had Bob
attempted to confirm his communication, this whole situation could have been
avoided. Alice could also do her part by not being upset by Bob's failure at
communicating, but emotions are fickle and it's hard to defend yourself
stoically when your attackers are fuming with emotions.

Additionally, had Bob confirmed his communication, in the event Alice had
still not performed X, Bob can escalate to whatever authority is appropriate
with the evidence of Alice's understanding of his request, increasing the odds
for meaningful resolution (at least from Bob's perspective).

> Be flexible. Recalibrate “urgent.”

The boy who cried wolf.

> Create stakeholders...

I'd be a little concerned with teams arbitrarily deciding their security
goals. They should 100% be involved in the process, but leaving them entirely
to their own devices would incentivize them to have terrible security, as
that's generally the easiest thing to do.

>... and spread security knowledge.

This is good advice, and it's sad to think it is useful advice. If you're ever
in a situation where you need to communicate a task to another person, you
should be more than willing to share information about that task to aid the
delegate's efforts. It's an obviously useful thing to do, and it's sad to
imagine adults making it through life, surely having had many tasks delegated
to them by this point, without understanding how valuable additional
information about the task can be.

I would be highly concerned if my coworkers did not already understand this.
While we all can't have our dream jobs/offices/employers, compromising on
communication abilities is pretty much always going to have both bad and
unpredictable consequences (you can't easily predict how someone's going to
react to (mis)information from poor communication).

> Fixing Infosec’s jerk problem benefits everyone: us, the people we deal
> with, and ultimately the security of the system — and since that’s our long-
> term goal, we should actively seek to fix the problem.

I think the easy solution here is to fire those jerks. Seriously. Talk to them
about things, ask them why they're doing what they're doing, but this _isn't_
a systemic issue. This is a personal issue. People being jerks can (and will)
happen anywhere people are present, and the solution isn't to group everyone
into a large category and then proclaim it a categorical issue that they all
must work to resolve. Find the bad apples and deal with them as you would in
any other situation. Communicate to them that what they're doing is
ineffective both technically (security issues are not being fixed in a timely
manner) and personally (they're failing to communicate on multiple levels and
upsetting people). Ensure they understand that their actions are not desirable
and are creating a hostile workplace (I never thought I'd say that
non-sarcastically...). If they keep doing what they're doing, let them go. If
they harbor animosity toward being corrected, let them go. There's a wealth of
wonderful people in the world; seek them out instead. Be selective: it doesn't
take a large security team to be effective, especially when developers are
part of the security effort (which they should be !!!!11).

------
sapphireblue
There is a fundamental conflict of interest between security people and
developers/owners/the general public. Infosec isn't interested in curing
fundamental IT insecurity by, say, using safe languages (like Rust, something
JVM-based, or even JavaScript, which is also a safe GCed language) for
application development, or using safe OSes that aren't built around 70s state
of the art (ugh... really no production-ready examples here). Instead the
infosec community, both "blackhat" attackers and "whitehat" protectors,
profits from business as usual: a never-ending stream of zero-days, CVEs,
buffer overflows, and side-channel attacks. It's not in their true interest to
kill their cash cow.

Remember: If we, developers, used modern safe programming technologies, such
as safe languages and OSes built around capability-based security, 99% of
security exploits wouldn't even exist.
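To illustrate the safe-language claim with a minimal Python sketch (buffer
size and values are arbitrary): in a memory-safe runtime an out-of-bounds
write surfaces as a catchable error rather than silent memory corruption,
which is what removes whole classes of overflow exploits.

```python
# A memory-safe runtime bounds-checks every access: the bad write is
# rejected with IndexError instead of scribbling over adjacent memory,
# as the equivalent unchecked C code could.

def write_byte(buffer: bytearray, index: int, value: int) -> str:
    """Attempt a write; report whether the runtime allowed it."""
    try:
        buffer[index] = value
        return "written"
    except IndexError:
        return "rejected"

buf = bytearray(8)
write_byte(buf, 3, 0x41)    # in bounds: succeeds
write_byte(buf, 16, 0x41)   # out of bounds: raised and caught, no corruption
```

Whether that accounts for "99%" of exploits is the commenter's figure, not
something this sketch can establish; it only shows the mechanism.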

