
“Why do you work in security instead of something more lasting?” - walterbell
http://addxorrol.blogspot.com/2016/10/why-do-you-work-in-security-instead-of.html
======
Eridrus
I asked myself this same question when my work was focused on attack
research, and the result was that I stopped working on it. Finding some clever
new attack is great, but no one really cares after it is patched. So I
focused on building tools as well: automated reverse engineering and static
analysis tools. Tools are getting better, but even then it's not clear that
any of the current tooling approaches will have significant lasting payoffs.

For all the talk of security being a rapidly changing field, it's only true in
the same sense that JavaScript frameworks are rapidly changing, lots of churn
but not so much progress.

~~~
tropo
What do you mean "no one really cares after it is patched"?

At that point, you get a chance to find a new attack. People really care. The
government adds a new task order to your contract and then you get working on
the problem.

~~~
Eridrus
No one really cares about what you have come up with before, except as proof
of your abilities.

You can't keep building on top of your work as you can in other parts of CS.
Personally, this led to a desire to keep bugs private, since then at least you
could privately chain things together in interesting ways.

It can be a very lucrative career, and it can be a lot of fun if you enjoy
turning puzzles over in your mind (though this really applies more to tricky
exploit dev), but I did not find it satisfying in the long term.

------
matthewaveryusa
I transitioned from writing trading software to writing software to secure
networks (with a brief startup co-founding stint.) Here's my biased,
objective, and anecdotal opinion.

Security is just another knowledge domain. More than in any other domain,
security has a clear divide between understanding the math behind cryptography
(primes, elliptic curves...), understanding how to implement correct and high-
performance code (SSE4, AVX, SIMD...), and high-level use of crypto
implementations to create secure protocols (Diffie-Hellman, TLS...). On top of
that, there's a new category emerging in the field, what I would call
embedded/root-of-trust security, in the form of SGX and TPM.

If you stay with the mindset of "use TLS and PGP", that's very commendable but
probably not very challenging. Going all the way up and down the crypto stack
is extremely challenging. Squeezing every last cycle out of CPUs is very
lasting (we'll always need crypto) and very transferable (mobile devices,
clients, servers, embedded, companies of any size, companies of any domain...).
It's also political and philosophical at times. Basically, it's a hacker's
dream to work in security.
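The protocol layer mentioned above can be illustrated with a toy Diffie-Hellman
key exchange. This is a minimal sketch with deliberately tiny, insecure
parameters (p = 23, g = 5 are textbook toy values); real deployments use large
vetted groups (e.g. RFC 3526) or elliptic curves, plus authentication to stop
man-in-the-middle attacks:

```python
# Toy finite-field Diffie-Hellman exchange -- illustrative only.
import secrets

p, g = 23, 5                          # public toy parameters (insecure)

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

A = pow(g, a, p)                      # Alice sends A to Bob
B = pow(g, b, p)                      # Bob sends B to Alice

shared_alice = pow(B, a, p)           # (g^b)^a mod p
shared_bob = pow(A, b, p)             # (g^a)^b mod p

assert shared_alice == shared_bob     # both sides derive the same secret
```

In practice all three layers live in library code (a TLS stack, a vetted
bignum/EC implementation) rather than hand-rolled arithmetic like this.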

~~~
tptacek
Very few people working in security do cryptography. In fact, despite the fact
that the author of this post is a trained cryptographer, virtually none of his
professional work is cryptographic (he's a reverser).

I think one of the good things about security is that it exposes you to so
much domain knowledge (like cryptographic algorithms and bare-metal
programming).

But executing well in security (and particularly software security) is more
about a specific kind of problem-solving mindset. The unique difference
between security and other subfields of CS is that security provides you with
an adversary; it's intrinsically competitive in a way that other kinds of
computer science aren't.

------
nickpsecurity
I've been wondering why I worked in it for a different reason: the overall lack
of demand for the real thing and the apathy of users. The industry faces a
constant, uphill battle that gets steeper every year due to that apathy. The
intellectual challenge is great in terms of having to learn every part of the
stack, down to the EM effects of transistors, and then trying to outsmart a
world full of brains in a design. You get moments you feel proud of.

Overall, though, it looks like it might have been more satisfying if I stayed
on the automatic programming track. Doing things like program/hardware
synthesis, static analysis, functional programming compilation, and so on.
Have a steady stream of useful inventions plus more users happy to adopt them.
Must be nice for _those_ fields. ;)

~~~
drvdevd
That's a very insightful take. When you say "apathy of users" are you speaking
of "higher level" users like CIOs and network admins or just the end users /
average users in the network or both?

It may be a truism or cliché but it seems correct from my perspective as a
"devops type": security is a losing game in favor of attackers due to
asymmetry between attack and defense.

Perhaps automation will change that? :)

~~~
nickpsecurity
"When you say "apathy of users" are you speaking of "higher level" users like
CIOs and network admins or just the end users / average users in the network
or both?"

All of them. There were secure computers as far back as Burroughs, which was
successful.

[http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...](http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm)

The drivers of most of the computing industry were more features and speed at
the lowest cost. Quality or security meant less of that. The tradeoff began
there. By about the '80s, we mostly knew how to secure computers from specs
down to the CPU. Market demand was so non-existent that almost nobody was
producing them. Per one of INFOSEC's inventors, Roger Schell, there was some
demand by CIO's (?!), but they believed the IT industry intentionally left
bugs in products to sell them fixes & would never sell them bulletproof stuff.
So, maybe they weren't dumb. ;) By the '90s, DOD's Computer Security
Initiative, which promised to _only_ buy secure products for certain stuff,
plus the TCSEC criteria for building them, led to a number being on the
market. Almost nobody in DOD or the regular market bought them. Rinse and
repeat, with only a few niches doing higher assurance at high unit price due
to low volume: safety-critical like aerospace, a few companies selling to the
military, the TEMPEST industry, HSM vendors, and some smartcards. Smartcards
are the exception to low volume & high price. Even most on that list have been
lowering the amount of assurance & increasing risky features due to market
demand.

Now, let's look at the user side. The vast majority of the time, when users
get to choose between a secure/private product and an insecure/surveillance-
oriented one, they will choose the latter. Just look at who's dominating in
messaging, storage, thin clients, remote access, calendars, document formats,
etc. We almost got an exception in browsers with Chrome, based on the OP web
browser, which was secure, but they watered down security because they knew
the demand side wanted blazing fast over kind-of-fast but secure. There are
now privacy-oriented, easy-to-use apps for various things on smartphones for
$1-10, and desktop stuff free or even $5 a month for critical functions that
is still easy to use. Almost nobody uses these, even when they get a lot of
press.

So, it seems demand for actual security is almost non-existent even when it
gets the main function done, performs acceptably, is plug-and-play, and is
inexpensive or free. I mean, if they don't care at that point, what can you
do? I think it's a fatal flaw of human nature in the way people's minds make
tradeoffs. Now, I encourage people to instead get into established companies
in senior roles or do startups where the product is good, people use the heck
out of it, it stays competitive, and you just bake security into it. Or just
avoid INFOSEC altogether for something not set up to fail. :)

"Perhaps automation will change that? :)"

I had some hopes for that. There were tools like the 001 Toolkit by Hamilton,
various 4GL's, logic programming, and so on that basically let you specify the
problem while the tool did everything else. One could do something like
that which embeds good security into it. The Opa language does that for web
apps, as an example. Additionally, the automated analysis tools for code, with
or without annotations, are getting _really good_. They're at a point where
users do almost no work. They also get almost no adoption, even by companies
that "care about quality." ;) Anyway, machine learning that studied tons of
annotated codebases to learn which annotations go with which code patterns
might result in tools that correctly annotate new code & just run in the
background of the build process. There's potential there, but manual effort
will still be required due to false positives/negatives. And new domain logic.

So, there's some brainstorming on that.

~~~
petra
The problem with selling for consumers is simple: it's hard to achieve
credibility. On the one hand, anti-virus software, which we all bought,
sucked. So we don't have much trust in security vendors.

On the other, hearing about the crazy stuff hackers do, like Stuxnet or
the more trivial daily privacy news, makes people think the task is
impossible. And even for what's possible, it's hard to quantify the benefits.

Combine that with network effects, the marketing power of free apps and
default apps, and the complexity of it all, and it's natural that most users
don't have the time/money to spend on this.

~~~
nickpsecurity
"The problem with selling for consumers is simple: it's hard to achieve
credibility. "

I probably need to think on that angle more for the consumer side.

------
philsnow
> 1. Original thinkers
> 2. Tolerance of non-conformism and diverse educational backgrounds
> 3. Intellectual honesty

This also sounds a lot like the overall SRE org at Google. I've never worked
with a more no-nonsense, brutally honest group who are open to everybody's
contributions.

Incidentally, this isn't the first time I've seen this connection between
security and (Google) SRE. This USENIX talk covers the intersection
between "privacy engineering" and SRE:
[https://www.youtube.com/watch?v=Jx2y2yi0rZc](https://www.youtube.com/watch?v=Jx2y2yi0rZc)

~~~
uola
Except that Google is fairly infamous for having educational background as one
of its primary hiring criteria. There are theories that close-knit groups tend
to overestimate their own diversity, since they are more sensitive to
differences among themselves, e.g.
[https://en.wikipedia.org/wiki/Out-group_homogeneity](https://en.wikipedia.org/wiki/Out-group_homogeneity)

~~~
nostrademons
As in many cases, perception lags reality. Google was already starting to
broaden out past Ivy League candidates when I was hired (2008), and it's been
almost 8 years since then. Even if they wanted to, they couldn't source enough
candidates from just the top schools to fill the open positions; they're at
66,000 employees now.

SRE actually seemed one of the more educationally diverse organizations while
I was there - it's a field where skills are easily measured and passion for
the job you do tends to result in better performance than educational
background. Search Quality was still overweight Stanford/Berkeley/MIT/IIT/etc
at the time, as was Research and X. Chrome and Apps had a wide variety of
backgrounds, including a number of really good people without college degrees.
Ironically, the departments that seemed to have the biggest Ivy-League bias
were AdSales and HR; when there are few objective ways of judging candidates,
more weight gets placed on educational pedigree.

~~~
kafkaesq
_When there are few objective ways of judging candidates, more weight gets
placed on educational pedigree._

About as sad a statement of a company's (or in this case, a specific
department's) culture as one could make.

------
alain94040
_I described what I do for a living to him (reading code for subtle mistakes),
and he said "that sounds like one of the worst imaginable jobs ever". He is a
builder._

I find great interest in finding bugs. Writing code is actually the easier
job: it is well defined, with inputs, outputs, and your keyboard in between.

Human error, in contrast, could be anything. Someone made a typo. Someone used
a compiler that had a bug. Or the CPU had a bug. Or they didn't ship the
product right. Or they wrote the password on a postit. Human error is
unbounded, which is what makes it so intellectually challenging and
interesting to find.

------
winter_blue
> Predictions about what is "lasting" are very difficult to make :-)

It's kind of sad when you realize that a lot of the code you write will
eventually be rewritten or lapse out of use, or be replaced by something
better.

It's very hard to find truly "lasting" work in this world. Maybe research?

~~~
jedberg
>It's kind of sad when you realize that a lot of the code you write will
eventually be rewritten or lapse out of use

It's funny, I actually get sadder when I find out code I wrote a long time ago
is _still in use_. The deployment tool I wrote for reddit ended up lasting six
or seven years. Some of the stuff I wrote at eBay is still in use today.

~~~
toomuchtodo
I don't understand why that's sad at all. There's something to be said for
code that works well enough to be kept in use for a long period of time
without being removed or replaced.

We don't pine for bridges that need to be torn down every few years.

~~~
Can_Not
If my code shows how much less competent I was in the past, then I will
definitely want to rewrite it. Unfortunately, I've known some people who can't
touch code other people have written (or have touched), even if that "other
person" was themselves 6 months ago, without doing a large unnecessary
rewrite, invoking a crazy combination of lost features and freshly reinvented
wheels.

~~~
toomuchtodo
I think we can agree, though, that most code is "good enough", and there's
always room to improve, refactor, and make it better.

Even if your code shows you were less competent, it worked (hopefully)! A
lesson was still learned.

------
rnnr
From the point of view of a frontend dev, things aren't much different:

security guy: You accumulate lots of experience over the years about where
most errors happen and the common culprits, but you _know_ the actual
application of these ideas will always be different over time.

frontend guy: You gain lots of experience over time about common
architectures, common best practices, and common tooling. You _hope_ that the
knowledge you accumulated will stay relevant over the following years, but in
reality, except for some basic concepts, everything changes. :)

I really liked his post. He pointed out some deep issues, like "intellectual
honesty", which I've been concerned about too (yeah, even in math), but I
don't think the security sector is immune to these.

------
nanodano
Your title insinuates that security is not lasting... Computer security has
been around for decades, pretty much since the inception of the internet, and
it is not going to go away. So what do you mean, security is not lasting?

~~~
walterbell
(not the article author)

The lifetime of exploits/defenses (in an ever-evolving threat landscape) is
likely to be shorter than the lifetime of widely adopted application software.

~~~
nanodano
It's true the landscape changes rapidly, in terms of specific bugs found in
specific pieces of code, but that is the same reason security never stagnates
and will always be needed.

~~~
tptacek
He's not addressing the question of whether security will always be needed.
He's addressing the fact that the work product of a software developer working
seriously in security will have a half-life of just a few years, while the
work product of a generalist developer might live for decades.

Dullien is a software developer; he's the author of BinDiff, the first popular
binary diffing tool, and of a suite of other reversing tools like it (he's
been at Google for several years now).

------
Ar-Curunir
It seems like the author approached crypto (and security) from the perspective
of breaking things. Sure, that's a fundamental part of both fields, but the
harder (and, IMO, more fun) part is building secure systems.

That's the entire reason you have proofs of security in crypto: to show that
breaks are unlikely. I don't quite see why crypto should be "blindly result-
oriented", or whether that's even a good thing; when public-key crypto was
invented, it was wildly impractical.

The vast majority of crypto is usually not about breaks.

~~~
tptacek
The opposite is true.

There's no discipline in computer science that is _more_ about breaks than
cryptography. The whole field is refined down to the tension between
performance and vulnerabilities. What you can do in a cryptosystem is defined
by breaks; progress in the field is punctuated by breaks.

For what it's worth: Thomas Dullien is a cryptographer by training; he was one
of Dobbertin's doctoral students.

~~~
Ar-Curunir
Perhaps you refer to a specific part of applied crypto; most crypto, from what
I've seen, is not focused on breaks of the cryptanalytic kind (as in "oh,
let's try to break SHA256!").

Note that I'm speaking from an academic perspective.

~~~
tptacek
No, I'm referring to theoretical cryptography as well.

One example: the CAESAR competition, which is trying to select a portfolio of
next-generation AEADs. Ciphers and constructions are entered, and submissions
are sorted by the success of attacks against them.

Another example: databases and format-preserving encryption. It might look
like Order Revealing Encryption is an exercise in _building_ things, but it's
also a response to vulnerabilities in Order Preserving Encryption. One team
develops something like cryptdb; another tears up cryptdb and uses its
findings to design Cipherbase (or whatever).

Unlike in generalist software development, most new things in crypto are
responses to breaks, often by the authors of the breaks.
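The OPE weakness alluded to above can be made concrete with a hypothetical toy
scheme (invented for demonstration, not any real OPE construction): because
ciphertext order mirrors plaintext order by design, an attacker who only ever
sees ciphertexts still recovers the full ranking of the data.

```python
# Toy order-preserving "encryption": each plaintext maps to a random
# but strictly increasing ciphertext, so comparisons on ciphertexts
# match comparisons on plaintexts -- which is exactly the leak.
import random

random.seed(0)

def toy_ope_keygen(domain_max, expansion=100):
    """Map each plaintext in 0..domain_max to a strictly increasing
    random ciphertext value."""
    cts = sorted(random.sample(range(domain_max * expansion), domain_max + 1))
    return dict(enumerate(cts))

key = toy_ope_keygen(100)
salaries = [62, 17, 88, 45]            # secret plaintexts
ciphertexts = [key[s] for s in salaries]

# The attacker, seeing only ciphertexts, learns the exact ordering:
leaked_rank = sorted(range(len(ciphertexts)), key=ciphertexts.__getitem__)
true_rank = sorted(range(len(salaries)), key=salaries.__getitem__)
assert leaked_rank == true_rank        # order leaks, by construction
```

Order-revealing encryption narrows this interface (only a comparison function
exposes order), but the tension is the same: whatever a scheme reveals to
support queries is exactly what the attacks exploit.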

~~~
Ar-Curunir
I can agree with that assessment for encrypted DBs and SSE schemes, but I
don't feel like that's the case for a huge portion of theoretical-ish crypto.
Sure, I'm not denying that people break things; I just feel like that's not a
primary focus of most academic cryptographers. Taking a look at the papers in
the last few CRYPTOs and EUROCRYPTs only reinforces this feeling.

One exception to that is perhaps multilinear maps, for which candidates are
proposed and broken all the time, but once an assumption becomes battle-tested,
I feel the focus shifts to building things out of that assumption.

~~~
CiPHPerCoder
> I don't feel like that's the case for a huge portion of theoretical-ish
> crypto.

You could always verify this feeling:

[https://eprint.iacr.org/eprint-bin/search.pl?last=365&title=1](https://eprint.iacr.org/eprint-bin/search.pl?last=365&title=1)

~~~
Ar-Curunir
ePrint is not necessarily the best indicator for such things, since
essentially anybody can post there, but even so, from some random sampling it
seems like a majority of papers are constructive and not about attacks.

~~~
tptacek
Let's try this instead: can you name some important fundamental theoretical
work in cryptography that isn't defined by attacks? Obviously, that's a
subjective judgement. Whatever you name, though, we can just look and see how
much of it is defined by attacker capabilities.

~~~
Ar-Curunir
Zero knowledge proofs? Multiparty computation? These are the staples of
theoretical crypto.

~~~
tptacek
Ok, let's say that those are. What's an important paper in either, so we can
look at it more closely?

~~~
Ar-Curunir
The very foundational paper for zero knowledge did not have much to do with
attacks. Similarly for many, many subsequent works.

Similarly for MPC. Most work in MPC is about improving efficiency or proving
security against stronger classes of adversaries; it is not about finding
breaks of these protocols.

As I said earlier, the majority of crypto research follows this scenario:
pinning down a battle-tested assumption, then using that assumption in a
provably secure manner to construct new systems. Zero knowledge, MPC, ORAM,
etc. all fall within this paradigm.

Constructing new assumptions is obviously a task where attacks are necessary,
but new assumptions come along every 5-7 years; the intervening period is
dominated by this black-box use of assumptions.

EDIT: see this[1] list of the most-cited security and crypto papers; very few
are attack papers.

[1] [http://saweis.net/posts/most-cited-security-papers-2013-2014.html](http://saweis.net/posts/most-cited-security-papers-2013-2014.html)

~~~
CiPHPerCoder
> The very foundational paper for zero knowledge did not have much to do with
> attacks. Similarly for many many subsequent works.

Dave designs a ZKP protocol for a specific application.

Jane finds a side-channel in the protocol that leaks some data about the
secret being verified and publishes it.

Is it still a zero knowledge proof?

> Similarly for MPC. Most work in MPC is about improving efficiency or proving
> security against stronger classes of adversaries; it is not in finding
> breaks of these protocols.

How do you prove security against stronger classes of adversaries without
finding breaks in existing protocols, or at least trying to do so?

~~~
Ar-Curunir
The property of being zero knowledge is entirely separate from properties of
the implementation; side channels were simply absent from the original model.
Either way, for every zero-knowledge protocol, side channels are relevant only
on the prover's side; verification cannot allow anyone to learn any
information about the witness underlying the proof (otherwise the proof is
not zero knowledge).

Sure, attacks can motivate some of the theory that is created, but most theory
work only very loosely uses these attacks as motivation. A number of
strengthenings of the attack model are simply questions of curiosity: if we
strengthen the threat model in this way, what guarantees do we have? Very
rarely is it motivated by an actual attack.

~~~
tptacek
At this point, I'm just going to say I take your point but stand by my own.
Cryptography is unique among the disciplines we deal with in information
security in being motivated entirely by attacks.

In software engineering, we're solving problems we've known how to solve since
the 1970s. The challenge is solving them as programming environments and user
interfaces get more flexible and performant, and as the amount of code we have
to secure increases at a rate that outpaces our ability to verify it.

That's not the case with cryptography. In fact, if you include constructions
along with underlying algorithms, it's difficult (I can't do it off the top of
my head) to think of _anything_ we use today that was secure in the 1970s.

Most cryptography we use today wasn't widely available as recently as the
1990s --- not because nobody had invented the underlying basic concepts, but
because the constructions that animated those concepts were terribly insecure.

~~~
Ar-Curunir
But that's what I'm saying; there is lots of cryptography that is entirely
unmotivated by attacks, like:

determining the size of complexity classes like CZK and SZK;

determining lower bounds on the round complexity of MPC in various models;

determining the kinds of assumptions different primitives can be based upon;

determining the complexity classes supported by different delegation-of-
computation schemes, and the assumptions these can be based on;

determining whether you can obtain particular primitives from others, such as
OWPs from OWFs in a non-black-box manner, seeing what iO can be used for,
etc.;

improving the asymptotic efficiency of various primitives, like NIZKs (see
SNARKs).

I think the difference in our viewpoints comes from the communities we're in;
I'm on the theoretical, complexity-theory-rooted side of things, whereas you
appear to be on the more applied side of crypto, given your emphasis on things
that are used in practice.

------
13of40
Because it's cool.

That's really my answer. Maybe I'm setting myself up for career failure, but I
just enjoy having some Russian hacker lob a new puzzle at me every couple of
weeks.

------
calebm
I definitely understand the sentiment. In high school and university, I did my
share of "informal penetration testing", and always loved it. But when I went
to look for a job, I came to the "building vs. breaking" crossroads, and went
with building. Fast-forward a few years, and I found a job building security
tools. "¿Por qué no los dos?" ("Why not both?")

------
seanwilson
Um, I'm curious now at what stage anyone would think security analysis would
ever become obsolete.

~~~
mkagenius
When AI starts writing programs? Then we would care about "intentions" rather
than mistakes (as with humans).

~~~
GTP
Uhm, no. If an AI writes software that contains mistakes, the errors would
still matter just as they do now with human-written software. When a
programmer makes a mistake, we usually don't think about intentions, because
we usually assume the mistake is unintentional. With AI, we can ask ourselves
whether the mistake is intentional or not, but the mistake would still matter.

------
nraynaud
I think it's like being a garbage collector: they don't build anything; they
keep the rats in check. We need people in maintenance jobs: repainting the
road marks when they wear, or finding the issues with the new version of a
product.

~~~
dwaltrip
I think it's a bit different than that. In security you are looking for
weaknesses that other humans could take advantage of. This adversarial aspect
is generally not present in maintenance.

------
danielrm26
Because lasting things are boring things that don't pay well.

------
contingencies
Didn't even read the article: because it's an interesting problem.

------
aerialcombat
I think the answer to the question went off on a bit of a tangent. IMO, what
the original questioner had in mind was, "Why do you (or some people) take
less challenging and/or mundane computer jobs over something that seems more
exciting or something that will make a lasting impact?" And "security" in
this question was just an example of a mundane job that most people have in
mind.

Sure, security can be (and is) very challenging, exciting, and lasting, but
that's beside the point of the question.

The question was not whether security is challenging to the brains or
"lasting", but more like "why do some programmers settle for less challenging
jobs when they can do a lot more with their brains?"

~~~
Buge
I don't think security is any less challenging, more mundane, or smaller
impact than other stuff. I personally find it more exciting and enjoyable. If
I enjoy something, I am more likely to be more productive and "do more with my
brain" than if I don't enjoy something.

