
Developers Are Not Idiots - eric_khun
https://www.cryptologie.net/article/466/developers-are-not-idiots
======
EdwardDiego
I think this critique applies to any specialised profession dealing with a
different profession - just because they don't know what you know, doesn't
make them idiots. I'm thinking developers and product owners etc.

I do think our industry, overall, has a problem with what I call the "alpha
nerd" clash: people who, if they're like me, often prided themselves on their
intelligence relative to their peers in school, and were perhaps ostracised
for their intellect or its pursuits, still chase that need to feel smarter
than others, which can lead to self-congratulatory sneering.

But it's never conducive to a high functioning team - and effective teams
deliver effective products. And a team is nearly always a vertical slice of a
company to some extent, so yeah, you might have a team of developers, but the
testers, the product owners, the BAs, the sales people, they're all part of
the team delivering that product, and while you have experience and knowledge
that they don't have, they also have experience and knowledge that you don't
have, so some empathy and humility are essential.

~~~
Alex3917
> just because they don't know what you know, doesn't make them idiots

If you're making a product, then it's your responsibility to do so in a way
that doesn't put your users or the general public in danger. Not knowing
security best practices when you get started doesn't make you an idiot, but
releasing something without first putting in the work to learn and then
implement the best practices kind of makes you a shitty person.

I mean otherwise you might as well just say something like, "I just wanted to
make a car, I don't care if it's safe or not because that's not the fun part."

~~~
Lab3301
A big problem with security is that you don't know what you don't know.

Want to allow users to upload images?

* Make sure submitted files are actually images
* Limit file size to prevent denial of service
* Normalize the filename to prevent directory traversal
* Add a randomized component to filenames to prevent users from overwriting each other's files
* Serve files with the proper content type
* HTML-encode filenames for display

Then, oops, you didn't know SVGs allowed JavaScript, so now you have stored
XSS.

I don't think that's negligence, it's just not something you'd necessarily
know until you saw it. And this doesn't even consider language quirks and
gotchas that are even more esoteric.

~~~
Alex3917
> Then, oops, you didn't know SVGs allowed JavaScript, so now you have stored
> XSS.

Right, but presumably you're using the standard techniques to mitigate XSS,
e.g. sanitizing all other text input, using an X-XSS-Protection header, using
a CSP that only allows scripts that have been whitelisted, etc.

Even if you don't know that an SVG can contain js, that shouldn't put your
users at risk if you're doing everything else correctly. And then when that
gets caught in an audit or reported by a user or as part of a bug bounty, you
can fix it. (Although if you're going to be serving up a certain UGC file type
to users, I don't think it's unreasonable to expect people to Google for
vulnerabilities associated with that file type.)
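A sketch of those defense-in-depth headers as a WSGI wrapper in Python (the CSP value is illustrative and would need tuning to the scripts an app actually serves):

```python
# Defense-in-depth response headers; values are illustrative.
SECURITY_HEADERS = {
    # Only run scripts from our own origin; an inline script smuggled
    # into an SVG served off a user-content path would be blocked.
    "Content-Security-Policy": "default-src 'self'; script-src 'self'",
    # Legacy header for older browsers; modern ones rely on CSP.
    "X-XSS-Protection": "1; mode=block",
    # Stop browsers from sniffing a user upload into a scriptable type.
    "X-Content-Type-Options": "nosniff",
}

def add_security_headers(start_response):
    """Wrap a WSGI start_response so every reply carries the headers."""
    def wrapped(status, headers, exc_info=None):
        return start_response(
            status, headers + list(SECURITY_HEADERS.items()), exc_info)
    return wrapped
```

The point of doing it at this layer is that no individual view can forget the headers.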

Developers shouldn't be expected to have perfect security knowledge or to
never make mistakes, but I think it is reasonable to expect them to not be
grossly negligent. I don't want to live in a world where only people wealthy
enough to afford full security audits before they get any traction should be
allowed to launch products, but I also think developers should be held
accountable if they're recklessly endangering people.

~~~
arvinsim
And while you are doing all that, your manager is breathing down your neck to
finish the damn thing, and your competitor has already released the comparable
feature that you are still developing.

In the ideal world, companies would give developers enough time to figure out
security. But in practice, most companies/businesses just want you to ship
ASAP.

------
strken
I wish security experts would spend more time writing libraries. "Never roll
your own security" isn't helpful when all you have are scrypt, bcrypt, and
PBKDF2, and you have to implement a complete auth system by yesterday.

It is a lot more fun to poke holes than to be poked, though, so I understand
it.

~~~
tptacek
They do! For instance, for the crypto problems David studies, security experts
wrote NaCl, which brought cutting-edge curve and AEAD crypto to developers
with an almost user-proof interface.

~~~
RandomInteger4
#NotAllSecurityExperts

Only pointing that out because apparently this was a big deal on Twitter a
couple weeks back, where one group of experts was arguing that you weren't a
real security expert without being able to code, and the other group was
arguing the contrary.

~~~
michaelmrose
How can you be a security expert insofar as the domain is software if you
can't code?

~~~
icedchai
They can understand common patterns and find problems. That doesn't mean they
can fix them.

~~~
michaelmrose
You can probably tell people how to physically secure their premises, or not
to put passwords on post-it notes, without knowing how to code, but you can't
help them write secure code if you can't, you know, code.

This is like taking driver's ed from someone who can't drive. In theory they
could reiterate material they had read on the matter, and some of it may even
be of value, but only a fool would learn to drive from someone who can't.

The fact that not everyone is willing or able to see the emperor is a nudist
doesn't mean his threads are real.

~~~
icedchai
Many code-related security problems can be discovered externally without even
seeing the code (simple examples: XSS attacks, SQL injection problems, etc.)
Generic advice like "escape your inputs" can be provided. I am not saying this
is _valuable_ advice...

------
bertil
That kind of demand requires a simple mindset: make doing the right thing
easier than not. I was impressed, when working for mature software companies,
how the teams providing internal tools that were easily overlooked (security,
data logging, experimentation framework) managed to lead by having the best
option easily accessible.

You don’t want people to write passwords in your codebase? Make getting the
password from an environment variable the easiest path: document where the
variables are and how to refresh them; better yet, rotate them often and have
the system that rotates them support the use case better than a human with an
easy-to-compromise post-it.
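A minimal sketch of that pattern in Python (the variable name `DB_PASSWORD` and the runbook wording are assumptions):

```python
import os

def get_db_password() -> str:
    """Fetch the database password from the environment.

    Keeping secrets in env vars injected by the deploy system means
    nothing sensitive lives in the codebase, and rotation only needs a
    redeploy, not a code change.
    """
    try:
        return os.environ["DB_PASSWORD"]
    except KeyError:
        # Point people at the documented process instead of letting them
        # hardcode a fallback.
        raise RuntimeError(
            "DB_PASSWORD is not set; see the team runbook for where "
            "secrets live and how to refresh them"
        ) from None
```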

You don’t want people to store personally-identifiable information? Key all
the relevant information to an irreversible hash, and have all the business
logic that processes logs resolve to that same hash. Soon, no one will use
the `user_id` that also appears in the public profile URL.
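One way to sketch that in Python: a keyed, irreversible pseudonym via HMAC (the key name and its handling are assumptions; a plain unkeyed hash would be brute-forceable over sequential IDs):

```python
import hashlib
import hmac

# Key held by the logging pipeline, not by analysts (hypothetical name;
# in practice it would come from a secret store, not source code).
PSEUDONYM_KEY = b"rotate-me-out-of-band"

def analytics_id(user_id: int) -> str:
    """Keyed, irreversible pseudonym for a user.

    An analyst can still join successive purchases by the same person,
    but cannot take a public-profile user_id and recompute the mapping
    without the key.
    """
    return hmac.new(PSEUDONYM_KEY, str(user_id).encode(),
                    hashlib.sha256).hexdigest()
```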

It’s a significant amount of initial work, but it beats repeating the basics
for every new joiner, by a mile.

~~~
thaumasiotes
> You don’t want people to store personally-identifiable information? Make all
> the relevant information associated with an irreversible hash; have all the
> business logic that process logs into something legible matching that hash.
> Soon, no one will use the `user_id` that is also used in the public profile
> URL.

I don't understand this. The user_id in the URL is not itself meaningful. If
you know that my user_id in Slack is 642819733, that information is neither
personally identifiable nor useful in any way.

But if you exfiltrate a bunch of data from Slack and learn user 642819733's
birthday and billing address, then you've probably managed to identify me.
You've done it even if you didn't know beforehand that there _was_ a user_id
642819733. The problem wasn't that 642819733 was a sensitive value; the
problem was that it was the same value in all of Slack's records. Which your
solution doesn't address.

~~~
bertil
The case that I have in mind assumes that you are an analyst who should know
better, and was asked about… say, retention. You need a personal identifier in
that table to match successive purchases by the same individual.

You can easily get your friend’s `user_id` because, well, they share it
online. If it is easy for me to take that 642819733 and find how much you
spend, I can confront you and claim you are registered to more Slack channels
than you have told me about. So it’s less “exfiltration” and more: some junior
people are commonly given detailed data, for legitimate reasons, and you
should make it hard for them to abuse it.

There are plenty of cases that I’ve been directly involved with where I was
uncomfortable, but not sure I want to share them. I’ll just say this: Facebook
analysts typically can type their own ID off the top of their head, and most
know the ID of their close co-workers. I’ve never seen anything bad, but doing
something bad felt a little too easy at times — and that _ease_ was a big
deal.

------
cortesoft
Another problem I have with this sort of security person is the assumption
that security risks always trump other risks.

A few years ago we found some vulnerabilities on a server of ours, and some of
the security folks were really upset we didn't immediately shut down the
server. I argued that we needed to weigh the risks of the vulnerability vs the
risk of shutting down the server immediately. Both actions involved risk, and
we needed to weigh which was more severe.

Just because something is a security risk doesn't make it the most important
thing automatically. It has to be analyzed in the context, and triaged in
similar ways as other risks we discover in our work.

~~~
tptacek
The problem with this logic is that the risk of "shutting the server down" is
almost always financial loss to the software firm, and the risk of "leaving it
up" is often harm to end-users, who are an externality in the equation you're
describing.

~~~
cortesoft
Sometimes, but shutting down the server can also harm users, if it is
providing services that those customers rely on.

------
zbentley
Security is not the most important thing. It's second at best.

I am reminded of Yegge's old platform rant[1], and the part about
"Accessibility":

> When software -- or idea-ware for that matter -- fails to be accessible to
> anyone for any reason, it is the fault of the software or of the messaging
> of the idea. It is an Accessibility failure.

> Like anything else big and important in life, Accessibility has an evil twin
> who, jilted by the unbalanced affection displayed by their parents in their
> youth, has grown into an equally powerful Arch-Nemesis (yes, there's more
> than one nemesis to accessibility) named Security. And boy howdy are the two
> ever at odds.

> But I'll argue that Accessibility is actually more important than Security
> because dialing Accessibility to zero means you have no product at all,
> whereas dialing Security to zero can still get you a reasonably successful
> product such as the Playstation Network.

1\. The first copy Google found for me is here. People should read the whole
thing:
[https://gist.github.com/chitchcock/1281611](https://gist.github.com/chitchcock/1281611)

~~~
tptacek
For cat sharing startups, maybe, but for a large fraction of real-world apps,
not so much: it is better not to be available than to compromise the data of
your users.

~~~
_trampeltier
Things like XSS prevention should just be in every web programming standard
library. A lot of bugs exist only because people have to write their own
solutions to so many problems.
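Some standard libraries do ship the basics; Python's `html` module, for instance:

```python
from html import escape

# Escaping turns user input into inert text before it reaches a page.
comment = '<script>alert("pwned")</script>'
safe = escape(comment, quote=True)
# The angle brackets and quotes are now entities the browser renders as
# text instead of executing.
```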

We are moving so many things into a purely digital world, but the ground is
like sand. I often think today's software is like the Tower of Pisa.

I work in the automation industry with PLCs. There are fun things possible
with "Industry 4.0", but keeping it secure, and keeping it running for
20-30-40 years...

------
fencepost
A lot of this may come down to learning materials.

If you go pick up some books and tutorials on how to learn X, I can pretty
much guarantee that they're going to follow a pattern of "here's how to build
a basic functional and maybe useful application" and security is going to be
an afterthought if it's considered at all. As you continue to learn, there's a
fair chance that you'll start with expanding on some of what you've already
done in building that sample application - the one with no security.

~~~
baby
This was one of the critiques of "Applied Cryptography", see
[https://blog.cryptographyengineering.com/2011/11/07/in-defense-of-applied-cryptography/](https://blog.cryptographyengineering.com/2011/11/07/in-defense-of-applied-cryptography/)

> The detailed argument goes something like this: Applied Cryptography
> demystified cryptography for a lot of people. By doing so, it empowered them
> to experiment with crypto techniques, and to implement their own code. No
> problem so far.

> Unfortunately, some readers, abetted by Bruce’s detailed explanations and
> convenient source code examples, felt that they were now ready to implement
> crypto professionally. Inevitably their code made its way into commercial
> products, which shipped full of horribly ridiculous, broken crypto
> implementations. This is the part that was not so good. We’re probably still
> dealing with the blowback today.

The follow-up book from Schneier (Cryptography Engineering) included a whole
chapter on bringing in experts if you deal with cryptography.

------
BerislavLopac
> the mindset of someone who is writing an application is to build something
> cool

As a developer, this rubbed me the wrong way. "Building something cool" is
definitely a thing we all like to do, and every developer worth their craft
has introduced some features just because they found them "cool"; that said,
developers' "mindset" nearly completely focuses on several activities, with a
varying level of importance depending on a number of factors:

    
    
        a) building something that works as close to the specifications as possible
        b) making sure that other developers can maintain it, and also that it can be deployed, tested, etc.
        c) (optional) making it as efficient as possible
    

Or, as the saying attributed to Kent Beck [1] goes: "make it work, make it
right, make it fast". The "cool" factor is usually present, but at a much
lower level of importance than the above three. There are other concerns,
sitting in between and sometimes even rising to the top, with security
definitely being one of them. But because of the focus on (a) and (b) in
particular, developers simply don't have the level of understanding of
security issues that the experts seem to expect ("idiots" being a shortcut for
anyone below that level). So they introduce security features as best they
can, like that example of the key in the PHP app described in the article.

[1]
[http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast](http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast)

------
brianpgordon
There are lots of good reasons why application developers sometimes don't
write secure code. For example, security can be an afterthought if the logic
is so complex that you can barely get it working in the first place. Another
example is that it's not necessarily clear when you write a component that it
will be exposed to potentially malicious inputs. And there's always pressure
to ship a working product so, without a clear mandate from the organization,
security can fall by the wayside as a priority.

One thing that's _not_ a valid reason is the one that the author gave. "Hur
dur I develop web apps and I've never heard of XSS" isn't excusable. You have
to know about this stuff, even if it's not your expertise.

~~~
tptacek
Where do you draw the line? They need to know about XSS. Presumably you think
they need to know about SSRF, too. What about clickjacking? Cache poisoning?
What percentage of web developers in the industry do you think understand
cache poisoning attacks?

~~~
goliatone
Ideally, the answer would be all of them? We, as developers, should be aware
and ready to learn and build experience around security, but focus and
ownership are a different matter. You still want experts to drive specialized
fields. I run all my architecture reviews by the security team and ask them to
comment on internal RFCs. It's a very similar approach to
infrastructure/devops, except the gap between dev and security is wider than
the one between dev and ops. I recognize that this is very dependent on both
organizations and individuals, but I shy away from hiring people who don't
have, at the very minimum, an interest in or awareness of security. I also shy
away from businesses that see security as an afterthought.

~~~
tptacek
I just want the standard to be coherent. If it's "you have to know
everything", then fine, let's say that. But saying things like "at least they
should know the OWASP Top 10" doesn't make much sense to me.

~~~
Alex3917
Aren't most (non-insider) data breaches the result of OWASP Top 10 things,
though? If the goal of security is to make sure the costs of stealing an asset
exceed the value of that asset, then ensuring that the
least-expensive-to-exploit vulnerabilities have been mitigated first seems
like a good place to start.

The fact that there may be other vulnerabilities that are just as dangerous
doesn't mean it's equally bad to have vulnerabilities like that, if they
require more time and sophistication to exploit. E.g. you shouldn't have
string comparison timing vulnerabilities in your code, but having them is less
bad than a SQL injection attack even if they can both lead to the same result.
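For the timing case, Python's stdlib already ships a constant-time comparison; a sketch (the token names are illustrative):

```python
import hmac

def check_token(supplied: str, expected: str) -> bool:
    """Constant-time comparison of secrets.

    A naive `supplied == expected` short-circuits at the first differing
    byte, so response times leak how much of a token an attacker has
    guessed. hmac.compare_digest takes the same time either way.
    """
    return hmac.compare_digest(supplied.encode(), expected.encode())
```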

~~~
tptacek
No, I don't think there's much science to the OWASP Top 10.

------
scarface74
_and you realize that you haven't written the login page yet. It now dawns on
you that you will have to figure out some rate-limiting password attempt
mechanism, a way for people to recover their passwords, perhaps even
two-factor authentication... But you don't want to do all of that, do you?_

If you are trying to write your own login and authentication system, I dare
say that you probably are an idiot or really smart.

Statistically, it’s probably the former.

~~~
baby
It is more and more common for frameworks to include blueprints for login
pages, but it didn't use to be the case at all. I wouldn't say it's all that
common nowadays either.

~~~
scarface74
Most frameworks have an easy way to offload user authentication to third
parties like Facebook, Amazon, Twitter, Google, etc. I would trust any of them
to get it “right” before I would trust most developers.

If you need authentication for an internal corporate app, there is always
authentication with the corporate directory services or something like Okta.

~~~
majewsky
Security doesn't exist in a vacuum. It's only one of multiple concerns. For
instance:

> offload user authentication to third parties like Facebook, Amazon, Twitter,
> Google, etc.

Have fun getting that through the GDPR compliance audit.

------
myWindoonn
The way you have defined "developer", you have defined an idiot. An unethical,
incompetent idiot.

Our purpose, as computer engineers, is to convince piles of carefully-cooked
rocks ("computers") to do a very specific thing repeatedly ("computing").
Since humans are terrible at understanding specificity, we design languages
and abstractions that help us encode our goals into computer-readable codes.

At no point is "build something cool" the goal other than during recreational
programming; our goal is not even to build, necessarily, but to understand the
requirements of our users and to help them use their resources more
effectively. To this end, "cool" is totally worthless; it is a facet of
design, of product, and of marketing. Let them focus on the spectacle. We must
focus on the specification.

Why don't developers care about security? Because we have raised them to not
care about security. We have failed to instill a desire for security into our
culture. Consider: Computers have been remotely hackable since the dawn of the
Internet. (Check out the history of SSH or email for terrifying examples.) We
do not have the cultural norms necessary for getting security correct; we
barely know how to distrust soliciting strangers on the street, let alone
Martian or Christmas-tree TCP packets.

How can I help? Well, I can try to shove POLA [0] and capabilities [1] down
everybody's throat, but so far, it's been like trying to convince people that
the world is a globe [2].

Here's how _you_ can help. Pick up a single easy security meme, think about it
for a while, and then pass it on. I recommend starting with "security should
not be optional" if you're a FLOSS developer, or "everybody is on the security
team" if you have a corporate employer.

[0]
[https://en.wikipedia.org/wiki/Principle_of_least_privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)

[1] [http://habitatchronicles.com/2017/05/what-are-
capabilities/](http://habitatchronicles.com/2017/05/what-are-capabilities/)

[2]
[https://corbinsimpson.com/words/globe.html](https://corbinsimpson.com/words/globe.html)

------
Apocryphon
The current story above this is "Oklahoma Department of Securities Leaked
Millions of Files".

~~~
EdwardDiego
Which was due to a server not properly secured from the Internet - so most
likely a sysop or devop, or Dave the guy from accounts who is pretty good with
computers, messed up. Cute though.

------
alanfranzoni
Essential security practices (not advanced pentesting skills) should be basic
software quality concerns. Full stop. "But it works" is not an excuse, e.g.,
for doing SQL queries without proper parameter binding.
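A sketch of what parameter binding buys you, using Python's built-in sqlite3 (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # The ? placeholder lets the driver bind the value; the input is
    # never spliced into the SQL text, so it cannot change the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload is now just an odd, nonexistent username:
find_user("' OR '1'='1")  # returns no rows instead of every row
```

The string-concatenation version, `"... WHERE name = '" + name + "'"`, is exactly what turns that payload into a query returning the whole table.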

------
dfilppi
All generalizations are false.

~~~
andirk
True.

------
taytus
I'm sorry, but this is way too clickbait-ish for my taste. I always assume
that no one is stupid.

~~~
zapzupnz
You may, but that assumption isn't universal.

------
andirk
I know a lot of developers developers developers who are tactless, careless,
and yeah basically idiots. So much of Silicon Valley is made up of this type
that stands next to working legacy code and takes credit for it.

