Developers Are Not Idiots (cryptologie.net)
56 points by eric_khun 59 days ago | 76 comments

I think this critique applies to any specialised profession dealing with a different profession - just because they don't know what you know, doesn't make them idiots. I'm thinking developers and product owners etc.

I do think our industry, overall, has a problem with what I call the "alpha nerd" clash, where people who have, if they're like me, often prided themselves on their intelligence (comparative to their peers) in their schooling etc. and perhaps been ostracised for their intellect and/or its pursuits, still pursue that need to feel smarter than others, which can lead to self-congratulatory sneering.

But it's never conducive to a high-functioning team - and effective teams deliver effective products. A team is nearly always a vertical slice of a company to some extent: you might have a team of developers, but the testers, the product owners, the BAs, the sales people - they're all part of the team delivering that product. And while you have experience and knowledge that they don't have, they also have experience and knowledge that you don't have, so some empathy and humility are essential.

> I do think our industry, overall, has a problem with what I call the "alpha nerd" clash, where people who have, if they're like me, often prided themselves on their intelligence (comparative to their peers) in their schooling etc. and perhaps been ostracised for their intellect and/or its pursuits, still pursue that need to feel smarter than others, which can lead to self-congratulatory sneering.

This, and I'm typing this out surrounded by a group of these people right now. It's a local-scope problem that then manifests in more global and troubling issues like diversity problems and predatory behavior among employees in the workplace.

> just because they don't know what you know, doesn't make them idiots

If you're making a product, then it's your responsibility to do so in a way that doesn't put your users or the general public in danger. Not knowing security best practices when you get started doesn't make you an idiot, but releasing something without first putting in the work to learn and then implement the best practices kind of makes you a shitty person.

I mean otherwise you might as well just say something like, "I just wanted to make a car, I don't care if it's safe or not because that's not the fun part."

A big problem with security is that you don't know what you don't know.

Want to allow users to upload images?

* Make sure submitted files are actually images
* Limit file size to prevent denial of service
* Normalize the filename to prevent directory traversal
* Add a randomized component to filenames to prevent users from overwriting each other's files
* Serve file with the proper content type
* HTML encode filename for display
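For what it's worth, the first few items of that checklist can be sketched with just the Python standard library. Everything here (the function name, the size limit, the signature table) is made up for illustration - a real app should lean on a vetted upload-handling library:

```python
import os
import secrets

MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # limit file size (DoS mitigation)

# Magic-byte signatures for the image types we accept (an allowlist).
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",
    b"\xff\xd8\xff": ".jpg",
    b"GIF87a": ".gif",
    b"GIF89a": ".gif",
}

def safe_upload_name(original_name, data):
    """Validate an uploaded image and return a safe storage filename."""
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError("file too large")
    # Check the content is actually an image we recognize.
    for sig, ext in IMAGE_SIGNATURES.items():
        if data.startswith(sig):
            break
    else:
        raise ValueError("not a recognized image format")
    # Normalize the filename: strip any directory components...
    base = os.path.basename(original_name.replace("\\", "/"))
    stem = "".join(c for c in os.path.splitext(base)[0] if c.isalnum()) or "upload"
    # ...and add a random component so users can't overwrite each other's files.
    return f"{stem}-{secrets.token_hex(8)}{ext}"
```

Note the signature table is an allowlist: SVG never gets in, so a script-bearing SVG is rejected outright instead of needing to be sanitized.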

Then, oops, you didn't know SVGs allowed JavaScript, so now you have stored XSS.

I don't think that's negligence, it's just not something you'd necessarily know until you saw it. And this doesn't even consider language quirks and gotchas that are even more esoteric.

> Then, oops, you didn't know SVGs allowed JavaScript, so now you have stored XSS.

Right, but presumably you're using the standard techniques to mitigate XSS, e.g. sanitizing all other text input, using an X-XSS-Protection header, using a CSP that only allows scripts that have been whitelisted, etc.
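To make the defense-in-depth point concrete, here's a minimal Python sketch of two of those layers; `render_comment` is a hypothetical helper, and the CSP value is illustrative rather than a recommendation:

```python
import html

def render_comment(user_text):
    # HTML-encode anything user-controlled before it reaches the page.
    return f"<p>{html.escape(user_text)}</p>"

# A CSP like this blocks inline scripts entirely, so even script content
# smuggled in via an SVG or a missed escape has no effect in browsers
# that enforce it.
CSP_HEADER = ("Content-Security-Policy",
              "default-src 'self'; script-src 'self'; object-src 'none'")
```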

Even if you don't know that an SVG can contain js, that shouldn't put your users at risk if you're doing everything else correctly. And then when that gets caught in an audit or reported by a user or as part of a bug bounty, you can fix it. (Although if you're going to be serving a certain UGC file type to users, I don't think it's unreasonable to expect people to Google for vulnerabilities associated with that file type.)

Developers shouldn't be expected to have perfect security knowledge or to never make mistakes, but I think it is reasonable to expect them to not be grossly negligent. I don't want to live in a world where only people wealthy enough to afford full security audits before they get any traction should be allowed to launch products, but I also think developers should be held accountable if they're recklessly endangering people.

And while you are doing all that, your manager is breathing down your neck to finish the damn thing, and your competitor has already released the comparable feature that you are still developing.

In the ideal world, companies would give developers enough time to figure out security. But in practice, most companies/businesses just want you to ship ASAP.

If you don't know security best practices then learning them on the fly isn't particularly helpful, simply because you don't even know what you don't know. It's much better to accept your ignorance and defer to someone who does know security.

> It's much better to accept your ignorance and defer to someone who does know security.

Society doesn't currently offer any mechanism to pay for that though. E.g. if you as a developer make a product, you can't force VCs to fund a security audit, nor can you force security folks to work for equity, nor is there any public money available to pay for this.

I disagree. The onus is on companies and managers to give developers enough time to figure this out.

If they don't include it in their estimates, don't expect developers to put in their personal time to implement proper security.

I wish security experts would spend more time writing libraries. "Never roll your own security" isn't helpful when all you have are scrypt, bcrypt, and PBKDF2, and you have to implement a complete auth system by yesterday.
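To be fair to the standard library, those primitives are at least one import away in Python; here's a minimal password-hashing sketch (the iteration count is illustrative - check current guidance before copying it):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune per current guidance

def hash_password(password, salt=None):
    """Derive a password hash with a per-user random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```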

It is a lot more fun to poke holes than to be poked, though, so I understand it.

They do! For instance, for the crypto problems David studies, security experts wrote NaCl, which brought cutting-edge curve and AEAD crypto to developers with an almost user-proof interface.

Eh, my experience with crypto libraries is that they're poorly written from a user perspective (regardless of whether they do it right under the hood). The hardest parts of the Matasano security challenges are those that require me to get some off-the-shelf crypto library to perform as expected.

For example, Rust's main AES implementation requires you to delve into details you shouldn't have to care about in order to get it to work. It doesn't let you just pass in Vec<u8> bytes and get Vec<u8> back out. It doesn't even let you run your data through a converter function to match the library's internal representation.

No, you have to learn low level details of how to instantiate a special fixed-size mutable buffer type and then figure out how to manage that and maintain proper ownership when extracting.

I can't speak to Rust's library, but Go's NewGCM/AEAD.Seal is about as good an interface as I think you could reasonably ask for.

Oh, nice, that's what I wish Rust had; instead, it makes you navigate through this:


Isn’t rust-crypto abandoned? I didn’t think it was the best practice here.

EDIT: yep, not touched since 2016.

I'm not a crypto expert, but I asked someone who is and he pointed me to https://docs.rs/miscreant/0.4.2/miscreant/aead/trait.Aead.ht...

Hm, rust-crypto was what came up when I searched -- I don't remember why I chose it over the others. (And it's not like 2016 was the wild-west days of poor Rust design and unnecessary mutable params.)

Miscreant looks like it has just as bad an interface as the one I used. You have to do a lot of work -- and deal with unnecessary low-level details of mutable fixed-length buffers -- to implement the thing you really want, which is a function that takes a byte stream of any length and a key, and returns another byte stream.

I think what you're after is:


These take a byte slice, and return an (allocated) byte vector.

The APIs you're talking about are special in-place ones for Miscreant's #![no_std] support, i.e. for embedded use or other usages which want to avoid heap allocations.

It's nice to support both of these usage patterns, because the allocating version has nicer ergonomics, but not everyone in the world has a heap.

rust-crypto probably has a lot of google juice because it used to be the premier project. Then its team disbanded. While you're right that 2016 isn't super long ago, it was originally created before Rust 1.0.

Const generics will also help these sorts of interfaces significantly.

But that's the thing: it's the premier project, and doesn't expose the simple interface that the Go library did.

Looking through my code for cryptopals, I had to do these extra steps that aren't relevant to my problem (encrypting the plaintext with a key and cipher):

1) Create a mutable Vec<u8> vector from the plaintext.

2) Use that vector to create a mutable buffer of a type in the crypto library.

3) Create a second mutable Vec<u8> vector for the output.

4) Do step 2) for it also.

5) Call the encryption function with those two buffers as arguments. (This step's fine because calling an encryption function is unavoidable.)

6) Convert one of them back to Vec<u8> via `take_read_buffer().take_remaining().to_vec()`

(I'm ignoring the steps where you declare the key and cipher parameters since those are necessary)

And getting those to work required looking up the types of their library buffers.

But I don't care about the buffers where the library writes to when encrypting. I just want to be able to pass in a Vec<u8> and get a new one back out. At most, I should have to convert to and from the library's internal representation. Instead, I have to learn about and manage mutable variables and their ownership.

There's an entire section in my 2019 blog post about this:


rust-crypto has the most upstream dependencies, but is an unmaintained, abandoned project.

There are a number of other awesome cryptography projects in Rust (in fact some of the most advanced cryptography in the world is being developed in Rust), but they suffer from an awareness problem.

The Go standard library's cryptography, while full-featured and very mature, does suffer from a particular problem: it's a mixture of high-level and low-level APIs all within a single namespace / module. This makes it difficult to compare to Rust projects, because it's an enormous omnibus library, whereas in Rust there is no equivalent to that because the projects are more compartmentalized, and in my opinion that arrangement is preferable to what the Go standard library is doing. See also:


The closest thing to an all-in-one crypto library is ring. There's a notable difference between ring and the Go standard library though: ring presents a very high-level, hard-to-misuse API. This makes ring unsuitable for usages where you want "shoot yourself in the foot" cryptographic primitives which are difficult to use correctly and fail catastrophically unless used as such.

For the Rust equivalent of these "shoot yourself in the foot" cryptographic interfaces like Go "crypto/cipher" types such as Block, BlockMode, and Stream, take a look at the Rust Cryptography project:


Miscreant is built on top of these, and presents an AEAD interface, which could eventually be upstreamed into RustCrypto so Miscreant just implements it.

Not being a Rust person, my understanding was that Ring was where it was at now.

ring is great, I have no idea what its interface looks like, though. I've only used it transitively.

I'm also working on that with www.discocrypto.com but admittedly my API is not there yet.

Tink from Google is making good progress.

Libsodium is often mentioned as having a good API.

PS: it's not just frameworks. Languages like Rust and Go are doing huge things as well for developers.

Agree! Rust and Go both have really good nuts-and-bolts frameworks for crypto (I especially admire them for having misuse-resistant interfaces despite the need for standard libraries to support bad crypto protocols). Go's TLS is also, I think, a pretty major achievement.


Only pointing that out because apparently this was a big deal on Twitter a couple weeks back, where one group of experts was arguing that you weren't a real security expert without being able to code, and the other group was arguing the contrary.

I don't know what "security expert" means, but I do believe that you can't secure software if you can't code, and I do believe most real-world security jobs require you to secure software.

Don't tell me; tell the folks on Twitter. I was surprised as well. Quite a few of them took the "You don't need to code" side.

How can you be a security expert insofar as the domain is software if you can't code?

They can understand common patterns and find problems. That doesn't mean they can fix them.

You can probably tell people how to physically secure their premises, or not to put passwords on post-it notes, without knowing how to code, but you can't help them write secure code if you can't, you know, code.

This is like taking driver's ed from someone who can't drive. In theory they could reiterate material they had read on the matter, and some of it might even be of value, but only a fool would learn to drive from someone who can't.

The fact that not everyone is willing or able to see the emperor is a nudist doesn't mean his threads are real.

Many code-related security problems can be discovered externally without even seeing the code (simple examples: XSS attacks, SQL injection problems, etc.) Generic advice like "escape your inputs" can be provided. I am not saying this is valuable advice...

Honestly, I have no clue. That's a question for the security experts on Twitter.

I'm not familiar with it, but is an almost user-proof interface the same as having dialed the accessibility down to zero? Or does it mean really good, such that the user can hardly make a mistake?

I mean, pretty sure you mean "Never roll your own crypto"... most people should be writing some bit of "security". You probably want authentication, authorization, audit, and so on in your app.

Now, I do think I agree with the spirit of what you mean: frameworks and libraries should make this stuff easy. I think for the most part they do; for example, Django has authentication, Ruby has gems like Devise, and Python has stuff like flask-security.

So they exist, there are solutions; it's just not always well known by developers. So they do what they have been trained to do: just write their own (faster to reason about my own code than to consume another dependency with all its archaic config!).

Yeah, I was taking a bit of license with the term. The problem is that rolling your own non-crypto security is still fraught with danger, and not all languages provide good libraries that hide the dangerous choices for you. Even worse, those libraries often aren't written or audited by security experts either, which is a bigger problem.

That kind of demand requires a simple mindset: make doing the right thing easier than not. I was impressed, when working for mature software companies, how the teams providing internal tools that were easily overlooked (security, data logging, experimentation framework) managed to lead by having the best option easily accessible.

You don’t want people to write passwords in your codebase? Make getting the password from an environment variable the easiest path; document where the variables are and how to refresh them; better: rotate them often, and have the system that rotates them support the use case better than a human with an easy-to-compromise post-it.
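As a sketch of that "one obvious path" idea in Python - `get_secret` and `DB_PASSWORD` are hypothetical names:

```python
import os

def get_secret(name):
    """Fetch a secret from the environment; fail loudly if it's missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"secret {name!r} is not set -- see the team docs for how to "
            "provision and rotate it (never hardcode it in the repo)")
    return value
```

The point is less the three lines of logic than the error message: the easy path documents itself, so nobody is tempted back toward a hardcoded string.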

You don’t want people to store personally-identifiable information? Make all the relevant information associated with an irreversible hash; have all the business logic that processes logs into something legible match on that hash. Soon, no one will use the `user_id` that is also used in the public profile URL.
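A minimal version of that irreversible-hash idea in Python, using a keyed HMAC so the pseudonym can't simply be brute-forced from the (small) user-id space; `ANALYTICS_KEY` is a placeholder for a secret that would live outside the analytics environment:

```python
import hashlib
import hmac

ANALYTICS_KEY = b"rotate-me-and-keep-me-out-of-the-warehouse"  # placeholder

def analytics_id(user_id):
    """Stable pseudonym: the same user joins across tables, but an
    analyst holding the table alone can't map rows back to public ids."""
    mac = hmac.new(ANALYTICS_KEY, str(user_id).encode(), hashlib.sha256)
    return mac.hexdigest()
```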

It’s a significant amount of initial work, but it beats repeating basic things for every new joiner, by a mile.

> You don’t want people to store personally-identifiable information? Make all the relevant information associated with an irreversible hash; have all the business logic that processes logs into something legible match on that hash. Soon, no one will use the `user_id` that is also used in the public profile URL.

I don't understand this. The user_id in the URL is not itself meaningful. If you know that my user_id in Slack is 642819733, that information is neither personally identifiable nor useful in any way.

But if you exfiltrate a bunch of data from Slack and learn user 642819733's birthday and billing address, then you've probably managed to identify me. You've done it even if you didn't know beforehand that there _was_ a user_id 642819733. The problem wasn't that 642819733 was a sensitive value; the problem was that it was the same value in all of Slack's records. Which your solution doesn't address.

The case that I have in mind assumes that you are an analyst who should know better, and was asked about… say, retention. You need a personal identifier in that table to match successive purchases by the same individual.

You can easily get your friend’s `user_id` because, well, they share it on-line. If it is easy for me to take that 642819733 and find out how much you spend, I can confront you and claim you are registered to more Slack channels than you have told me about. So it’s less “exfiltration” and more: some junior people are commonly given detailed data for legitimate reasons, and you should make it hard for them to abuse it.

There are plenty of cases that I’ve been directly involved with where I was uncomfortable, but not sure I want to share them. I’ll just say this: Facebook analysts typically can type their own ID off the top of their head, and most know the ID of their close co-workers. I’ve never seen anything bad, but doing something bad felt a little too easy at times — and that _ease_ was a big deal.

This is a little tangential to your point, but per your example: it's generally inadvisable to track objects (including users) with incrementing integer sequences.
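A sketch of the usual alternative, assuming Python: random identifiers that leak neither row count nor ordering:

```python
import secrets
import uuid

def new_user_id():
    # uuid4 is random: it reveals nothing about how many users exist
    # or when this one was created, unlike an incrementing integer.
    return str(uuid.uuid4())

def new_share_token():
    # for capability-style URLs, an unguessable token
    return secrets.token_urlsafe(16)
```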

I've heard about doing things on the other end of the spectrum as well: make insecure things harder to do. You want to look at this cookie? Well you'll have to go through some layer of encryption.

Another problem I have with this sort of security person is the assumption that security risks always trump other risks.

A few years ago we found some vulnerabilities on a server of ours, and some of the security folks were really upset we didn't immediately shut down the server. I argued that we needed to weigh the risks of the vulnerability vs the risk of shutting down the server immediately. Both actions involved risk, and we needed to weigh which was more severe.

Just because something is a security risk doesn't make it the most important thing automatically. It has to be analyzed in the context, and triaged in similar ways as other risks we discover in our work.

The problem with this logic is that the risk of "shutting the server down" is almost always financial loss to the software firm, and the risk of "leaving it up" is often harm to end-users, who are an externality in the equation you're describing.

Sometimes, but shutting down the server can also harm users, if it is providing services that those customers rely on.

Security is not the most important thing. It's second at best.

I am reminded of Yegge's old platform rant[1], and the part about "Accessibility":

> When software -- or idea-ware for that matter -- fails to be accessible to anyone for any reason, it is the fault of the software or of the messaging of the idea. It is an Accessibility failure.

> Like anything else big and important in life, Accessibility has an evil twin who, jilted by the unbalanced affection displayed by their parents in their youth, has grown into an equally powerful Arch-Nemesis (yes, there's more than one nemesis to accessibility) named Security. And boy howdy are the two ever at odds.

> But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.

1. The first copy Google found for me is here. People should read the whole thing: https://gist.github.com/chitchcock/1281611

For cat sharing startups, maybe, but for a large fraction of real-world apps, not so much: it is better not to be available than to compromise the data of your users.

Things like XSS prevention should just be in every web programming standard library. A lot of bugs exist only because people have to write their own solution to so many problems.

Now we're moving so many things to a purely digital world, but the foundation is like sand. I often think today's software is like the Tower of Pisa.

I work in the automation industry with PLCs. There are funny things possible with "Industry 4.0", but keeping it secure... keeping it running for 20-30-40 years...

A lot of this may come down to learning materials.

If you go pick up some books and tutorials on how to learn X, I can pretty much guarantee that they're going to follow a pattern of "here's how to build a basic, functional, and maybe useful application", and security is going to be an afterthought if it's considered at all. As you continue to learn, there's a fair chance that you'll start by expanding on some of what you've already done in building that sample application - the one with no security.

This was one of the critiques of "Applied Cryptography"; see https://blog.cryptographyengineering.com/2011/11/07/in-defen...

> The detailed argument goes something like this: Applied Cryptography demystified cryptography for a lot of people. By doing so, it empowered them to experiment with crypto techniques, and to implement their own code. No problem so far.

> Unfortunately, some readers, abetted by Bruce’s detailed explanations and convenient source code examples, felt that they were now ready to implement crypto professionally. Inevitably their code made its way into commercial products, which shipped full of horribly ridiculous, broken crypto implementations. This is the part that was not so good. We’re probably still dealing with the blowback today.

The follow-up book from Schneier (Cryptography Engineering) included a whole chapter on bringing in experts if you deal with cryptography.

> the mindset of someone who is writing an application is to build something cool

As a developer, this rubbed me the wrong way. "Building something cool" is definitely a thing we all like to do, and every developer worth their craft has introduced some features just because they found them "cool"; that said, a developer's "mindset" focuses almost completely on several activities, with varying levels of importance depending on a number of factors:

    a) building something that works as close to the specifications as possible
    b) making sure that other developers can maintain it, and also that it can be deployed, tested, etc.
    c) (optional) making it as efficient as possible

Or, as the saying attributed to Kent Beck [1] goes: "make it work, make it right, make it fast". The "cool" factor is usually present, but at a much lower level of importance than the three above. There are others, sitting in between and sometimes even rising to the top, with security definitely being one of them. But because of the focus on "a" and "b" in particular, developers simply don't have the level of understanding of security issues that the experts seem to expect ("idiots" being a shortcut for anyone below that level). So they introduce security features as best they can, such as the example of the key in the PHP app described in the article.

[1] http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast

There are lots of good reasons why application developers sometimes don't write secure code. For example, security can be an afterthought if the logic is so complex that you can barely get it working in the first place. Another example is that it's not necessarily clear when you write a component that it will be exposed to potentially malicious inputs. And there's always pressure to ship a working product so, without a clear mandate from the organization, security can fall by the wayside as a priority.

One thing that's not a valid reason is the one that the author gave. "Hur dur I develop web apps and I've never heard of XSS" isn't excusable. You have to know about this stuff, even if it's not your expertise.

Where do you draw the line? They need to know about XSS. Presumably you think they need to know about SSRF, too. What about clickjacking? Cache poisoning? What percentage of web developers in the industry do you think understand cache poisoning attacks?

There's rarely a bright line for competence in anything. But I'd say that, at least, everyone should know about something as basic as XSS. I'd throw in SQL injection and CSRF as well. As you get more obscure, you still expect most people to know about it, but it gets less astonishing if a given person doesn't.

Also, there's a legitimate case for vulns that exist so far outside of your stack that you never run into them. It would be surprising, but not actually dangerous, if a pure Java developer wasn't familiar with buffer overflows, heap spraying, ASLR, etc. Same for most application-level developers and hardware-level issues like Spectre. But I maintain that the author's example - XSS, for a web developer - is way over the line.

The answer to that would ideally be: all of them. We, as developers, should be aware and ready to learn and build experience around security, but focus and ownership are a different thing - you still want experts to drive specialized fields. I run all my architecture reviews by the security team and ask them to comment on internal RFCs. It's a very similar approach to infrastructure/devops, except the gap between dev and security is wider than between dev and ops. I recognize that this is very dependent on both organizations and individuals, but I shy away from hiring people who don't have, at the very minimum, an interest in or awareness of security. I also shy away from businesses that see security as an afterthought.

I just want the standard to be coherent. If it's "you have to know everything", then fine, let's say that. But saying things like "at least they should know the OWASP Top 10" doesn't make much sense to me.

Aren't most (non-insider) data breaches the result of OWASP top 10 things though? If the goal of security is to make sure the costs of stealing an asset exceed the value of that asset, then ensuring that the least-expensive-to-exploit vulnerabilities have been mitigated first seems like a good place to start.

The fact that there may be other vulnerabilities that are just as dangerous doesn't mean it's equally bad to have vulnerabilities like that, if they require more time and sophistication to exploit. E.g. you shouldn't have string comparison timing vulnerabilities in your code, but having them is less bad than a SQL injection attack even if they can both lead to the same result.
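For readers who haven't met that class of bug: comparing secrets with `==` can leak how many leading bytes matched, because the comparison short-circuits at the first difference. The stdlib fix, sketched in Python:

```python
import hmac

def check_token(supplied, expected):
    # hmac.compare_digest runs in time independent of where the
    # values first differ, unlike the short-circuiting `==`.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```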

No, I don't think there's much science to the OWASP Top 10.

I think drawing the line at the OWASP top 10 is reasonable.

Ok, but if you're drawing the line at the vulnerabilities that have gotten the most promotion and documentation and not the ones that have the most impact or prevalence, isn't your standard cosmetic? For instance: you've let developers off the hook for SSRF.

Isn't OWASP top 10 based on impact and prevalence?

Clearly not; SSRF isn’t in the top 10, but log misconfiguration is.

The OWASP Top 10 is fashion, really, nothing much more.

In terms of ranking, not in terms of selection.

> and you realize that you haven't written the login page yet. It now dawns on you that you will have to figure out some rate-limiting password attempt mechanism, a way for people to recover their passwords, perhaps even two-factor authentication... But you don't want to do all of that, do you?
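The rate-limiting piece of that list can be sketched in a few lines of Python (a fixed-window limiter with hypothetical names; production systems usually back this with Redis or similar):

```python
import time

class LoginRateLimiter:
    def __init__(self, max_attempts=5, window_seconds=300, clock=time.monotonic):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.clock = clock               # injectable for testing
        self._attempts = {}              # username -> attempt timestamps

    def allow(self, username):
        """Record an attempt and return whether it's within the limit."""
        now = self.clock()
        recent = [t for t in self._attempts.get(username, [])
                  if now - t < self.window]
        recent.append(now)
        self._attempts[username] = recent
        return len(recent) <= self.max_attempts
```

Even this toy version shows why the feature isn't free: you immediately have to decide on windows, storage, and what to do about lockout-as-denial-of-service.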

If you are trying to write your own login and authentication system, I dare say that you probably are an idiot or really smart.

Statistically, it’s probably the former.

Most applications we look at supply some sort of bespoke login and authentication system.

It is more and more common for frameworks to include blueprints for login pages, but that didn't use to be the case at all. I wouldn't say it's that common nowadays either.

IME these usually have shortcomings that the business won't accept, a few I've come across:

1. Only AD or only database accounts when you need a combination of both,

2. Only offering role-based security when you need something more granular. Side note - a lot of managers expect to be able to just drop this in as a late requirement, but it usually requires significant work.

3. Not offering documentation on integration. If it includes a permissions system, for instance, then I also need a way to join against a table of granted permissions for performance.

4. Providing their own UI that's hard to get around. Great for a PoC but often harder to integrate with the rest of the system than writing the login page from scratch.

Generally I try to use an existing authentication system but have often had to create a custom security framework.

Most frameworks have an easy way to offload user authentication to third parties like Facebook, Amazon, Twitter, Google, etc. I would trust any of them to get it “right” before I would trust most developers.

If you need authentication for an internal corporate app, there is always authentication with the corporate directory services or something like Okta.

Security doesn't exist in a vacuum. It's only one of multiple concerns. For instance:

> offload user authentication to third parties like Facebook, Amazon, Twitter, Google, etc.

Have fun getting that through the GDPR compliance audit.

The way you have defined "developer", you have defined an idiot. An unethical, incompetent idiot.

Our purpose, as computer engineers, is to convince piles of carefully-cooked rocks ("computers") to do a very specific thing repeatedly ("computing"). Since humans are terrible at understanding specificity, we design languages and abstractions that help us encode our goals into computer-readable codes.

At no point is "build something cool" the goal other than during recreational programming; our goal is not even to build, necessarily, but to understand the requirements of our users and to help them use their resources more effectively. To this end, "cool" is totally worthless; it is a facet of design, of product, and of marketing. Let them focus on the spectacle. We must focus on the specification.

Why don't developers care about security? Because we have raised them to not care about security. We have failed to instill a desire for security into our culture. Consider: Computers have been remotely hackable since the dawn of the Internet. (Check out the history of SSH or email for terrifying examples.) We do not have the cultural norms necessary for getting security correct; we barely know how to distrust soliciting strangers on the street, let alone Martian or Christmas-tree TCP packets.

How can I help? Well, I can try to shove POLA [0] and capabilities [1] down everybody's throat, but so far, it's been like trying to convince people that the world is a globe [2].

Here's how you can help. Pick up a single easy security meme, think about it for a while, and then pass it on. I recommend starting with "security should not be optional" if you're a FLOSS developer, or "everybody is on the security team" if you have a corporate employer.

[0] https://en.wikipedia.org/wiki/Principle_of_least_privilege

[1] http://habitatchronicles.com/2017/05/what-are-capabilities/

[2] https://corbinsimpson.com/words/globe.html

The current story above this is "Oklahoma Department of Securities Leaked Millions of Files".

Which was due to a server not properly secured from the Internet - so most likely a sysop or devop, or Dave the guy from accounts who is pretty good with computers, messed up. Cute though.

Essential security practices (not advanced pentesting skills) should be basic software quality concerns. Full stop. "But it works" is not an excuse, e.g. for doing SQL queries without proper parameter binding.
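For the parameter-binding example, a minimal sqlite3 sketch (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # UNSAFE would be: f"SELECT name FROM users WHERE name = '{name}'"
    # SAFE: let the driver bind the parameter, so input is data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With binding, a classic injection payload is just a weird username that matches nothing, rather than a change to the query's logic.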

All generalizations are false.


I'm sorry, but this is way too clickbait-ish for my taste. I always assume that no one is stupid.

You may do, but that isn't ubiquitous.

I know a lot of developers developers developers who are tactless, careless, and yeah basically idiots. So much of Silicon Valley is made up of this type that stands next to working legacy code and takes credit for it.

