I do think our industry, overall, has a problem with what I call the "alpha nerd" clash. People who, like me, often prided themselves on their intelligence relative to their peers in school, and were perhaps ostracised for their intellect or its pursuits, still chase that need to feel smarter than others, and that can lead to self-congratulatory sneering.
But it's never conducive to a high-functioning team, and effective teams deliver effective products. A team is nearly always a vertical slice of a company to some extent: you might have a team of developers, but the testers, the product owners, the BAs, and the sales people are all part of the team delivering that product. While you have experience and knowledge they don't have, they also have experience and knowledge that you don't, so some empathy and humility are essential.
This, and I'm typing this out surrounded by a group of these people right now. It's a local-scope problem that then manifests in more global and troubling issues, like diversity problems and predatory behavior among employees in the workplace.
If you're making a product, then it's your responsibility to do so in a way that doesn't put your users or the general public in danger. Not knowing security best practices when you get started doesn't make you an idiot, but releasing something without first putting in the work to learn and then implement the best practices kind of makes you a shitty person.
I mean otherwise you might as well just say something like, "I just wanted to make a car, I don't care if it's safe or not because that's not the fun part."
Want to allow users to upload images?
* Make sure submitted files are actually images
* Limit file size to prevent denial of service
* Normalize the filename to prevent directory traversal
* Add a randomized component to filenames to prevent users from overwriting each other's files
* Serve file with the proper content type
* HTML encode filename for display
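The filename-handling bullets above can be sketched in a few lines of Rust. This is an illustration only, with helper names of my own invention; a real application should prefer a vetted library, and a proper CSPRNG or UUID rather than a timestamp for the random component:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Keep only safe characters so inputs like "../../etc/passwd"
/// can't traverse out of the upload directory.
fn normalize_filename(name: &str) -> String {
    name.chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_' {
                c
            } else {
                '_'
            }
        })
        .collect()
}

/// Prefix a pseudo-random component so two users' "photo.png" don't collide.
/// (Timestamp nanos stand in for a real CSPRNG/UUID here.)
fn unique_filename(name: &str) -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos();
    format!("{}_{}", nanos, normalize_filename(name))
}

fn main() {
    assert_eq!(normalize_filename("../../etc/passwd"), ".._.._etc_passwd");
    println!("{}", unique_filename("photo.png"));
}
```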
I don't think that's negligence, it's just not something you'd necessarily know until you saw it. And this doesn't even consider language quirks and gotchas that are even more esoteric.
Right, but presumably you're using the standard techniques to mitigate XSS, e.g. sanitizing all other text input, using an X-XSS-Protection header, using a CSP that only allows scripts that have been whitelisted, etc.
Even if you don't know that an SVG can contain JavaScript, that shouldn't put your users at risk if you're doing everything else correctly. And then when that gets caught in an audit or reported by a user or as part of a bug bounty, you can fix it. (Although if you're going to be serving up a certain UGC file type to users, I don't think it's unreasonable to expect people to Google for vulnerabilities associated with that file type.)
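For the "sanitizing all other text input" part, the core of HTML-encoding is a small, mechanical transform. A sketch in Rust; production code should lean on a maintained escaping or templating library rather than hand-rolling this:

```rust
/// Escape the five characters that matter in HTML text/attribute contexts.
fn html_escape(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#x27;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    // A hostile filename is rendered inert when displayed.
    assert_eq!(
        html_escape("<svg onload=alert(1)>"),
        "&lt;svg onload=alert(1)&gt;"
    );
}
```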
Developers shouldn't be expected to have perfect security knowledge or to never make mistakes, but I think it is reasonable to expect them to not be grossly negligent. I don't want to live in a world where only people wealthy enough to afford full security audits before they get any traction should be allowed to launch products, but I also think developers should be held accountable if they're recklessly endangering people.
In the ideal world, companies would give developers enough time to figure out security. But in practice, most companies/businesses just want you to ship ASAP.
Society doesn't currently offer any mechanism to pay for that though. E.g. if you as a developer make a product, you can't force VCs to fund a security audit, nor can you force security folks to work for equity, nor is there any public money available to pay for this.
If companies don't include security in their estimates, they shouldn't expect developers to put in personal time to implement it properly.
It is a lot more fun to poke holes than to be poked, though, so I understand it.
For example, Rust's main AES implementation requires you to delve into details you shouldn't care about in order to get it to work. It doesn't let you just pass in Vec<u8> bytes and get Vec<u8> back out. It doesn't even let you run it through a converter function to match the library's internal representation.
No, you have to learn low level details of how to instantiate a special fixed-size mutable buffer type and then figure out how to manage that and maintain proper ownership when extracting.
EDIT: yep, not touched since 2016.
I'm not a crypto expert, but I asked someone who is and he pointed me to https://docs.rs/miscreant/0.4.2/miscreant/aead/trait.Aead.ht...
Miscreant looks like it has just as bad an interface as the one I used. You have to do a lot of work -- and deal with unnecessary low-level details of mutable fixed-length buffers -- to implement the thing you really want: a function that takes a byte stream of any length and a key, and returns another byte stream.
These take a byte slice, and return an (allocated) byte vector.
The APIs you're talking about are special in-place ones for Miscreant's #![no_std] support, i.e. for embedded use or other usages which want to avoid heap allocations.
It's nice to support both of these usage patterns, because the allocating version has nicer ergonomics, but not everyone in the world has a heap.
Const generics will also help these sorts of interfaces significantly.
Looking through my code for Cryptopals, I had to do these extra steps that aren't relevant to my problem (encrypting the plaintext with a key and cipher):
1) Create a mutable Vec<u8> vector from the plaintext.
2) Use that vector to create a mutable buffer of a type in the crypto library.
3) Create a second mutable Vec<u8> vector for the output.
4) Do step 2) for it also.
5) Call the encryption function with those two buffers as arguments. (This step's fine because calling an encryption function is unavoidable.)
6) Convert one of them back to Vec<u8> via `take_read_buffer().take_remaining().to_vec()`
(I'm ignoring the steps where you declare the key and cipher parameters, since those are necessary.)
And getting those to work required looking up the types of their library buffers.
But I don't care about the buffers where the library writes to when encrypting. I just want to be able to pass in a Vec<u8> and get a new one back out. At most, I should have to convert to and from the library's internal representation. Instead, I have to learn about and manage mutable variables and their ownership.
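The interface being asked for is roughly this shape. This is a hypothetical sketch: the XOR body is a stand-in for a real AES call and provides no security at all; it exists only to show the bytes-in, bytes-out ergonomics, with no library-specific buffer types leaking out:

```rust
/// The desired ergonomics: bytes and key in, bytes out.
/// The repeating-key XOR body is a placeholder for a real cipher call.
fn encrypt(key: &[u8], plaintext: &[u8]) -> Vec<u8> {
    plaintext
        .iter()
        .zip(key.iter().cycle())
        .map(|(p, k)| p ^ k)
        .collect()
}

fn decrypt(key: &[u8], ciphertext: &[u8]) -> Vec<u8> {
    encrypt(key, ciphertext) // XOR is its own inverse
}

fn main() {
    let key = b"YELLOW SUBMARINE";
    let msg = b"hello world".to_vec();
    // Round-trips without the caller ever touching a library buffer type.
    assert_eq!(decrypt(key, &encrypt(key, &msg)), msg);
}
```

Whether the library allocates internally or takes an in-place buffer is then an implementation detail hidden behind this signature.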
rust-crypto has the most upstream dependencies, but is an unmaintained, abandoned project.
There are a number of other awesome cryptography projects in Rust (in fact some of the most advanced cryptography in the world is being developed in Rust), but they suffer from an awareness problem.
The Go standard library's cryptography, while full-featured and very mature, suffers from a particular problem: it mixes high-level and low-level APIs within a single namespace/module. That makes it difficult to compare to Rust projects: Go has one enormous omnibus library, while Rust has no equivalent because its projects are more compartmentalized, an arrangement I prefer to what the Go standard library does. See also:
The closest thing to an all-in-one crypto library is ring. There's a notable difference between ring and the Go standard library though: ring presents a very high-level, hard-to-misuse API. This makes ring unsuitable for usages where you want "shoot yourself in the foot" cryptographic primitives, i.e. ones that are difficult to use correctly and fail catastrophically when misused.
For the Rust equivalent of these "shoot yourself in the foot" cryptographic interfaces, like Go "crypto/cipher" types such as Block, BlockMode, and Stream, take a look at the RustCrypto project:
Miscreant is built on top of these, and presents an AEAD interface, which could eventually be upstreamed into RustCrypto so Miscreant just implements it.
Tink from Google is making good progress.
Libsodium is often mentioned as having a good API.
PS: it's not just frameworks. Languages like Rust and Go are doing huge things as well for developers.
Only pointing that out because apparently this was a big deal on Twitter a couple of weeks back, where one group of experts argued that you weren't a real security expert if you couldn't code, and the other group argued the contrary.
This is like taking driver's ed from someone who can't drive. In theory they could reiterate material they had read on the matter, and some of it may even be of value, but only a fool would learn to drive from someone who can't.
The fact that not everyone is willing or able to see the emperor is a nudist doesn't mean his threads are real.
Now, I do think I agree with the spirit of what you mean: frameworks and libraries should make this stuff easy, and for the most part they do. For example, Django has authentication built in, Ruby has gems like Devise, and Python has packages like Flask-Security.
So the solutions exist; they're just not always well known by developers. So they do what they have been trained to do: write their own ("faster to reason about my own code than to consume another dependency with all its archaic config!").
You don’t want people to write passwords into your codebase? Make getting the password from an environment variable easy to discover; document where the variables live and how to refresh them. Better: rotate them often, and have the system that rotates them support the use case better than a human with an easy-to-compromise post-it.
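A minimal sketch of the env-var pattern in Rust; the variable name `DB_PASSWORD` is hypothetical and should match whatever your deploy system documents:

```rust
use std::env;

/// Fetch a secret from the environment instead of hard-coding it in the repo.
/// The variable name passed in ("DB_PASSWORD" below) is an example.
fn secret_from_env(name: &str) -> Result<String, env::VarError> {
    env::var(name)
}

fn main() {
    match secret_from_env("DB_PASSWORD") {
        Ok(_) => println!("password loaded from environment"),
        // Failing loudly beats silently falling back to a default credential.
        Err(_) => eprintln!("DB_PASSWORD is not set; refusing to start"),
    }
}
```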
You don’t want people to store personally-identifiable information? Key all the relevant information to an irreversible hash; have all the business logic that processes logs into something legible match on that hash. Soon, no one will use the `user_id` that also appears in the public profile URL.
It’s a significant amount of initial work, but it beats repeating basic rules for every new joiner, by a mile.
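One way to sketch the irreversible-hash idea. Note the hedges: `std`'s `DefaultHasher` is NOT cryptographic and is used here purely so the example runs without dependencies; a real deployment would want a keyed cryptographic construction such as HMAC-SHA256, and the function and salt names are my own:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a pseudonymous analytics ID from the public user_id plus a
/// server-side secret salt, so logs and dashboards never carry the ID
/// that appears in profile URLs. (DefaultHasher stands in for HMAC here.)
fn analytics_id(user_id: u64, secret_salt: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret_salt.hash(&mut h);
    user_id.hash(&mut h);
    h.finish()
}

fn main() {
    let a = analytics_id(642819733, "server-side-secret");
    let b = analytics_id(642819733, "server-side-secret");
    // Stable: the same user always maps to the same pseudonymous ID,
    // so joins across log tables still work.
    assert_eq!(a, b);
}
```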
I don't understand this. The user_id in the URL is not itself meaningful. If you know that my user_id in Slack is 642819733, that information is neither personally identifiable nor useful in any way.
But if you exfiltrate a bunch of data from Slack and learn user 642819733's birthday and billing address, then you've probably managed to identify me. You've done it even if you didn't know beforehand that there _was_ a user_id 642819733. The problem wasn't that 642819733 was a sensitive value; the problem was that it was the same value in all of Slack's records. Which your solution doesn't address.
You can easily get your friend’s `user_id` because, well, they share it online. If it’s easy for me to take that 642819733 and find out how much you spend, I can confront you and claim you are registered to more Slack channels than you have told me about. So it’s less “exfiltration” and more: some junior people are commonly given detailed data for legitimate reasons, and you should make it hard for them to abuse it.
There are plenty of cases that I’ve been directly involved with where I was uncomfortable, but not sure I want to share them. I’ll just say this: Facebook analysts typically can type their own ID off the top of their head, and most know the ID of their close co-workers. I’ve never seen anything bad, but doing something bad felt a little too easy at times — and that _ease_ was a big deal.
A few years ago we found some vulnerabilities on a server of ours, and some of the security folks were really upset we didn't immediately shut down the server. I argued that we needed to weigh the risks of the vulnerability vs the risk of shutting down the server immediately. Both actions involved risk, and we needed to weigh which was more severe.
Just because something is a security risk doesn't make it the most important thing automatically. It has to be analyzed in the context, and triaged in similar ways as other risks we discover in our work.
I am reminded of Yegge's old platform rant, and the part about "Accessibility":
> When software -- or idea-ware for that matter -- fails to be accessible to anyone for any reason, it is the fault of the software or of the messaging of the idea. It is an Accessibility failure.
> Like anything else big and important in life, Accessibility has an evil twin who, jilted by the unbalanced affection displayed by their parents in their youth, has grown into an equally powerful Arch-Nemesis (yes, there's more than one nemesis to accessibility) named Security. And boy howdy are the two ever at odds.
> But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.
1. The first copy Google found for me is here. People should read the whole thing: https://gist.github.com/chitchcock/1281611
Nowadays we move so many things into a purely digital world, but the ground underneath is like sand. I often think today's software is like the Tower of Pisa.
I work in the automation industry with PLCs. Fun things are possible with "Industry 4.0", but keeping it secure, and keeping it running for 20, 30, 40 years...
If you go pick up some books and tutorials on how to learn X, I can pretty much guarantee that they're going to follow a pattern of "here's how to build a basic functional and maybe useful application" and security is going to be an afterthought if it's considered at all. As you continue to learn, there's a fair chance that you'll start with expanding on some of what you've already done in building that sample application - the one with no security.
> The detailed argument goes something like this: Applied Cryptography demystified cryptography for a lot of people. By doing so, it empowered them to experiment with crypto techniques, and to implement their own code. No problem so far.
> Unfortunately, some readers, abetted by Bruce’s detailed explanations and convenient source code examples, felt that they were now ready to implement crypto professionally. Inevitably their code made its way into commercial products, which shipped full of horribly ridiculous, broken crypto implementations. This is the part that was not so good. We’re probably still dealing with the blowback today.
The follow-up book from Schneier (Cryptography Engineering) included a whole chapter on bringing in experts if you deal with cryptography.
As a developer, this rubbed me the wrong way. "Building something cool" is definitely a thing we all like to do, and every developer worth their craft has introduced some features just because they found them "cool"; that said, developers' "mindset" nearly completely focuses on several activities, with a varying level of importance depending on a number of factors:
a) building something that works as close to the specifications as possible
b) making sure that other developers can maintain it, and also that it can be deployed, tested etc
c) (optional) making it as efficient as possible
One thing that's not a valid reason is the one that the author gave. "Hur dur I develop web apps and I've never heard of XSS" isn't excusable. You have to know about this stuff, even if it's not your expertise.
Also there's a legitimate case for vulns that exist so far outside of your stack that you never run into them. It would be surprising, but not actually dangerous, if a pure Java developer wasn't familiar with buffer overflows, heap spraying, ASLR, etc. Same for most application-level developers and hardware-level issues like Spectre. But I maintain that the author's example of XSS for a web developer is way unreasonable.
The fact that there may be other vulnerabilities that are just as dangerous doesn't mean it's equally bad to have vulnerabilities like that, if they require more time and sophistication to exploit. E.g. you shouldn't have string comparison timing vulnerabilities in your code, but having them is less bad than a SQL injection attack even if they can both lead to the same result.
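For reference, the string-comparison timing issue mentioned above comes from comparisons that exit early on the first mismatched byte, which leaks how much of a secret an attacker has guessed. The standard fix is to compare every byte unconditionally. A sketch; real code should use a vetted constant-time crate rather than this hand-rolled version:

```rust
/// Compare two byte strings without short-circuiting on the first mismatch,
/// so the comparison time doesn't depend on how many leading bytes matched.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // OR together the XOR of every byte pair; any difference leaves a bit set.
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"secret-token", b"secret-token"));
    assert!(!constant_time_eq(b"secret-token", b"secret-tokex"));
}
```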
The OWASP Top 10 is fashion, really, nothing much more.
If you are trying to write your own login and authentication system, I dare say that you probably are an idiot or really smart.
Statistically, it’s probably the former.
1. Only AD or only database accounts when you need a combination of both,
2. Only offering role-based security when you need something more granular. Side note: a lot of managers expect to be able to just drop this in as a late requirement, but it usually requires significant work.
3. Not offering documentation on integration. If it includes a permissions system for instance then I also need a way to join against a table of granted permissions for performance.
4. Providing their own UI that's hard to get around. Great for a PoC but often harder to integrate with the rest of the system than writing the login page from scratch.
Generally I try to use an existing authentication system but have often had to create a custom security framework.
If you need authentication for an internal corporate app, there is always authentication with the corporate directory services or something like Okta.
> offload user authentication to third parties like Facebook, Amazon, Twitter, Google, etc.
Have fun getting that through the GDPR compliance audit.
Our purpose, as computer engineers, is to convince piles of carefully-cooked rocks ("computers") to do a very specific thing repeatedly ("computing"). Since humans are terrible at understanding specificity, we design languages and abstractions that help us encode our goals into computer-readable codes.
At no point is "build something cool" the goal other than during recreational programming; our goal is not even to build, necessarily, but to understand the requirements of our users and to help them use their resources more effectively. To this end, "cool" is totally worthless; it is a facet of design, of product, and of marketing. Let them focus on the spectacle. We must focus on the specification.
Why don't developers care about security? Because we have raised them to not care about security. We have failed to instill a desire for security into our culture. Consider: Computers have been remotely hackable since the dawn of the Internet. (Check out the history of SSH or email for terrifying examples.) We do not have the cultural norms necessary for getting security correct; we barely know how to distrust soliciting strangers on the street, let alone Martian or Christmas-tree TCP packets.
How can I help? Well, I can try to shove POLA and capabilities down everybody's throat, but so far, it's been like trying to convince people that the world is a globe.
Here's how you can help. Pick up a single easy security meme, think about it for a while, and then pass it on. I recommend starting with "security should not be optional" if you're a FLOSS developer, or "everybody is on the security team" if you have a corporate employer.