What I find frustrating, personally, about cryptography (or security in general) is that a lot of knowledge seems to be concentrated in a very specialized area, but it doesn't diffuse very far outside of there.
To try to combat this, I've been trying to help make basic security and cryptography knowledge accessible to web and mobile app developers, to hopefully result in an overall net gain for the security of many companies the world over.
Obviously I can't teach everyone everything there is to know about these areas. (For starters, I myself probably don't even know half of it. Lattice-based timing attacks? Not a clue!) But realistically if I can make a dent in the propensity for bad habits and worse design choices, it's something.
Agreed. I've started poking into security for HTTP-based things (web sites, REST APIs, etc.), and it's got a mostly different knowledge base from the one I've studied. Compare, for example, the CHES[1] proceedings (a good source of interesting side-channel attacks and mitigations back when I was doing that) with something like The Hacker Playbook. Once you get down into the details, these fields have a lot of unique concerns. Some implementation principles (least privilege, reference monitors, etc.) may have value in both domains, but the domain-specific knowledge is completely siloed, as you have observed.
That said, my biggest lamentation is that most people don't seem to apply solid implementation principles to the systems they build. Witness vulnerabilities like Heartbleed, which could easily have been avoided if the common buffer-handling guidelines in pretty much every security implementation guide had been followed.
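To make that concrete, here's a minimal sketch of the Heartbleed pattern, reduced to a toy length-prefixed echo. This is my own framing, not OpenSSL's actual code: the function names and the flat `memory` buffer are illustrative. The bug was trusting a length field supplied by the peer instead of checking it against the real payload size.

```python
# Model process memory as one flat buffer: the real payload ("hello")
# sits right next to secret data, as it can in a real heap.
memory = bytearray(b"hello" + b"\x00SECRET-KEY-MATERIAL")
payload_len_actual = 5  # the genuine payload is only 5 bytes

def heartbeat_vulnerable(claimed_len):
    # Bug: echoes `claimed_len` bytes without validating it, so
    # adjacent memory leaks back to the peer.
    return bytes(memory[:claimed_len])

def heartbeat_fixed(claimed_len):
    # The guideline applied: validate the length field before
    # touching the buffer; silently drop malformed requests.
    if claimed_len > payload_len_actual:
        return None
    return bytes(memory[:claimed_len])

print(heartbeat_vulnerable(25))  # leaks the adjacent secret
print(heartbeat_fixed(25))       # None -- request dropped
```

The fix is one bounds check, which is exactly the kind of discipline the implementation guides prescribe.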
> Some implementation principles (least privilege, reference monitors, etc.) may have value in both domains, but the domain specific knowledge is completely siloed as you have observed.
Somewhat conversely, I've proposed a model for classifying various forms of security vulnerabilities that might be easier for developers to conceptualize:
The idea is treat it like a taxonomic model: You have general security mistakes (data-instruction confusion), which can be drilled down into vulnerability classes and then specific bugs that, either stand-alone or chained together, result in specific vulnerabilities in specific implementations.
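To illustrate how the taxonomy drills down, here's a hedged sketch of one path through it: the general mistake (data-instruction confusion) narrowed to one vulnerability class (SQL injection) and one concrete bug (building a query by string concatenation). It uses the stdlib `sqlite3` module; the table and function names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def lookup_vulnerable(name):
    # Data-instruction confusion: user data is spliced directly
    # into the instruction stream (the SQL text).
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def lookup_fixed(name):
    # Parameter binding keeps data and instructions separate.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

evil = "nobody' OR is_admin = 1 --"
print(lookup_vulnerable(evil))  # [('root',)] -- injected condition ran
print(lookup_fixed(evil))       # [] -- input treated as a literal
```

The same general mistake, with different instruction streams, yields XSS (HTML/JS), command injection (the shell), and so on, which is what makes it a useful root node for the taxonomy.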
Or maybe I'm way off base here. The feedback I've gotten has largely been positive, though.
I think we're in violent agreement. I'm a fan of the high-level approach taken by the Common Criteria[1], which emphasizes cataloging threats, security objectives, environmental concerns, and vulnerability mitigations when designing secure systems. That seems to overlap more than a bit with the thought process in your blog post.
The challenge is to get people with good critical thinking skills (like the kind you advocate, embodied in processes like the CC) and domain knowledge involved in building things. For example, I think I have a good ability to reason (I started my career in formal methods), but at present I know very little about how a browser actually treats its inputs. So it's quite difficult for me to reason about the application operating environment presented by a web browser running some JavaScript. I wouldn't trust myself to start coding a secure web site today. Hopefully the industry will foster both of these skills (a critical mindset and good domain knowledge) and put them to good use.
For example: https://twitter.com/voodooKobra/status/651554578719227904