Ninja Threat Modeling (matasano.com)
16 points by gthank on Oct 20, 2009 | 11 comments



So then here's the deal. In the top 20th percentile of "secure" enterprise dev shops, with teams working on J2EE applications for claims processing and document retention, each dev cycle starts with a "threat model".

Threat modeling is part of the "SDL" (or "SDLC"), which is a powerful voodoo magic word that here means "secure software development". Threat modeling is both notoriously fuzzy --- nobody really knows exactly what it means --- and notoriously boring. The goal of the process is to help ensure that you're designing securely, developing securely, and testing securely (I forget what the 3rd of the 3 D's is).

Cory's talking about a methodology for getting teams to do threat modeling that either haven't totally bought into formal SDL development or skip the threat modeling step.

I'm not totally sure how this applies to startups and indies [1], but I am sold on the concept of "Seven Deadly Features", which is to say that any team should be able to start a project with a list of the 7 functional areas that are most likely to screw them over from a security perspective. A very typical list for web apps would be:

* File upload/download (it's a boundary crossing and often requires you to synchronize two different access control systems)

* Email (most apps have some kind of effectively anonymous account and some path for that account to trigger emails with some content controlled by attackers)

* Advanced Query Interfaces (because they're hard to implement using pure parameterized queries)

* Templating and look/feel customization (because you need to expose some, but not all, of HTML's metacharacters to end users)

* User signup and password reset

(That's 5; the list on my desk is 7, but the last two --- SOAP/web services and thick clients --- are not likely to be problems for startups).
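To make the "Advanced Query Interfaces" point concrete: the usual way out is to build the SQL text only from a fixed whitelist and pass every user-supplied value as a bound parameter. A minimal sketch (the table, column names, and `build_search` helper are all illustrative, not from the thread):

```python
import sqlite3

# Whitelist of column names a user may filter on. SQL text is built only
# from this set; user-supplied *values* always go through placeholders.
ALLOWED_FIELDS = {"name", "city", "status"}

def build_search(filters):
    """filters: dict of {field: value}. Returns (sql, params)."""
    clauses, params = [], []
    for field, value in filters.items():
        if field not in ALLOWED_FIELDS:
            raise ValueError("unknown field: %r" % field)
        clauses.append("%s = ?" % field)  # column name comes from the whitelist
        params.append(value)              # value is bound, never interpolated
    where = " AND ".join(clauses) or "1=1"
    return "SELECT id FROM users WHERE " + where, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, city TEXT, status TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'nyc', 'active')")

sql, params = build_search({"city": "nyc", "status": "active"})
rows = conn.execute(sql, params).fetchall()
```

The point is that the "advanced" part (which filters apply) only ever selects among pre-approved SQL fragments; attacker-controlled strings never become SQL.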

Can't evangelize enough for this idea, which is a really simple one. People who do web security memorize things like the OWASP Top 10, which lists "Cross Site Scripting" and "Cross Site Request Forgery" and "SQL Injection", but like Cory says, nobody wakes up in the morning thinking "I'll write an SQL Injection today!". Meanwhile, if you know what the scary features are, you can:

* Schedule them for extra-formal team review

* Double up the QA time on them

* Not assign them to junior team members to implement

[1] We wrote about this a few weeks ago:

http://chargen.matasano.com/chargen/2009/9/24/indie-software...


Meanwhile, if you know what the scary features are, you can:

It's worth adding another point to that list:

* Not implement them.

Some features just aren't worth the added complexity they would bring with them.


I used to say that, and every time I did, people would yell at me, so I've been conditioned not to.

Although I'll stick my neck out and say "crypto; don't even bother".


crypto; don't even bother

I'm supposed to yell at you now, right? :-)

In all seriousness, cryptography doesn't really fall into the "complexity which might introduce security flaws" category: Most cryptographic code is extremely simple. Yes, it's easy to get cryptography wrong; but the ways which people get cryptography wrong make it useless, not harmful. It's very difficult to write an HMAC with a buffer overflow.


You still think that people know what HMAC is. Most people think HMAC means "hash a key in with the message".
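For what it's worth, the naive "hash a key in with the message" construction is exactly what standard HMAC exists to replace; with Merkle-Damgard hashes like SHA-256, hash(key + message) is vulnerable to length extension. A quick sketch of the difference, using Python's stdlib (the key and message here are made up for illustration):

```python
import hashlib
import hmac

key = b"secret-key"
msg = b"user=alice&role=admin"

# The naive construction: hash(key + message). An attacker can extend
# the message and compute a valid tag without knowing the key.
naive_tag = hashlib.sha256(key + msg).hexdigest()

# Real HMAC: the nested keyed construction from RFC 2104.
real_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, msg, tag):
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)
```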


My point remains even if people do get confused about that: Their crypto might be broken and useless, but it still won't be harmful.


Dude. You just saw what happened to Flickr. Stop being argumentative.


What happened to Flickr would have happened even faster if they hadn't used crypto at all. Their crypto was broken, but didn't make things worse than they would have been in the absence of crypto.


An excellent summary; a little vague on details. If all server code can be compromised, and all client code, and the man-in-the-middle, what are you left to test?


Thanks for the feedback!

The assumption is that you should consider your source code open and exposed to inspection by an attacker, not that it has been compromised. As a result, if any security control is dependent on "secret" functions or embedded keys in your source, the threat actor is going to know about them and will attempt to use them against you.

The test plan needs to take that into account.


If there were more people writing this entertainingly and concisely about security testing, there would be a lot more CISAs.



