For what it's worth, and as weird as this is going to sound:
You don't particularly want the "security API" of an app framework to get externally validated. For the most part, the "security" components of app frameworks are boring, modulo some really simple-to-answer questions ("how do you store passwords", "how do you handle password resets").
When people talk about "security frameworks", they tend to be thinking about what we used to call "AAA systems" (authentication/"authn", authorization/"authz", and accountability/logging). You don't need to be an appsec practitioner to look at an AAA system and see whether it's any good. Does it apply authn by default? How easy would it be to accidentally expose a method that didn't require a login? Does it apply authz rules by default? How much of a pain would it be to reliably apply authz rules across my project? If stuff is easy and has sane defaults, you'll use them and benefit --- like you do in Rails, if you use association finders off a "current user" instead of the class object finders.
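The "association finders off a current user" point can be sketched in a few lines. This is a framework-agnostic toy, not Rails or Django code; all the names (`Document`, `find_document_for`, etc.) are illustrative:

```python
# Sketch of the "scoped finder" idea: querying through the current user
# makes authz the default, while a class-level finder makes every call
# site responsible for remembering the ownership check.

class Document:
    def __init__(self, doc_id, owner_id, body):
        self.id, self.owner_id, self.body = doc_id, owner_id, body

DOCUMENTS = {
    1: Document(1, owner_id=10, body="alice's notes"),
    2: Document(2, owner_id=20, body="bob's notes"),
}

def find_document_unscoped(doc_id):
    # Class-level finder: returns any row. Forget the ownership check
    # once at a call site and you have an insecure direct object reference.
    return DOCUMENTS.get(doc_id)

def find_document_for(user_id, doc_id):
    # Association-style finder: ownership is part of the query itself,
    # so the safe path is also the easy path.
    doc = DOCUMENTS.get(doc_id)
    return doc if doc is not None and doc.owner_id == user_id else None

assert find_document_unscoped(2).body == "bob's notes"  # leaks across users
assert find_document_for(10, 2) is None                 # scoped: denied
```

The point isn't the ten lines of code; it's that the scoped version makes "no authz check" impossible to write by accident.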
The bad news is, what you want tested in an app framework is everything else. How's it handling database persistence? How does it handle corner cases, like deliberately pathologically huge queries, out-of-range numerics, conversions between multipart and standard encoding, output encoding of HTML, Javascript, and CSS, CSRF validation, file upload, database column access, &c &c. It's the stuff you don't normally think of as "security code" that's most likely to bite you.
At Matasano, we keep a list of "seven deadly features": the places in web apps where we tend to find the first and worst flaws. They're:
1. Password reset
2. Email functionality
3. Interfaces to thick clients; "REST API" &c
4. File upload
5. File download
6. Templating
7. "Advanced search"
Only one of those is a security feature, and it's also a security feature that most "security APIs" don't actually implement (because every app wants to have a different UX for it).
Lest this sound self-interested: most frameworks haven't been formally externally tested. I'd be thrilled if someone had their framework tested formally, but I don't tell people not to use Rails or Django or Spring just because they haven't been pentested by an appsec firm.
How should one properly secure a "password reset" feature?
Ooh, I think I can actually do this: You send the address on file an email which contains a random, time-limited, one-use token which allows them to log into their account. You should not disclose presence or absence of the account via the reset mechanism on the site (only via email), but that's a comparatively minor issue. A successful password reset should end all in-progress sessions for the account. You should also do all the table-stakes web development things like e.g. not allowing SQL injection or XSS on the page.
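A minimal sketch of the "random, time-limited, one-use token" recipe, using the Python standard library. The store, the 15-minute TTL, and the function names are assumptions for illustration; in a real app the store would be a database table and the raw token would only ever travel in the email:

```python
import hashlib
import secrets
import time

RESET_TTL_SECONDS = 15 * 60  # hypothetical policy: tokens live 15 minutes

def issue_reset_token(store, user_id):
    """Generate a random, time-limited reset token for user_id.

    Only a hash of the token is stored, so a leaked copy of the store
    can't be replayed; the raw token is returned once, for the email."""
    token = secrets.token_urlsafe(32)  # CSPRNG output, not MD5(user.id)
    store[user_id] = {
        "digest": hashlib.sha256(token.encode()).hexdigest(),
        "expires_at": time.time() + RESET_TTL_SECONDS,
    }
    return token

def redeem_reset_token(store, user_id, token):
    """Validate and consume a token; returns True at most once per issue."""
    record = store.pop(user_id, None)  # pop => one-use by construction;
    if record is None or time.time() > record["expires_at"]:
        return False                   # a wrong guess also burns the token
    digest = hashlib.sha256(token.encode()).hexdigest()
    return secrets.compare_digest(digest, record["digest"])

store = {}
tok = issue_reset_token(store, user_id=42)
assert redeem_reset_token(store, 42, tok) is True
assert redeem_reset_token(store, 42, tok) is False  # second use fails
```

Note the design choice in `redeem_reset_token`: popping the record before validation means a failed guess invalidates the outstanding token, which trades a minor denial-of-service for a hard one-use guarantee.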
There are plenty of failure conditions for password resets:
1) You create non-random reset tokens, such as MD5(user.id) or MD5(user.email), potentially compromising security of accounts.
2) You allow resets of your teams' admin accounts to happen through the publicly accessible password request feature. (n.b. Almost never a good idea, for one because it means you're probably doing shared authn/authz between your app and your admin console and that is generally a very bad idea. I say this in spite of shipping it in every product I write.)
3) You allow password reset tokens to get re-used, which increases the attack surface against individual accounts by e.g. making saved-to-disk emails into attack vectors for them. (Standard reset mechanisms only make emails-in-flight into attack vectors.)
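Failure (1) above is worth seeing concretely: a deterministic token like MD5(user.id) can be recomputed offline by anyone who can guess or enumerate user ids, with no interaction with the server at all. A toy demonstration (the function name is illustrative):

```python
import hashlib

def bad_reset_token(user_id):
    # Failure (1): the "token" is a pure function of public-ish data,
    # so it isn't a secret at all.
    return hashlib.md5(str(user_id).encode()).hexdigest()

# An attacker reproduces the victim's token without touching the server.
server_side = bad_reset_token(1234)
attacker_side = hashlib.md5(b"1234").hexdigest()
assert server_side == attacker_side
```

Contrast with a CSPRNG token, where knowing the user id tells you nothing about the token.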
> You allow resets of your teams' admin accounts to happen through the publicly accessible password request feature. (n.b. Almost never a good idea, for one because it means you're probably doing shared authn/authz between your app and your admin console and that is generally a very bad idea. I say this in spite of shipping it in every product I write.)
Can you please elaborate more, or link to articles about why this is a bad idea? What are the common failure scenarios that splitting normal & admin users fixes?
Briefly: there are a lot of common vulnerability classes which can compromise a user of your application / their cookie / their DB record, for example mass assignment (to update the "role" attribute in a free trial account to "admin") or XSS (tweet at your support account and have them click a link to your domain, bam, the adversary now has your cookie and can log in as you).
Separating the admin app and the normal app upper-bounds the impact of those flaws at something lower than "The worst thing someone could possibly do with the admin console." Given that many admin consoles have fairly extensive functionality, that's generally a good idea. (I mean, my businesses are pretty low risk as things go, and my admin consoles would trivially do about $20k of damage just with refunds if compromised.)
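The mass-assignment flaw mentioned above is mechanical enough to sketch. This is framework-agnostic pseudocode-made-runnable; the field names and the allowlist approach are illustrative, not any particular framework's API:

```python
# Mass assignment: binding request parameters straight onto a model lets
# the client write fields it was never meant to touch, like "role".

class User:
    def __init__(self):
        self.email = "user@example.com"
        self.role = "trial"

UPDATABLE_FIELDS = {"email"}  # "role" is deliberately not listed

def update_user_unsafe(user, params):
    # Binds every submitted parameter onto the model, attacker-controlled
    # keys included.
    for key, value in params.items():
        setattr(user, key, value)

def update_user_safe(user, params):
    # Only allowlisted fields may be written from request input.
    for key, value in params.items():
        if key in UPDATABLE_FIELDS:
            setattr(user, key, value)

victim = User()
update_user_unsafe(victim, {"email": "x@example.com", "role": "admin"})
assert victim.role == "admin"   # privilege escalation

victim2 = User()
update_user_safe(victim2, {"email": "x@example.com", "role": "admin"})
assert victim2.role == "trial"  # escalation blocked
```

This is the same shape as Rails strong parameters or Django form `fields`: the fix is always an explicit allowlist, never a denylist.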
I just tested one recently that used client-side Javascript to generate the new temporary password for whatever account was submitted in the POST parameter.
Needless to say, you could arbitrarily set any account's password to whatever you wanted.
Account harvesting through the password reset functionality is one that pops up all the time. A lot of businesses now just accept the risk (which I no longer really get all that worked up over, at least provided they aren't also enforcing account lockout for invalid attempts).
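The standard defense against this kind of harvesting is the non-disclosure behavior described earlier in the thread: the on-site response is identical whether or not the account exists, and existence is only disclosed via email. A minimal sketch, with all names illustrative:

```python
# Sketch: the reset endpoint returns the same message for registered and
# unregistered addresses, so it can't be used to enumerate accounts.

REGISTERED = {"alice@example.com"}

def handle_reset_request(email, outbox):
    if email in REGISTERED:
        outbox.append(email)  # the real mail flow happens out of band
    # Same response either way; only the email reveals anything.
    return "If that address is registered, a reset link has been sent."

outbox = []
known = handle_reset_request("alice@example.com", outbox)
unknown = handle_reset_request("mallory@example.com", outbox)
assert known == unknown                 # responses are indistinguishable
assert outbox == ["alice@example.com"]  # but only the real account got mail
```

In practice you also have to watch response timing, since a measurable difference between the two branches (e.g. actually sending mail inline) leaks the same bit.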