The OWASP Top 10 is killing me (hpe.com)
116 points by sidcool on Oct 28, 2017 | 87 comments



Not all vulnerabilities stem from ignorance, although this seems to be the default assumption of the infosec community.

Writing secure code takes more time than writing insecure code. Time is expensive. In every organization I've worked, security has been neglected pretty explicitly. It's not a case of "OK this looks secure", but instead more like "I am aware that our codebase has some security issues but I need to prioritize rushing out this new feature/improving our CPA".

And this is not always the wrong choice. For most organizations, the probability of having someone who is both malicious and competent enough to exploit an XSS vuln visit your site is pretty small. The chance that you'll go under if you don't get that new feature out or improve your CPA is pretty high.

If you want to criticise the state of security (and there is definite room for criticism), I think there is a need for tools and education to allow people to better make these decisions. We need ways to communicate

a) how likely it is that we'll be attacked, and

b) what the consequences would be.

For now, when these questions are asked, the answers are almost always "pretty small" and "uh, possibly like really bad, depending on the attacker".

We need ways to translate these into numbers that we can compare with profit margins, etc. This is way more important than actually learning how to mitigate SQLi.


> We need ways to communicate

Microsoft developed quite a few of these ideas internally with the TwC (Trustworthy Computing) initiative in the early 2000s, and built a protocol - and development workflow - around threat modeling and security awareness. Most of their internal security-oriented process is available for free:

https://www.microsoft.com/en-us/sdl/

As are some of their tools. For individual developers wanting to have a better sense of what threats their applications may face during the design stage, there’s a good Wiley book on threat modeling:

https://www.amazon.com/Threat-Modeling-Designing-Adam-Shosta...

If you're really in a hurry, a lot of the typical OWASP vulnerabilities are mitigated by choosing higher-level, long-standing frameworks and abstractions (e.g. Rails, Symfony, ASP.NET MVC) that handle a lot of the things that can hurt you. From there, most of the low-hanging fruit that script kiddies will find can be mitigated simply by following the security best practices documentation for your framework before you start writing code in it.

Anecdotally, auditing web applications for security issues is my day job. The majority of the time, ignorance is the real issue, not speed of development. The teams simply don't have any idea what threats they are facing, or any real education in secure coding principles. Very rarely have I reported vulnerabilities and had teams say "yeah, we know about that". It's way more "whoa, I didn't even know you could do that". Basic security education really matters.


I think the infosec community is the biggest barrier to improving security.

Security is like a bug light for ambitious idiots now. In large companies the function has been staffed up as a separate vertical with lots of CISSPs and other alphabet soup people who run around chasing nonsense and reporting how valuable they are.

Security expertise needs to be embedded in projects and programs so that leadership with domain knowledge can make smart decisions.


It sounds like your problem is with the infosec industry rather than the infosec community. The community in general would agree with you about the industry being kind of broken, I'd say.


> We need ways to translate these into numbers that we can compare with profit margins, etc.

There is a way. I'll refer you to How To Measure Anything by Douglas Hubbard. His model is based on a combination of things:

1. Calibrated probability assessments

2. Nth-order effects

3. Building a mathematical model

4. Monte Carlo simulation

Apply his methodology and you can determine the impact of "hard to quantify" variables like "security" and get a probability distribution that can be used to assign values to specific scenarios.

Yeah, it's a little bit complicated and time-consuming; but the best things in life are, no?
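To make that concrete, here's a minimal sketch of the kind of model Hubbard describes (Python; the breach probability, loss interval, and the lognormal choice are illustrative assumptions, not numbers from the book):

    import math
    import random
    import statistics

    # Illustrative calibrated estimates -- replace with your own.
    P_BREACH_PER_YEAR = 0.05            # probability of at least one breach this year
    LOSS_90CI = (50_000, 2_000_000)     # 90% confidence interval for loss, given a breach

    # Convert the 90% CI into lognormal parameters (a common Hubbard-style choice).
    lo, hi = LOSS_90CI
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)   # 1.645 = z-score for a 90% CI

    def simulate_one_year():
        if random.random() < P_BREACH_PER_YEAR:
            return random.lognormvariate(mu, sigma)
        return 0.0

    losses = sorted(simulate_one_year() for _ in range(100_000))
    print("expected annual loss:", round(statistics.mean(losses)))
    print("95th percentile loss:", round(losses[int(0.95 * len(losses))]))

The point isn't the exact numbers; it's that you end up with a loss distribution you can put next to the cost of the mitigation or the feature you'd be delaying.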


If you like that book, he wrote one about applying those ideas to this exact problem! https://www.amazon.com/How-Measure-Anything-Cybersecurity-Ri...

I've never managed to make the effort to apply his ideas with much rigor but they are definitely appealing and possibly better than the alternative of "maybe nothing".


Oh wow, I had no idea he'd written another book. I'll definitely want to check this one out at some point. Thanks for the heads-up!


I don't think anyone in security would disagree with you.

The problem is that measuring something that is sort of definitionally unknowable (how many vulns are in this code, where, how likely is it that someone outside the company will find one and then exploit it?) is hard. (The book referenced has some ideas, which boil down to "get some experts in a room and ask them, then average it".)

A good security team will do their best at this, but it's unfortunately not as easy as "OK, we found all the XSS bugs, which reduces our chances of getting owned by 2.5%".

The further (maybe depressing) question is to what degree getting breached actually harms a company. My favorite argument that the two are only tenuously related is this: http://www.cs.umd.edu/~awruef/HNYM.pdf and my favorite example within it is Comodo. Comodo was hacked and the hacker gained the ability to sign certificates of their choice with Comodo's key. Comodo had one job: be worthy of trust and not get hacked. Did it harm them? They are still the #1 cert company. Look at the Target breach or any others.

The only spot where a breach can be company-ending is all these bitcoin companies, which, from my spot in application security, makes them fascinating test cases. Here are a bunch that blew up after they got hacked: https://magoo.github.io/Blockchain-Graveyard/


Liabilities would be a well-defined measurable impact, and much needed since the industry obviously isn't going to change in this environment. $X per user, $Y if the information includes anything that could be used to do serious damage (CC numbers, health history, personal messages to/from other users).


Sorry for the non sequitur, but I got into cryptography and security generally and designed some internal systems that people use. I work in the field, or at least transitioned from SRE to SRE with a security focus. I started following vocal members of the "infosec community" online, Twitter mostly. It took a while, but I came to the conclusion that most (with a lot of exceptions) of the self-titled security experts had little to no experience in anything. They were more evangelists repeating what other people said, with no contributions or research of their own. Notable exceptions are people like Tavis Ormandy of course, but "infosec" seems to have become a title for people who want to be part of an in-crowd and just repeat things over and over like "use multi-factor authentication".


Web platforms in particular have a pretty long list of mistakes that, even if you're smart and careful, if you don't know about them you're virtually certain to make. Injection attacks, XSS, CSRF, etc. are very subtle and nonobvious.


If you use any decent web framework, it protects you against them out of the box. I read a lot of love today for libs over frameworks, or SQL over ORMs. But when I use Django, I know that password hashing, input sanitization, HTML escaping, CSRF tokens and the like are rock solid and enabled by default. It saves you the dilemma of security vs. productivity, at least for the minimum package.


This is one thing I like about IBM: they have a separate security team that audits stuff before you ship it. I was working on a React app where I set up server-side rendering, and then had it JSON-encode the state and dump it into a script tag at the end of the HTML. My thinking at the time was "It's JSON-encoded, and it's all the user's own data anyways, so it's safe."

Eventually I needed something from the querystring and for whatever reason put it into the state. It turns out that a <script> tag from the querystring, in a string, in a blob of JSON, in an HTML page will execute. Oops.

Fortunately IBM's security team caught it before it ever shipped. Now it's been fixed and the app has a CSP header to help nullify any future mistakes.
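For anyone else doing server-side state dumps: besides CSP, the usual mitigation is to escape the characters that can terminate the script context before embedding the JSON. A rough sketch of the idea in Python; the helper name is made up, and most template engines ship an equivalent:

    import json

    def json_for_script_tag(state):
        # json.dumps (with the default ensure_ascii=True) already escapes non-ASCII;
        # additionally escape <, > and & so a "</script>" inside the data can't
        # close the tag early and start executing.
        return (json.dumps(state)
                .replace("<", "\\u003c")
                .replace(">", "\\u003e")
                .replace("&", "\\u0026"))

    # Rendered server-side into something like:
    #   <script>window.__STATE__ = {{ payload }};</script>
    payload = json_for_script_tag({"q": "</script><script>alert(1)</script>"})
    print(payload)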


That's weird, shouldn't that be a responsibility of your router? Or did you roll your own?


I think it was a combination of react-router and react-router-redux, and something I was doing for SSR that led to the issue. The initial fix was just to delete the react-router-redux data from the store just before JSON.stringify'ing it.

There were a number of weird things that were specific to my usage, so I'm not sure that the vulnerability would be there in a "proper" setup.


We experienced our first successful attack at my startup a few weeks ago.

What got us wasn't anything on the top ten list; as far as I know, it isn't covered anywhere in OWASP.

Users reuse passwords across different websites. An attacker tried a database of usernames/passwords sourced from elsewhere; a small percentage (about 1000 out of more than 10M requests) succeeded. 100 of those had something to steal. Attacker used a botnet, so our IP-based fail-to-ban logic was ineffective.

We thought about lots of ways to deal with this moving forward. My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates. We took a different strategy which is kind of complicated to explain, but it's not nearly as secure.

Anyway. What gets me is like: Password authentication SUCKS. It's a terrible terrible authentication strategy. It's awful. It should not be relied upon. It would be good if humans didn't reuse passwords. But we do. So it sucks.


> Anyway. What gets me is like: Password authentication SUCKS. It's a terrible terrible authentication strategy. It's awful. It should not be relied upon. It would be good if humans didn't reuse passwords. But we do. So it sucks.

There's an easy solution for this! Have I been Pwned offers a free API [0] you can use to check if a password has been in a previous breach.

If the password was previously disclosed in a breach, simply inform the user politely: "Sorry, this password has previously been disclosed and will likely be used by attackers to compromise your account", and make them pick a new one.

No need for 2FA. Simply add a check to prevent users from re-using passwords.

(If you don't want to start sending user passwords to HIBP, you can also download the list and use it internally)

[0] https://haveibeenpwned.com/API/v2#PwnedPasswords
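A minimal sketch of that check against the Pwned Passwords range endpoint, which only sends the first five characters of the SHA-1 to the service (the requests library is assumed; if you'd rather not call out at all, there's the downloadable list mentioned above):

    import hashlib
    import requests

    def pwned_count(password):
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # The response is a list of "SUFFIX:COUNT" lines for that hash prefix.
        resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix, timeout=5)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if pwned_count("P@ssw0rd") > 0:
        print("seen in a breach -- ask the user to pick something else")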


This looks interesting, and I wasn't aware of this API before. The downside is that you have to send the plaintext or at best a SHA-1 hash to that server, which creates another problem even if the server is not logging and can be trusted.


It seems like an incredibly bad idea to use their API; one should just use the downloadable database instead. Besides, a local lookup will be faster than making an API call anyway.


Hey man,

Did you add a captcha to your login? You could add one after 2 failed attempts from an IP.
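Something along these lines would do it (a sketch only; the in-memory dict stands in for Redis or whatever shared store you'd use in production):

    import time
    from collections import defaultdict

    FAILED = defaultdict(list)        # ip -> timestamps of recent failed logins
    WINDOW_SECONDS = 600
    CAPTCHA_AFTER = 2

    def record_failed_login(ip):
        FAILED[ip].append(time.time())

    def captcha_required(ip):
        cutoff = time.time() - WINDOW_SECONDS
        FAILED[ip] = [t for t in FAILED[ip] if t > cutoff]
        return len(FAILED[ip]) >= CAPTCHA_AFTER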

I understand not wanting 2FA because of lower conversion rates, but I'd recommend implementing it in the future. It's one of the best mitigations against an attacker who knows the password. The session cookie is another story.

As for the passwords, I'd enforce standards around what a user is allowed to use. You want it to be at least 10+ chars, with special characters. If they reuse the password on another site and that site gets compromised, there really isn't much you can do. However, if we assume the reused password from the other site is hashed with a salt and you enforce more chars, it is much harder to crack, which helps prevent the plaintext password from being recovered and used to access your site.

A great way to demo what more chars do to a password is here:

https://www.betterbuys.com/estimating-password-cracking-time...

You should also maintain a record of the common IP addresses an account uses and require an email/text to allow access from new ones; that would further mitigate your problem. I know you're a startup, so this is more of a luxury feature.


I like the idea of asking for a captcha after 2 failed logins. We had set up a pretty restrictive Cloudflare rule which blocked access to the site for some amount of time (60 minutes?) from IP addresses that got more than 5 (IIRC) failed logins within a short, sliding window of time (2 minutes?). But the botnet had a ton of machines on it. The rule didn't block nearly as many requests as I hoped it would. A captcha, though, could be set to be pretty restrictive like you suggest.

Passwords: I actually did up the password requirements (to require 10 characters), but the CEO wants me to pull back to just 8 characters, and no special characters. Our conversion rate went down a tiny bit; it could be the restrictive password requirements that did it.

Keeping a record of common IP addresses and enforcing something like 2FA for strange logins: Love that idea. Would love to implement it. Can't, though. The CEO won't go for it, says 2FA lowers conversion and he's probably right about that.


You can use the Have I Been Pwned API to check for logins which are dangerous, and captcha those.


You can mitigate this almost completely by finding that "database of usernames/passwords sourced from elsewhere" (they're not hard to find) and blacklisting them. Users should not be allowed to use any breached password when they register. A simple message saying "this password was included in a recent password breach and is therefore not secure" should suffice to prevent users getting annoyed that they can't use their favourite password on your site.

Enforcing a minimum length of 10 or even 12 is a great way to eliminate nearly all previously leaked passwords from being used on your site, and it further encourages users to use password managers.

Passwords are shit, but they're here to stay for a while still.


HaveIBeenPwned makes this really easy by publishing a list of hashed passwords that have been observed in breaches [1]. The list is by no means complete, but it should cover a lot and is very easy to set up.

1: https://haveibeenpwned.com/Passwords
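A sketch of the local check (the full dump has hundreds of millions of SHA-1 hashes, so in practice you'd want a database or bloom filter rather than a Python set; the file name is a placeholder):

    import hashlib

    # Each line of the HIBP dump looks like "SHA1HASH:COUNT".
    with open("pwned-passwords-sha1.txt") as f:          # hypothetical local copy
        BREACHED = {line.split(":", 1)[0].strip().upper() for line in f}

    def is_breached(password):
        return hashlib.sha1(password.encode("utf-8")).hexdigest().upper() in BREACHED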


That... is a great idea. I'll do it. Thank you!


About the minimum password length: I did change it to 10 the day of the attack. The CEO wants us to change it to 8. We've seen a small dip in conversions (like 1%), and the longer password requirement could be why, he thinks.


That should be easy to prove with some basic A/B testing.


It is quite feasible to try to brute force your own users' passwords. Run a background job that tests them against common passwords and close variants, and if one is too simple, either force a harder one, or force email auth if any of their metadata changes (IP, ASN, browser fingerprint).

C.f. https://haveibeenpwned.com/Passwords
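A sketch of that background job, assuming bcrypt-hashed passwords; since bcrypt is deliberately slow, keep the candidate list short and spread the job out:

    import bcrypt

    COMMON_PASSWORDS = ["password", "123456", "qwerty", "letmein"]   # plus leaked lists

    def has_weak_password(username, stored_hash):
        # stored_hash: the bcrypt hash bytes already in your database.
        candidates = COMMON_PASSWORDS + [username, username + "123", username + "!"]
        return any(bcrypt.checkpw(c.encode("utf-8"), stored_hash) for c in candidates)

    # Nightly job: for each flagged user, force a reset or require email auth
    # the next time their IP / ASN / browser fingerprint changes.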


We deal with this almost every week, as in, we get into systems by searching through email:password leaks and use them.

There are a number of mitigating controls that can be applied here. Most will hamper usability, some will not.

There is a "simple" solution. Enforce 2FA. If not at the login, then before "dangerous" actions (transfer funds, change password, buy X/Y/Z).


That's a simple solution from a security perspective. From a business perspective the most simple solution is guaranteeing that you'll cover all the damage customers might possibly suffer.


That was one of the ideas that we pitched to the CEO. Only sensitive actions would require 2FA. CEO shot it down, saying it would require too much work on the part of the customer.


Why not give customers the option to choose to enable it then?


The kind of customer that reuses passwords is probably the kind that won't enable 2FA if it's optional.


Sure, but then it's on them rather than you.


Users will go through the work if you incentivize the behavior.


Have you considered more conservative rate-limiting of external IPs on login endpoints?

E.g. with nginx it makes sense to set up a custom rate-limiting zone to prevent many requests from the same IP specifically to the login page:

https://www.nginx.com/blog/rate-limiting-nginx/

This will not fix the root cause, but will make it considerably harder to scan for matching passwords.

You can also set up Fail2Ban to block IPs that failed to authenticate many times:

https://serverfault.com/questions/421046/how-to-limit-nginx-...


We do have fail to ban type logic in place. The attacker used a botnet; requests came in from many different IP addresses.


Have you tried rate limiting based on the user account? That should block a distributed attack since each login would count against the rate limit independent of the IP address.


How would that help in this situation? The attacker had a database of user/password pairs they were trying; they weren't trying to brute-force a particular account.


Oops, you're completely correct. I was thinking of the brute force scenario.


> What gets me is like: Password authentication SUCKS.

There's a conference for that. Has been since 2010. Not quite sure about 2017/2018

> My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates

True. "Security is not a convenience". But there's no easier way to get people authenticating off of something at least a bit secure than straight TOTP 2FA.
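For reference, the whole TOTP flow is tiny; a sketch using the pyotp library, with placeholder names:

    import pyotp

    # Enrollment: generate a per-user secret, store it, show it as a QR code.
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                              issuer_name="ExampleApp")

    # Login: verify the 6-digit code the user typed.
    def totp_ok(user_secret, code):
        return pyotp.TOTP(user_secret).verify(code, valid_window=1)  # allow +/-30s skew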


> What got us wasn't anything on the top ten list.

From my PoV nothing got to you. Yes, you may have got some bad publicity, but the fault lies with users who have poor (non-existent?) OPSEC.

Just as there are 2 types of people: those who don't have backups and those who will make backups, there are those who don't have password compartmentalization and those who will have it.


Do you really think that their card processor would give a shit about whose fault all the fraudulent charges were?


What you're describing gets covered by OWASP as "credential stuffing".

You can find it as A2 in the current (RC2) draft OWASP Top 10 and on the wiki here https://www.owasp.org/index.php/Credential_stuffing


Wow! Thanks. I should read the site more carefully. Thanks for the correction.

Btw I feel like this should be in the top 10 list right? With mitigation strategies?


Make 2FA mandatory for users who were breached or are using passwords that are in known password lists.

I don't know how much you spent in support, but U2F Zeros are dirt cheap. You could probably just proactively mail them to your clients and encourage them to use 2FA.

Or offer discounts or other perks to users with 2FA.


It's interesting that you went for TRUE 2FA. I was thinking about going for email/text-based 2FA. I know it's not true second factor authentication, but it seems so much more accessible than requiring a separate device. Our customers are not at all tech savvy, generally speaking; there's no way they'd go for a dongle.


Passwords suck, okay. What alternative is there?


OAuth helps, because people are more likely to have a decent password already for Gmail/Facebook/whatever than to make a new good password on the spot to try your site.

If you require an email address, you could send an email with a link (or code) to log in, and set a long-lived cookie to keep users logged in. Again, it's likely that users will have a good password on their email.
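A sketch of the email-link variant using signed, expiring tokens (the itsdangerous library here; the secret and URL are placeholders):

    from itsdangerous import URLSafeTimedSerializer, BadSignature, SignatureExpired

    signer = URLSafeTimedSerializer("server-side-secret")     # placeholder secret

    def make_login_link(email):
        return "https://example.com/login?token=" + signer.dumps(email)  # email this

    def email_from_token(token, max_age=15 * 60):
        try:
            return signer.loads(token, max_age=max_age)       # the verified email
        except (SignatureExpired, BadSignature):
            return None                                       # expired or tampered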

Both of these options have negatives in that they tie your user to an external identity provider.


There could be a great opportunity for Google or Facebook to take the lead in managing user logins. The rest of the world could just piggy-back on one of those two. I'd love to never have to deal with password bullshit ever again as an app creator: I'll just make my users log in with Google/FB/etc.


And lose all those users who deleted their Facebook account, and don't want their Google account to be associated with more data than necessary.


> people are more likely to have a decent password already for Gmail/Facebook/whatever

Oh man... do I have news for you.


The OWASP Top 10 isn't changing because we can't (or won't) stop leaving those issues unpatched. Quite telling that when talking about how to move beyond the baseline security struggles of the OWASP Top 10, TFA provides only superficial suggestions, rather than actual links to libraries, tools, and implementation guides that can be used to quash or audit OWASP Top 10 issues.


Many of these are due to the use of unstructured strings, which we use because we're lazy. We're so lazy about it that our modern languages don't even support the ability to distinguish user strings from application strings (Perl's taint mode). The workaround in development has been extensive testing, but this is insufficient in an adversarial environment. The best solution is to bring structure to your strings so that you can reason about how they can be abused.

Parse your strings, kids.


> We’re so lazy about it that our modern languages don’t even support the ability to distinguish user strings from application strings (perl’s taint mode).

“User strings vs. application strings” is too coarse. You just need to enforce types (a type for SQL – see query builders, a type for HTML – see MarkupSafe, etc.) and provide safe constructors for those types. Safe syntactic sugar for those is supported by JavaScript (template strings), Rust (macros), C++ (overloadable string literals), Haskell (overloadable string literals and Template Haskell), and probably plenty of other modern languages. For the others, explicit type wrappers are generally enough (like the aforementioned MarkupSafe in Python) – the only thing that’s lacking is enforcement by libraries.
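MarkupSafe's Markup type is exactly such a wrapper; anything formatted into it gets escaped unless it's already marked safe:

    from markupsafe import Markup

    def greeting(user_supplied_name):
        # Markup is the "already-safe HTML" type; string formatting into it
        # escapes the plain-string arguments automatically.
        return Markup("<p>Hello, %s!</p>") % user_supplied_name

    print(greeting("<script>alert(1)</script>"))
    # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>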


For correctness it's not a binary yes/no problem of "trusted" vs. "untrusted" (or "sanitized" vs. "unsanitized"). Necessary escaping is dictated by context (such as HTML, JS, SQL, e-mail header), sometimes even nested multiple times (a URL argument in JS in an HTML attribute in a QP email body).

Most tools and practices miss that point. This leads to creation of general-purpose (not type-safe) templating systems that can't automatically enforce correct escaping everywhere by default, so they leave it to programmers to do it manually where needed, which is error prone and ensures permanent OWASP entry as long as it exists.

Also, it's impossible to convince programmers to always implement "redundant" escaping for "harmless" values (such as IDs which are assumed to always be alphanumeric), which is a vulnerability waiting to be exploited (e.g. your policy is not to allow '"' in usernames, so you think you never need to escape it, but then some code reads a username from a query string argument and game over).


This is why I introduced LockStrings as a key feature of THT (a language that compiles to PHP). It takes the opposite approach to Perl's taint mode. You mark string literals as safe -- everything else is untrusted.

Functions that do risky things (Database, System calls, etc.) only accept LockStrings and are responsible for escaping, so all you have to do is provide the placeholders.

https://tht.help/tutorials/language-tour#lock-strings


I challenge the "lazy" claim. We do it largely because it is hard not to. At the border, you are either passing binary data around, or you are passing structured strings. Building structured strings to send through a border, though, is often easiest done by building up an unstructured one and then sending it to the other side.

Worse, it is not uncommon to find that the layer you are using to keep things structured becomes a major source of complexity in the application.


I love OWASP, but everything they do has zero usability.

At times it looks like a bunch of 7-year-olds trying to mimic a big corporation.

This list is a huge example of it. Instead of a text, they have a repo that generates a huge PDF, mentioned in a press release, with the release described verbosely in a wiki!

And I went through all those hoops, and I still couldn't find a single link that points me to what "Injection" means.


It's a generalization of SQLi, to cover situations where the queries (or commands, or whatever) built up by unguarded string concatenation are something other than SQL. (Though, oddly, the examples in the current draft seem to all be SQL based.)
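The canonical SQL illustration of the difference, using sqlite3 purely as an example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "x' OR '1'='1"

    # Injection: the attacker's quotes become part of the SQL and match every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print(rows)   # [('alice',)]

    # Parameterized: the value is passed out-of-band and can't change the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)   # []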


thanks. have an upvote.

I was mostly interested in learning why they have such atrocious publications/content organization though... I write on that subject internally for my employer.

If OWASP's goal is to inform the public, and not to look like a mega-corporation, they are doing things very wrong. Their press releases are less parseable than the worst our legal department can produce.


In my experience, it’s not been a lack of understanding or knowledge on the devs’ part. It’s been more about how much of a hurry we are in to deploy.

I’ve tried a few different strategies to get around this.

1. Build the backend first. Don't show a UI that looks anything like it's functional until you've snuck in the requirements that you know are needed but can't get buy-in for.

Fails because PMs and stakeholders don’t see progress fast enough.

2. Plan security into the design specs and feature list.

Fails because there's always someone higher up than you who (like when presenting speed as a feature) will cross it off the list because "we're behind a VPN / our users are too stupid to hack us / the only speed that matters is how fast we can deploy this."

3. Build the entire front end first with absolutely no backend wiring at all and slowly add the connecting db functionality and take your time adding security checks along the way.

This also fails because once PMs and stakeholders see the pretty stuff, they assume it’s almost done and have no tolerance for “slow” progress.

Direct, straightforward communication about the importance of security doesn’t work.

Obfuscating your team’s process to sneak in best practices doesn’t work.

The bottom line is that (again, much like speed) if your leadership doesn't see the value or can't be persuaded to see it, it's not going to happen, even with very well-educated teams.

This is a cultural issue that an individual contributor can only do so much about by choosing the safest frameworks to start with. And that’s about it.

It’s added a number of items to the list of things I ask in interviews now that I’m on the job market again.

Where does the company prioritize security in web applications? Where does it prioritize speed?

How hard do people have to fight to get these included as product features?

I won’t make a blanket statement that if those answers are not to my liking, I won’t take the job. But you need to know where these things stand as company commitments before you accept a job with a primary role of web developer.


> 3. Build the entire front end first with absolutely no backend wiring at all and slowly add the connecting db functionality and take your time adding security checks along the way. This also fails because once PMs and stakeholders see the pretty stuff, they assume it’s almost done and have no tolerance for “slow” progress.

It sounds like this approach should work, because you can sell a bunch of reasons rather than just security. If you don't take the time needed to develop the code, you will have correctness (not enough testing) and maintenance (not enough refactoring) problems alongside security issues. If the company's leadership shuns all three in favor of quicker deployment, then security is most likely not going to be the biggest problem; it would be all the bugs you have to chase down in spaghetti code.


> Create a culture of writing and deploying secure code.

How?

That may sound glib, but this is really just asking everyone to try, right? I would guess that the vast majority of security mistakes stem from ignorance not apathy, and that most coders are trying. Relying on people trying clearly isn't working because there's simply too much to know and it requires too constant of attention.

I think we actually do need better tooling, in terms of things like using type systems to flag sensitive data and automatically suggesting a threat modeling report include that item.

The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?


You can't control humans unfortunately. Humans write code, and some of them will care more about the quality of work than others do. These people will at some point work above/below/with you, and their mistakes will cause you some sort of inconvenience.

My mother taught medical school, and she had a saying... "What do you call the least qualified idiot who passes my class?", the answer is "Doctor". There are good coders and bad coders, and unless we start somehow forbidding the bad(but still good enough to get hired) ones to work with/for us - this isn't going to change.


If one developer can introduce a bug, that is life. But if it can go undetected by the compiler, unit tests, code review, component tests, acceptance tests, mutation testing, static analysis, pen test, etc, then maybe the process can be improved.

It may not be cost effective, but then it's still not a lone developer problem. It's a management decision.


> It may not be cost effective, but then it's still not a lone developer problem. It's a management decision.

That's one way of looking at the issue. Another way is that this is a way for an individual developer to stand out above his or her peers.


We've created the beginnings of this type of culture at my employer. More and more, developers reach out to the security team to point out their security conscious thoughts on how to improve new features or the existing codebases.

But, this took over a decade. It certainly didn't happen overnight. The inception for this is in training, a security team that markets itself, and in constant communication with managers.

Key to this is that devs don't set the direction of their coding projects, which is where many security teams get it wrong. The C-level on down to team managers set the direction. They need to be sold on the security team and the strategic value of their output. Just like passing functional tests, passing security tests needs to be part of a minimum viable product in a way that doesn't stop shipment of products.

> The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?

Because, in 2017 developers need to understand more about security than they did just 3 years ago. Universities really haven't kept up with this. The reason you can't push security off to tooling, other than the minimal low-hanging fruit, is that human ingenuity is required. Vulnerabilities happen from wrong function calls, yes, but also through using the wrong types of calls for the particular purpose.


> Vulnerabilities happen from wrong function calls, yes, but also through using the wrong types of calls for the particular purpose.

Which is detectable through static analysis of the type signatures -- at least to the point of ensuring that a threat assessment document includes a section on what they did there.

That's my point -- why doesn't my IDE auto-generate a template threat assessment report to share with security as part of my code review process? ...why does it rely on me understanding the possible threats, rather than clearly expressing what data I use and letting security set flags on what I should watch?

That sounds like a lack of tooling -- because there's not a technical reason.


>> Vulnerabilities happen from wrong function calls, yes, but also through using the wrong types of calls for the particular purpose.

> Which is detectable through static analysis of the type signatures -- at least to the point of ensuring that a threat assessment document includes a section on what they did there.

I don't think you read prior to replying. Type signatures are orthogonal to the purpose of code. Just because a call is made to XTS doesn't convey that disk encryption is used. That is the issue that you glossed over.

An IDE only has enough information to threat model an application correctly if it had enough information to write the application in the first place.


I think it's less about the culture and it's more about the big language developers making their languages safe by default for 90% of the developers using them.

The "culture" aspect may come in when you're encouraging people to use languages such as Rust over C++, rather than something like "always follow this 50-point security checklist no matter what language you use".


Product design, protocols, cryptography, regulatory restrictions, and so many other aspects have nothing to do with the language. Not every vulnerability stems from missing range or bounds checks.

I mean this in the best way possible, but your answer shows that even people on HN don't understand what goes into securing a product.


Every big bug I have seen has been known by a developer and not ticketed and triaged. Apathy, sometimes; ignorance, more often; lack of organization, almost always.


Anecdata will vary, but my experience has been those bugs get filed, marked "not important", and PMs have no way to sift the security impactful dozen out of hundreds to thousands of small defects in a large product's backlog. (And sometimes that backlog gets deleted, disappearing knowledge of those bugs.)

I'm hoping we'll eventually be able to flag bugs with the types involved, and let systems automatically look for combinations that have a set of types that could be dangerous.

My point was that it's not enough to focus on good intentions -- or culture -- you have to talk about mechanisms and technologies if you want a reliable system. Because the parts, humans in this case, are highly unreliable in isolation.



The article mentions home-grown authentication and authorization mechanisms and suggests that we stick to proven solutions. The problem is that, at least within the Java community, library, framework, and application server authors are not providing easy-to-use solutions that integrate well with applications. Instead there are a bunch of complex solutions that require manual configuration, proprietary extensions and arcane programming models for something that sits in front of the application, making it difficult for application authors to provide a seamless user experience. No wonder so many people are rolling their own.

This is why JSR-375 was created. It needs to happen! I've tried the reference implementation and it was awesome! If you're working on the JSR or the RI then I'm rooting for you! But I don't know if anyone is working on them these days?


Perhaps security isn't as easy as (often self-proclaimed) security experts think it is. Unlike them, developers don't devote 100% of their time to security. I couldn't care less about people standing on the sideline yelling at me what I can't do. How about proactively seeking out and suggesting meaningful improvements which actually help increase security?

Security in big corporations often boils down to a unit of people ranting about everything and nothing, and telling people what they can't do, while in fact, they should be doing the opposite.


It's always seemed to me that the web2py approach of providing a secure starter app with auth included and then letting developers break it if they want seems quite a sensible way to go. Not sure how well that works in other frameworks. http://www.web2py.com/book/default/chapter/01#Security


It is a shame that A10 and A7 were rejected.

In our mobile world, APIs are often unprotected because authentication is hard for machine-to-machine transactions. OpenID and the misused OAuth are a solution, but they are hard to implement.

A7 addressed an organizational issue completely absent from the top 10.

Since there was so much controversy, they should have made it a top 12.


> This means that the malicious script can read the user's cookies, session tokens, stored usernames and passwords, or files on a local hard drive.

I've seen those. On a website for a company that hired me to build their server infrastructure. The password was in clear text in the main cookie.

I reported it and the dev team corrected it. It was only 3 years ago...


A functional prototype is not finished software, but many people consider it to be a product.

Functional prototypes in many cases do not even implement their functional requirements properly, let alone the non-functional ones like security.

Security in any form is not a priority for many startups. Especially the ones that aim to be acquired before their hot potato blows up.


Why should the top 10 change? We still secure our houses with locks, secure our neighbourhoods with police, secure our borders with armies. We drive safer cars these days yet we still secure our road edges with barriers at dangerous points. Why would the categories of risk change on a bi-annual basis?


One might hope that this low-hanging fruit would be addressed, leaving more sophisticated attacks to fill the top 10.

Buffer overflows used to be a major vulnerability. These only stopped being such a major problem when languages that prevented them became widely used.

The lesson is probably that developers and the business don't have the time or inclination to address them, and the best defence is to make the problem impossible rather than relying on good security practices being followed.


Also, a response to some of the mitigations suggested here:

1. Prevent people from reusing passwords from other websites/lists.

Fail: you shouldn’t know if the pw is the same as any other pw. If you can tell, you are already doing it wrong.

PW + random salt protects you against reused passwords. If your application is able to compare other passwords to the current password, not only did the othe site fuck up, but you did too.

(re)Captcha: fuck you. Even if it’s after the second failed attempt. Fuck you. I hate you.

You are implementing security theater, making everything worse for the user, and killing your conversion rate for everyone but spammers, who have this down pat.

Pushing X number of rules, whether they include special characters or not: 8 vs. 10 doesn't matter that much.

Push passphrases instead.

Multi-factor is sort-of okay, but the implementations are garbage and the user experience is awful.

I’m not a security expert or a researcher. I’m a data engineer with a lot of web app experience.

But most of the advice in this thread is total garbage.

Web apps need to find a way to make the gold-standard of authentication accessible to users: per-device public/private key pairs.

Until we do that well, we suck at life and our jobs compared to native apps.

I include myself when I say that we have held ourselves to an incredibly low standard.

OWASP is a pathetically low bar. Yet we often fail.

It’s time to step up our game, people. And it’s on us to do it.


> PW + random salt protects you against reused passwords.

That only protects you if every other site does it. If you salt your passwords and some other site which doesn't is compromised, you are hosed too if your users reused passwords.

> If your application is able to compare other passwords to the current password, not only did the other site fuck up, but you did too.

Sorry, don't follow. How is it a mistake to compare a password your user is entering to a known blacklist of compromised passwords?


To your second point, giving this information leaks too much about the user trying to create a password.

I go to a site and try to register an account. The app says, “sorry, you can’t use that password because it’s been used with this username before and has been compromised.” Your attacker now knows that the user is in a compromised list.

You can’t do this without leaking information about the user.

If you compare to a global list instead of the user, then you’ve leaked the opposite information. That at least one user is on some list of compromised passwords.

You can’t do that without leaking information about at least one user.

And, as discussed, exponential fall-off doesn’t work in the world of distributed attacks.

That’s my response to point two in a nutshell. But I’ll add that the application layer should have no knowledge of the plain-text password to begin with. The password should be hashed on the client side before being sent to the application layer. Then salted and hashed and stored in the database.

The double hashing doesn’t get you anything in crypto terms as far as I know, but it means that if your application leaks or the network between the front and back end is MITMd, then you are leaking hashes and not plain text.

Of course, if the network between your front and back ends is compromised, you fucked up pretty bad anyway. But it adds at least a little effort for the attacker instead of allowing them to just grab username and password pairs in plain text.

I also salt + hash usernames in transit, so it’s not immediately obvious who is associated with what.

That gets me to your first point.

You are correct that doing all of this doesn’t protect any individual user from illicit access, and I should have been more clear about my concerns. If an individual user chooses to reuse passwords, their account can be compromised. You are 100% correct about that.

But in the case of a data breach, which is what I was thinking of in terms of "protection", it's going to be really hard to compromise a collection of salted, hashed username/password combos.

In my opinion—again, I’m not a security expert, and I welcome criticisms such as yours—I don’t think we’re going to get people to stop reusing passwords. And I don’t think we’re going to get people to use multi-auth any time soon.

I think this is the best we can do until someone comes up with a way to get per-device key pairs to work in a friendly way.

Thanks for your thoughts and criticisms and questions.


Comparing passwords to a list of compromised ones doesn't leak anything. Just ignore the user name.

And hashing on the client in addition to the server doesn't save you from any mitm attacks.



