
The OWASP Top 10 is killing me - sidcool
https://insights.hpe.com/content/hpe-nxt/en/articles/2017/10/the-owasp-top-10-is-killing-me-and-killing-you.html
======
sixhobbits
Not all vulnerabilities stem from ignorance, although this seems to be the
default assumption of the infosec community.

Writing secure code takes more time than writing insecure code. Time is
expensive. In every organization I've worked, security has been neglected
pretty explicitly. It's not a case of "OK this looks secure", but instead more
like "I am aware that our codebase has some security issues but I need to
prioritize rushing out this new feature/improving our CPA".

And this is not always the wrong choice. For most organizations, the
probability of having someone who is both malicious and competent enough to
exploit an XSS vuln visit your site is pretty small. The chance that you'll go
under if you don't get that new feature out or improve your CPA is pretty
high.

If you want to criticise the state of security (and there is definite room for
criticism), I think there is a need for tools and education to allow people to
better make these decisions. We need ways to communicate

a) how likely is it that we'll be attacked?

b) what would the consequences be?

For now, when these questions are asked, the answers are almost always "pretty
small" and "uh, possibly really bad, depending on the attacker".

We need ways to translate these into numbers that we can compare with profit
margins, etc. This is _way_ more important than actually learning how to
mitigate SQLi.
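As a strawman for what those numbers could look like, the classic annualized-loss-expectancy calculation does exactly this comparison. All figures below are invented for illustration:

```python
def annualized_loss_expectancy(per_incident_cost: float,
                               incidents_per_year: float) -> float:
    """Expected yearly loss: cost of one incident times expected
    incidents per year. Crude, but it yields a number you can put
    next to a feature's projected revenue."""
    return per_incident_cost * incidents_per_year

# Invented figures: cleaning up an exploited XSS vuln costs ~$50k,
# and we guess there's a 5% chance of that happening in a given year.
xss_risk = annualized_loss_expectancy(50_000, 0.05)   # 2500.0 per year

# Invented figure: the new feature is projected to add $40k/year.
feature_value = 40_000

# On these numbers, shipping the feature first is the rational call.
print(xss_risk, feature_value)
```

The hard part is that both inputs are guesses, but even rough guesses make the trade-off explicit instead of implicit.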

~~~
Spooky23
I think the infosec community is the biggest barrier to improving security.

Security is like a bug light for ambitious idiots now. In large companies the
function has been staffed up as a separate vertical with lots of CISSPs and
other alphabet soup people who run around chasing nonsense and reporting how
valuable they are.

Security expertise needs to be embedded in projects and programs so that
leadership with domain knowledge can make smart decisions.

~~~
deoxxa
It sounds like your problem is with the infosec _industry_ rather than the
infosec _community_. The community in general would agree with you about the
industry being kind of broken, I'd say.

------
nfriedly
This is one thing I like about IBM: they have a separate security team that
audits stuff before you ship it. I was working on a react app where I set up
server-side rendering, and then had it JSON-encode the state and dump it into
a script tag in the end of the HTML. My thinking at the time was "It's JSON-
encoded, and it's all the user's own data anyways, so it's safe."

Eventually I needed something from the querystring and for whatever reason put
it into the state. It turns out that a <script> tag from the querystring in a
string in a blob of JSON in an HTML page will execute. Oops.

Fortunately IBM's security team caught it before it ever shipped. Now it's
been fixed and the app has a CSP header to help nullify any future mistakes.
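For anyone who hits the same thing: the failure mode is that the HTML parser ends a script block at the first literal `</script>`, even inside a JSON string. A minimal sketch of the standard fix is to escape angle brackets as `\uXXXX` sequences, which stays valid JSON but is inert to the HTML parser:

```python
import json

def safe_json_for_script(data) -> str:
    """Serialize state for embedding in a <script> tag.
    Plain json.dumps leaves '</script>' intact, which the HTML parser
    treats as the end of the script block -- anything after it executes.
    Escaping '<', '>' and '&' as \\uXXXX keeps the JSON valid while
    making it harmless inside HTML."""
    return (json.dumps(data)
            .replace("<", "\\u003c")
            .replace(">", "\\u003e")
            .replace("&", "\\u0026"))

state = {"q": "</script><script>alert(1)</script>"}

unsafe = json.dumps(state)          # contains a literal </script>
safe = safe_json_for_script(state)  # no raw angle brackets at all

assert "</script>" in unsafe
assert "<" not in safe and ">" not in safe
assert json.loads(safe) == state    # still round-trips to the same data
```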

~~~
mmcnl
That's weird, shouldn't that be a responsibility of your router? Or did you
roll your own?

~~~
nfriedly
I think it was a combination of react-router and react-router-redux, and
something I was doing for SSR that led to the issue. The initial fix was just
to delete the react-router-redux data from the store just before
JSON.stringify'ing it.

There were a number of weird things that were specific to my usage, so I'm not
sure that the vulnerability would be there in a "proper" setup.

------
methodover
We experienced our first successful attack at my startup a few weeks ago.

What got us wasn't anything on the top ten list; as far as I know, it isn't
covered anywhere in OWASP.

Users reuse passwords across different websites. An attacker tried a database
of usernames/passwords sourced from elsewhere; a small percentage (about 1,000
out of more than 10M requests) succeeded, and about 100 of those accounts had
something to steal. The attacker used a botnet, so our IP-based fail2ban-style
blocking was ineffective.

We thought about lots of ways to deal with this moving forward. My boss (CEO)
didn't want to implement any kind of 2 factor authentication, because it's
cumbersome and will lower conversion rates. We took a different strategy which
is kind of complicated to explain, but it's not nearly as secure.

Anyway. What gets me is like: Password authentication SUCKS. It's a terrible
terrible authentication strategy. It's awful. It should not be relied upon. It
would be good if humans didn't reuse passwords. But we do. So it sucks.
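(Aside for anyone facing the same attack: one standard mitigation when per-IP blocking fails against a botnet is to throttle failures per *target account* rather than per source IP. A toy sketch, with made-up thresholds, not production code:)

```python
import time
from collections import defaultdict

class AccountThrottle:
    """Rate-limit login failures per account, not per IP, so a botnet
    spreading requests across thousands of IPs still gets slowed down
    when it hammers the same usernames."""
    def __init__(self, max_failures: int = 5, window: float = 900.0):
        self.max_failures = max_failures   # failures allowed per window
        self.window = window               # window length in seconds
        self.failures = defaultdict(list)  # username -> failure timestamps

    def allow_attempt(self, username: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Keep only failures inside the sliding window.
        recent = [t for t in self.failures[username] if now - t < self.window]
        self.failures[username] = recent
        return len(recent) < self.max_failures

    def record_failure(self, username: str, now: float = None) -> None:
        self.failures[username].append(time.time() if now is None else now)

throttle = AccountThrottle(max_failures=3, window=900)
for _ in range(3):
    assert throttle.allow_attempt("alice", now=0)
    throttle.record_failure("alice", now=0)
assert not throttle.allow_attempt("alice", now=1)    # account locked out
assert throttle.allow_attempt("alice", now=1000)     # window expired
```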

~~~
sixhobbits
You can mitigate this almost completely by finding that "database of
usernames/passwords sourced from elsewhere" (they're not hard to find) and
blacklisting them. Users should not be allowed to use any breached password
when they register. A simple message saying "this password was included in a
recent breach and is therefore not secure" should suffice to prevent
users getting annoyed that they can't use their favourite password on your
site.

Enforcing a minimum length of 10 or even 12 is a great way to eliminate nearly
all previously leaked passwords from being used on your site, and it further
encourages users to use password managers.

Passwords are shit, but they're here to stay for a while still.

~~~
wongarsu
HaveIBeenPwned makes this really easy by publishing a list of hashed passwords
that have been observed in breaches [1]. The list is by no means complete, but
it should cover a lot and is very easy to set up.

1:
[https://haveibeenpwned.com/Passwords](https://haveibeenpwned.com/Passwords)
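The range API is designed so you never send the password (or even its full hash) off-box: you SHA-1 the password locally, send only the first five hex characters, and match the returned suffixes yourself. A sketch of the client-side half (the network call itself is omitted):

```python
import hashlib

def hibp_range_query(password: str):
    """Prepare a k-anonymity query for the Pwned Passwords range API.
    Only the first 5 hex chars of the SHA-1 leave your server:
        GET https://api.pwnedpasswords.com/range/<prefix>
    The response lists suffix:count pairs; you match the suffix locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, api_response: str) -> int:
    """Parse a 'SUFFIX:COUNT' response body; return breach count (0 = not found)."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_range_query("password")
# SHA-1("password") = 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
print(prefix)  # 5BAA6
```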

~~~
methodover
That... is a great idea. I'll do it. Thank you!

------
beager
The OWASP Top 10 isn't changing because we can't (or won't) stop leaving those
issues unpatched. Quite telling that when talking about how to move beyond the
baseline security struggles of the OWASP Top 10, TFA provides only superficial
suggestions, rather than actual links to libraries, tools, and implementation
guides that can be used to quash or audit OWASP Top 10 issues.

------
ynniv
Many of these are due to the use of unstructured strings, which we do because
we’re lazy. We’re so lazy about it that our modern languages don’t even
support the ability to distinguish user strings from application strings
(perl’s taint mode). The workaround in development has been extensive testing,
but this is insufficient in an adversarial environment. The best solution is
to bring structure to your strings so that you can reason about how they can
be abused.

Parse your strings, kids.

~~~
minitech
> We’re so lazy about it that our modern languages don’t even support the
> ability to distinguish user strings from application strings (perl’s taint
> mode).

“User strings vs. application strings” is too coarse. You just need to enforce
types (a type for SQL – see query builders, a type for HTML – see MarkupSafe,
etc.) and provide safe constructors for those types. Safe syntactic sugar for
those is supported by JavaScript (template strings), Rust (macros), C++
(overloadable string literals), Haskell (overloadable string literals and
Template Haskell), and probably plenty of other modern languages. For the
others, explicit type wrappers are generally enough (like the aforementioned
MarkupSafe in Python) – the only thing that’s lacking is enforcement by
libraries.
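As a toy illustration of the wrapper-type approach in Python (inspired by, but much simpler than, MarkupSafe's real API): make "safe HTML" a distinct type, and make mixing in a raw string a type error rather than an XSS.

```python
import html

class SafeHTML:
    """Minimal wrapper type: an instance means 'already safe to emit'.
    Plain str is treated as untrusted and must be escaped on the way in."""
    def __init__(self, value: str):
        self.value = value

    @classmethod
    def from_untrusted(cls, text: str) -> "SafeHTML":
        # Escaping is the only door from untrusted str into SafeHTML.
        return cls(html.escape(text))

    def __add__(self, other):
        if not isinstance(other, SafeHTML):
            # Refuse to concatenate raw strings: fail loudly, not silently.
            raise TypeError("untrusted str must go through from_untrusted()")
        return SafeHTML(self.value + other.value)

    def __str__(self):
        return self.value

template = SafeHTML("<p>Hello, ")
user = SafeHTML.from_untrusted("<script>alert(1)</script>")
page = template + user + SafeHTML("</p>")
print(page)  # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```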

------
gcb0
I love OWASP, but everything they do has zero usability.

At times it looks like a bunch of 7-year-olds trying to mimic a big corporation.

This list is a huge example of it. Instead of a text, they have a repo that
generates a huge PDF, mentioned in a press release, with the release described
verbosely in a wiki!

And I went through all those hoops, and I still couldn't find a single link
that points me to what "Injection" means.

~~~
rst
It's a generalization of SQLi, to cover situations where the queries (or
commands, or whatever) built up by unguarded string concatenation are
something other than SQL. (Though, oddly, the examples in the current draft
seem to all be SQL based.)
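For example, with SQL itself (sqlite3 used here purely for illustration), the whole class of bug is string concatenation turning data into query syntax, and placeholders close it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenation lets the input become query *syntax*.
query = "SELECT name FROM users WHERE name = '" + attacker_input + "'"
leaked = conn.execute(query).fetchall()
assert len(leaked) == 2   # the injected OR clause matched every row

# Safe: a placeholder keeps the input as *data*, never as syntax.
rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()
assert rows == []   # no user is literally named "nobody' OR '1'='1"
```

The same shape applies to shell commands, LDAP filters, XPath, and so on, which is why the category is "Injection" rather than just SQLi.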

~~~
gcb0
thanks. have an upvote.

I was mostly interested in learning why they have such atrocious
publications/content organization though... I write on that subject internally
for my employer.

If OWASP's goal is to inform the public, and not to look like a
mega-corporation, they are doing things very wrong. Their press releases are
less parseable than the worst our legal department can produce.

------
ianamartin
In my experience, it’s not been a lack of understanding or knowledge on the
devs’ part. It’s been more about how much of a hurry we are in to deploy.

I’ve tried a few different strategies to get around this.

1\. Build the backend first. Don’t show a UI that looks anything like it’s
functional until you sneak in the requirements that you know are needed but
can’t get buy-in for.

Fails because PMs and stakeholders don’t see progress fast enough.

2\. Plan security into the design specs and feature list.

Fails because there’s always someone who (like when presenting speed as a
feature) is higher up than you who will cross it off the list because “we’re
behind a VPN/our users are too stupid to hack us/the only speed that matters
is how fast we can deploy this.”

3\. Build the entire front end first with absolutely no backend wiring at all
and slowly add the connecting db functionality and take your time adding
security checks along the way.

This also fails because once PMs and stakeholders see the pretty stuff, they
assume it’s almost done and have no tolerance for “slow” progress.

Direct, straightforward communication about the importance of security doesn’t
work.

Obfuscating your team’s process to sneak in best practices doesn’t work.

The bottom line is that—again, much like speed—if your leadership doesn’t see
the value or can’t be persuaded to see it, it’s not going to happen, even from
very well-educated teams.

This is a cultural issue that an individual contributor can only do so much
about by choosing the safest frameworks to start with. And that’s about it.

It’s added a number of items to the list of things I ask in interviews now
that I’m on the job market again.

Where does the company prioritize security in web applications? Where does it
prioritize speed?

How hard do people have to fight to get these included as product features?

I won’t make a blanket statement that if those answers are not to my liking, I
won’t take the job. But you need to know where these things stand as company
commitments before you accept a job with a primary role of web developer.

~~~
Sacho
> 3\. Build the entire front end first with absolutely no backend wiring at
> all and slowly add the connecting db functionality and take your time adding
> security checks along the way. This also fails because once PMs and
> stakeholders see the pretty stuff, they assume it’s almost done and have no
> tolerance for “slow” progress.

It sounds like this approach should work, because you can sell a bunch of
reasons rather than just security. If you don't take the time needed to
develop the code properly, you will have correctness (not enough testing) and
maintenance (not enough refactoring) problems alongside security issues. If
the company's leadership shuns all three in favor of quicker deployment, then
security is most likely not going to be your biggest problem; that would be
all the bugs you have to chase down in spaghetti code.

------
SomeStupidPoint
> Create a culture of writing and deploying secure code.

How?

That may sound glib, but this is really just asking everyone to try, right? I
would guess that the vast majority of security mistakes stem from ignorance,
not apathy, and that most coders are trying. Relying on people trying clearly
isn't working, because there's simply too much to know and it demands constant
attention.

I think we actually _do_ need better tooling, in terms of things like using
type systems to flag sensitive data and automatically suggesting a threat
modeling report include that item.

The suggestion that people spend a lot of effort all the time is clearly not
going to work -- why can't we ease that barrier by focusing on better tooling
so security becomes a natural part of the process, enforced by actual
mechanisms?

~~~
module0000
You can't control humans, unfortunately. Humans write code, and some of them
will care more about the quality of their work than others do. These people
will at some point work above/below/with you, and their mistakes will cause
you some sort of inconvenience.

My mother taught medical school, and she had a saying: "What do you call the
least qualified idiot who passes my class?" The answer is "Doctor." There are
good coders and bad coders, and unless we start somehow forbidding the bad
(but still good enough to get hired) ones to work with/for us, this isn't
going to change.

~~~
diroussel
If one developer can introduce a bug, that is life. But if it can go
undetected by the compiler, unit tests, code review, component tests,
acceptance tests, mutation testing, static analysis, pen test, etc, then maybe
the process can be improved.

It may not be cost effective, but then it's still not a lone developer
problem. It's a management decision.

~~~
IncRnd
> It may not be cost effective, but then it's still not a lone developer
> problem. It's a management decision.

That's one way of looking at the issue. Another way is that this is a way for
an individual developer to stand out above his or her peers.

------
nanodano
Most developers don't know security

[http://www.akashasec.com/most-developers-dont-know-
security/](http://www.akashasec.com/most-developers-dont-know-security/)

------
tofflos
The article mentions home-grown authentication and authorization mechanisms
and suggests that we stick to proven solutions. The problem, at least within
the Java community, is that library, framework, and application server authors
are not providing easy-to-use solutions that integrate well with applications.
Instead there are a bunch of complex solutions that require manual
configuration, proprietary extensions, and arcane programming models for
something that sits in front of the application, making it difficult for
application authors to provide a seamless user experience. No wonder so many
people are rolling their own.

This is why JSR-375 was created. It needs to happen! I've tried the reference
implementation and it was awesome! If you're working on the JSR or the RI, I'm
rooting for you! But I don't know if anyone is working on them these days.

------
mmcnl
Perhaps security isn't as easy as (often self-proclaimed) security experts
think it is. Unlike them, developers don't devote 100% of their time to
security. I couldn't care less about people standing on the sidelines yelling
about what I can't do. How about proactively seeking out and suggesting
meaningful improvements that actually help increase security?

Security in big corporations often boils down to a unit of people ranting
about everything and nothing, and telling people what they can't do, while in
fact, they should be doing the opposite.

------
tim333
It's always seemed to me that the web2py approach of providing a secure
starter app with auth included, then letting developers break it if they want,
is quite a sensible way to go. Not sure how well that works in other
frameworks.
[http://www.web2py.com/book/default/chapter/01#Security](http://www.web2py.com/book/default/chapter/01#Security)

------
BrandoElFollito
It is a shame that A10 and A7 were rejected.

In our mobile world, APIs are often unprotected because authentication is hard
for machine-to-machine transactions. OpenID and the often-misused OAuth are a
solution, but they are hard to implement.

A7 addressed an organizational issue completely absent from the top 10.

Since there is so much controversy, they should have made it a top 12.

------
JeanMarcS
> This means that the malicious script can read the user's cookies, session
> tokens, _stored usernames and passwords_ , or files on a local hard drive.

I've seen those. On a website for a company that hired me to build their
server infrastructure, the password was in clear text in the main cookie.

I flagged it and the dev team corrected it. That was only 3 years ago...

------
partycoder
A functional prototype is not finished software, but many people consider it a
product.

Functional prototypes in many cases do not even implement their functional
requirements properly, let alone non-functional ones like security.

Security in any form is not a priority for many startups, especially the ones
that aim to be acquired before their hot potato blows up.

------
vacri
Why should the top 10 change? We still secure our houses with locks, secure
our neighbourhoods with police, secure our borders with armies. We drive safer
cars these days yet we still secure our road edges with barriers at dangerous
points. Why would the categories of risk change on a bi-annual basis?

~~~
MattPalmer
One might hope that these low hanging fruit would be addressed, leaving more
sophisticated attacks to fill the top 10.

Buffer overflows used to be a major vulnerability. These only stopped being
such a major problem when languages that prevented them became widely used.

The lesson is probably that developers and the business don't have the time or
inclination to address them, and the best defence is to make the problem
impossible rather than relying on good security practices being followed.

------
ianamartin
Also, a response to some of the mitigations suggested here:

1\. Prevent people from reusing passwords from other websites/lists.

Fail: you shouldn’t know whether the password is the same as any other
password. If you can tell, you are already doing it wrong.

PW + random salt protects you against reused passwords. If your application is
able to compare other passwords to the current password, not only did the
other site fuck up, but you did too.

(re)Captcha: fuck you. Even if it’s after the second failed attempt. Fuck you.
I hate you.

You are implementing security theater, making everything worse for the user,
and killing your conversion rate for everyone but spammers, who have this
down pat.

Pushing some number of rules, whether they require special characters or not:
8 vs. 10 doesn’t matter that much.

Push passphrases instead.

Multi-factor is sort-of okay, but the implementations are garbage and the user
experience is awful.

I’m not a security expert or a researcher. I’m a data engineer with a lot of
web app experience.

But most of the advice in this thread is total garbage.

Web apps need to find a way to make the gold-standard of authentication
accessible to users: per-device public/private key pairs.

Until we do that well, we suck at life and our jobs compared to native apps.

I include myself when I say that we have held ourselves to an incredibly low
standard.

OWASP is a pathetically low bar. Yet we often fail.

It’s time to step up our game, people. And it’s on us to do it.

~~~
jjnoakes
> PW + random salt protects you against reused passwords.

That only protects you if every other site does it. If you salt your passwords
and some other site which doesn't is compromised, you are hosed too if your
users reused passwords.

> If your application is able to compare other passwords to the current
> password, not only did the othe site fuck up, but you did too.

Sorry, don't follow. How is it a mistake to compare a password your user is
entering to a known blacklist of compromised passwords?

~~~
ianamartin
To your second point, giving this information leaks too much about the user
trying to create a password.

I go to a site and try to register an account. The app says, “sorry, you can’t
use that password because it’s been used with this username before and has
been compromised.” Your attacker now knows that the user is in a compromised
list.

You can’t do this without leaking information about the user.

If you compare to a global list instead of the user, then you’ve leaked the
opposite information. That at least one user is on some list of compromised
passwords.

You can’t do that without leaking information about at least one user.

And, as discussed, exponential fall-off doesn’t work in the world of
distributed attacks.

That’s my response to point two in a nutshell. But I’ll add that the
application layer should have no knowledge of the plain-text password to begin
with. The password should be hashed on the client side before being sent to
the application layer. Then salted and hashed and stored in the database.

The double hashing doesn’t get you anything in crypto terms as far as I know,
but it means that if your application leaks or the network between the front
and back end is MITMd, then you are leaking hashes and not plain text.

Of course, if the network between your front and back ends is compromised, you
fucked up pretty bad anyway. But it adds at least a little effort for the
attacker instead of allowing them to just grab username and password pairs in
plain text.

I also salt + hash usernames in transit, so it’s not immediately obvious who
is associated with what.
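A toy sketch of that layering (the site string, KDF choice, and iteration count here are arbitrary placeholders of mine, not a recommendation, and this is defense-in-depth on top of TLS, not a replacement for it):

```python
import hashlib
import hmac
import os

def client_prehash(password: str, site: str) -> str:
    """Runs on the client: the server never sees the plain password.
    Binding the hash to a site string keeps the prehash from being
    replayed against a different service."""
    return hashlib.sha256(f"{site}:{password}".encode()).hexdigest()

def server_store(prehash: str):
    """Server side: treat the client prehash as the 'password' and run
    it through a salted, slow KDF as usual before storing."""
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 200_000)
    return salt, stored

def server_verify(prehash: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", prehash.encode(), salt, 200_000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored)

ph = client_prehash("hunter2", "example.com")
salt, stored = server_store(ph)
assert server_verify(ph, salt, stored)
assert not server_verify(client_prehash("wrong", "example.com"), salt, stored)
```

As noted, the double hashing buys nothing cryptographically; its only point is that an application-layer leak exposes hashes rather than plain-text passwords.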

That gets me to your first point.

You are correct that doing all of this doesn’t protect any individual user
from illicit access, and I should have been more clear about my concerns. If
an individual user chooses to reuse passwords, their account can be
compromised. You are 100% correct about that.

But in the case of a data breach, which is what I was thinking of in terms of
“protection”, it’s going to be really hard to compromise a collection of
salted, hashed username/password combos.

In my opinion—again, I’m not a security expert, and I welcome criticisms such
as yours—I don’t think we’re going to get people to stop reusing passwords.
And I don’t think we’re going to get people to use multi-auth any time soon.

I think this is the best we can do until someone comes up with a way to get
per-device key pairs to work in a friendly way.

Thanks for your thoughts and criticisms and questions.

~~~
jjnoakes
Comparing passwords to a list of compromised ones doesn't leak anything. Just
ignore the user name.

And hashing on the client in addition to the server doesn't save you from any
mitm attacks.

