Writing secure code takes more time than writing insecure code, and time is expensive. In every organization I've worked at, security has been neglected pretty explicitly. It's not a case of "OK, this looks secure", but more like "I am aware that our codebase has some security issues, but I need to prioritize rushing out this new feature/improving our CPA".
And this is not always the wrong choice. For most organizations, the probability of having someone who is both malicious and competent enough to exploit an XSS vuln visit your site is pretty small. The chance that you'll go under if you don't get that new feature out or improve your CPA is pretty high.
If you want to criticise the state of security (and there is definite room for criticism), I think there is a need for tools and education that let people make these decisions better. We need ways to communicate
a) how likely it is that we'll be attacked, and
b) what the consequences would be.
For now, when these questions are asked, the answers are almost always "pretty small" and "uh, possibly like really bad, depending on the attacker"
We need ways to translate these into numbers that we can compare with profit margins, etc. This is way more important than actually learning how to mitigate SQLi.
Microsoft developed quite a few of these ideas internally under the Trustworthy Computing (TwC) initiative in the early 2000s, and built a protocol and development workflow around threat modeling and security awareness: the Security Development Lifecycle (SDL). Most of their internal security-oriented protocol is published for free:
As are some of their tools. For individual developers wanting a better sense of what threats their applications may face during the design stage, there's a good Wiley book on threat modeling, Adam Shostack's "Threat Modeling: Designing for Security":
If you're really in a hurry, a lot of the typical OWASP vulnerabilities are mitigated by choosing higher-level, long-standing frameworks and abstractions (e.g. Rails, Symfony, ASP.NET MVC) that handle a lot of the things that can hurt you. From there, most of the low-hanging fruit that script kiddies will find can be mitigated simply by following the security best practices documentation for your framework before you start writing code in it.
Anecdotally, auditing web applications for security issues is my day job. The majority of the time, ignorance is the real issue, not speed of development. Teams simply don't have any idea what threats they are facing, or any real education in secure coding principles. Very rarely have I dropped vulnerabilities and had teams say "yeah, we know about that". It's way more "whoa, I didn't even know you could do that". Basic security education really matters.
Security is like a bug light for ambitious idiots now. In large companies the function has been staffed up as a separate vertical with lots of CISSPs and other alphabet soup people who run around chasing nonsense and reporting how valuable they are.
Security expertise needs to be embedded in projects and programs so that leadership with domain knowledge can make smart decisions.
There is a way. I'll refer you to How To Measure Anything by Douglas Hubbard. His model is based on a combination of things:
1. Calibrated probability assessments
2. Nth-order effects
3. Building a mathematical model
4. Monte Carlo simulation
Apply his methodology and you can estimate the impact of a "hard to quantify" variable like "security", producing a probability distribution that can be used to assign values to specific scenarios.
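For the flavor of it, here's a toy version of steps 3 and 4 in Python. All the numbers are made up for illustration; in practice they'd come from calibrated expert estimates:

    import random

    TRIALS = 100_000

    def simulate_annual_loss() -> float:
        # Calibrated estimate of the chance of a serious incident this year
        # (hypothetical range: 2% to 10%).
        p_breach = random.uniform(0.02, 0.10)
        if random.random() > p_breach:
            return 0.0
        # Loss given breach: a lognormal spanning very roughly $30k to $3M.
        return random.lognormvariate(12.5, 1.4)

    # Monte Carlo over the model.
    losses = sorted(simulate_annual_loss() for _ in range(TRIALS))
    mean = sum(losses) / TRIALS
    p95 = losses[int(0.95 * TRIALS)]
    print(f"expected annual loss: ${mean:,.0f}; 95th percentile: ${p95:,.0f}")

The output is a distribution of annualized loss that you can put next to the cost of the feature you'd be delaying.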
Yeah, it's a little bit complicated and time-consuming; but the best things in life are, no?
I've never managed to make the effort to apply his ideas with much rigor but they are definitely appealing and possibly better than the alternative of "maybe nothing".
The problem is that measuring something that is almost definitionally unknowable (how many vulns are in this code, where are they, and how likely is it that someone outside the company will find and then exploit one?) is hard. (The referenced book has some ideas, which boil down to "get some experts in a room, ask them, then average the answers".)
A good security team will do their best at this, but it's unfortunately not as easy as "OK, we found all the XSS bugs, which reduces our chances of getting owned by 2.5%".
The further (maybe depressing) question is to what degree getting breached actually harms a company. My favorite argument that the two are tenuously related is this: http://www.cs.umd.edu/~awruef/HNYM.pdf and my favorite example within it is Comodo. Comodo was hacked, and the hacker gained the ability to sign certificates of their choice with Comodo's key. Comodo had one job: be worthy of trust and not get hacked. Did it harm them? They are still the #1 cert company. Look at the Target breach, or any of the others.
The only place where a breach can be company-ending is all these bitcoin companies, which, from my seat in application security, makes them fascinating test cases. Here are a bunch that blew up after they got hacked: https://magoo.github.io/Blockchain-Graveyard/
Eventually I needed something from the querystring and for whatever reason put it into the state. It turns out that a <script> tag from the querystring, inside a string, inside a blob of JSON, in an HTML page will execute. Oops.
Fortunately IBM's security team caught it before it ever shipped. Now it's been fixed and the app has a CSP header to help nullify any future mistakes.
There were a number of weird things that were specific to my usage, so I'm not sure that the vulnerability would be there in a "proper" setup.
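For anyone wondering how that executes: the closing tag is the culprit. A minimal sketch (the state and helper names are hypothetical) of the unsafe pattern and the usual fix:

    import json

    state = {"q": "</script><script>alert(document.cookie)</script>"}

    # UNSAFE: a "</script>" inside a JSON string ends the script block early,
    # and the rest of the querystring value is parsed as a fresh <script> tag.
    unsafe = f"<script>window.__STATE__ = {json.dumps(state)};</script>"

    def json_for_script(value) -> str:
        # Escape characters that are dangerous inside an HTML <script> context.
        return (json.dumps(value)
                .replace("&", "\\u0026")
                .replace("<", "\\u003c")
                .replace(">", "\\u003e"))

    safe = f"<script>window.__STATE__ = {json_for_script(state)};</script>"

The JSON stays valid because \u003c is just the escaped form of "<" inside a JSON string.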
What got us wasn't anything on the top ten list, and as far as I know it isn't covered anywhere in OWASP.
Users reuse passwords across different websites. An attacker tried a database of usernames/passwords sourced from elsewhere; a small percentage (about 1,000 out of more than 10M requests) succeeded. About 100 of those accounts had something to steal. The attacker used a botnet, so our IP-based fail-to-ban logic was ineffective.
We thought about lots of ways to deal with this moving forward. My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates. We took a different strategy which is kind of complicated to explain, but it's not nearly as secure.
Anyway. What gets me is like: Password authentication SUCKS. It's a terrible terrible authentication strategy. It's awful. It should not be relied upon. It would be good if humans didn't reuse passwords. But we do. So it sucks.
There's an easy solution for this! Have I Been Pwned offers a free API you can use to check if a password has appeared in a previous breach.
If the password was previously disclosed in a breach, simply inform the user politely, "Sorry, this password has previously been disclosed and will likely be used by attackers to compromise your account", and make them pick a new one.
No need for 2FA. Simply add a check to prevent users from choosing passwords that have already appeared in breaches.
(If you don't want to start sending user passwords to HIBP, you can also download the list and use it internally)
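To make that concrete, here's a minimal sketch against the Pwned Passwords range endpoint. With the k-anonymity API, only the first 5 hex characters of the SHA-1 hash ever leave your server, not the password itself:

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        """Return how many times a password appears in known breaches."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        # Each response line is "HASH_SUFFIX:COUNT".
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if pwned_count("hunter2"):
        print("Sorry, this password has been disclosed in a breach; pick another.")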
Did you add a captcha to your login? You could add one after 2 failed attempts from an IP.
I understand not wanting 2FA because of lower conversion rates, but I'd recommend implementing it in the future. It's one of the best ways to mitigate an attacker who already knows your password. (The session cookie is another story.)
As for the passwords, I'd enforce standards around what a user is allowed to use: at least 10 characters, with special characters. If they reuse the password on another site and that site gets compromised, there really isn't much you can do. However, if we assume the other site hashed the reused password with a salt, and you enforce more characters, those hashes are much harder to crack, which helps prevent the plaintext password from being recovered and used to access your site.
There's a great way to demo what more characters do to a password here
You should also maintain a record of the common IP addresses an account uses and require an email/text to allow access from new ones; that would further mitigate your problem. I know you're a startup, so this is more of a luxury feature.
Passwords: I actually did raise the password requirements (to require 10 characters), but the CEO wants me to pull back to just 8 characters, and no special characters. Our conversion rate went down a tiny bit; the restrictive password requirements could be what did it.
Keeping a record of common IP addresses and enforcing something like 2FA for strange logins: Love that idea. Would love to implement it. Can't, though. The CEO won't go for it, says 2FA lowers conversion and he's probably right about that.
Enforcing a minimum length of 10 or even 12 is a great way to eliminate nearly all previously leaked passwords from being used on your site, and it further encourages users to use password managers.
Passwords are shit, but they're here to stay for a while still.
There are a number of mitigating controls that can be applied here. Most will hamper usability, some will not.
There is a "simple" solution. Enforce 2FA. If not at login, then before "dangerous" actions (transfer funds, change password, buy X/Y/Z).
E.g. with nginx it makes sense to set up a custom rate-limiting zone to prevent many requests from the same IP specifically to the login page.
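Something along these lines, say (the zone name, rate, and backend are placeholders to adapt):

    # Track clients by IP; allow on average 5 requests/minute to the login page.
    limit_req_zone $binary_remote_addr zone=login_zone:10m rate=5r/m;

    server {
        location = /login {
            # Permit a small burst, then reject with 429 instead of queueing.
            limit_req zone=login_zone burst=5 nodelay;
            limit_req_status 429;
            proxy_pass http://app_backend;  # your upstream
        }
    }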
This will not fix the root cause, but will make it considerably harder to scan for matching passwords.
You can also set up Fail2Ban to block IPs that fail to authenticate many times.
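For example, a jail roughly like this (names, paths, and thresholds are placeholders; you'd also need to write the matching filter regex for your app's log format):

    # /etc/fail2ban/jail.local
    [app-login]
    enabled  = true
    port     = http,https
    # "app-login" is a hypothetical filter you define to match failed logins.
    filter   = app-login
    logpath  = /var/log/nginx/access.log
    # Ban for an hour after 5 failures within 10 minutes.
    maxretry = 5
    findtime = 600
    bantime  = 3600

As noted above, though, a botnet spreads attempts across many IPs, so both of these raise the attacker's cost rather than stop the attack.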
There's a conference for that. Has been since 2010. Not quite sure about 2017/2018
> My boss (CEO) didn't want to implement any kind of 2 factor authentication, because it's cumbersome and will lower conversion rates
True. "Security is not a convenience". But there's no easier way to get people authenticating with something at least a bit secure than straight TOTP 2FA.
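It helps that the whole of TOTP fits in a few lines. A minimal sketch of RFC 6238 (HMAC-SHA1 over a 30-second counter), just to show there's no magic in it:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The same base32 secret is shared with the user's authenticator app.
    print(totp("JBSWY3DPEHPK3PXP"))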
From my PoV, nothing really happened to you. Yes, you may have gotten some bad publicity, but the fault lies with users who have poor (non-existent?) OPSEC.
Just as there are two types of people, those who don't make backups and those who will, there are those who don't compartmentalize their passwords and those who will.
You can find it as A2 in the current (RC2) draft of the OWASP Top 10, and on the wiki here: https://www.owasp.org/index.php/Credential_stuffing
Btw I feel like this should be in the top 10 list right? With mitigation strategies?
I don't know how much you spent on support, but U2F Zeros are dirt cheap. You could probably just proactively mail them to your clients and encourage them to use 2FA.
Or offer discounts or other perks to users with 2FA.
If you require an email address, you could send an email with a link (or code) to log in, and set a long-lived cookie to keep users logged in. Again, it's likely that users will have a good password on their email.
Both of these options have negatives in that they tie your user to an external identity provider.
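A sketch of the emailed-link idea, assuming an HMAC-signed, expiring token (the secret and TTL are placeholders):

    import base64
    import hashlib
    import hmac
    import time

    SECRET = b"load-me-from-config"  # hypothetical server-side key

    def make_login_token(email: str, ttl: int = 900) -> str:
        expires = str(int(time.time()) + ttl)
        payload = f"{email}|{expires}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(payload + b"|" + sig.encode()).decode()

    def verify_login_token(token: str) -> str | None:
        try:
            email, expires, sig = (base64.urlsafe_b64decode(token)
                                   .decode().rsplit("|", 2))
        except ValueError:
            return None
        expected = hmac.new(SECRET, f"{email}|{expires}".encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected) and time.time() < int(expires):
            return email  # log this user in, set the long-lived cookie
        return None

One-time use (store a nonce) and rate limiting on the send endpoint would be needed on top of this.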
Oh man... do I have news for you.
Parse your strings, kids.
Most tools and practices miss that point. This leads to the creation of general-purpose (not type-safe) templating systems that can't automatically enforce correct escaping everywhere by default, so they leave it to programmers to do it manually where needed, which is error-prone and ensures a permanent OWASP entry for as long as they exist.
It's also impossible to convince programmers to always implement "redundant" escaping for "harmless" values (such as IDs that are assumed to always be alphanumeric), which is a vulnerability waiting to be exploited (e.g. your policy is not to allow '"' in usernames, so you think you never need to escape it, but then some code reads a username from a query string argument and it's game over).
Functions that do risky things (Database, System calls, etc.) only accept LockStrings and are responsible for escaping, so all you have to do is provide the placeholders.
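In Python, that LockString idea might look something like this (a hypothetical sketch, not an existing library; real codebases get the same effect from parameterized queries or type-safe builders):

    import sqlite3

    class LockString:
        """Query text built only from developer-written literals.
        Runtime data goes in via placeholders, never via concatenation."""
        def __init__(self, template: str):
            self.template = template

    def run_query(conn, query, params=()):
        if not isinstance(query, LockString):
            raise TypeError("raw strings are not allowed here")
        return conn.execute(query.template, params)  # driver binds the params

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    user_input = '" OR 1=1 --'  # attacker-controlled, harmless as a bound param
    run_query(conn, LockString("SELECT * FROM users WHERE name = ?"), (user_input,))

Python can't stop you from constructing a LockString out of tainted data, which is exactly why languages with stronger type systems do this better.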
Worse, it is not uncommon to find that the layer you are using to keep things structured becomes a major source of complexity in the application.
At times it looks like a bunch of 7-year-olds trying to mimic a big corporation.
This list is a huge example of it. Instead of a text, they have a repo that generates a huge PDF, mentioned in a press release, with the release described verbosely in a wiki!
And I went through all those hops, and I still couldn't find a single link that points me to what "Injection" means.
I was mostly interested in learning why they have such atrocious publication/content organization, though... I write on that subject internally for my employer.
If OWASP's goal is to inform the public, and not to look like a mega-corporation, they are doing things very wrong. Their press releases are less parseable than the worst our legal department can produce.
I’ve tried a few different strategies to get around this.
1. Build the backend first. Don't show a UI that looks anything like it's functional until you sneak in the requirements that you know are needed but can't get buy-in for.
Fails because PMs and stakeholders don’t see progress fast enough.
2. Plan security into the design specs and feature list.
Fails because there's always someone higher up than you who (like when presenting speed as a feature) will cross it off the list because "we're behind a VPN/our users are too stupid to hack us/the only speed that matters is how fast we can deploy this".
3. Build the entire front end first with absolutely no backend wiring at all and slowly add the connecting db functionality and take your time adding security checks along the way.
This also fails because once PMs and stakeholders see the pretty stuff, they assume it’s almost done and have no tolerance for “slow” progress.
Direct, straightforward communication about the importance of security doesn’t work.
Obfuscating your team’s process to sneak in best practices doesn’t work.
The bottom line is that, again, much like speed, if your leadership doesn't see the value or can't be persuaded to see it, it's not going to happen, even with very well-educated teams.
This is a cultural issue that an individual contributor can only do so much about, mostly by choosing the safest frameworks to start with. And that's about it.
It’s added a number of items to the list of things I ask in interviews now that I’m on the job market again.
Where does the company prioritize security in web applications? Where does it prioritize speed?
How hard do people have to fight to get these included as product features?
I won’t make a blanket statement that if those answers are not to my liking, I won’t take the job. But you need to know where these things stand as company commitments before you accept a job with a primary role of web developer.
It sounds like this approach should work, because you can sell a bunch of reasons rather than just security. If you don't take the needed time to develop the code, you will have correctness (not enough testing) and maintenance (not enough refactoring) problems alongside security issues. If the company's leadership shuns all three in favor of quicker deployment, then security is most likely not going to be the biggest problem; it will be all the bugs you have to chase down in spaghetti code.
That may sound glib, but this is really just asking everyone to try, right? I would guess that the vast majority of security mistakes stem from ignorance not apathy, and that most coders are trying. Relying on people trying clearly isn't working because there's simply too much to know and it requires too constant of attention.
I think we actually do need better tooling, in terms of things like using type systems to flag sensitive data and automatically suggesting a threat modeling report include that item.
The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?
My mother taught medical school, and she had a saying: "What do you call the least qualified idiot who passes my class?" The answer is "Doctor". There are good coders and bad coders, and unless we start somehow forbidding the bad (but still good enough to get hired) ones from working with/for us, this isn't going to change.
It may not be cost effective, but then it's still not a lone developer problem. It's a management decision.
That's one way of looking at the issue. Another way is that this is a way for an individual developer to stand out above his or her peers.
But, this took over a decade. It certainly didn't happen overnight. The inception for this is in training, a security team that markets itself, and in constant communication with managers.
Key to this is that devs don't set the direction of their coding projects, which is where many security teams get it wrong. The C-level on down to team managers set the direction. They need to be sold on the security team and the strategic value of their output. Just like passing functional tests, passing security tests needs to be part of a minimum viable product in a way that doesn't stop shipment of products.
> The suggestion that people spend a lot of effort all the time is clearly not going to work -- why can't we ease that barrier by focusing on better tooling so security becomes a natural part of the process, enforced by actual mechanisms?
Because, in 2017, developers need to understand more about security than they did just 3 years ago. Universities really haven't kept up with this. The reason you can't push security off to tooling, beyond the minimal low-hanging fruit, is that human ingenuity is required. Vulnerabilities happen from wrong function calls, yes, but also from using the wrong types of calls for the particular purpose.
Which is detectable through static analysis of the type signatures -- at least to the point of ensuring that a threat assessment document includes a section on what they did there.
That's my point -- why doesn't my IDE auto-generate a template threat assessment report to share with security as part of my code review process? ...why does it rely on me understanding the possible threats, rather than clearly expressing what data I use and letting security set flags on what I should watch?
That sounds like a lack of tooling -- because there's not a technical reason.
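As a proof of concept, tagging sensitive parameters is already possible with plain type annotations; a hypothetical sketch of what such tooling could hang off of:

    from typing import Annotated, get_type_hints

    class Sensitive:
        """Marker for data that should appear in a threat assessment."""
        def __init__(self, label: str):
            self.label = label

    def handle_payment(card_number: Annotated[str, Sensitive("card number")],
                       note: str) -> None:
        ...

    def threat_report(func) -> list[str]:
        # Collect Sensitive-tagged inputs as seed items for a threat checklist.
        findings = []
        for name, hint in get_type_hints(func, include_extras=True).items():
            for meta in getattr(hint, "__metadata__", ()):
                if isinstance(meta, Sensitive):
                    findings.append(f"{func.__name__}({name}): handles {meta.label}")
        return findings

    print(threat_report(handle_payment))
    # ['handle_payment(card_number): handles card number']

The hard part, as the sibling comments note, is everything this doesn't capture: the annotations only say what data flows where, not what the code means to do with it.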
> Which is detectable through static analysis of the type signatures -- at least to the point of ensuring that a threat assessment document includes a section on what they did there.
I don't think you read that before replying. Type signatures are orthogonal to the purpose of code. Just because a call is made to XTS doesn't convey that disk encryption is used. That is the issue you glossed over.
An IDE only has enough information to threat model an application correctly if it had enough information to write the application in the first place.
The "culture" aspect may come in when you're encouraging people to use languages such as Rust over C++, rather than something like "always follow this 50-point security checklist no matter what language you use".
I mean this in the best way possible, but your answer shows that even people on HN don't understand what goes into securing a product.
I'm hoping we'll eventually be able to flag bugs with the types involved, and let systems automatically look for combinations that have a set of types that could be dangerous.
My point was that it's not enough to focus on good intentions -- or culture -- you have to talk about mechanisms and technologies if you want a reliable system. Because the parts, humans in this case, are highly unreliable in isolation.
This is why JSR-375 was created. It needs to happen! I've tried the reference implementation and it was awesome! If you're working on the JSR or the RI then I'm rooting for you! But I don't know if anyone is working on them these days?
Security in big corporations often boils down to a unit of people ranting about everything and nothing, and telling people what they can't do, when in fact they should be doing the opposite.
In our mobile field, APIs are often unprotected because authentication is hard for machine-to-machine transactions. OpenID and the often-misused OAuth are a solution, but they are hard to implement.
A7 addressed an organizational issue completely absent from the top 10.
Since there are so many controversies, they should have made it a top 12.
I've seen those. On a website for a company that hired me to build their server infrastructure. The password was in clear text in the main cookie.
I reported it and the dev team corrected it. That was only 3 years ago...
Functional prototypes in many cases do not even implement their functional requirements properly, let alone the non-functional ones like security.
Security in any form is not a priority for many startups. Especially the ones that aim to be acquired before their hot potato blows up.
Buffer overflows used to be a major vulnerability. These only stopped being such a major problem when languages that prevented them became widely used.
The lesson is probably that developers and the business don't have the time or inclination to address them, and the best defence is to make the problem impossible rather than relying on good security practices being followed.
1. Prevent people from reusing passwords from other websites/lists.
Fail: you shouldn’t know if the pw is the same as any other pw. If you can tell, you are already doing it wrong.
PW + random salt protects you against reused passwords. If your application is able to compare other passwords to the current password, not only did the other site fuck up, but you did too.
(re)Captcha: fuck you. Even if it’s after the second failed attempt. Fuck you. I hate you.
You are implementing security theater, making everything worse for the user, and killing your conversion rate for everyone but spammers, who have this down pat.
Pushing X number of rules, whether they require special characters or not: 8 vs. 10 doesn't matter that much.
Push passphrases instead (a quick sketch follows after this list).
Multi-factor is sort-of okay, but the implementations are garbage and the user experience is awful.
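The passphrase sketch promised above: with a wordlist (the filename is a placeholder; e.g. the EFF diceware list), it's a few lines:

    import secrets

    # One word per line; a ~7,776-word list gives ~12.9 bits of entropy per word.
    with open("wordlist.txt") as f:
        words = [w.strip() for w in f if w.strip()]

    # Five random common words: ~64 bits, and far easier to remember
    # than "Tr0ub4dor&3"-style rule-satisfying passwords.
    print(" ".join(secrets.choice(words) for _ in range(5)))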
I’m not a security expert or a researcher. I’m a data engineer with a lot of web app experience.
But most of the advice in this thread is total garbage.
Web apps need to find a way to make the gold-standard of authentication accessible to users: per-device public/private key pairs.
Until we do that well, we suck at life and our jobs compared to native apps.
I include myself when I say that we have held ourselves to an incredibly low standard.
OWASP is a pathetically low bar. Yet we often fail.
It’s time to step up our game, people. And it’s on us to do it.
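For what it's worth, the core of that gold standard is small. A sketch of per-device challenge-response using the `cryptography` package (this is roughly the idea behind WebAuthn/U2F, which add hardware key storage and a lot of protocol care on top):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the device generates a key pair and registers only the
    # public half with the server. The private key never leaves the device.
    device_key = Ed25519PrivateKey.generate()
    stored_public_key = device_key.public_key()

    # Login: the server sends a fresh random challenge; the device signs it,
    # and the server verifies against the stored public key.
    challenge = os.urandom(32)
    signature = device_key.sign(challenge)

    try:
        stored_public_key.verify(signature, challenge)
        print("device authenticated")
    except InvalidSignature:
        print("rejected")

The hard problems are all UX: enrollment, device loss, and syncing, which is why we still don't do this well.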
That only protects you if every other site does it. If you salt your passwords and some other site that doesn't gets compromised, you are hosed too if your users reused passwords.
> If your application is able to compare other passwords to the current password, not only did the other site fuck up, but you did too.
Sorry, don't follow. How is it a mistake to compare a password your user is entering to a known blacklist of compromised passwords?
I go to a site and try to register an account. The app says, “sorry, you can’t use that password because it’s been used with this username before and has been compromised.” Your attacker now knows that the user is in a compromised list.
You can’t do this without leaking information about the user.
If you compare to a global list instead of the user, then you’ve leaked the opposite information. That at least one user is on some list of compromised passwords.
You can’t do that without leaking information about at least one user.
And, as discussed, exponential fall-off doesn’t work in the world of distributed attacks.
That's my response to point two in a nutshell. But I'll add that the application layer should have no knowledge of the plain-text password to begin with. The password should be hashed on the client side before being sent to the application layer, then salted and hashed again before being stored in the database.
The double hashing doesn't get you anything in crypto terms as far as I know, but it means that if your application leaks, or the network between the front and back end is MITMed, you are leaking hashes and not plain text.
Of course, if the network between your front and back ends is compromised, you fucked up pretty bad anyway. But it adds at least a little effort for the attacker instead of allowing them to just grab username and password pairs in plain text.
I also salt + hash usernames in transit, so it’s not immediately obvious who is associated with what.
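Concretely, the scheme I'm describing might look like this (parameters and the site tag are illustrative; as replies note, it doesn't stop a full MITM, it just avoids shipping plain text around):

    import hashlib
    import hmac
    import os

    # Client side: derive a site-specific hash so the raw password never
    # leaves the browser/app.
    def client_hash(site: str, username: str, password: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(),
                                   f"{site}:{username}".encode(), 100_000)

    # Server side: treat the client hash as the credential, then salt+hash it.
    def store(credential: bytes) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        return salt, hashlib.scrypt(credential, salt=salt, n=2**14, r=8, p=1)

    def check(credential: bytes, salt: bytes, digest: bytes) -> bool:
        return hmac.compare_digest(
            hashlib.scrypt(credential, salt=salt, n=2**14, r=8, p=1), digest)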
That gets me to your first point.
You are correct that doing all of this doesn’t protect any individual user from illicit access, and I should have been more clear about my concerns. If an individual user chooses to reuse passwords, their account can be compromised. You are 100% correct about that.
But in the case of a data breach, which is what I was thinking of in terms of "protection", it's going to be really hard to compromise a collection of salted, hashed username/password combos.
In my opinion—again, I’m not a security expert, and I welcome criticisms such as yours—I don’t think we’re going to get people to stop reusing passwords. And I don’t think we’re going to get people to use multi-auth any time soon.
I think this is the best we can do until someone comes up with a way to get per-device key pairs to work in a friendly way.
Thanks for your thoughts and criticisms and questions.
And hashing on the client in addition to the server doesn't save you from any mitm attacks.