Ask HN: What is the most common security mistake you see?
75 points by jtraffic on May 23, 2017 | 74 comments
Mainly I'm asking about websites, but other security mistakes are welcome (e.g., IoT, apps, firmware)

Also acceptable: Not common but happens reasonably often and exposes large vulnerabilities.




Not even trying.

Seriously, you wouldn't believe the number of companies/people who don't even try. I'm talking "db.run(request.body)" levels of not-trying, "the shared admin password is 'turtle'" levels of not-trying; "the medical records are protected because they're on a hidden share" levels of not-trying.

These are all things that I've seen.

Second to that is thinking that you're trying because every once in a while you have a vague sense of paranoia.

"Let's use MongoDB so we're safe from SQL injection." "We need to proxy the API because that's what other people do." "The WiFi password is secure because it's 26 characters long -- even though we're still using WEP."

Also things I've seen.

I know I'm not any better either. Given the opportunity, I would always hire a professional to audit my stuff.


Yeah, I've seen this too.

"No one but the customer is even going to know this machine exists, so why would anyone try to access it?" levels of ignorance are the cause of most of the IoT security issues we face today.


Lack of working backups. It's appalling how many people depend on systems with no backups.

Forget ransomware and other exotic attacks, what happens if after 7 years your hard drive decides to die on you?

I don't know. A backup is the first thing that needs to be set up and tested when you buy a computer. Windows and Mac have built-in systems for point-in-time backups, and Linux offers more than a few solutions.

Backup, backup and backup again :-)


This advice doesn't seem to be getting through, regardless of how loudly it is proclaimed. Everyone should have a backup plan with no single point of failure, but of course lots of people don't, because it's not trivial to set that up.

I don't think current operating systems go anywhere near far enough in helping users to get this right. When I buy a computer, the computer should be in charge of ensuring that catastrophic data loss cannot easily occur. 9 times out of 10, the user simply doesn't know how to set things up that way.


On a Mac, it prompts when an external HDD is connected; you click a button and it's set up. How much simpler can it be?


I agree entirely with the sentiments here, but I wouldn't consider it a security issue. A safety issue more like.


Security for software is typically defined to include things that protect you from loss of access risks. Think the latest ransomware that went out.

Specifically, the classic axes of security are confidentiality, integrity, and availability. Backups give the illusion of helping both integrity and availability. However, without practiced and verified recovery plans, they do neither.


Agreed. Disaster recovery is one of those things that everyone agrees needs to be possible. But nobody seems to realize that you are only good at the things you regularly do.

Which isn't fair of me. I think people do realize it. Just making the time to regularly repeat something you have done before is tough.

I am very sympathetic to the idea that you can automate this. However, this seems to typically fall afoul of the idea that you now just have more that can and will fail. So now you need something to monitor that....


Not just backups. Restores. You need to test your backups by periodically testing your restore process to ensure the backups are actually any good.

Test your restore process periodically!


Server backups are pretty easy to test in this era of virtualization. (I know, not everyone does it, but at least in principle it's pretty straightforward to mount a duplicate VM and test one's backed-up data. Good way to test config management systems as well.)

But how does one test restoring personal data? I'm guessing that most people, like me, don't buy duplicate laptops just to be a platform for testing their data restoration protocols.


> It's appalling how many people depend on systems with no backups.

Truer words were seldom written!!


Willful ignorance.

It's 2017, and yet the biggest vulnerability of them all is still willful ignorance.

People have agendas to fulfill. Can't let the truth get in the way. Haven't found a budget to fit in the truth, so it looks like we will have to do without it for now.

What is the truth?

Well nobody cares about security.

Who knows about the truth?

Mostly everybody

Why doesn't anyone do anything about it?

Trying hard just doesn't cut it anymore; as we can see, those with influence would rather destroy the entire internet for the sake of mass surveillance.

Also, don't forget about 2FA.


Security is boring and hard and it doesn't make you any money. Basically the same reason no one cares about US infrastructure projects.


In my organization, forcing users to change passwords every x months. Everybody I know ends up picking simpler to remember passwords from a pool as a result.


NIST just updated their guidelines, removing the requirement to change passwords, for that very reason. :)


Everyone I know just increments a number at the end of their password, until it allows them to start from 1 again.

ShittyPassword1 ShittyPassword2 ShittyPassword3 ... ShittyPassword8 ShittyPassword1

NIST came out recently against this, so hopefully, hopefully companies will start to listen.


• Password requirements outside of length really don't do squat except make it harder for people to remember their already weak passwords

• Using the same short passwords over and over again.

• Using short passwords <8 characters

• Using very commonly used passwords (password123)

• Security questions

Just use a password manager. Choose one strong 10+ character password that you can remember. Choose the first letter of every word from song lyrics you like if you have to.

Example: lwbeiycwlmdrgag (loving would be easy if your colors were like my dreams red gold and green).

It would take the standard web hack billions of years to figure out that password. Even if someone had massive computing resources behind the crack (not typical, and very expensive) it would take over a week. Password123 or Fido16 might take a minute.
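A rough sanity check on those numbers; the guess rates below are illustrative assumptions (a throttled web login vs. a large offline cracking rig), not figures from any particular attack:

```python
import math

# Illustrative assumptions, not measurements:
ALPHABET = 26           # lowercase letters only
LENGTH = 15             # "lwbeiycwlmdrgag" is 15 characters
ONLINE_RATE = 1_000     # guesses/sec a throttled web login might allow
OFFLINE_RATE = 1e12     # guesses/sec for a large offline cracking rig

keyspace = ALPHABET ** LENGTH
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(rate):
    """Worst-case years to try every candidate at the given rate."""
    return keyspace / rate / SECONDS_PER_YEAR

print(f"keyspace: {keyspace:.2e}")                           # ≈ 1.68e+21
print(f"entropy:  {LENGTH * math.log2(ALPHABET):.1f} bits")  # ≈ 70.5 bits
print(f"online:   {years_to_exhaust(ONLINE_RATE):.1e} years")
print(f"offline:  {years_to_exhaust(OFFLINE_RATE):.0f} years")
```

Even under the generous offline assumption, exhausting ~70 bits takes decades, while an 8-character dictionary-word password falls almost immediately.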


Yubikeys (both two-factor and very long static passwords, with pins) go a long way. And a handful of Yubikeys handed out to your staff is cheaper than a data breach.

It's not protection from idiocy, but if you're a Microsoft shop, default long passwords avoid stuff that Microsoft is rather coy about (e.g., LANMAN hash attacks against passwords < 14 characters long are still a thing, sigh).


> Choose the first letter of every word from song lyrics you like if you have to. Example: lwbeiycwlmdrgag (loving would be easy if your colors were like my dreams red gold and green).

I'm not sure that's a strong password. A web crawler could generate a list of n-grams from the first letter of every word on every web page.


And that would be how many potential passwords?

I pick a random order of a 52-card deck as my password. Is that a strong password? You could just iterate over all of them. Might take a while, though.

Now my password is EITHER the song thing or a random order of a 52-card deck; you don't know which one. Am I having a strong password yet?

"A web crawler could just read every page on the internet and then construct all the potential n-grams of every word of every page on the internet and then easily figure out your password" sounds slightly optimistic.

You'd only have to clone half the Google operation to crack some random weirdo's password.


Not understanding that fundamentally a whitelist is safer and better than a blacklist. I have gotten into many fights about this at work, because a blacklist always leads to a hole later, when something new or unknown sticks its head out. If there are ten things the system is supposed to be able to do, make a list of those ten things; don't just try to filter out whatever might not look like the ten things.
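A minimal sketch of the difference, with made-up action names: the whitelist rejects anything it has never heard of, while a blacklist fails open the moment something new appears.

```python
# Whitelist dispatch: only the known-good actions can ever run.
# (The action names here are hypothetical, purely for illustration.)
ALLOWED_ACTIONS = {
    "list_invoices": lambda: "invoices",
    "export_csv": lambda: "csv",
    # ...the other eight known-good actions...
}

def dispatch(action: str) -> str:
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Default deny: new or unknown input is rejected, not guessed at.
        raise PermissionError(f"action not on the whitelist: {action!r}")
    return handler()

print(dispatch("list_invoices"))  # → invoices
# dispatch("drop_all_tables")     # raises PermissionError
```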


As a pen tester, the biggest problem is management not caring or understanding. If there is a budget for pen tests, there are failsafes for data loss (offline, so they can't be cryptolocked), monitoring is in active use, developers are given time to learn about and write secure software, and sysadmins given the time and budget needed, then it's usually quite alright. That might sound like a lot, but to most people here it's basic stuff. There are many companies that get it right, though of course I have a biased point of view (and I'm aware of it) because the ones I see have a budget for security tests.

As for a single, common mistake for iot/apps/firmware altogether, that's an overly broad question. I think the best answer I can give is not updating things when updates are available. That's the easiest way to get compromised without even writing a single line of vulnerable code.


Default passwords.

Bad secret management (hardcoded in Git, shared secrets not changed after an employee left ...)

Dev and live not properly separated/dev not properly secured.

Services exposed to the internet that shouldn't be.

Old and forgotten software / appliances.

Don't forget about the dev/sysadmin workstations!


I work in government. Hard-coded passwords, sharing production passwords by email, etc., is the norm.


On a general level, no systematic accounting of assets.

Can you get me most or all of the following in an hour or so?

- List of people with production API credentials and which ones per person

- List of accounts with weak passwords or lacking 2FA

- Who has SSH access to each of your servers?

- All portal logins per employee

- Show me the last 3 production data sources this specific employee accessed

- Per data source, list people and services that can read/write

- Network diagrams and policies

A lot of organizations can't. It's boring bookkeeping, but systems have to be put in place to keep this information correct and up to date. Otherwise, there's nothing to manage or secure!


The biggest security mistake I see is history.

Talking about the history of:

1. Slack channels (lots of private stuff / links go to #general)

2. VCS (lots of passwords / tokens are in the commit history)

3. JIRA (lots of private information / company secrets are there)

Any one of those three could cost the company a big lawsuit if some employee/freelancer is deliberately hired to do damage.


Important things ending up in history at all is the mess in the first place.

JIRA is typically secured, as are VCS and Slack channels.

What is not secured are the developer machines...


Most common? Simply not having a security mindset whatsoever day-to-day. If you build anything, you should be thinking about security, even if all you're thinking is why security isn't an issue for some particular project. It's always important to understand where the boundaries are, because they will change. I've seen countless examples of people playing fast and loose with security merely because they never thought about it or don't understand the issues. Even today, right this second, there are people writing code that is vulnerable to SQL injection, people writing web app software without sanitizing their inputs, people writing remote execution vulnerabilities, etc. All because they just don't think about it and just don't care.

A lot of people have the mindset "oh, who would ever hack me? My app is just small potatoes." And this is how you end up with things like the Mirai botnet.


Emailing the password back in plain text: "Hey, this is the password you've just set, don't forget it!"

I've only had it happen to me about 4 or 5 times in the past 15 years or so. But it makes me so disappointed each time that I literally want to find that developer and knock him/her out with the ugliest hammer I can find.


Most common mistake I see: sharing your passwords with multiple people. Even when the product lets you have more than one account.

Mistake websites make: only letting you have one user account.


Organizational firewalled "internal networks" that everyone connects to.

People opening untrusted web content and email attachments in Acrobat Reader and Office.

Reliance on antivirus products.


Critical functionality implemented in client code rather than the backend.


As a front-end developer I'm curious about this. Do you have an example to share?


- User logs in, API sends profile data to the UI including whether a user is an admin or not

- User tries to edit something sensitive, UI has profile data so disallows the action if the user is not an admin

- User pokes around the browser console to see what request would have been sent had the user been an admin and the UI allowed it

- User manually sends malicious admin request to API

- API fails to check whether admin request came from a user with admin rights and blindly executes the malicious action
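The fix, sketched with hypothetical names (`User`, `delete_account`): the server re-checks authorization on every request instead of trusting what the UI chose to display.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_admin: bool

def delete_account(caller: User, target_id: int) -> str:
    # This server-side check is the actual security control; the UI
    # hiding the button is only cosmetics.
    if not caller.is_admin:
        raise PermissionError("admin rights required")
    return f"account {target_id} deleted"

admin = User(id=1, is_admin=True)
mallory = User(id=2, is_admin=False)

print(delete_account(admin, 42))  # → account 42 deleted
# delete_account(mallory, 42)     # raises PermissionError, no matter what
#                                 # request Mallory hand-crafts in the console
```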


How about putting quotes around a value in JavaScript to prevent a DB error when the server concatenates it into a SQL statement later? There was even a helpful comment in the page source explaining what was happening. I sent an email but it wasn't answered.

I found this on a mapping site (now defunct) that let you print out sections of maps for a charge. However, you could get small sample maps for free, and so by altering the width and height parameters in the page source ...

This wasn't meant as a dig at frontend devs. After all, the backend used the values as passed, and there should be some architectural oversight to ensure this sort of sloppiness doesn't pass review. It's really a case of incompetence all round.

As a dev I see a push to put more and more functionality into the client. It makes for a better user experience, but the downside is that security gets overlooked.


I'm imagining doing some client side validation and then the server just blindly assumes the content has been validated.


Not training the employees:

I can provide email filtering, DNS filtering, firewalls and all sorts of technical solutions to security.

It all goes to pot if someone gets click-happy on weird websites or email attachments, or falls for the "Your $Company_President needs to transfer some funds" email.


Using string concatenation to build paths or SQL queries. My previous boss is a big fan of this. I've frequently broken his stuff by adding or omitting trailing slashes. The SQL injection potential is obvious.
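The trailing-slash breakage shows up immediately with plain concatenation versus a path-joining API; a small sketch with made-up paths:

```python
from pathlib import PurePosixPath

base_with_slash = "/srv/uploads/"
base_without = "/srv/uploads"

# Naive concatenation: one spelling silently produces a wrong path.
print(base_with_slash + "report.pdf")  # → /srv/uploads/report.pdf
print(base_without + "report.pdf")     # → /srv/uploadsreport.pdf  (broken)

# A join API normalizes both spellings to the same result.
print(PurePosixPath(base_with_slash, "report.pdf"))  # → /srv/uploads/report.pdf
print(PurePosixPath(base_without, "report.pdf"))     # → /srv/uploads/report.pdf
```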

Our ticketing system is written in-house and is a diabolical mess of Teach Yourself PHP In 24 Hours; every trope of poor PHP development is in there. Certain sensitive pages are so badly written that they're IP-restricted to our office to reduce the chance of them being exploited.

And a major one - passwords as a 1337 spelling of the username. We use this A LOT.


I should also add, another thing we do very wrong here - devs with admin access to production servers. Not only is there a vector for direct attack by compromising a large number of accounts, but if devs can (and do) change settings on production machines without Infrastructure knowing about it, you've got potential for silently breaking secure systems.


Because there are people like me who "scan networks" themselves to sleep, counting specimens. I'm looking at you, petrochemical company with an open video conference system a few miles away. And you, battery-powered terrain-shift monitoring sensor at the old oil rig that became a 300-meter-wide hole, visible from space. And you, TV station with such a helpful admin that he exposed a page titled "intranet" to help the not-so-tech-savvy journalists access FTP, with the credentials on the page of course, plus tomorrow's news prompts and contents with write permissions. Or you, radio station with firewalls still on the default config (may the soul of Heaviside haunt your waves). Or you, telco employee with credentials meticulously noted on a sheet of paper for utterly banal infrastructure. Or you, law enforcement officer leaving a flash drive with secret technical specs for a new system in an internet cafe because you needed something to store "onanism-inducing material".

- Default config.

- But the machine isn't a website, it doesn't have a name, just an IP address. Who would find it?

- Just execute the user's SQL query / eval user code, what could possibly go wrong? I'm sure all the docs telling you never to use string substitution, not even with a gun to your head, are just exaggeration..

- Password is domain name.

- Password comparison in JavaScript, rendered, readable (a special place in hell for this one).

- And of course, clear text passwords. Gotta love that one.


Blind trust in open source package managers. Look at the damage the removal of left-pad from npm caused, for example, and imagine what could have happened if the author had malicious intent.


This is THE number 1 cause of build failures where I work, and is of particular concern for me because the Jenkins cluster I work with runs all slave nodes as a local admin user - certain products WILL NOT build otherwise.

One malicious package could completely compromise a significant amount of our infrastructure.


Unescaped variables in HTML and SQL.

  <p>Hello, <?= $username ?>.</p>
and

  $sql = "update tbl set x = '{$_POST['x']}' where id = {$_POST['id']}";
For HTML, use htmlspecialchars, better yet a shorter wrapper function like h(), better yet a template language like Handlebars.

For SQL, use parametric queries. (Most people call them parameterized queries, but with so many suffixes in English, why not choose the more musical one?)
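The parametric version of the query above, sketched with Python's built-in sqlite3 module: the driver ships values separately from the SQL text, so a quote in the input can never change the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tbl (id integer primary key, x text)")
conn.execute("insert into tbl (id, x) values (1, 'old')")

# Hostile-looking input is stored verbatim, not executed.
user_x = "'; drop table tbl; --"
conn.execute("update tbl set x = ? where id = ?", (user_x, 1))

print(conn.execute("select x from tbl where id = 1").fetchone()[0])
# → '; drop table tbl; --   (and the table is still there)
```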


1. Secrets in source. This is by far the easiest one to "see".

2. In Ruby code, I see a surprising amount of `eval`; mostly `class_eval`


Eval of user input, or eval as a means of execution scope? An important distinction, since `class_eval` and `instance_eval` can run blocks at those levels, and there's nothing inherently wrong or insecure about that.


Lack of understanding of string escaping.

- Trying to remove/escape "bad" characters on input or thinking that only "user" input needs escaping. This means anything that slips past the filtering can do unlimited damage. Any "special" characters get lost or mangled in uncontrollable ways. The right way is to escape data on output.

- Thinking "escaping" is one universal thing that can be done in advance. This leads to HTML-escaped text in email subjects, SQL-escaped strings in HTML, etc.

- Forgetting that nested contexts require multiple levels of escaping (e.g. a URL argument in a string in JS in an HTML script tag requires URL-escaping, followed by JS string escaping, followed by HTML script escaping).

- Voodoo escaping such as `sprintf(buf, "cmd '%s'", arg)`
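The nested-context bullet, sketched with the standard library; `json.dumps` stands in for JS string escaping here, which is a reasonable but not bulletproof approximation (a real template engine would also guard against a literal `</script>` in the data):

```python
import json
from urllib.parse import quote

user_value = 'a&b "quoted"'

# Innermost context first: the value goes into a URL query parameter...
url = "https://example.test/search?q=" + quote(user_value)

# ...that URL goes into a JS string literal...
js_string = json.dumps(url)

# ...and the JS lands inside an HTML <script> block.
markup = "<script>var u = " + js_string + ";</script>"

print(markup)
# → <script>var u = "https://example.test/search?q=a%26b%20%22quoted%22";</script>
```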


Can you expand on escaping on output?


Escaping depends on the context where the string is used, and the same string may be used in many different contexts. For example, you might need to insert someone's name in HTML, JSON, SQL, and e-mail headers/body. Each of these has different escaping syntax, so you can't reliably escape/sanitize the name ahead of time (if you try, people named O'Connor will hate you), so it's best to apply appropriate flavor of escaping as late as possible (e.g. your HTML template engine should apply HTML-escaping, your e-mail library should apply quoted-printable, etc.)

Escaping on output also makes text processing easier and more reliable. For example, if you cut a string, you don't have to worry you'll cut it in a middle of an HTML entity or just after a backslash.
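For instance, the same name rendered for three contexts at output time, using only the standard library (the SQL case uses a parameter rather than escaping, which is the usual advice):

```python
import html
import json
import sqlite3

name = "O'Connor"

# HTML context: the apostrophe becomes an entity.
print(html.escape(name))  # → O&#x27;Connor

# JSON context: apostrophes are legal; quotes are added around the string.
print(json.dumps(name))   # → "O'Connor"

# SQL context: pass the value as a parameter; no escaping needed at all.
conn = sqlite3.connect(":memory:")
conn.execute("create table people (name text)")
conn.execute("insert into people values (?)", (name,))
print(conn.execute("select name from people").fetchone()[0])  # → O'Connor
```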


Appending strings together for SQL queries (SQL injection just waiting to happen).

Passing in user info for a query, instead of using session or token encoded data (Impersonation just waiting to happen).

Bad passwords/keys, and bad management of those secrets.

No plan for backups, or not having a way to restore backups.


Most common security mistake: Lack of a Patch Management Policy that is actually followed.

Just look at WannaCry: the patch was released March 14th and the worm was released May 12th.

That shows that everybody who was compromised simply didn't have a regularly scheduled patch management process in place.

From WannaCry Wikipedia page:

>A "critical" patch had been issued by Microsoft on 14 March 2017 to remove the underlying vulnerability for supported systems, nearly two months before the attack, but many organizations had not yet applied it.

>Almost all victims are running Windows 7 or newer.

Same with web apps, have a regularly scheduled Patch Management Policy in place for the Libraries, Gems, Modules, Packages, etc you use.


Most mistakes are basic, but they come about because people don't test their code/sites with the mindset of an attacker. This is pretty much always the case when it comes to XSS and SQL injection attacks.

There are lots of words written about security, so it's hard to pick a single thing. But my vote would be for "enumerating badness", and "default permit", which are kinda related.


Part of my day job is reviewing vulnerability reports for a load of software written mostly in C/C++.

Bugs leading to memory corruption vulnerabilities are the most common mistake I see. So things like, not doing proper bounds checks before accessing an array, using memory after it's been freed, not initializing memory properly before using it, type confusions causing values to be treated as pointers, etc.


The OWASP Top Ten is a great list of vulnerabilities that we test for in the Financial Services industry before releasing new code: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Proje...


The release candidate for the 2017 update is available for comment until the end of June.


No infrastructure secret management or very bad infrastructure secret management:

* Private SSH/AWS API keys in GitHub

* Shared Prod Passwords

Not applying updates to infrastructure regularly


The top comments here are really harping on things that aren't necessarily actionable (with the exception of backups, but I'd split hairs on whether that's a security mistake or a business process mistake). I interpret your question to mean security mistakes that have an actionable basis.

The most common security mistake I see is API authorization errors, in the abstract. More specifically, I see this manifest itself reliably as insecure direct object references. Nearly every security assessment I have ever been on has had an API where you can slightly alter a parameter value and bypass authorization controls altogether to access another user's account (or whatever other state-changing action you can think of in the context of their session).

In an era where just about everyone knows what SQLi and XSS are, actual logic errors fly right under the radar and authorization controls are not implemented universally.

To be even more specific - do you have a microservice architecture? Have you tested that every backend service has the same authorization controls in place? When was the last time you tested this?

Put your backend services behind a common access policy (you can use something like Kong for this), and make sure that you parameterize all calls between services, because someone can and will access your privileged services through a less privileged service using an unexpected or undocumented call. For that matter, use internal access policies to discriminate between more and less privileged services. It doesn't matter if you have a DMZ and firewall open IP traffic if you give a user-facing service access to a more privileged service - users can hop between them.

And when you deprecate an internal legacy API, actually remove it from being accessible by other services. This is probably still the number one way that people find really serious vulnerabilities in Facebook. I've seen cases where user input was used by Service A to directly query Database B using fallback API C, giving the user de facto arbitrary read access to the entire table. Database B is always firewalled to outside internet traffic in these scenarios, but Service A gets a free pass to access internal resources and voila.

It's fairly straightforward to classify specific technical issues, but if your APIs for various backend services are not properly set up with authorization controls you open the door to remote code execution, command injection, server-side request forgery...it's pretty bad. And it's largely unknown by developers because they are hyper-focused on common web vulnerabilities, which frankly are mostly solved by using the right libraries these days.


Password reuse and easy-to-guess passwords. Most of the time, it's not even the users themselves who are to blame but non-sensical password policies (like "Password must contain characters XYZ and be no longer than 8 characters" and "Password must be renewed after 30 days.").


Writing security-related code themselves instead of using a battle-tested library. Never underestimate how many security holes there can be even in simple-looking code.

The most common one for this is login systems. I've seen custom-written ones with bugs so bad that a blank password worked for every login.
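A made-up but representative version of that blank-password bug, next to the boring stdlib way to do it (salted PBKDF2 plus a constant-time compare):

```python
import hashlib
import hmac
import os

def buggy_check(supplied: str, stored_hash: str) -> bool:
    # Bug: comparing only as many characters as the user typed means an
    # empty password "matches" every stored hash (every string starts with "").
    digest = hashlib.sha256(supplied.encode()).hexdigest()
    return stored_hash.startswith(digest[:len(supplied)])

print(buggy_check("", "any-stored-hash-at-all"))  # → True (yikes)

# The boring, correct way: salted slow hash, constant-time comparison.
def make_record(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("hunter2")
print(check("hunter2", salt, digest))  # → True
print(check("", salt, digest))         # → False
```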


Locking down files in the name of "security": configs, logs, everything containing even the most banal pieces of information.

The result? All our sysadmins marching around as root all day because it's the only way they can get any work done.

Seems to be the default on several Linux distros these days.


Having everything out of date.

Servers, computers, software, (people), everything. Many exploits published on exploit-db.com target software that has already been patched. My Fortune-top-100 company is the type of company that will decide to update to Windows 10 when Windows 15 comes out.


I know of a company (that I won't name for obvious reasons) that uses a vulnerable version of Outlook Server because the managers use it to spy on each other and read everyone's mail. They secretly tell each other, and everyone who knows about it thinks only a few others do.

People eventually quit or are fired, are hired by a competitor, and still know about the hack. It's BAD.


12+ yr exp in security

default/dev/test/guest accounts + passwords with easy-to-guess credentials

exposing servers and services to the internet when not needed (memcached, hadoop, mongo, etc)

Insecure Direct Object Reference

Not blocking known hostile entities (known bad ASNs, known bad netblocks)


Mission critical logins/passwords put on a notepad directly on your desktop.

It is appalling how many old school places do this.

Also, if you open up the top drawer of someone's desk, you are pretty likely going to find passwords there too.


Infrastructure department says: "Applications are responsible for security."

Application department says: "Infrastructure is responsible for security."

Developer asks: "Can I help you?"


Lack of documentation for complex systems.

If you do not have documentation explaining how the system works, its dependencies, diagrams, etc, you will have a hard time fixing security errors.

What happens if the guy that built the system leaves?


This is very old history, but iptables rules being non-permanent by default and resetting themselves at reboot without notice was a big one.


Obvious one but not using HTTPS would be up there.


Not sanitizing user inputs or whitelisting query parameters. Also, sharing private keys over email or through file sharing.


Running the web app server in debug mode in prod.


Using bad programming languages, honestly. Most security problems boil down to type errors, but it's poor language choice that leads to making those.


What's an example of a bad programming language?


Those that aren't "strict" in certain areas. Examples:

* Those without memory safety (I'm not talking about garbage collection, though there seems to be a correlation). Buffer overflows/overruns and their cousins have caused many security flaws, including Heartbleed, and even flaws in Java where it interops with C programs for performance's sake.

* Syntactical leniency w.r.t. scope can lead to errors like Apple's infamous "goto fail"


I didn't want to say, as most languages have their fans. But anything missing the features ML had: any language with null, without memory-safety checking, without sum types and pattern matching, without type safety or generics (really, any language without HKT, though I guess you don't need it for small programs).


Not realizing that the mind is supreme and the mind is fallible.



