Hacker News new | comments | show | ask | jobs | submit login

I am wary of software security advice that leads with "don't trust user input", or revolves around "validate user input". That principle has been the core of software security strategy for going on 20 years, and has bought us very little. In the real world, we have to start by acknowledging that the verb "trust" is situational, and that in some circumstances virtually all user input is "trusted" somehow.

You could phrase this less tactfully as "Validate user input? No shit? Now what?"

Here's some software security advice I'd like to offer, as a software security practitioner to the startup stars and the Dr. Drew Pinsky of HN (ducks):

* Plan to update your platform software at inconvenient times. Dry-run your update process, so you know it will work on no notice. I personally believe you should also avoid your OS's packaged versions of things like Apache and nginx; having a known-working source build gives you control of when and how you'll apply patches; it's something you should be able to do easily.

* Put someone on your team in charge of tracking your dependencies (C libraries, Ruby gems, Python easy_install thingies) and have a process by which you periodically check to make sure you're capturing upstream security fixes. You should run your service aware of the fact that major vulnerabilities in third-party library code are often fixed without fanfare or advisories; when maintainers don't know exactly who's affected how, the whole announcement might happen in a git commit.

* Use TLS to encrypt data in motion and use GPG to encrypt data at rest, and don't do any other kind of crypto. GPG blobs are large and expensive looking, but getting custom crypto right is also very expensive.

* Stay on your platform's "golden path" for web security issues. Rails developers should default-whitelist ActiveRecord models, enable CSRF protection, and avoid "html_safe-ing" strings so that the maximum amount of code inherits default protections against mass assignment, XSS, and CSRF. Anything you customize will probably bite you in the ass. Keep your code boring.

* Triple check any piece of code that "shells out" to command line tools. By "triple check": have a process by which three signoffs are required to merge any such code into the deployment branch.

* Be extraordinarily wary of library code for web apps that includes "native" C/C++ code. Very popular C library code for modern web platforms has been found susceptible to basic memory corruption issues, because the kinds of people that look for memory corruption bugs don't usually think to troll Github for Ruby, Python, and PHP code with native backend code; terrible bugs can thus stay latent for years in code you can point a URL to and read.

* People will hate me for saying this but I'm here to offer honest advice: prefer almost any modern language to PHP or Perl. I don't know what to tell you other than that PHP and Perl apps fare worse on security assessments than everything else.

* I have more than once recommended that people who are very very concerned about platform security (ie, about the likelihood that there are memory corruption bugs in their language stack) use JVM languages.

* Do your admin stuff out-of-band. Write a separate admin app (bonus: the admin app can look shitty, and so is less expensive to maintain) that requires a VPN connection to access. Avoid special-privilege accounts in your normal apps. From years and years of experience working with startups: this is something you will mess up on.

* Triple-check code that handles direct file uploads and downloads. The filesystem introduces a new namespace, so upload/download code needs to juggle different privilege and authorization domains to handle it. We see fewer problems at companies that dump blobs with opaque names into S3 than we do with apps that have a file repository with named files.

* HAVE A SECURITY PAGE FOR YOUR APP. Have that page very cordially invite people to submit security flaws to an email address at your site; provide a PGP key for it. If I was maintaining a commercial app, I'd put a phone number on that page too. If you don't have this page, you should know that you are tacitly inviting people to report security flaws to Twitter.
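
The "shells out" point above can be made concrete in Perl (a sketch of mine, not from the original comment): the single-string form of system() goes through /bin/sh, while the list form does not.

```perl
use strict;
use warnings;

my $userinput = 'notes.txt; rm -rf /';   # hostile input

# The single-string form goes through /bin/sh, so the ';' would be
# interpreted and the rm would actually run:
#
#   system("wc -l $userinput");
#
# The list form execs the program directly -- no shell is involved,
# and $userinput is passed as one literal argument:
system('echo', 'got:', $userinput) == 0
    or die "echo failed: $?";
```

The list form sidesteps quoting entirely, which is why it's much easier to get past three signoffs than a string you've "carefully escaped".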




> People will hate me for saying this...

Here comes the first pitchfork-bearer! Are there ready examples of what makes the usage of Perl a security liability?

By grouping PHP and Perl together, it seems like the security issue you're highlighting is websites made by inexperienced programmers. Perl arguably has it worse than PHP in this regard, since Perl's history of being used by people building their first website goes back further than PHP's.

The last time I recall Perl being in the news for security was in the first quarter of this year, when the hash-collision DoS vulnerability in Ruby and Python was being patched [1]; Perl and CRuby had addressed the same vulnerability back in 2004.

I would argue that at least since 2007 or 2008, the Perl programming demographics have shifted significantly, and that most restaurateurs and florists trying their hand at making a page for their SMB are using PHP. At this point I would venture that the fat part of the bell curve of the Perl web programming population is systems administrators who venture outside the bounds of their automation scripting domain.

Given the stereotypical sysadmin, I would posit that they tend to spend extra time and energy on security, even when building some script-y spaghetti monstrosity.

1. http://arstechnica.com/business/news/2011/12/huge-portions-o...


> Are there ready examples of what makes the usage of Perl a security liability?

The `open` function used on untrusted input allows arbitrary code execution (I've gotten privilege escalation via setuid perl scripts many times this way, as well as shells on boxes via web apps that allow this).

While there are many other common things I saw in real world apps, e.g. perl scripts using backticks for command execution and allowing me to run anything I wanted, the `open` one is by far the worst. It's insanely pervasive and incredibly easy to do.


> The `open` function used on untrusted input allows arbitrary code execution...

... only if you use the insecure open form. The secure open form has been available and recommended since the release of Perl 5.6.0 in March 2000--twelve years ago.
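
For reference (my sketch, not part of the original comment), the secure form is the three-argument open: the mode is a separate argument, so it can't be smuggled in via the filename. The helper name here is hypothetical.

```perl
use strict;
use warnings;

# Hypothetical helper around the three-argument open. The mode is
# fixed at '<', so a '|' or '>' inside $filename is treated as a
# literal character in the name, never as a pipe or a redirection.
sub read_user_file {
    my ($filename) = @_;
    open(my $fh, '<', $filename) or die "Can't open '$filename': $!";
    local $/;                 # slurp mode: read the whole file at once
    my $content = <$fh>;
    close $fh;
    return $content;
}
```

With this form, `read_user_file('cat /etc/passwd |zenity --text-info |')` just fails with "No such file or directory" instead of running a command.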

People who write insecure code, when the language makes it just as easy to write secure code, are to blame for insecure code.


> People who write insecure code, when the language makes it just as easy to write secure code, are to blame for insecure code

We're not discussing who's to blame, we're discussing whether there's anything to assign blame for.


Can you name a practical language in which it's not possible, by default, to perform an unsafe operation with untrusted user input?

I can easily use Haskell's type system to disallow the use of UnsafeUserInput in my database abstraction layer, but that requires me to use my types pervasively and correctly.


The question is not whether it is POSSIBLE.


If your web application is opening and closing files based on user input, without checking them first, you have bigger issues. Blaming this on the language seems bizarre.


True, but Perl makes it amazingly easy to exploit.

Suppose that you know the source contains this line:

  open(file_handler, "$userinput");
How much work and what assumptions do you need to exploit this? In Perl, it's this easy:

  $userinput = "cat /etc/passwd |zenity --text-info |";
  open(file_handler, "$userinput");


> Suppose that you know the source contains this line

Then you have a massive damn problem, and you could set up the same strawman about almost any language - if you're starting from the premise that you are piping arbitrary user input to a system call unimpeded, all bets are off.

Shall we try some other examples? Let's say you accept input from a user, and dump it unchecked into output shown to the user via Ruby. OH NOES! Ruby is insecure because it enables XSS! Or, let's say you use Python and pass user input unimpeded straight to the database without using the quoting mechanisms! OH NOES! Python is insecure because it enables SQL injection! etc etc etc

But wait! you say... Sensible programming languages have specific features to stop these kinds of attacks! And you're right. That's why Perl has taint mode... for when you're dumb enough to pass user input straight to open: http://perldoc.perl.org/perlsec.html#Taint-mode
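
For the curious, taint mode in a nutshell (my sketch; the helper name is hypothetical). Run the script as `perl -T script.pl`: any value arriving from outside the program (@ARGV, %ENV, STDIN, ...) is marked tainted, and perl dies with "Insecure dependency" if tainted data reaches a piped open, system(), exec(), and the like.

```perl
use strict;
use warnings;

# The only way to untaint data is an explicit regex capture, which
# forces you to state what valid input looks like:
sub untaint_filename {
    my ($filename) = @_;
    my ($safe) = $filename =~ /\A([\w.\-]+)\z/
        or die "bad filename: '$filename'\n";
    return $safe;    # the captured value is considered untainted
}

my $ok = untaint_filename('notes.txt');   # passes the whitelist regex
```

Under this whitelist, the `cat /etc/passwd |zenity --text-info |` payload from upthread dies at the regex check before it ever reaches open().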


I'm not on some secret quest to stain Perl's reputation, you know... Just illustrating how easy it is to exploit unchecked user input in the open() function. It just happens that this particular exploit is far easier in Perl than in any other language I know.

Of course we should not extrapolate from this to make a judgment on the language. Of course it's nice that security mechanisms are available to alleviate this issue.


Do you usually try $UserInput $Userinput $userInput $user_input etc. when you do that?

It couldn't run your code if it had an uninitialized variable in it, right?


It's vulnerable because the user controls the content of $userinput, the variable name doesn't matter. Real code would look like this:

  #!/usr/bin/perl

  print "Enter filename: ";
  $filename = <STDIN>;
  open(file_handler, "$filename");
If you see this code, you know you can execute any command by entering it at the filename prompt with a '|' character appended. For example, entering "cat /etc/passwd |zenity --text-info |" will pop up a dialog with the contents of /etc/passwd.


Oh I see, I thought you were talking about something that would be caught by the interpreter if you put

    use strict;
and

    use warnings;
in there. Thank you for taking the time to put in an example, I understand you better now.


Those sound like two good leads, thank you!


"Use TLS to encrypt data in motion" It's also worth checking that whatever language/library you're using bothers to validate the cert (AND the hostname). Many (most even?) don't by default. You have to ask extra nice in order to get a secure connection that's actually secure.


> * People will hate me for saying this but I'm here to offer honest advice: prefer almost any modern language to PHP or Perl. I don't know what to tell you other than that PHP and Perl apps fare worse on security assessments than everything else.

If you're talking about copying and pasting formmail.php or .cgi from somewhere on the internet, sure. Beyond that, what exactly is the issue with Perl? Where are the security issues with Dancer? Catalyst? DBIx::Class?


Look: all things being equal, if

(a) you're just as happy in Python as you are in Perl and

(b) software security is very important to you,

I recommend you select Python. But: lots of people are much happier in Perl and should use Perl. In lots of companies, the best language safety net for security is not a key business asset, and this discussion shouldn't influence them.

I'm not saying that if you use Perl, you're doomed. If your best language is Perl, you're probably better off working in your best language, even where security is concerned.

That's all.


But you haven't said /why/, or given any examples, other than "Look! Perl! Security!", where other posters on this exact thread have shown a trend for Perl to have security holes fixed first.

If it's a hunch, or a gut feeling, or a prejudice, just say so. Otherwise, add content.


Not interested in adding more content about Perl to my original comment, sorry.


I'd also add: remote timing attacks are practical [1, 2]. Twitter was a target of one in the past [3].

[1] http://crypto.stanford.edu/~dabo/abstracts/ssl-timing.html

[2] http://codahale.com/a-lesson-in-timing-attacks/

[3] http://scforum.info/index.php?topic=4358.0
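
The standard mitigation for the attacks described in those links is a constant-time comparison for secrets like tokens and MACs, so the comparison time doesn't depend on where the first mismatched byte is. A sketch in Perl (the function name is my own):

```perl
use strict;
use warnings;

# Compare two strings in time that depends only on their length,
# never on the position of the first differing byte. (Leaking the
# length is usually acceptable for fixed-size tokens and MACs.)
sub constant_time_eq {
    my ($x, $y) = @_;
    return 0 unless length($x) == length($y);
    my $diff = 0;
    $diff |= ord(substr($x, $_, 1)) ^ ord(substr($y, $_, 1))
        for 0 .. length($x) - 1;
    return $diff == 0;
}
```

Unlike `eq`, which bails out at the first mismatch, this always XORs every byte pair and only inspects the accumulated result at the end.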


This is a great list.

I'd add some of the relevant things from security strategy over the past 20 years, like "log lots of stuff".

There are many things which help availability (backups, easy deployment, tests, etc.) which also help security. If it's relatively easy to push an update, it's a lot more likely that you'll be able to rapidly respond to a vulnerability, or will have pre-emptively updated away from a vulnerability.

The most interesting things I've found recently have been in the "we can't touch that because we both don't understand it and because it's not working" parts of a complete site.


The "stay on the golden path" rule is unrealistic except in corporate Java environments. It will conflict with other more important focus points than security: disruption, be faster than competition, do things thee hard way, etc. (All things well described in pg's essays).

I would require all devs to read OWASP and make sure they understand the core, then they can tweak in, hopefully knowing what they do.


"* I have more than once recommended that people who are very very concerned about platform security (ie, about the likelihood that there are memory corruption bugs in their language stack) use JVM languages."

I thought the general consensus was to stay away from the JVM if you care about security. Isn't that the case anymore? Or was it never?


There's a general consensus to avoid clientside Java.



