You could phrase this less tactfully as "Validate user input? No shit? Now what?"
Here's some software security advice I'd like to offer, as a software security practitioner to the startup stars and the Dr. Drew Pinsky of HN (ducks):
* Plan to update your platform software at inconvenient times. Dry-run your update process, so you know it will work on no notice. I personally believe you should also avoid your OS's packaged versions of things like Apache and nginx; having a known-working source build gives you control of when and how you'll apply patches; it's something you should be able to do easily.
* Put someone on your team in charge of tracking your dependencies (C libraries, Ruby gems, Python easy_install thingies) and have a process by which you periodically check to make sure you're capturing upstream security fixes. You should run your service aware of the fact that major vulnerabilities in third-party library code are often fixed without fanfare or advisories; when maintainers don't know exactly who's affected how, the whole announcement might happen in a git commit.
* Use TLS to encrypt data in motion and use GPG to encrypt data at rest, and don't do any other kind of crypto. GPG blobs are large and expensive looking, but getting custom crypto right is also very expensive.
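To make the "use TLS" half concrete, here's a minimal Ruby sketch at the client level (the host is hypothetical; the point is the settings): turn TLS on and actually verify the peer.

```ruby
require 'net/http'
require 'openssl'
require 'uri'

# Hypothetical endpoint; the point is the TLS settings, not the host.
uri = URI('https://api.example.com/status')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
# VERIFY_PEER rejects invalid or self-signed certificates; VERIFY_NONE
# (which turns a TLS connection into security theater) still shows up
# in a lot of copy-pasted example code.
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
```

Disabling verification to "make the error go away" is the custom-crypto mistake in miniature: the connection looks encrypted but anyone in the middle can impersonate the server.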
* Stay on your platform's "golden path" for web security issues. Rails developers should default-whitelist ActiveRecord models, enable CSRF protection, and avoid "html_safe-ing" strings so that the maximum amount of code inherits default protections against mass assignment, XSS, and CSRF. Anything you customize will probably bite you in the ass. Keep your code boring.
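To make the html_safe point concrete without standing up a full Rails app: Rails entity-encodes interpolated strings in templates by default, and calling html_safe opts a string out of that protection. Plain stdlib ERB lets you show the same escaping explicitly:

```ruby
require 'erb'

# Hypothetical untrusted input.
comment = '<script>alert("xss")</script>'

# What Rails does for you on every interpolation you haven't marked
# html_safe: entity-encode the markup so it renders as inert text.
escaped = ERB::Util.html_escape(comment)
# => "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"
```

Every string you html_safe is a string you've personally promised is clean; the fewer of those promises your codebase makes, the fewer you can get wrong.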
* Triple check any piece of code that "shells out" to command line tools. By "triple check": have a process by which three signoffs are required to merge any such code into the deployment branch.
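A Ruby sketch of the class of bug those signoffs are hunting for (the filename is a hypothetical user-supplied value): the difference between the shell-interpolated form and the multi-argument exec form.

```ruby
# Hypothetical user-supplied value.
filename = "report.txt; rm -rf /"

# BAD: the single-string form goes through /bin/sh, so shell
# metacharacters in user input become commands:
#   system("wc -l #{filename}")

# Better: the multi-argument form execs wc directly with no shell,
# so the whole string is passed as one literal (harmless) argument.
ok = system("wc", "-l", filename, err: File::NULL)
# ok is false here because no such file exists; nothing but wc ran.
```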
* Be extraordinarily wary of library code for web apps that includes "native" C/C++ code. Very popular C library code for modern web platforms has been found susceptible to basic memory corruption issues, because the kinds of people that look for memory corruption bugs don't usually think to troll Github for Ruby, Python, and PHP code with native backend code; terrible bugs can thus stay latent for years in code you can point a URL to and read.
* People will hate me for saying this but I'm here to offer honest advice: prefer almost any modern language to PHP or Perl. I don't know what to tell you other than that PHP and Perl apps fare worse on security assessments than everything else.
* I have more than once recommended that people who are very very concerned about platform security (ie, about the likelihood that there are memory corruption bugs in their language stack) use JVM languages.
* Do your admin stuff out-of-band. Write a separate admin app (bonus: the admin app can look shitty, and so is less expensive to maintain) that requires a VPN connection to access. Avoid special-privilege accounts in your normal apps. From years and years of experience working with startups: this is something you will mess up on.
* Triple-check code that handles direct file uploads and downloads. The filesystem introduces a new namespace, so upload/download code needs to juggle different privilege and authorization domains to handle it. We see fewer problems at companies that dump blobs with opaque names into S3 than we do with apps that have a file repository with named files.
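A sketch of the opaque-names approach in Ruby (the field names are made up, but the shape is the point): generate the storage key server-side and demote the user's filename to metadata.

```ruby
require 'securerandom'

# Hypothetical user-supplied filename; never use it as a storage path.
user_filename = "../../etc/passwd"

# Store the blob under an opaque, server-generated key (what you'd use
# as the S3 object name) and keep the original name as metadata only.
key = SecureRandom.hex(16)   # 32 hex characters, zero user input in it
record = {
  key: key,
  original_name: File.basename(user_filename)  # strips any path parts
}
```

With the opaque key as the only thing that ever touches the filesystem or object store, the path-traversal namespace problem simply never comes up.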
* HAVE A SECURITY PAGE FOR YOUR APP. Have that page very cordially invite people to submit security flaws to an email address at your site; provide a PGP key for it. If I were maintaining a commercial app, I'd put a phone number on that page too. If you don't have this page, you should know that you are tacitly inviting people to report security flaws to Twitter.
Here comes the first pitchfork-bearer! Are there ready examples of what makes the usage of Perl a security liability?
By grouping PHP and Perl together, it seems like the security issue you're highlighting is websites made by inexperienced programmers. Perl fares worse than PHP in this regard because its history of being used by people building their first website goes back further than PHP's does.
The last time I recall Perl being in the news for security was the first quarter of this year, when a denial-of-service vulnerability was being patched in Ruby and Python, whereas Perl and CRuby had addressed the vulnerability in question back in 2004.
I would argue that at least since 2007 or 2008, the Perl programming demographics have shifted significantly, and that most restaurateurs and florists trying their hand at making a page for their SMB are using PHP. At this point I would venture that the fat part of the bell curve of the Perl web programming population is systems administrators who venture outside the bounds of their automation-scripting domain.
Given the stereotypical sysadmin, I would posit that they might tend to spend extra time and energy on security (in building some script-y spaghetti monstrosity).
The `open` function used on untrusted input allows arbitrary code execution (I've gotten privilege escalation via setuid Perl scripts many times this way, as well as getting a shell on the box via web apps that allow this).
While there are many other common things I saw in real world apps, e.g. perl scripts using backticks for command execution and allowing me to run anything I wanted, the `open` one is by far the worst. It's insanely pervasive and incredibly easy to do.
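For what it's worth, this isn't just a Perl thing: Ruby's Kernel#open has the same footgun as Perl's two-argument open. A quick sketch with a harmless payload:

```ruby
# A leading "|" makes Kernel#open run a command and return a pipe,
# just like Perl's two-argument open.
malicious = "|echo pwned"
captured = open(malicious) { |io| io.read.chomp }
# captured == "pwned" -- the "filename" was executed, not opened.

# File.open only ever treats its argument as a path:
#   File.open(malicious)   # => Errno::ENOENT, nothing executed
```

Same moral as in Perl: use the API that can only ever open files (File.open here, three-argument open there) whenever the name comes from outside.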
... only if you use the insecure open form. The secure open form has been available and recommended since the release of Perl 5.6.0 in March 2000--twelve years ago.
People who write insecure code, when the language makes it just as easy to write secure code, are to blame for insecure code.
We're not discussing who's to blame, we're discussing whether there's anything to assign blame for.
I can easily use Haskell's type system to disallow the use of UnsafeUserInput in my database abstraction layer, but that requires me to use my types pervasively and correctly.
Suppose that you know the source contains this line:
    $userinput = "cat /etc/passwd |zenity --text-info |";
Then you have a massive damn problem, and you could set up the same strawman about almost any language - if you're starting from the premise that you are piping arbitrary user input to a system call unimpeded, all bets are off.
Shall we try some other examples? Let's say you accept input from a user, and dump it unchecked into output shown to other users via Ruby. OH NOES! Ruby is insecure because it enables XSS! Or, let's say you use Python and pass input from a user unimpeded straight to the database without using the quoting mechanisms! OH NOES! Python is insecure because it enables SQL injection! etc etc etc
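The Ruby flavour of that SQL strawman, sketched (the bind-parameter line assumes the sqlite3 gem's API and isn't run here):

```ruby
# Hypothetical malicious input.
name = "x'; DROP TABLE users; --"

# BAD: interpolation splices the input into the SQL grammar itself.
unsafe_sql = "SELECT * FROM users WHERE name = '#{name}'"
# unsafe_sql now carries a second statement inside the query string.

# Better: bind parameters keep data and SQL separate; with the sqlite3
# gem (assumed API, not executed here) it looks like:
#   db.execute("SELECT * FROM users WHERE name = ?", [name])
```

Every mainstream language lets you build the unsafe string; every mainstream language also ships the safe mechanism. The language isn't the variable here.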
But wait! you say... Sensible programming languages have specific features to stop these kinds of attacks! And you're right. That's why Perl has taint mode... for when you're dumb enough to pass user input straight to open: http://perldoc.perl.org/perlsec.html#Taint-mode
Of course we should not extrapolate from this to make a judgment on the language. Of course it's nice that security mechanisms are available to alleviate this issue.
It couldn't run your code if it had an uninitialized variable in it, right?
    print "Enter filename: ";
    $filename = <STDIN>;
If you're talking about copying and pasting formmail.php or .cgi from somewhere on the internet, sure. Beyond that, what exactly is the issue with Perl? Where are the security issues with Dancer? Catalyst? DBIx::Class?
If (a) you're just as happy in Python as you are in Perl and
(b) software security is very important to you,
I recommend you select Python. But: lots of people are much happier in Perl and should use Perl. In lots of companies, the best language safety net for security is not a key business asset, and this discussion shouldn't influence them.
I'm not saying that if you use Perl, you're doomed. If your best language is Perl, you're probably better off working in your best language, even where security is concerned.
If it's a hunch, or a gut feeling, or a prejudice, just say so. Otherwise, add content.
I'd add some of the relevant things from security strategy over the past 20 years, like "log lots of stuff".
There are many things which help availability (backups, easy deployment, tests, etc.) which also help security. If it's relatively easy to push an update, it's a lot more likely that you'll be able to rapidly respond to a vulnerability, or will have pre-emptively updated away from a vulnerability.
The most interesting things I've found recently have been in the "we can't touch that because we both don't understand it and because it's not working" parts of a complete site.
I would require all devs to read the OWASP guides and make sure they understand the core; then they can tweak, hopefully knowing what they're doing.
I thought the general consensus was to stay away from the JVM if you care about security. Isn't that the case anymore? Or was it never?