There are commercial hardware devices that'll do the same sort of thing modsecurity does - I guess the suggestion is that Sony didn't use any, which IMHO is very stupid.
If you look at the definition of firewall, modsecurity seems to fit it: "A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications." I don't think the term is being abused, just used in a way that people aren't familiar with. Most people seem to think a firewall is only a network (IP or Ethernet) level device.
People after 20 million credit card numbers can probably find two bugs to exploit, rendering the "protection" useless.
People trying to protect 20 million credit card numbers need to learn how to sanitize inputs and be able to render correct pages even if someone submits <script> tags or '; DROP DATABASE. If they don't know how, it's time to hire programmers to write the applications instead of the monkeys currently doing it.
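The fix being described is really two separate habits: parameterized queries, so a '; DROP ... payload is stored as inert data, and escaping on output, so <script> renders as text instead of executing. A minimal sketch in Python (sqlite3 and html.escape chosen purely for illustration):

```python
import sqlite3
import html

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Hostile input: an attacker trying SQL injection and XSS at once.
user_input = "<script>'; DROP TABLE users; --"

# 1. Parameterized query: the driver treats the value as data, never as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

# 2. Escape on output: the browser displays the text instead of executing it.
stored = conn.execute("SELECT name FROM users").fetchone()[0]
safe_html = html.escape(stored)

print(safe_html)  # -> &lt;script&gt;&#x27;; DROP TABLE users; --
```

Neither step replaces the other: parameterization protects the database, escaping protects the page.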
I have it deployed on sites where people are using Drupal and WordPress with add-on modules. I have at least two documented cases where it stopped an exploit that would otherwise have gotten through (though I'm fairly sure the setup of the webserver would have stopped anything bad from happening anyway).
Your last sentence seems to be suggesting I was supporting a "just chuck modsecurity in front of it and don't worry about security" attitude, which I wasn't at all. All my original reply was trying to say is that an Application Level Firewall is still a firewall.
Similarly, web application developers need to make sure that their app is 100% safe without hacks like mod_security. But after you do that, sure, turn on mod_security. People and processes can fail, and it's good to have as many failsafes as possible.
I object to things like mod_security because, in general, people write piece-of-shit apps and then think they're safe because the mod has the word "security" in it. That doesn't make you safe; that makes you ignorant.
I don't see what the first half of this sentence has to do with the second. No, I will not run my webserver as root but that has literally nothing to do with sanitizing input.
A firewall is only as good as the rules the admins have deployed on it. Deploying tight rules like that on shared hosting is very stupid, but the rules used and modsecurity itself are two separate things.
Blocking outbound port 25 has legit uses: it keeps compromised machines from being able to submit mail where the only information about the source of the mail is the IP address. Port 25 is meant for MTA-to-MTA mail transfers. Outbound 587, named "submission", is the port you can connect to at your email service provider to drop off outbound mail. This port is required to support authentication before accepting mail. That provides an additional layer of auditing and access control, and thus a way to easily turn off problem senders on a per-account basis, should the need arise.
A smart webhost would block outbound port 25 and require you to send mail authenticated through an MTA provided for the purpose. But of course, chances are most people here are not using purely "web hosting services", but rather dedicated machines or VMs that run their own MTAs. However, any decent MTA can be configured to use a relay host and authentication, which is fine unless you're sending massive volumes of email (in which case, why are you sending it from a rinkydink webhost or single VM?)
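As a sketch of what that host-side block might look like (assuming iptables and a local Postfix instance; the `--uid-owner postfix` bit is an assumption, adjust it to whatever user your MTA runs as):

```
# Allow only the local MTA to speak MTA-to-MTA SMTP; everything else
# must use the authenticated submission service instead.
iptables -A OUTPUT -p tcp --dport 25 -m owner --uid-owner postfix -j ACCEPT
iptables -A OUTPUT -p tcp --dport 25 -j REJECT --reject-with tcp-reset
# Port 587 (submission) stays open; the provider's MTA requires SMTP AUTH there.
```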
Port 80, port 443, port whatever-you-need-to-not-be-blocked.
You can't blame the Netscreen firewall for doing what it was told - that's my point. mod_security isn't dumb, stupid, or annoying. The ruleset deployed can be.
Also, probably >95% of the "website hacks" I see are automated, so mod_security really does greatly cut down on the number of exploits. Sure, if you have a dedicated hacker who knows his or her way around things like modsec, then it won't matter at all, but the number of hacks we've seen has decreased greatly due to mod_security. You only get into trouble with it if you pretend that it's anything more than it is: regex filtering for requests.
To the guy talking about validating input...perhaps you should spend a bit more time on the internet and notice all the sites running copies of WordPress with defaults. This is how the vast majority of websites are...default everything. Input validation is great if you have a custom app and a development team, but most people don't. While it's arguable that they should even be running their own site, it doesn't change the fact that they do. They don't have time, and we don't have time to go through all of their PHP that accepts _GET and _POST and make sure that they're handling input validation / sanitation properly. Yes, the people who develop wordpress / whatever CMS they're using should set some good defaults for input validation and use proper sanitation techniques, but the truth is that tons of sites run on shaky code bases and old versions...so mod_security is the "quick fix" that covers the vast majority of cases and protects tons of our users.
suPHP is great too (privilege separation). Combined with jailkit, our systems are pretty well locked down for shared hosting.
That all said, Sony SERIOUSLY screwed this up. Their system should have better secured the Cardholder Data Environment (the PCI-DSS name for any system that touches CC info). My guess would be poor architecture planning / implementation as to why this obviously wasn't done. Also, mod_security has some filters for data leakage which can be tweaked to prevent obvious HIPAA stuff and obvious PCI-DSS stuff, such as plaintext transfer of zillions of CC numbers. If a skilled hacker broke in, he/she could again pretty easily find a way around it.
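The kind of egress filter those data-leakage rules implement can be sketched in a few lines: scan outbound response bodies for card-number-shaped digit runs, then confirm with a Luhn check to cut false positives. Hypothetical Python, not the actual ModSecurity rule:

```python
import re

def luhn_ok(number: str) -> bool:
    """Luhn checksum; weeds out random digit runs that aren't card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def leaks_card_numbers(body: str) -> bool:
    """True if a response body appears to contain a plaintext card number."""
    for match in CARD_RE.finditer(body):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False
```

A rule like this blocks the obvious bulk-dump case; as noted, a skilled attacker who knows the filter exists can trivially encode the data to slip past it.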
Network firewalls probably wouldn't have helped much in this case, unless they did something really stupid like leaving SSH open to the world. If it was just a site exploit, shame on them for having such a poor system (shame on them anyway for setting up a system that allows this to happen).
I was just giving examples; the rules themselves are quite complex, same as with most firewalls, but even more so given the variation of good/bad code out there! It's an impossible game.
That, however, wasn't my point. My point is that a firewall doesn't just have to refer to a network device, as the OP seems to suggest.
Unless the attacker was able to get root on the box via a privilege escalation vulnerability, they would not be able to disable a firewall that blocks access to ports other than 80.
"Sony said it has added automated software monitoring and enhanced data security and encryption to its systems in the wake of the recent security breaches."
Sounds like they have thrown a substantial amount of money (instead of skill) at the problem.
A firewall might not have stopped the attackers from owning the web server, but a proper firewall or set of firewalls could perhaps have stopped the attackers from getting PAST the web server.
Is it perfect? Of course not. Is it another layer of protection? Sure it is.
I've been working a long time in the "security industry".
Believe me, there are reasons why I call products like WAFs snake oil...
WAFs aren't perfect; no security product is. They do allow you to implement protection against many common types of attack on your website. This is useful if your site runs applications whose XSS/injection/etc. issues you can't fix yourself (you don't own the code, you don't have the resources, etc.). That matters because most websites out there run old and/or closed-source and/or third-party code, and/or lack the internal resources to identify and fix every vulnerability 100% of the time. A well-tuned WAF provides a decent layer of protection. It also lets you satisfy PCI DSS 1.2 Req #6.6 without doing pen testing/vuln testing after every single code change you release.
The idea is similar to using blacklists in XSS or SQL injection filter functions. In theory they could block everything malicious, but in practice they're poorly written, poorly configured crap that acts as security theatre more than anything else. The proper approach is to use context-sensitive whitelists for all client input, not to add on layers of what is essentially protocol grep.
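A sketch of what "context-sensitive whitelist" means in practice: each input field declares exactly what it will accept, and everything else is rejected, rather than trying to enumerate bad patterns. The field names and patterns here are illustrative:

```python
import re

# Each field's whitelist names exactly what is allowed for that context;
# anything not matching is rejected outright.
WHITELISTS = {
    "username": re.compile(r"[a-z0-9_]{3,20}"),
    "zip_code": re.compile(r"\d{5}"),
    "quantity": re.compile(r"[1-9]\d{0,2}"),
}

def validate(field: str, value: str) -> bool:
    """Accept a value only if its field has a whitelist and the value
    matches it in full. Unknown fields are rejected by default."""
    pattern = WHITELISTS.get(field)
    return bool(pattern and pattern.fullmatch(value))
```

The contrast with a blacklist is the failure mode: a whitelist that's too tight annoys a user; a blacklist that's too loose lets an attacker through.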
>"The proper approach is to use context-sensitive whitelists for all client input, not add on layers of what is essentially protocol grep."
It's regex for HTTP requests / responses. Literally, that's all it does.
>"WAFs are usually viewed as relatively useless as they waste time on dumb attacks (specifically blacklisting) that harms more than it helps. "
By who? References? As I mentioned above, we use WAFs and they help a lot with stupid attacks, because stupid attacks are what most of the attacks are; automated attack crap running on botnets to put up phishing pages on easy targets.
For example, HTTP traffic can be inspected to identify threat signatures. A firewall or IDS can be configured to drop packets from a threatening IP address after an attack signature has been identified.
An attack signature might be a blacklisted URL, e.g. /cgi-bin/mail.pl, or it could be a SQL injection attempt, a buffer overflow attempt, or a DDoS attempt.
The idea is to prevent this traffic from ever reaching the web server machine.
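In miniature, that inspect-then-drop flow looks something like this (toy signatures; real IDS rules such as Snort's are far richer, and real systems work at the packet level rather than on request strings):

```python
import re

# Toy signature set: one blacklisted URL, one naive SQL injection marker.
SIGNATURES = [
    re.compile(r"/cgi-bin/mail\.pl"),
    re.compile(r"(?i)union\s+select"),
]

blocked_ips = set()

def inspect(src_ip: str, request_line: str) -> bool:
    """Return True if the request may pass through to the web server."""
    if src_ip in blocked_ips:
        return False  # source was blacklisted after an earlier match
    if any(sig.search(request_line) for sig in SIGNATURES):
        blocked_ips.add(src_ip)  # drop this request and all future traffic
        return False
    return True
```

Once a signature fires, every subsequent packet from that address is dropped before it ever reaches the web server machine.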
Also, re: blacklisting DDoS...hahaha, against a real botnet, good luck with that. I could take down RioRey in 30 seconds if I wanted to right now (google "slowloris.pl") by myself. Kind of hilarious seeing as they sell DDoS protection. All the DDoS prevention in the world can't stop crazy traffic with real-world-emulating usage patterns. It literally is indistinguishable from legitimate traffic if done correctly...just ask paypal.
Getting people to /allow/ you to patch servers is like pulling teeth. Seriously.
If the OS itself is so far out of date that you can hardly find patches for it anymore, the issue is even worse.
The mere specter of something possibly breaking is usually reason enough in many people's minds to not prioritize security updates, or in some cases, flat out disallow them.
Edit: keep in mind that this is anecdotal, I'm sure there are companies that patch their servers properly.
If they're running RHEL (which is likely), the version number doesn't mean anything, since Red Hat backports all security patches.
In the Sony case, the majority of the victims are likely young people whose sense of risk, privacy and consequence are not yet fully developed, and thus they may also not understand the full ramifications of what has happened. Presumably, both companies are large enough that they could have afforded to spend an appropriate amount on security and privacy protections of their data; I have no information about what protections they had in place, although some news reports indicate that Sony was running software that was badly out of date, and had been warned about that risk.
Also, did they never do a security audit??
Given what I know about Sony as a game developer, I would not be even remotely surprised to learn that they've never done a security audit.
Which is probably why all our docs were in Japanese for the first three months.
Contrast this to the N64 at the time, which had a $1,000,000 buy-in for a developer license, or the Saturn, which was by all accounts a nightmare to develop for that made the PS2 look like child's play.
After that, the support comes down to the economics of numbers. Most devs I know would have gladly made games on Dreamcast forever, but (piracy/marketing/apathy) killed it, and the PS2 was all that was left.
The selection criteria were mostly based on how well the candidates performed on an entrance examination consisting primarily of math and science questions.
The company then spent the first few years of the employees' time there training them to develop software.
Based on what other people told me this kind of thing is pretty common practice.