If I see one more article on this incident that abuses the word "firewall" I'm going to hurt someone. Surely Apache is either accessible via port 80, or it isn't. What would a firewall do to mitigate vulnerabilities in a webserver?
Look at modsecurity.org. That's what people call (rightly or wrongly) a web application firewall. You can put a bunch of rules in, and if it sees certain bad incoming requests or certain outgoing responses (all of which are configurable) it'll take whatever action you've got configured.
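For a concrete picture, a ModSecurity rule is just a pattern plus an action to take on a match. A minimal sketch (the rule id and message here are made up for illustration, not taken from any stock ruleset):

```apache
# Hypothetical rule: deny any request whose parameters contain a
# classic SQL injection probe, log the match, and return 403.
SecRuleEngine On
SecRule ARGS "@rx (?i:union\s+select)" \
    "id:100001,phase:2,deny,status:403,log,msg:'SQLi probe blocked'"
```

The real rulesets ship hundreds of these, covering XSS, injection, traversal, and known-bad URLs.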
There are commercial hardware devices that'll do the same sort of thing modsecurity does - I guess it's being suggested Sony didn't use any, which IMHO is very stupid.
If you look at the definition of firewall, modsecurity seems to fit it: "A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications." I don't think the term is being abused, just used in a way that people aren't familiar with. Most people seem to think a firewall is only a network (IP or Ethernet) level device.
mod_security is a hack that you put in front of hacks to make them collapse in a more amusing manner. The idea is to stop the dumbest of dumb attacks.
People after 20 million credit card numbers can probably find two bugs to exploit, rendering the "protection" useless.
People trying to protect 20 million credit card numbers need to learn how to sanitize inputs and be able to render correct pages even if someone submits <script>'drop database. If they don't know how, it's time to hire programmers to write the applications instead of the monkeys they currently have.
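The standard fix for the SQL half of that is parameterized queries: untrusted input goes in as data, never spliced into the SQL string. A minimal Python sketch using the stdlib sqlite3 module (the table and the hostile input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Untrusted input -- note the attempted injection.
user_input = "'; DROP TABLE users; --"

# Parameterized query: the driver treats the value as data, never as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the literal string is stored; nothing was executed
```

The same principle applies on output: escape for the context (HTML, JS, SQL) instead of trying to enumerate bad strings.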
I'd much rather put up some barriers that'll make it harder for the hackers. I'm not saying modsecurity (or anything else) is a perfect prevention, but combined with other things I can't see how you can argue it's _not_ useful. Are you so confident in your sanitized inputs that you run your webserver as root?
I have it deployed on sites where people are using Drupal and Wordpress with addon modules. I have at least 2 documented cases where it's stopped an exploit that would otherwise have gotten through (though I'm fairly sure the setup of the webserver would have stopped anything bad from happening).
Your last sentence seems to be suggesting I was supporting a "just chuck modsecurity in front of it and don't worry about security" attitude, which I wasn't at all. All my original reply was trying to say is that an Application Level Firewall is still a firewall.
I agree; I always design my software with as many failsafes as possible. For example, I design my applications to crash safely. But, I also try to make sure they never crash.
Similarly, web application developers need to make sure that their app is 100% safe without hacks like mod_security. But after you do that, sure, turn on mod_security. People and processes can fail, and it's good to have as many failsafes as possible.
I object to things like mod_security because, in general, people write piece of shit apps and then think they are safe because the mod has the word "security" in it. That doesn't make you safe, that makes you ignorant.
That's really stupid, I agree.
But if your webhost was also dumb enough to block port 25 outbound on their firewall, would you say that Netscreen Firewalls are stupid?
A firewall is only as good as the rules the admins have deployed on it. Deploying tight rules like that on shared hosting is very stupid, but the rules used and modsecurity itself are two separate things.
The thing is, there are ports you might legitimately want to block. For sites where users submit data, there is seldom any easily definable text you really need to stop them from submitting unless your underlying app is made out of toothpicks and chewing gum.
>"But if your webhost was also dumb enough to block port 25 outbound on their firewall, would you say that Netscreen Firewalls are stupid?"
Blocking outbound port 25 has legit uses: it keeps compromised machines from being able to submit mail where the only information about the source of the mail is the IP address. Port 25 is meant for MTA-to-MTA mail transfers. Outbound 587, named "submission", is the port you can connect to at your email service provider to drop off outbound mail. This port is required to support authentication before accepting mail. This provides an additional layer of auditing and access control, and thus a way to easily turn off problem senders on a per-account basis, should the need arise.
A smart webhost would block outbound port 25 and require you to send mail authenticated through an MTA provided for the purpose. But of course, chances are most people here are not using purely "web hosting services", but rather dedicated machines or VMs that run their own MTAs. However, any decent MTA can be configured to use a relay host and authentication, which is fine unless you're sending massive volumes of email (in which case, why are you sending it from a rinkydink webhost or single VM?)
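To make the submission-port point concrete, here's a minimal Python sketch using the stdlib smtplib and email modules. The hostname and credentials are placeholders, and the actual send is commented out since it needs a real relay:

```python
import smtplib
from email.message import EmailMessage

def send_via_submission(host, user, password, msg):
    # Port 587 ("submission") requires STARTTLS + authentication,
    # unlike anonymous MTA-to-MTA transfers on port 25.
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "user@example.net"
msg["Subject"] = "Hello"
msg.set_content("Sent through an authenticated relay, not raw port 25.")
# send_via_submission("mail.example.com", "app", "secret", msg)  # needs real creds
```

Because every message is tied to an authenticated account, the provider can audit and shut off a problem sender without touching anyone else.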
Webhosting sysadmin here. We start off with a strict ruleset for mod_security and then add exceptions as needed... they really stick out in the per-vhost error log, so it takes all of about 5 minutes to fix. We err on the side of caution, and the set we use allows the majority of sites to operate without issue. We'd rather spend 10 minutes on a ticket explaining to the customer that it's just us being overly cautious than any real issue. We have to weigh risk mitigation vs usability.
Also, probably >95% of the "website hacks" I see are automated, so mod_security really does greatly cut down on the number of exploits. Sure, if you have a dedicated hacker who knows his or her way around things like modsec, then it won't matter at all, but the number of hacks we've seen has decreased greatly due to mod_security. You only get into trouble with it if you pretend that it's anything more than it is: regex filtering for requests.
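"Regex filtering for requests" really is the whole idea, and it's why it works against automated attacks. A toy sketch in Python (the signature list is invented; real rulesets are far larger and more careful):

```python
import re

# Toy WAF: a few blacklist patterns like the kind mod_security ships with.
SIGNATURES = [
    re.compile(r"(?i)<script\b"),          # naive XSS probe
    re.compile(r"(?i)union\s+select"),     # naive SQLi probe
    re.compile(r"\.\./"),                  # path traversal
]

def blocked(request_line: str) -> bool:
    """Return True if any signature matches the request line."""
    return any(sig.search(request_line) for sig in SIGNATURES)

print(blocked("GET /index.php?q=hello"))                  # False
print(blocked("GET /index.php?q=1 UNION SELECT passwd"))  # True
```

Automated botnet scripts fire exactly these canned probes at thousands of hosts, which is why even crude pattern matching knocks out most of the noise.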
To the guy talking about validating input...perhaps you should spend a bit more time on the internet and notice all the sites running copies of WordPress with defaults. This is how the vast majority of websites are...default everything. Input validation is great if you have a custom app and a development team, but most people don't. While it's arguable that they should even be running their own site, it doesn't change the fact that they do. They don't have time, and we don't have time to go through all of their PHP that accepts _GET and _POST and make sure that they're handling input validation / sanitation properly. Yes, the people who develop wordpress / whatever CMS they're using should set some good defaults for input validation and use proper sanitation techniques, but the truth is that tons of sites run on shaky code bases and old versions...so mod_security is the "quick fix" that covers the vast majority of cases and protects tons of our users.
suPHP is great too (privilege separation). Combined with jailkit, our systems are pretty well locked down for shared hosting.
That all said, Sony SERIOUSLY screwed this up. Their system should have better secured the Cardholder Data Environment (the PCI-DSS name for any system that touches CC info). My guess would be poor architecture planning / implementation as to why this obviously wasn't done. Also, mod_security has some filters for data leakage which can be tweaked to prevent obvious HIPAA stuff and obvious PCI-DSS stuff, such as plaintext transfer of zillions of CC numbers. A skilled hacker who broke in could, again, pretty easily find a way around it.
Network firewalls probably wouldn't have helped much in this case, unless they did something really stupid like leaving SSH open to the world. If it was just a site exploit, shame on them for having such a poor system (shame on them anyway for setting up a system that allows this to happen).
Just a caveat: I do not believe these were PCI cards. The European cards were probably bank cards or proprietary cards not subject to PCI (all this is a guess). I would guess, again, that it was probably related to a legacy system from some integration of a purchased company.
The presence of a firewall would mean that somebody took some care about web interface security. While they were at it, they might have gone as far as to even patch the Apache... The described situation at Sony suggests that nobody took any care. Why would the company need a good sysadmin when it has an army of lawyers and money for them :)
"Sony said it has added automated software monitoring and enhanced data security and encryption to its systems in the wake of the recent security breaches."
Sounds like they have thrown a substantial amount of money (instead of skill) at the problem.
Not just mod_security, but any decent firewall will provide a number of options that can be employed to reduce the attack surface on the web server, and protect it from threatening endpoints.
For example, HTTP traffic can be inspected to identify threat signatures. A firewall or IDS can be configured to drop packets from a threatening IP address after an attack signature has been identified.
An attack signature might be a blacklisted URL eg: /cgi-bin/mail.pl or it could be a SQL injection attempt, or a buffer overflow attempt, or a DDOS attempt.
The idea is to prevent this traffic from ever reaching the web server machine.
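The drop-after-signature behavior described above can be sketched as a tiny stateful filter. The signatures and addresses here are invented for illustration; a real IDS would do this at the packet level:

```python
import re

# One combined signature: a blacklisted URL or a SQLi probe.
ATTACK_SIG = re.compile(r"(?i)/cgi-bin/mail\.pl|union\s+select")
banned_ips = set()

def handle(ip: str, request: str) -> str:
    """Decide what to do with a request from a given source IP."""
    if ip in banned_ips:
        return "DROP"                # previously flagged source
    if ATTACK_SIG.search(request):
        banned_ips.add(ip)           # ban the source after one match
        return "DROP"
    return "FORWARD"                 # clean traffic reaches the web server

print(handle("10.0.0.5", "GET /cgi-bin/mail.pl"))  # DROP, and IP banned
print(handle("10.0.0.5", "GET /index.html"))       # DROP (now blacklisted)
print(handle("10.0.0.6", "GET /index.html"))       # FORWARD
```

Once a source trips a signature, even its innocent-looking follow-up traffic never reaches the web server.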
Yeah, I forgot about IDS in my reply further up. Web Application Firewalls or Intrusion Detection Systems might have saved this, or they might have just slowed the attackers down.
Also, re: blacklisting DDoS... hahaha, against a real botnet, good luck with that. I could take down RioRey by myself in 30 seconds right now if I wanted to (google "slowloris.pl"). Kind of hilarious seeing as they sell DDoS protection. All the DDoS prevention in the world can't stop crazy traffic with real-world-emulating usage patterns. It literally is indistinguishable from legitimate traffic if done correctly... just ask paypal.
slowloris is in no way a DDoS. It's going to take a competent sysadmin about a minute to find and block the attack. If you've got a gigabit attack, that's a lot harder to block than one person with a misbehaving client.
That's true. A firewall won't protect a vulnerable webserver. Still, if ports other than port 80 are open then perhaps this is further evidence that the people running the servers weren't taking security as seriously as they should have.
I disagree. While not a magic "I've added that, now I'm totally secure", the one I have deployed stops many attacks designed to infect old code. I don't have that old code, but if I did, the WAF would stop the attacks against it.
Is it perfect? Of course not. Is it another layer of protection? Sure it is.
WAFs aren't perfect; no security product is. They do allow you to implement protection against many types of common attacks against your website. This is useful if your site runs applications whose XSS/Injection/etc... issues you don't have the ability to fix (you don't own the code, you don't have resources to do it, etc...). This is actually pretty important, as most websites out there run old and/or closed source and/or 3rd party code, and/or don't have internal resources to identify and fix every vulnerability 100% of the time. A well-tuned WAF provides a decent layer of protection. They also let you satisfy PCI DSS 1.2 Req #6.6 without doing pen testing/vuln testing after every single code change you release.
WAFs are usually viewed as relatively useless, as they waste time on dumb attacks (specifically blacklisting), which harms more than it helps. Only the stupidest attacks can be caught by WAFs, and they are more likely to block legitimate traffic than to help with security.
The idea is similar to using blacklists in filter functions for XSS or SQL protection. In theory they could block everything malicious, but in practice they're poorly written, poorly configured crap that acts as security theatre more than anything else. The proper approach is to use context-sensitive whitelists for all client input, not add on layers of what is essentially protocol grep.
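To illustrate the whitelist idea: instead of grepping for known-bad strings, define what each field is allowed to be and reject everything else. A minimal Python sketch (field rules are made up for illustration):

```python
import re

# Context-sensitive whitelists: accept only what the field can legally be.
USERNAME = re.compile(r"^[A-Za-z0-9_]{3,20}$")   # letters, digits, underscore
QUANTITY = re.compile(r"^[1-9][0-9]{0,3}$")      # 1..9999, no leading zero

def valid_username(s: str) -> bool:
    return bool(USERNAME.match(s))

def valid_quantity(s: str) -> bool:
    return bool(QUANTITY.match(s))

print(valid_username("alice_99"))                # True
print(valid_username("<script>'drop database"))  # False: not in the whitelist
print(valid_quantity("42"))                      # True
```

The key property is that nothing outside the whitelist gets through, so there's no blacklist to bypass with an encoding trick.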
And do you really think that's a feasible expectation for the typical shared hosting client -- a business owner with little tech experience who doesn't have the money to hire an actual good developer? The person who doesn't even know that they don't know good developers from bad developers?
>"The proper approach is to use context-sensitive whitelists for all client input, not add on layers of what is essentially protocol grep."
It's regex for HTTP requests / responses. Literally, that's all it does.
>"WAFs are usually viewed as relatively useless, as they waste time on dumb attacks (specifically blacklisting), which harms more than it helps."
By who? References? As I mentioned above, we use WAFs and they help a lot with stupid attacks, because stupid attacks are what most of the attacks are; automated attack crap running on botnets to put up phishing pages on easy targets.
Reasons like... what, exactly? We're a hosting company and we use them on most servers. They're not perfect, but they prevent probably about 95% of the automated attacks that we see come through. If it's enough protection to make them move on to something easier, it's better than nothing. I agree with you that they're pretty easy to bypass, and shame on companies like Barracuda Networks who sell Supermicro servers with CentOS and mod_security and a proxy set up with a fancy web interface and call that a "web application firewall", but they ARE better than nothing.
Yeah, I saw this rumor a while back and I wasn't convinced it was related. It's like saying Area 51 had a gap in the fence. That said, it's obviously indicative of bad security practice and will likely count against them either way.
Los Alamos did have a hole in the fence! Because everyone working there was a US citizen, the censorship was voluntary and had limits, so Feynman was able to write a letter out describing where the hole was.
What's funny: I've been as close as you can legally get to Area 51 (right at the warning signs, cammo dudes in sight). There is no fence, surprisingly. The reason, most likely: it would have to be a really long fence. That's a lot of land they've got out there.
In the Sony case, the majority of the victims are likely young people whose sense of risk, privacy and consequence is not yet fully developed, and thus they may also not understand the full ramifications of what has happened. Presumably, both companies are large enough that they could have afforded to spend an appropriate amount on security and privacy protections of their data; I have no information about what protections they had in place, although some news reports indicate that Sony was running software that was badly out of date, and had been warned about that risk.
I've been dealing with Sony platforms as a game developer for over a decade, and their primary method of interacting with others is one of arrogance. From sample code that doesn't work and still has Japanese comments, to incorrect documentation, to requiring developers to build all their own systems, Sony often doesn't seem to give a shit about the outside world.
Given what I know about Sony as a game developer, I would not be even remotely surprised to learn that they've never done a security audit.
I know some guys who are ex-SCEA dev support. According to them, the attitude of Sony's American and European teams was one of frustration that the Japanese headquarters are hardware guys with no interest in software. They have always been severely underfunded compared to the Xbox dev support team and they've had to make do by pushing off a lot of the work to the third parties.
Well, this started with the PS1 because it was actually quite easy to develop for. It had good tools, the tools were cheap, and you could build your game in C.
Contrast this to the N64 at the time, which had a $1,000,000 buy-in for a developer license, or the Saturn, which was, by all accounts, a nightmare to develop for, one that made the PS2 look like child's play.
After that, the support comes down to the economics of numbers. Most devs I know would have gladly made games on Dreamcast forever, but (piracy/marketing/apathy) killed it, and the PS2 was all that was left.
A lot of Japanese companies tend to consider software not as important as hardware... It's even more prestigious in Japan to study electronic engineering than computer science...
Because of that, the level of most programmers I've seen working in big companies in Japan is surprisingly low.
Based on my limited experience of interning at a Japanese software company, they don't have very good training or expectations for software people. At the company I worked for, the vast majority of developers they hired had little or no previous programming experience.
The selection criteria was mostly based on how well the candidates performed on an entrance examination that consisted primarily of math and science questions.
The company then spent the first few years of the employees' life there training them how to develop software.
Based on what other people told me this kind of thing is pretty common practice.