
I have been in this position myself and I agree with your stance. Networked software should be secure by default. The assumption should be "networked" == "exposed to the Internet". From my redis.conf:

"By default Redis listens for connections from all the network interfaces available on the server."

In my case the reason Redis ended up exposed to the Internet was that another dev "updated the configuration" to the default provided by the package. Okay, perhaps that was the wrong thing to do, but it simply shouldn't be that easy to expose a vulnerable service to the Internet! I understand that people should run firewalls too, but a belt and braces approach should really be taken here.
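For reference, the loopback-only configuration is a two-line change in redis.conf (protected-mode requires Redis >= 3.2; older versions would need the bind line alone plus a firewall):

```
# redis.conf: restrict Redis to the loopback interface so only local
# clients (or clients coming in over an SSH tunnel) can reach it.
bind 127.0.0.1

# Redis >= 3.2: refuse commands from remote clients when no password is set.
protected-mode yes
```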

PS. While I'm moaning, Jenkins is another big offender in this respect. I can't be the only person to have tried to side-step its own security by having it bind only to localhost and using SSH tunnelling, only to find that something which can execute arbitrary code is exposed to the public Internet when the configuration is switched back to its default. Yes, a firewall would fix it, but as far as I'm concerned, Jenkins shares some of the blame for being insecure by default in the absence of one.
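For anyone wanting the localhost-plus-tunnel setup described above, a rough sketch (the standalone jenkins.war invocation is illustrative; on packaged installs the same flags usually go in the service's JENKINS_ARGS):

```
# Bind Jenkins to loopback only, so it is unreachable from other hosts.
java -jar jenkins.war --httpListenAddress=127.0.0.1 --httpPort=8080

# From a workstation, forward a local port to it over SSH:
ssh -N -L 8080:127.0.0.1:8080 user@build-host
# Then browse http://localhost:8080 -- nothing is exposed publicly.
```

The fragility the parent describes is exactly that the first line lives in a config file that can silently revert to the all-interfaces default.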



The way Redis is pre-configured out of the box has certainly been evaluated extensively by antirez and the rest of his team. There are surely factors behind the decision to bind to every interface that we as end users aren't aware of. Either way, it's up to the engineer or IT team to perform a little due diligence and be cognizant of what their software is binding to. If something is going to be deployed, it's networked, and it's production, spend five or ten minutes on an impact analysis beforehand. You don't even have to do anything rigorous: five minutes with out-of-the-box utilities, netstat -p, and grep for whatever you added.
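That five-minute check might look like this (6379 is Redis's default port; a canned line stands in for real netstat output here so the logic is visible, but on a live host you'd pipe the real thing):

```shell
# On a real host: netstat -lntp | grep ':6379'   (or: ss -lntp)
# Sample line in the format `netstat -lnt` prints:
sample='tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN'

# 0.0.0.0:6379 means listening on every interface;
# 127.0.0.1:6379 would mean loopback only.
if echo "$sample" | grep -q '0\.0\.0\.0:6379'; then
    echo 'WARNING: bound to all interfaces'
fi
```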

Back in the day, daemons (especially IRCds) would have a 'comment me to enable' line arbitrarily inserted into the generated conf file, which would prevent the daemon from starting up at all. You'd spend ten minutes reading through the config files rather than just presuming they were configured the way you wanted. The belt and braces approach should be that. Stupid users wouldn't be able to just 'gem install <foo>' or 'apt-get install <bar>' and then complain so often that an engineer who has contributed so much to the community has to take time out of his day to justify "hey guys, you're sort of responsible for the way you configure your daemons". (Remember that antirez ran the entire project by himself, support and all, as a father, for years before he was finally able to work on it full time.)


I wasn't attacking antirez. We wouldn't be having this conversation if we didn't agree Redis is essentially good work. I was attacking insecure-by-default. The point is that it's not unlikely that someone might accidentally revert to the default configuration even when due diligence was done in the first place.

Can you please go beyond vague references like "tons of things behind the scenes" and "certainly been evaluated"? What's the direct answer to the question: "Why can't it just listen on loopback by default?"

I'm not sure "stupid users" and "clever Antirez" gets us any further towards a proper answer here.




