Digital Ocean: Ubuntu, Nginx, Unicorn, Rails (mccartie.com)
53 points by jonmccartie on Sept 1, 2014 | 48 comments



Great writeup. One suggestion on the nginx front: add an entry to drop requests for unknown hosts, e.g. http://104.131.41.220

https://github.com/h5bp/server-configs-nginx/blob/master/sit...

That github repo is a goldmine for understanding nginx configs too.
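
For reference, a minimal catch-all block along the lines of the linked config might look like this (a sketch, not the repo's exact file):

     server {
       # default_server catches requests whose Host doesn't match any server_name
       listen 80 default_server;
       server_name _;
       # 444 tells nginx to close the connection without sending a response
       return 444;
     }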


other things to consider:

     # if you compiled --with-http_spdy_module
     listen 443 ssl spdy;

     # amusing how many of these you'll get
     location ~* \.php {
       return 404;
     }

     # pretty sure you need to do this to have keepalive working
     # between nginx and the upstream
     proxy_set_header Connection "";
     proxy_http_version 1.1;

     # buffer writes to disk (for a busy site, you can use much larger values than 1K)
     access_log /var/log/nginx/access.log buffer=1K;

     # cache the ssl connection parameters
     ssl_session_cache shared:SSL:20m;
     ssl_session_timeout 10m;
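
For the keepalive between nginx and the upstream to actually kick in, nginx also needs a `keepalive` directive in the upstream block; a sketch (the upstream name and port are made up):

     upstream app {
       server 127.0.0.1:8080;   # hypothetical Unicorn/app port
       keepalive 16;            # idle connections cached per worker process
     }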


The .php block gave me a good laugh. Great idea. Just rejecting /wp-admin.php should reduce load significantly. :)


Good call. Thanks, man.


Also worth noting: you should run a firewall as part of the basic configuration. AWS includes this via Security Groups, but on DO you'll need to use iptables or ufw.


Good point. I didn't mention it in this post, but DO has a great article on getting started with firewalls: https://www.digitalocean.com/community/tutorials/how-to-set-...


Or `ufw` if you're on Ubuntu -- very easy to set up, much easier than crafting rules by hand.
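
For example, a minimal setup might look something like this (sketch; open only the ports you actually expose):

     sudo ufw default deny incoming
     sudo ufw default allow outgoing
     sudo ufw allow 22/tcp    # or your custom ssh port
     sudo ufw allow 80/tcp
     sudo ufw allow 443/tcp
     sudo ufw enable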

I'm also a fan of running `sshd` on an off-numbered port to add another layer of protection against zero-day attacks. Most worms spread by compromising a service on a host, and then hitting everything around that host, but (to my knowledge) most of these depend on targeted services living on their default ports.

It won't buy you anything against a direct attack, but security is all about layers of defense, not just having a hard outer shell.


I've heard Ansible over SSH is good for automating the installation. Does anyone have experience with it? I'd be interested in a good tutorial similar to the OP's article.



Thanks for sharing! Great that DigitalOcean has many useful tutorials.

The tutorial doesn't touch the "replace" module (http://docs.ansible.com/replace_module.html), which seems useful for modifying existing config files.
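
For reference, a hypothetical task using it might look like this (the file path and pattern here are made up):

     - name: Point nginx at the new app socket
       replace:
         dest: /etc/nginx/sites-enabled/default
         regexp: 'server unix:/tmp/old\.sock;'
         replace: 'server unix:/tmp/new.sock;'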


Quick question: I don't know much about setting up a Linux server, but I found it interesting that the post had nothing related to security besides setting up an SSH key and a separate user for deployment.

What security-related tasks do you do when setting up a new server? Besides the above, the only things that come to mind for me are:

1. Change the SSH port from the default.

2. Block unwanted traffic via iptables.

3. Protect SSH with fail2ban.


Something that a lot of Ruby deployment setups seem to be missing is setting up proper user permissions for apps and deployment.

The first step is to have separate user accounts for everybody that's going to be deploying apps. You really don't want shared accounts (like a single 'deployer' user), because you get no audit trail on who-does-what, and you effectively lose access control to your machines.

The second step is to run the app itself as a limited-privilege user -- no write permissions to anything except for logs and tempfiles. A lot of attacks depend on your app being able to overwrite parts of its own code; if that can't happen, the attack fails.
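
A rough sketch of that second step (the user names and paths here are hypothetical):

     # unprivileged user the app runs as
     sudo adduser --system --group --no-create-home myapp
     # code owned by the deploying user; the app can read it but not write it
     sudo chown -R alice:myapp /var/www/myapp
     sudo chmod -R g-w,o-rwx /var/www/myapp
     # only logs and tempfiles are writable by the app user
     sudo chown -R myapp:myapp /var/www/myapp/shared/log /var/www/myapp/shared/tmp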


Some very basic security:

1. Change SSH port

2. Block all password logins

3. Disable root login (i.e. a non-root user must log in via a key, then su or sudo to root; see the sshd_config sketch after this list)

4. Lock down everything with IPTables

5. Allow only encrypted connections to the few ports open via IPTables (normal exception is port 80 for http).

6. Segment the network (with DO private networking your machines are open to anyone within the same facility, so you need to isolate your private network from other DO customers)

7. Lock down each running service/daemon per its standard configuration. For example, if you're running Postgres, lock it down per pg best practices.

8. Create individual, non-privileged users for each service that you're starting. E.g. if you're running an app server, it should have its own user.

9. Sandbox everything that can be sandboxed.

10. Turn off everything that is not being used, then uninstall it.

11. If you don't need them, then uninstall build tools such as compilers. Have a separate build machine.

12. Only open edge-of-network machines to the outside world. On DO, turn off eth0 on all machines that customers do not directly interact with.

13. All services that are internal-use only should have public networking totally disabled. E.g. Redis should only run on a private IP, should have the public interface disabled, and should only be accessible via a secure connection (ideally with some type of true authentication, replay protection, etc.). Or just run these services on a trusted network and pray that nobody penetrates your network deep enough to find out that you're totally unprotected once inside.

14. Don't keep private keys/certs in places that they are not needed.

15. Have a map of each service, the user that it runs under, what resources that user has access to, and a general map of what sequence someone has to follow to control your network. If you catch someone mid-attack, you can start to lock off parts of your network.

16. Logging! Monitor the logs for attacks, and ship them off the machine in question so that you can review them even if a machine is compromised.

There's lots more, but this is a basic list that should be configured via Ansible/Chef/Puppet/Salt for each box.
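
A concrete sketch of items 1-3 (assuming OpenSSH; the port number is arbitrary):

     # /etc/ssh/sshd_config
     Port 2222                   # 1. non-default port
     PasswordAuthentication no   # 2. key-based logins only
     PermitRootLogin no          # 3. log in as a normal user, then su/sudo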

One other item I've heard is that some people like to run heterogeneous networks.


Thank you so much for all that. I took a great introductory Linux course by the Linux Foundation over at edX[0], which really filled me in on a lot of the stuff about Linux one misses out on when learning on one's own. Sadly, the subsequent Linux Foundation training courses are way out of my price range. Even the intro course I took for free on edX is offered by the Linux Foundation for thousands of dollars.

The good thing is I can mostly learn stuff on my own, but I have a hard time figuring out what I should be learning. Your comment helps a lot in that department, even if I don't quite understand how to do some of the stuff you're saying. E.g. segment the network? Sandbox? Isn't eth0 my primary interface for connecting to my server? Which logs should I be monitoring (I've only heard of auth.log)?

Thanks.

[0] https://www.edx.org/course/linuxfoundationx/linuxfoundationx...


Most important: disable password login to ssh. Optional: disable direct login for root. Optional: limit the hosts that can log in with a whitelist for sshd in hosts.allow (and ALL: ALL in hosts.deny). On Linux I generally don't lock down ports w/iptables - better to not have anything listening on a public interface that's not supposed to be publicly available (most distros have sane defaults when set up as a server config these days).
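
That whitelist is just a couple of lines (sketch; the addresses are placeholders):

     # /etc/hosts.allow
     sshd: 192.0.2.10, 198.51.100.
     # /etc/hosts.deny
     ALL: ALL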

As for DO: upload a pubkey to the control panel for root access and have it installed for you in new images.


#2 and #3 in your list are rather sweeping (fail2ban does more than protect ssh). I'm no expert either; the only thing I'd add is to disable password-based logins and root login [1].

[1] http://www.unixlore.net/articles/five-minutes-to-more-secure...


I personally have mixed feelings about disabling password-based login. What if you've lost your key and you need to access the server, for reasons such as getting the data out? How do you do it without root and without a login password? Is there a way around this issue? Let me know if there is, because right now I have a login password that I don't re-use. I guess for a real deployment, serving real users, you'd have sharding and a good central logging system, so losing an instance is not a big deal; but I run a personal server, and so far this (a login password plus SSH) is how I protect myself in case I lose my key :(

Anyhow, DO droplets allow you to "reset" the root password, so it is important to protect your DO account.


Remote consoles still allow you to log in with a password -- you just can't over SSH.


You can enable two factor auth for your DO account.


I used to prefer the flexibility and efficiency of having my VPS setup, and it's certainly cheaper. Over the past 2 years though, I've saved so much time and sanity using Heroku.

I certainly can understand saving thousands of dollars and improving performance by moving from Heroku to a VPS or bare metal solution, but $90 is not uncomfortable enough for me to warrant the change.

Interesting article!


Question -- do you have any background as a sysadmin? If not, then I can totally understand the attractiveness of something like Heroku. It really does take care of a lot of nastiness.

From my side, I've got a fantastic Ansible setup for databases, security, ssl certificate distribution, and so on, and so Heroku doesn't really buy me anything for the extra money.


None as a full-time sysadmin; most of my experience has been in operations and in configuring servers for my own projects.

I've used Puppet and Chef before, but haven't checked out Ansible. My main dissatisfaction with those tools was that there wasn't a super easy way to get started on a new VPS: I had to configure roles and settings to install Puppet or Chef, and _then_ I could get started automating things. It's certainly not hard, but I've never been in a position where it was something I did full time, so I've really valued platforms that do a lot of this for me.


That's why I picked Ansible. The only dependency is SSH, and you can write modules in whatever language you choose.

Plus -- and this is a big win -- Ansible actually has usable documentation and practical examples, which, coupled with a really clean design, make it way easier to get developers up to speed on the operations half of things.


Can you recommend tutorials or examples for Ansible? Is Ruby knowledge required to use Ansible efficiently? I would like to automate the installation of nginx, php, nodejs, mysql, etc., including modifying a few config files.


Ansible is mostly configuration-based: Ansible itself is written in Python, but all you'll typically need to write for most usages is configuration in YAML. So no Ruby at all.
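
A minimal playbook is just a YAML file; something like this sketch (the host group is a placeholder):

     # site.yml
     - hosts: webservers
       become: yes
       tasks:
         - name: Install nginx
           apt:
             name: nginx
             state: present

Run it with `ansible-playbook -i hosts site.yml`.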

There are likely existing roles for most things at http://galaxy.ansible.com/

The Ansible docs at http://docs.ansible.com/ are pretty good, and there are lots of getting started guides most weeks on the Ansible weekly mailing list (https://devopsu.com/newsletters/ansible-weekly-newsletter.ht...)

And there are a few pages on my GitHub blog about some fundamental principles of modelling configuration, e.g. http://willthames.github.io/2014/03/17/ansible-layered-confi... and http://willthames.github.io/2014/04/02/modelling-credentials..., to give a bit of flavour of how to manage more complicated setups. Installing things is easy; getting the configuration right for your needs is the tricky bit!


Ansible is written in Python, so no, you don't need Ruby knowledge, and I've found the documentation -- especially the tutorials -- to be very helpful: http://docs.ansible.com/intro.html


Thanks, David. We use Heroku a TON at work and I absolutely love them. For my small-ish app, I just couldn't afford the extra cash to pay Heroku to manage my app. Thanks for reading!


Ah, that makes a lot more sense! I quickly skimmed through SproutMark and it seemed great, so I assumed it was a full-time job.


That's the best compliment I've gotten all day. Thanks! :)


Heroku apps aren't performant in many areas of the world, e.g. Australia, where I am. Amazon EC2 has an Australian region, which is amazing.


I'm also Australian, but we have a fairly international customer base, so the ~150ms difference for requests that hit the application hasn't been a big deal for us in the scheme of things.


Yeah, we have a near-realtime app and all our customers are in Australia. We tried Heroku; meh. The silly thing is they run on Amazon, so they should be able to deploy to Australia.

We also do things Heroku doesn't support, with some VPN setup etc., but it's all good for those who can use it.


What about backups? Check out Duplicity for backing up to S3. Rackspace has quite a nice built-in tool for backups too, if you ever leave your current setup. Duplicity: http://duplicity.nongnu.org/
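
A minimal invocation looks something like this (sketch; the bucket name is made up, and duplicity reads AWS credentials from the environment):

     export AWS_ACCESS_KEY_ID='...'       # your S3 credentials
     export AWS_SECRET_ACCESS_KEY='...'
     # encrypted, incremental backup of /var/www to an S3 bucket
     duplicity /var/www s3+http://my-backup-bucket/www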


Would love to see a part two of this, for when your app gets popular... covering three simple (but perplexing for beginners) things:

1) dedicated database server

2) two application servers

3) git push to multiple application servers (a sketch of one approach for this follows below)
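
For #3, one approach is a single remote with multiple push URLs (sketch; hosts and paths are hypothetical):

     git remote add production deploy@app1.example.com:/var/repos/myapp.git
     # once any explicit push URL is set, git only uses push URLs for pushing,
     # so add both app servers
     git remote set-url --add --push production deploy@app1.example.com:/var/repos/myapp.git
     git remote set-url --add --push production deploy@app2.example.com:/var/repos/myapp.git
     git push production master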


Great idea, thanks!


I wouldn't recommend running Nginx, Unicorn, Rails, Redis, and PostgreSQL all on one instance. Better to offload the databases onto their own VPSes.


An even better idea is not to host the databases at all.

Only those who have never experienced a corrupted backup or failed slaves think a database is something that is relatively trivial to manage.

You're much better off looking at platforms like RDS, MongoHQ, Cloudant etc.


In my experience, hosted database providers are almost always either cost or latency prohibitive.

Nowadays, in PostgreSQL it is quite simple to replicate a db via log shipping[1]. You can even stream the WAL to an S3 bucket.

[1]: http://www.postgresql.org/docs/9.3/static/warm-standby.html
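
The relevant primary-side settings are only a few lines; a sketch (assuming a tool like WAL-E for the S3 part):

     # postgresql.conf on the primary
     wal_level = archive                     # or hot_standby for a readable standby
     archive_mode = on
     archive_command = 'wal-e wal-push %p'   # ships each finished WAL segment to S3
     archive_timeout = 60                    # force a segment switch at least every 60s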


I currently use MongoHQ and get response times in the tens of milliseconds for $18/month. Hardly cost or latency prohibitive. I've seen similar times for less money with Cloudant.

Making sure that a master/slave setup works when you need it to requires extensive testing and care. You also failed to address backups. I've yet to meet anyone who runs their own database who actually tests their backups.


Agreed. This is just a temporary solution for a small-ish app. Looking forward to solving those problems...


I find the references to the nano text editor endearing.


LOL! It was intentional. :)


> sudo apt-get install nginx

No, the Ubuntu repository is outdated; you should add the nginx team PPA:

    sudo add-apt-repository ppa:nginx/stable
    sudo apt-get update

Now you can 'sudo apt-get install nginx'.



I think this is a typo; "deafult" instead of "default"

    nano /etc/nginx/sites-enabled/deafult


Ack! Good catch! Fixed!


While I appreciate the time and effort put into both this hosting move and the subsequent writeup, I really can't help but feel the time would have been better spent on gaining more traffic and users than on the move.

OP is saving $90/mo, less than what adding just one new monthly subscriber on his 'Premium' plan would bring in.


It didn't take long ... and I wanted to take a break from staring at my non-performant Facebook ads and making cold calls.



