

Ask HN: DevOps guides for HA SQL, load balancer, backups, on rented hardware? - plasma

Hi HN,

I'm decent with Windows/Linux server management, but I'd need to sink some serious time into understanding how to glue everything together into reliable, field-tested hardware deployments.

I'm aware of AWS/Azure/Heroku and how they offer these bits of glue and take care of them for you.

As a devops/software developer/startup owner, I'm interested in understanding how to:

1) Deploy a highly available database (say Pg/MySQL) with a simple master and slave.

Looking at PgSQL, there are tools like repmgr (http://www.repmgr.org) to handle replication, and other tools to then handle backups, but these aren't trivial; there's still complexity overhead. How do I then automate moving a virtual IP (VIP) so that app servers always point to the same IP for the database (making failover as transparent as possible)?

2) Automate backups of the database.

This seems easier; there are tools available that do log shipping to S3 etc.

3) Implement a load balancer.

I can rent hardware at Hetzner for the web servers, but I can't easily balance requests across the machines unless I set up a load balancer of some kind.

4) Firewall/VPN the infrastructure so database servers etc. are not publicly accessible (similar to Amazon security groups).

--

I'm interested in hearing any battle-tested deployment strategies anyone can offer insight into for these kinds of scenarios. Can anyone share resources, insights, or experiences, or name other glue pieces you need to roll on your own when not using a PaaS/IaaS provider?
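[Editor's note] For point 2, one commonly described pattern is PostgreSQL's built-in continuous archiving: `archive_command` hands each finished WAL segment to a script that ships it to S3 (tools like WAL-E wrap this for you). A minimal sketch, assuming the `aws` CLI is installed and configured, and `my-wal-bucket` is a placeholder bucket name:

```shell
# In postgresql.conf (PostgreSQL calls the script once per WAL segment;
# %p expands to the segment's path, %f to its file name):
#   wal_level = replica
#   archive_mode = on
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'

# /usr/local/bin/archive_wal.sh
# Copies one WAL segment to S3; a non-zero exit makes Postgres retry.
set -eu
aws s3 cp "$1" "s3://my-wal-bucket/wal/$2" --only-show-errors
```

This gives point-in-time recovery when combined with periodic base backups; it is a sketch of the general technique, not a production-hardened script.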
======
stevekemp
There are several options for software load-balancing. You can go all out and
use heartbeat, or you can use something like varnish/pound - both of which
will work as a reverse proxy, routing traffic to N backend servers.

I wrote about migrating to a cluster here:

[http://www.debian-administration.org/article/683/Redeploying...](http://www.debian-administration.org/article/683/Redeploying_Debian-Administration.org_..).

In brief, I used "ucarp" to keep a virtual IP always up on one of four hosts.
On that virtual IP I have Pound listening for SSL and Varnish for HTTP. Pound
forwards to Varnish, and Varnish does some caching and works as a
load-balancer across the Apache back-ends.
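[Editor's note] The ucarp half of this can be sketched roughly as follows. All addresses are placeholders (203.0.113.10 is the shared VIP, 203.0.113.1 this host's real IP); run the same command with the same vhid/password on each host, and ucarp elects one master and fails the VIP over when it dies:

```shell
# Advertise for the virtual IP; up/down scripts attach/detach it.
ucarp --interface=eth0 --srcip=203.0.113.1 --vhid=1 \
      --pass=secret --addr=203.0.113.10 \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh

# /etc/ucarp/vip-up.sh (run when this host becomes master):
#   ip addr add 203.0.113.10/24 dev eth0
# /etc/ucarp/vip-down.sh (run when it loses mastership):
#   ip addr del 203.0.113.10/24 dev eth0
```

This is a sketch of the technique, not the author's exact configuration; the same VIP-moving trick answers the database-failover part of the original question too.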

I've had a couple of outages where two webservers died, and it was 100%
transparent.

------
jaddison
For higher-level sysadmin work you might look at SaltStack, Chef, or Puppet -
and there are more options. These let you orchestrate rollouts, updates, etc.
libcloud is also something you might be interested in.
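[Editor's note] For a flavour of what that orchestration looks like day to day, here is a hedged sketch of Salt's CLI (the state name "nginx" is hypothetical; this assumes a working salt master with connected minions):

```shell
salt '*' test.ping                 # check which minions respond
salt 'web*' state.apply nginx      # apply a hypothetical "nginx" state to web hosts
salt 'db*' pkg.install postgresql  # run a module function ad hoc on db hosts
```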

~~~
SEJeff
I'll second saltstack, but being one of the many co-maintainers of the
project, I'm biased :)

FYI: salt-cloud uses libcloud, so you can recursively build clouds to build
clouds using salt-cloud.

