
Common Server Setups For Your Web App - beigeotter
https://www.digitalocean.com/community/articles/5-common-server-setups-for-your-web-application
======
ntoshev
If you are just starting, you should have the simplest setup - everything on
one server - and scale it only when it becomes necessary. Premature
scalability adds complexity and slows down your iterations.

My setups usually consist of nginx serving static content and proxying
application requests (handling gzip, etc.). The data tier is initially
collapsed into the application, as described in
[http://www.underengineering.com/2014/05/22/DIY-
NoSql/](http://www.underengineering.com/2014/05/22/DIY-NoSql/). This
architecture allows very fast iteration while providing enough performance
headroom; it can serve 10k simple (CRUD) HTTP requests per second on a single
core.
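A minimal sketch of that kind of nginx front end, for anyone curious. The
hostnames, paths, and the upstream port here are placeholders, not taken from
the article:

```nginx
# Compress responses on the way out.
gzip on;
gzip_types text/css application/javascript application/json;

# The app server port is an assumption; adjust to your backend.
upstream app {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name example.com;

    # Static files served straight from disk (hypothetical path);
    # requests for /static/foo.css map to /var/www/myapp/static/foo.css.
    location /static/ {
        root /var/www/myapp;
        expires 7d;
    }

    # Everything else is proxied to the application.
    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```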

~~~
nilsbunger
I'm falling behind on security updates because I built an all-in-one box like
this. While I mostly agree with you, now I wish I had separated out the db on
day 1 into a private network so I could maintain a stateless app tier where I
can update the public-facing OS image with no downtime or risk.

~~~
yogo
I think rolling-release distros are your friend for these types of all-in-one
setups. Small weekly updates. Easy to test, too, because you only need one
test machine.

~~~
lsc
the one server vs many servers debate aside,

>I think rolling-release distros are your friend for these types of all-in-one
setups. Small weekly updates. Easy to test, too, because you only need one
test machine.

I think the important thing here is a distro that tests its changes well, and
one that doesn't force you into a major upgrade (where you have to change your
configs) very often.

Distros like Debian that want you to do a major upgrade every two years are, I
think, more difficult to deal with: if you have anything at all custom and the
config file format changed, the major upgrades are going to require work and
testing to move your configs over, even if the developers test perfectly (and
nothing is perfect).

I think RHEL/CentOS is best, assuming the latest RHEL/CentOS supports all the
packages you need in-distro. (If you need to step outside the distro repos,
that kind of defeats the point. Maintaining a package yourself on an ancient
distro gets old fast, and most of the smaller third-party repos don't put as
much effort into keeping the old package versions patched.)

That's the thing: sure, you have to format and re-install for a major upgrade,
but you have ten years before you have to worry about that.

------
estsauver
The one thing I really want from Digital Ocean is a guide that carefully
explains how to set up the "private network" piece of the equation.

The "orange box" that represents the private network in each of the examples
is taken for granted, but for someone coming from an application development
background that piece isn't trivial to build. EC2 security groups make that
sort of box incredibly easy to create, but DO doesn't have anything
comparable.

~~~
beigeotter
An article on setting up and effectively using the private network for what
you're describing is actually in our pipeline now. It will probably be live
this week or early next :)

You can find our most recently published tutorials in our community here:
[https://www.digitalocean.com/community/?filter=tutorials](https://www.digitalocean.com/community/?filter=tutorials)
or catch it in our twitter feed (@digitalocean)

~~~
ptr
Great! If that article could cover setting up custom hostnames without using
the hosts file (local DNS?), that'd be awesome. I don't like those ugly IP
addresses.
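For what it's worth, one common way to get friendly names on a private network
without editing every machine's hosts file is a small dnsmasq instance that
the other boxes use as their resolver. A sketch; the names, addresses, and the
upstream resolver below are made up:

```ini
# /etc/dnsmasq.conf -- answer DNS queries for internal names
host-record=db.internal,10.128.12.34
host-record=app1.internal,10.128.12.35

# Forward everything else to a public resolver
server=8.8.8.8
```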

------
coherentpony
That was super helpful. I think I finally understand how serving web
applications works. Thanks to whoever wrote this.

~~~
jacquesm
> I think I finally understand how serving web applications works

It's nice to see you got a lot out of the article, but this is hardly a
complete course on how the web works from the server side. It is more of a
quick guide to a number of common server set-ups for mid-sized web sites. If
you want to learn more about 'how web serving applications work', I suggest
you follow one of the how-to guides about setting up a web server of your own
and serving up a couple of pages. You won't need any extra hardware for this;
all the software is open source and won't cost you a dime. Depending on what
kind of operating system you normally use, you could start with any of these:

Windows:

[http://httpd.apache.org/docs/2.2/platform/windows.html](http://httpd.apache.org/docs/2.2/platform/windows.html)

OS/X:

[https://discussions.apple.com/docs/DOC-3083](https://discussions.apple.com/docs/DOC-3083)

Linux/Ubuntu:

[https://help.ubuntu.com/10.04/serverguide/httpd.html](https://help.ubuntu.com/10.04/serverguide/httpd.html)

best of luck!

~~~
coherentpony
>It's nice to see you got a lot out of the article, but this is hardly a
complete course on how the web works from the server side. It is more of a
quick guide to a number of common server set-ups for mid-sized web sites. If
you want to learn more about 'how web serving applications work', I suggest
you follow one of the how-to guides about setting up a web server of your own
and serving up a couple of pages.

I actually meant just the hardware aspect of the setup; sorry for the
confusion. That said, I'm still super interested in how the actual serving
works. The resources you've provided seem to be exactly what I'm looking for.
Thanks so much for providing those.

My experience is in HPC, where 'serving content' actually means 'sending data
to other nodes'. The upside is that in a compute cluster all the nodes are
usually in the same room, located very close together. There's still a lot of
networking involved in getting the nodes to communicate, but it's super
interesting to me to see how to scale things on the web, where nodes are not
necessarily even located in the same country! Having the DB and application
servers on different machines is a good example.

Anyway, sorry for the digression, and thanks again for the links. It'll be
bed-time reading for me :)

------
bttf
I really enjoy the community-driven articles/tutorials that DigitalOcean
provides. They have documentation for a lot of processes that are not readily
documented or still emerging.

~~~
beigeotter
Thanks so much! We really appreciate it and are always excited to cover new
topics. If you have any suggestions of what you'd like to see in our
community, I'd recommend posting them in the comments here:
[https://www.digitalocean.com/community/articles/digitalocean...](https://www.digitalocean.com/community/articles/digitalocean-
community-article-suggestions-and-ideas)

------
sz4kerto
I host all of my stuff on a single VPS instance in Docker/LXC containers. It
is reasonably easy to migrate things out if I need larger hardware, but it's
also very cheap.

Regarding scaling: a couple of years ago I ran a database on a single CPU core
(because of licensing issues). It stored 50M rows a day and also executed
various queries quite quickly. So I seriously doubt that most of us are going
to need large clusters.

~~~
Thaxll
50M/day on a VPS / single core? I'm having a hard time believing that...

~~~
sz4kerto
No, that wasn't a VPS. It's kdb.

[http://pietrowski.info/2012/12/kdb-high-performance-
column-o...](http://pietrowski.info/2012/12/kdb-high-performance-column-
oriented-designed-for-massive-datasets-database/)

"1.126 million inserts per second (single insert)"

------
cookerware
My current setup on DO; I would like some input.

The website is hosted on one droplet. One additional droplet per customer is
deployed through the Stripe and DO APIs.

DO lets you save a snapshot and load it onto a droplet. I have a snapshot that
is basically a copy of my 'software': a LAMP stack with an init script that
loads the webapp from a git repo.

Customers log in at username.mywebapp.com

The beauty of this is that I never have to worry about things breaking or
becoming a bottleneck. If one customer outgrows their droplet, they won't
affect other customers' resources. It scales linearly: new customer, new
droplet. I don't need to write crazy deployment scripts, although I use
paramiko to ssh into each server when I need to get my hands dirty.

The main website is mostly static content. I could host it on Amazon S3, but
I'm currently using Cloudflare.

Updating the product code requires me to restart the droplet instance.
However, I test things out on a separate staging droplet first. Once things
work there, I use the DO API to iterate through all the customer droplets and
restart each one.
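A rough sketch of what that fleet-wide restart can look like in Python. The
token and droplet IDs are placeholders; this assumes the DO v2 API's
per-droplet actions endpoint with a `reboot` action type:

```python
import json
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"


def reboot_request(droplet_id: int, token: str) -> urllib.request.Request:
    """Build the POST that asks the DO v2 API to reboot one droplet."""
    return urllib.request.Request(
        url=f"{API_BASE}/droplets/{droplet_id}/actions",
        data=json.dumps({"type": "reboot"}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def reboot_all(droplet_ids, token):
    """Fire a reboot action at every customer droplet, one by one."""
    for droplet_id in droplet_ids:
        req = reboot_request(droplet_id, token)
        with urllib.request.urlopen(req) as resp:  # actual network call
            resp.read()


# Example (not executed here): reboot_all([111, 222], "YOUR_TOKEN")
```

In practice you'd want to reboot in small batches rather than all at once, so
a bad deploy doesn't take every customer down simultaneously.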

~~~
ovi256
The obvious question is: why do a new webapp deployment per customer instead
of a multi-tenant app? Multi-tenancy requires more code in the webapp to
isolate accounts, but it lets you consolidate web servers.
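For context, that per-request isolation often starts with nothing more than
mapping the Host header to an account. A toy sketch; the domain is a
placeholder matching the username.mywebapp.com scheme above:

```python
from typing import Optional


def tenant_from_host(host: str, base_domain: str = "mywebapp.com") -> Optional[str]:
    """Map 'alice.mywebapp.com' -> 'alice'; return None for non-tenant hosts."""
    host = host.split(":")[0].lower()   # strip any :port suffix
    suffix = "." + base_domain
    if not host.endswith(suffix):
        return None
    return host[: -len(suffix)] or None


# Every query in the app is then scoped by this tenant id
# (e.g. WHERE account = :tenant) -- that's the extra code multi-tenancy needs.
```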

~~~
cookerware
I did it because a new webapp deployment costs nothing; there's no extra work
involved at all with DO, just attach the snapshot and an ssh key to a new
droplet. If something needs to change across the fleet, I use the DO API and
paramiko to ssh into each droplet and run new commands. If it's updating the
webapp across all customers, it's a matter of issuing a restart command to all
the droplets via the API.

~~~
xur17
Aren't you paying for hosting on each droplet, though? Consolidating would
save you that money, but I guess it wouldn't have a huge impact if the income
per customer is much larger than the cost of an additional droplet.

------
austinhutch
This is awesome! Great content for DigitalOcean to be pushing out, and I'm
probably exactly the audience they had in mind when they published it. I've
never gone beyond a shared hosting setup, but I've been curious to try my luck
at learning more of the stack by using the DO platform.

~~~
Volscio
Yeah, I've been googling for tutorials/configs/info on various deployment
setups, and DigitalOcean comes up more and more with solid guides. Thanks for
sharing the secrets of the somewhat arcane!

------
adventured
The effort D.O. puts into their community education is one of my favorite
things about them. The few times I've had problems with a droplet
configuration, inevitably someone had already posted a solution in the help
section.

------
sergiosgc
Wouldn't it be much better to teach the concept of horizontal scalability as
applied to the application stack? Your server is a stack of layers: a frontend
cache, a static content server, a dynamic content server, and a database. You
can horizontally scale each layer independently. Much simpler, and applicable
to different scenarios.

However, this approach won't give you a viral article title like "eight server
setups for your app" (replace eight with 2^n, where n is the layer count).

------
h1karu
Excellent write-up! Next I'd like to see an article on deployment. What if I
want my development team to be able to push code changes regularly to an app
cluster via a git-based workflow, and have these deploys all occur with zero
downtime? I think an article demonstrating how to use modern deployment tools
such as Ansible or Docker to achieve those goals in a commonly used
programming environment such as Ruby would lure quite a few developers away
from PaaS towards something like Digital Ocean.

For now, though, those tasks are still "hard", which means that for many
developers Digital Ocean is still hard to use relative to other emerging
platforms such as Red Hat's OpenShift or Heroku. I know there are many shops
that would love to jump ship from PaaS to a less expensive platform, but they
feel the cost of rolling their own zero-downtime clustered deployment
infrastructure is not worth the savings.

I suspect that if IaaS providers dedicated resources to producing more
educational material for developers, with the aim of demonstrating how to
achieve these deployment objectives on all the popular platforms using modern
open-source tools, then loads of PaaS developers would jump ship.

For example: how can I use Ansible to instantiate 5 new droplets and
automatically install a load-balancing server on one of them, while setting up
the Ruby on Rails platform and ganglia on the remaining ones? How can I run a
load test suite against the newly created cluster, interpret the results, and
then tear the whole thing back down, all with a few keystrokes? How could this
same script allow me to add additional nodes, and how does the resulting
system allow for the deployment of fresh application code? How can it be
improved to handle logging and backup?
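To make the shape of that concrete: once the droplets exist (created via the
DO API or a cloud module), the provisioning half is an ordinary playbook over
inventory groups. A rough sketch; the group names, package names, and template
file are placeholders, not a tested setup:

```yaml
# site.yml -- split the cluster into roles by inventory group
- hosts: loadbalancers
  become: true
  tasks:
    - name: Install HAProxy
      apt: { name: haproxy, state: present, update_cache: yes }
    - name: Push a balancer config listing the app droplets
      template: { src: haproxy.cfg.j2, dest: /etc/haproxy/haproxy.cfg }
      notify: restart haproxy
  handlers:
    - name: restart haproxy
      service: { name: haproxy, state: restarted }

- hosts: appservers
  become: true
  tasks:
    - name: Install the Rails runtime dependencies
      apt: { name: [ruby-full, build-essential], state: present }

- hosts: monitoring
  become: true
  tasks:
    - name: Install the ganglia agent
      apt: { name: ganglia-monitor, state: present }
```

The zero-downtime part is then draining one app server at a time out of the
balancer, deploying, and re-adding it, which is exactly the orchestration the
parent is asking someone to write up.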

I know that it's possible to build a deployment system that answers the above
questions in less than a few hundred lines of Ansible + Ruby, so I imagine it
could be explained in a short series of blog posts, but you would probably
need to hire a well-paid dev-ops guru to produce such documentation. I bet if
you ask around on HN...

p.s. keep an eye on these:

[http://deis.io/](http://deis.io/) [https://flynn.io/](https://flynn.io/)

^ If either of these becomes production-quality software, it could be a game
changer for Digital Ocean.

------
pyfish
Thanks for the write-up. It's the perfect time for me to be reminded about
starting simple and changing the architecture as needed. I prematurely
optimized one project in the past. It was painful, and after all that pain the
mythical millions of unique visits never arrived.

------
falcolas
There's virtually no mention of how the different server setups affect
availability, which is very unfortunate. Availability and disaster recovery
are two things I think are significantly more important than scaling, and
your choice of server setup will affect both.

~~~
Numberwang
What do you mean by availability?

~~~
roryokane
I think by availability he means uptime: whether your site is always
available, or there are times when your site is down, and how long those
outages last.
~~~
falcolas
Yup: Uptime. In the face of a box rebooting unexpectedly because it's a VPS.
In the face of a data center experiencing connectivity issues. In the face of
Hurricane Sandy's big brother.

------
occam65
As the "Startup Standards" begin to take shape, these guides prove extremely
useful for the newcomers out there. Sure, in 6-12 months a guide may become a
bit dated, but if kept up to date they can be a powerful tool for a new
company.

------
CSDude
It would be very helpful if DigitalOcean sold a load balancer too, as Linode
does, because the bandwidth limits apply per Droplet, which makes DigitalOcean
awkward to use without one. Of course, we can use Cloudflare or similar, but
it's still a real need.

------
sandGorgon
Does anyone know what a bare-minimum monitoring setup looks like for a single
server running nginx, Postgres, and Rails? I'm far too intimidated by Nagios
to do anything significant.

~~~
incision
If I'm reading correctly that you just want something simple and completely
hands-off, I'd suggest New Relic: absolutely trivial to set up, and free for
basic server monitoring.

1: [http://newrelic.com/](http://newrelic.com/)
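And if even that feels like too much, a bare-bones liveness check is just "can
I open the ports", run from cron with an alert on failure. A sketch; the Rails
port and localhost addresses are assumptions about the parent's setup:

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# One check per service on a single nginx + Postgres + Rails box.
CHECKS = {
    "nginx": ("127.0.0.1", 80),
    "postgres": ("127.0.0.1", 5432),   # default Postgres port
    "rails app": ("127.0.0.1", 3000),  # assumes the default Rails port
}


def run_checks():
    """Map each service name to whether its port answered."""
    return {name: port_open(host, port) for name, (host, port) in CHECKS.items()}


# Run from cron and mail/ping yourself whenever any value is False.
```

It only proves the port answers, not that the service is healthy, but it
catches the common "process died" case with no monitoring stack at all.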

------
coreymgilmore
I propose an alteration to the typical LAMP stack: replace Apache with nginx
and MySQL with MongoDB. Personally, the reduced resource use of nginx is nice,
since I can run on a smaller "box". MongoDB is just a choice that depends on
the data set, but it does allow for sharding out horizontally without too much
effort.

------
Jordan15
But GAP only supports some languages... you can't compare

------
derengel
Nice guide. Maybe a setup including Redis/memcached would be useful too.

