
Why we moved away from AWS - karli
http://blippex.github.io/updates/2013/09/23/why-we-moved-away-from-aws.html
======
eterm
EC2 was designed for elastic computing: on-demand, high-computation (low-
memory) workloads that scale elastically.

With that in mind, pure EC2 is a terrible choice for general web application
hosting.

If you're using the complete AWS set (S3, SimpleDB, etc.) then it makes more
sense, as stuff like DB hosting can be pushed out to the services designed for
it; but if you're gonna fire up a Windows box, stick SQL Server on there and
use it as a general web app hosting environment, then it is a terrible choice.

Unfortunately, it's a choice that still appears to be easy for management to
justify: it doesn't require a server admin to use, and it doesn't require
mirroring or backups because _obviously_ Amazon EBS volumes can't die, since
they're in the cloud. The extra cost and lower performance are obviously just
an OK side effect of these benefits.

(Yes, I'm being sarcastic here, but these are all arguments I've seen made.)

~~~
InclinedPlane
I know this is a tangent, but it's worth mentioning that backups and
redundancy are not the same thing. There have been a few high-profile
ventures (including businesses) that had to shut down because they lost all of
their redundant data in some way. Redundancy doesn't save you from malicious
people who've gained access to your systems. It doesn't save you from errors
(oops, dropped the wrong DB; thankfully it's .... replicated virtually
instantly across all RAID volumes and clustered DB instances). It doesn't
save you from the one building housing all your data burning down or getting
flooded. It doesn't save you from software bugs (either yours, in firmware,
in the kernel, in the DB, etc.) corrupting data.

~~~
toomuchtodo
This is why you make backups from your physical hardware to onsite storage,
but also replicate those backups to Amazon S3 and enable MFA Delete, so that
multi-factor authentication is required to complete a delete.

------
mechanical_fish
The $750/month savings cited here is not real†, but for the sake of argument
let's pretend it is.

Is $750/month a significant amount of money for the company? In the USA, this
is perhaps the cost of _one_ engineer-day, and one could raise a year's worth
of this money by successfully applying for a single additional credit card.
(Not that I recommend bootstrapping with credit cards. But it has been done.)

Of course, it may be the case that a company could improve customer
satisfaction, and therefore revenue, by double-digits by improving performance
on optimized hardware. But if this is the case, where is the discussion of
that? Where is the data: A/B testing, customer satisfaction, churn rate,
monthly revenue? They should be front and center.

† Without getting into the reduced redundancy, the additional complexity of
hosting multiple unrelated services on each instance, the "additional
maintenance" referred to in the post, the lack of server capacity to cover
emergencies and staging and load testing and continuous integration, and the
risk involved in switching infrastructure out from under a _working_ business-
critical application... any estimate which doesn't include the cost of
engineering time is wrong. All changes have engineering costs. Just _talking_
about this idea is costing engineering time.
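The footnote's point can be made concrete with a quick sketch (the per-day engineer cost and the migration-effort figure are illustrative assumptions, not numbers from the thread):

```python
# Back-of-the-envelope: how many months of hosting savings does it take
# to repay the one-off engineering cost of a migration? All inputs are
# illustrative assumptions.

def breakeven_months(monthly_savings, engineer_day_cost, migration_days):
    """Months of savings needed to repay the engineering time spent."""
    return (engineer_day_cost * migration_days) / monthly_savings

# Saving $750/month, at $750 per engineer-day, with ~20 days of
# migration work and fallout:
print(breakeven_months(750, 750, 20))  # 20.0 months to break even
```

At those (made-up) numbers the migration takes well over a year just to pay for itself, which is exactly the footnote's point.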

~~~
consultant23522
Yes, $750 is about one engineer-day. Someone is now going to be spending at
least a full day per month managing your new hardware, running security
patches, etc. Even if your sysadmin is cheaper than an "engineer", it's not
going to be cheap.

~~~
nasalgoat
You realize you need to do all that on the EC2 instances as well, right?

This is the common disconnect I see when people tout The Cloud as a solution
to having system administrators - that somehow that instance of Linux running
in EC2 doesn't require the same maintenance as a physical one. It does.

~~~
vacri
No, you don't really. You don't need to spend time considering and researching
different load balancers to see which one is best for your use-case, running
through your company's purchasing process (in itself a big project), waiting
out the lead time, or doing the physical install, configuration, and
monitoring. If you want an AWS load balancer, click EC2 > Load Balancers and
configure one. From "Hey, I'd like a load balancer" to having a functioning,
active load balancer in _literally_ less than five minutes. No jaunt to the
colo necessary. And that's just one item - rinse and repeat for a pile of
other aspects as well.

It's not true that AWS gets rid of the need for sysadmins, but it's absolutely
not true that you do all the same sysadmin tasks on a cloud service.

~~~
regularfry
This is why we have managed hosting. You can pay someone else to do all that,
on a real, physical server, on a network they manage, and have it still come
out as much cheaper than AWS. Yes, the turnaround time might be more than 5
minutes. Or, depending on who you go to, it might be less.

------
notacoward
AWS is just not very cost-effective in terms of performance per dollar,
especially when it comes to storage performance (my own specialty). It only
appears cost-effective because of the hourly billing and a human inability to
intuitively compare quantities across nearly three orders of magnitude (hours
vs. months). Now that there are hundreds of vendors with hourly billing, as
there have been for a while, it's easy to see how much AWS sucks in terms of
cycles, packets, or disk writes per dollar. They still have the most advanced
feature set, they have by far the best public-cloud network I've used (out of
nearly twenty), and there are still good reasons to use them for some things,
but don't go there to reduce permanent-infrastructure costs.
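The normalization being described is trivial but easy to skip; a minimal sketch (the hourly rates are made-up placeholders, not quotes from any provider):

```python
# An always-on instance runs ~730 hours per month, so hourly sticker
# prices understate monthly cost by nearly three orders of magnitude.
# The rates below are made-up placeholders.

HOURS_PER_MONTH = 730  # ~24 * 365 / 12

def monthly_cost(hourly_rate):
    """Normalize an hourly price to an always-on monthly price."""
    return hourly_rate * HOURS_PER_MONTH

print(round(monthly_cost(0.48), 2))  # 350.4 -- a "cheap-sounding" $0.48/hr
print(round(monthly_cost(0.12), 2))  # 87.6  -- vs a modest $0.12/hr rate
```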

~~~
Spooky23
I just completed a project at an organization-owned datacenter where we wasted
4 months on needless BS to deploy about 12 servers.

My team's time is easily worth $500-600/hr, so we easily wasted $300k. So the
fact that my internal datacenter provider can give me a VM that costs 20% of
what EC2 charges or disk that is more performant at a similar cost is
interesting trivia, but isn't saving money.

~~~
sbov
So your whole team spent 4 full time months to get 12 servers deployed? That
organization sounds rather degenerate.

We colocate at a datacenter and can get cabinets pretty easily. We've done
this for over 10 years now. When we aren't growing or shrinking I spend about
an extra 4 hours per month because we have physical servers rather than use
something like AWS.

12 servers would probably take us about an extra six person-hours to get up
and running vs AWS. If we needed a new cabinet it might take a couple of days,
but we aren't actively working during that time - we put in a request, and
they tell us when it's ready for our use. We don't sit and twiddle our thumbs
while this happens, and we do it before the development side of the project is
completed.

We've talked about AWS before for the redundancy and convenience but the price
and the extra headache of dealing with the inconsistent performance never made
sense for our use.

~~~
lotyrin
> That organization sounds rather degenerate.

That may be true, but it doesn't seem that uncommon.

------
hbbio
Just so you know, OVH has just _halted_ its dedicated server offer.

TL;DR from today's French blog post:

Our offers were so competitive that too many customers wanted them, and we're
losing money if we don't keep customers for at least two years. Sadly, they
migrate to new offers before that. We're halting dedicated servers until we
figure out what to do.

[edit] Link:
[http://www.ovh.com/fr/a1186.pourquoi_160sold_out160](http://www.ovh.com/fr/a1186.pourquoi_160sold_out160)

~~~
eterm
Discussion here:
[https://news.ycombinator.com/item?id=6399569](https://news.ycombinator.com/item?id=6399569)

In summary: their main problem was having no "installation fee", meaning the
barrier to hopping to a newer server every couple of years just wasn't there.
If their new offerings were priced competitively to attract new customers,
they would also be priced similarly to how the older hardware was priced when
it was sold a couple of years ago, so anyone on the older hardware would jump
to new boxes.

------
leokun
If you move to Rackspace, stay away from DWS, the Dallas datacenter. It's
over-booked, the network has constant issues, VMs on the same host machine as
yours can cause your VM network issues; the list of problems never stops.

We recently switched from Rackspace to Azure, but we're still evaluating
whether it will work for us long term. Azure's issues are that you have to
request core-count increases, and you can't capture an image of a VM without
shutting it down. Also, you can't just give your VM a regular SSH public key;
you have to generate SSL-like certs. Also weird: a lot of the documentation is
only for the Windows side of things, even though you can get some of it
working on Linux, and doing so means installing an SDK even though you might
not be installing an application, just running your own stuff on a VM.

~~~
nemesisj
I'd stay away from Rackspace London as well. Horrible horrible experience.

1\. Noisy neighbours impact you all the time

2\. The staff are really poorly trained and don't know how to troubleshoot.

3\. They're expensive.

4\. Their control panels are really bad, constantly being updated and
migrated, and are just a complete mess.

5\. They've had several major network outages that have lasted for quite a
long time (hours) that they blame on "upstream routing issues" despite
supposedly having multiple redundant upstream carriers.

6\. They'll randomly reboot your box without notice. If you open a ticket
there's an almost certain chance they'll just reboot your box no matter how
much you ask them not to.

7\. The IO on the boxes is really bad.

8\. They don't proactively monitor any of their servers, and their "new fancy"
monitoring product only goes down to 5 minute resolution, so it's worse than
Pingdom, for example.

~~~
russell_h
Cloud Monitoring (disclaimer: I work on it) can actually be configured to poll
as often as every 30 seconds from each location, or just every 30 seconds in
the case of agent checks. I believe we default to 1 minute intervals, but if
you want to change it you can browse to your check in our Control Panel and
click the little edit icon where it says "Period: 60 seconds".

~~~
nemesisj
This is either brand new (within the last few weeks) or your coworkers don't
know anything about it. The whole monitoring thing has been a farce for a year
or more, as it's been coming real soon now, then in beta, then severely
limited, then costs money, etc.

------
rb2k_
AWS isn't really a solution for people trying to run a "small" project on a
fixed amount of servers 24/7.

It's great if you want to be able to:

\- provision lots of machines without delays

\- launch and terminate new instances to cover load spikes

\- do geo-redundant failover (aka: a datacenter in Europe, Australia, the US,
...)

\- have 'plug and play' components like load balancers (ELB), storage (S3),
databases (RDS), queueing services, ...

\- ...

Amazon provides a lot of things that cheaper solutions will have a hard time
achieving (e.g. the backup space redundancy that OVH provides will probably be
quite a bit less 'secure' than S3/Glacier).

That being said, these premium features are something that a project might
simply not need. We run some of our jenkins build slaves on OVH. We don't need
to launch new ones all that often and the bang for the buck makes them very
much worth considering.

~~~
sxcurry
I'm running a small project on a fixed server 24/7 and AWS makes sense for me.
Why? I'm a one man team supporting a research project. I have no ability to
self host. I have no time to look around at a lot of options and trying to
figure out all the details of every offering. I need a server that has good
uptime and good performance. Most of all, telling my users that we're hosted
on Amazon makes them feel secure - it isn't going anywhere. Believe me, for a
certain class of users, this is important.

~~~
macspoofing
A dedicated host would most certainly be a better (and cheaper) option for
you, but hey, if you don't have time to look around, I suppose it's a
reasonable trade-off.

>I need a server that has good uptime and good performance.

Then a single EC2 instance is not a good option for you. Terrible up-time, and
terrible performance.

~~~
sxcurry
Can you supply more details - maybe I am missing something. My EC2 instance
has been up for 249 days now and my node.js webserver instance seems very
responsive. I still think it's a reasonable trade-off in terms of cost. My
time is expensive, and to be honest even a few hundred dollars a month extra
in server cost is not important. This is a research project, not a commercial
website, so my needs may be different than most.

~~~
frakkingcylons
I may not be hitting the points that macspoofing was trying to make, but at
least in my experience, you can get much better value with a different host
(like DigitalOcean or Linode) where the setup time is minimal and the
performance benefits are substantial. However, if your priority isn't
performance/dollar, then the trade-offs are subtle and insubstantial and EC2
is fine.

------
kyledrake
NeoCities is currently using OVH. We were using Hetzner but we ran into issues
when our server was the victim of a DDoS attack, and Hetzner responded by
null-routing our server's IP address for a few days. OVH has better DDoS
mitigation strategies (supposedly), so that's why we're switching.

I've used AWS before in corporate work, and I have to say I was very
unimpressed with it. The prices for what you get are exorbitantly high. I've
heard people say "they are affordable for corporate standards", but my
reaction to that is just that their previous hosts were even worse about it.
Every hosting solution I have had other than AWS has been cheaper.

More important to me than price, though, is the knowledge. I really don't like
that AWS is a "black box" of mystery meat. I don't know how most of the
systems are implemented under the hood, which means I can't predict the
failure points of what I'm implementing. The only way I could piece together
the capabilities of AWS systems was through anecdotal information in blog
posts. We would have servers fail and be given no explanation as to why. And
many of the interfaces are proprietary, which means that moving to an
alternative is not an option. Not to mention the APIs are not particularly
stellar (a lot of XML). The only options for persistent storage are network
drives and local disks that go away on shutdown, which is not a particularly
good choice of options.

With OVH, I get a server. I know what a server is, how to back it up, and what
its fail points are. If OVH does something I don't agree with, I can move to
another company and have exactly the same environment.

I'm not saying AWS is useless (again, I've used it for corporate environments
before), but it's hard to justify the high cost when you're on a budget,
especially when you can't even determine if the tradeoff is worth it.

------
jakejake
My current startup is using AWS for everything and I have to admit I was eager
to get my hands on it since it seems to me that familiarity with AWS will be a
good thing for me personally and professionally.

I almost get the sense that people are signing up for AWS because - well, I'm
not positive about this, but it seems like it's trendy. Possibly some startups
don't realize AWS is just providing you with pre-installed systems that you
can easily install yourself? I don't think it's a bad decision necessarily,
because depending on your size you may not want to devote any time to
configuring servers. Maybe some people who have made that choice could set me
straight?

My gut is telling me that, for my current situation, the main benefit of AWS -
the automatic scaling - will be quite expensive by the time we actually do
need to scale. So we will probably be looking elsewhere for hosting at some
point in the future. Much like the article suggests.

------
mrinterweb
What about OpenStack? OpenStack seems like the best of both worlds with being
able to manage both your own hardware as well as burst to your OpenStack
host's resources on demand. There are multiple OpenStack providers like
Rackspace, HP, and many more. This means that if you don't like one provider,
you can easily move to another OpenStack provider without being locked into 15
different AWS services. You may need to schlep your physical servers to a
different datacenter, but that is still easier than decoupling your service
from AWS.

From experience, I have seen that the price of performance on AWS is much
higher than at companies that buy their own hardware. Knowing what resources
your service needs as a baseline can be helpful when picking which machines
should be reserved instances, but still, you may as well just buy your own
hardware if you want the best performance/price.

------
bowlofpetunias
AWS is a great place to start if you're not yet sure what resources and scale
you need. You can play with various solutions and easily scale up.

It makes developing so much more efficient when you don't have to make major
choices up front, and can buy yourself some breathing room by throwing
temporary resources at most performance issues while you review your
architecture.

That either stabilizes to a point where you have an architecture that you can
implement more cheaply and efficiently using more traditional hosting
solutions, or you come to a point where you really need AWS's flexibility.

One caveat, though: don't make your architecture too dependent on AWS-
specific services until you are 100% sure AWS is the right choice for the long
term.

------
hashtree
Compared to custom colocated clouds, you scale, code, and build your stack
completely differently. I could not do half of what I do under any PaaS/SaaS.

I avoid disk at all costs (using amounts of RAM nearly unattainable on
PaaS/SaaS); if disks are hit they must be SSDs. I treat everything immutably,
use concurrent/distributed computing, and assume hardware is plentiful
(192+GB ECC, 24+ new Xeon cores, etc.). I scale completely differently than
most. They really get you on RAM; I can build whole servers for what a month
of PaaS/SaaS might cost.

------
dkersten
I often hear that the best way to use AWS is to host your 24/7 stuff elsewhere
and use AWS for the spikes. This makes a lot of sense, but I always wonder
what the recommended (i.e. most cost-effective, especially in regards to
bandwidth costs) place to host the 24/7 stuff is? For example, moving a ton of
data between EC2 and S3 is free (for bandwidth; ignoring request costs), but
moving 10TB out costs $0.12/GB, which seems quite costly...
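For scale, a minimal sketch of that flat-rate egress math, using the $0.12/GB figure quoted above (real AWS transfer pricing is tiered and has changed over time):

```python
# Flat-rate outbound transfer cost at the per-GB rate quoted in the
# comment; this only illustrates the scale of the bill.

def egress_cost(gb, rate_per_gb=0.12):
    """Outbound transfer cost in dollars at a flat per-GB rate."""
    return gb * rate_per_gb

print(round(egress_cost(10_000)))  # 1200 -- 10 TB out costs ~$1,200
```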

I guess the sweet spot is to use external hosting for your web apps and such
and AWS for any large spike-prone batch processing: moving data into S3 is
free (though obviously moving data out of wherever else you're hosting
probably isn't), use EC2 to process it (possibly on spot instances!) and then
move the results (which are much smaller than the raw data for a lot of use
cases) back to the 24/7 hosts?

Though my question still remains: where do HNers recommend to host these
servers knowing that AWS will be used to pick up the slack and handle
irregular/unpredictable workloads?

------
eminh
I currently spend ~$2000 on Softlayer for six servers and use about 30TB of
bandwidth. On AWS I would have paid more just for that bandwidth.
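A rough sanity check of that claim (the tier boundaries and rates below approximate AWS's 2013-era published egress rates and are assumptions, not figures from the comment):

```python
# Price 30 TB of monthly egress on a tiered schedule.
# Tiers approximate 2013-era AWS published rates; they are assumptions.

TIERS = [            # (GB in tier, $ per GB)
    (10_000, 0.12),  # first 10 TB
    (40_000, 0.09),  # next 40 TB
]

def tiered_egress_cost(gb):
    """Bill `gb` of outbound transfer against the tier schedule."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in TIERS:
        used = min(remaining, tier_gb)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(round(tiered_egress_cost(30_000)))  # 3000 -- above the whole $2000 bill
```

Even with the cheaper second tier, bandwidth alone would exceed the entire Softlayer invoice.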

~~~
hashtree
And you can pay much less than half that via custom server builds and
colocation. It is just a matter of how far down the chain you want to go,
given your expertise and sensitivity to hardware costs.

~~~
cbg0
He doesn't actually have to keep replacement parts in the datacenter, or have
staff close to the datacenter to go and perform replacements or new installs,
or worse, pay >$100/hr for remote hands with colo.

Over time it's certainly more expensive to rent, but you get to cancel and
move on to better hardware when it comes out, without having to worry about
re-purposing or selling old servers.

~~~
nasalgoat
I don't keep hardware spares for my 300+ server infrastructure, as our
hardware provider has 24-hour turnaround on warranty replacements.

As for re-purposing, I have tons of uses for older hardware to do background
computation or other jobs. I suspect I can extend the lifetime to 5+ years on
most of it, which is quite good in my opinion. You just need to design your
system with modularity in mind, which you should be doing regardless of your
hosting choices.

------
himakara
Nice post. It is important to note that these decisions tend to be cyclical.
As start-ups go through various stages of their life cycle, PaaS/IaaS
providers update their offerings, and technologies mature or are invented, the
appeal may shift between these options. I think that makes it even more
important to build your technology stack in a way that is:

1) easy to deploy, migrate and update (using standard deployment technologies)
and 2) least dependent on a specific vendor (GAE ;)

------
manishsharan
OVH is not accepting any new orders. They claim to be sold out of nearly all
server types.

And that in a nutshell explains why AWS is a safer choice.

~~~
adventured
There are numerous other very good dedicated hosts that are alternatives to
OVH. The pricing will be slightly higher, but OVH is dirt cheap to begin with
compared to AWS. 1TB of transfer with Amazon will cost you almost as much as a
nice E3 v2/v3 Xeon server with 16GB to 32GB of memory and 10TB to 33TB of
transfer.

------
dergachev
OVH actually supports running the Proxmox virtualization distro on their
servers. That means you can easily get a 32GB dedicated server with raid1 SSDs
(around $100/month here in Canada) and spin up VMs to your heart's content.
Proxmox also supports running your host nodes in a cluster, which allows for
live migration. And if the math isn't already ridiculous, keep in mind that
all the running OpenVZ containers (which proxmox supports) actually share a
single kernel, and thus share a good chunk of RAM.

That being said, OVH is notorious for lack of support, and my experience so
far (6 months) suggests that using them is not without risk. So at the moment
I'm automating everything so that if an OVH engineer does decide to
accidentally pull the plug on my server(s), I can failover in an hour or two.

~~~
canterburry
While that certainly seems like a good idea on the surface, it creates a
horrible single point of failure for your entire setup: you'll have zero
failover in case of host failure. I certainly hope you get more than one host
and distribute all your VMs across them.

------
programminggeek
Amazon's win is elasticity, moving your servers up and down often. It's not as
big of a win if you have a known quantity of resource utilization over a long
time period.

Actually, there is a win to be had there too. If you have a program that is
smart enough about dealing with load - spinning instances up and down and
managing cost across reserved, on-demand, and spot instances - you can save a
lot of money.

That kind of optimization is tricky so it's a lot easier to just switch
providers like the OP.
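A toy version of that optimization, with invented rates and an invented demand profile: bill reserved capacity around the clock, cover spikes on demand, and pick the cheapest reservation count.

```python
# Toy reservation optimizer: reserved capacity is billed every hour
# whether used or not; anything above it runs on demand. The rates and
# demand profile are invented for illustration.

def mixed_cost(demand_by_hour, reserved_count, ri_rate, od_rate):
    """Total cost of covering a demand profile with a fixed RI count."""
    reserved = reserved_count * ri_rate * len(demand_by_hour)
    on_demand = sum(max(0, d - reserved_count) * od_rate
                    for d in demand_by_hour)
    return reserved + on_demand

def best_reservation(demand_by_hour, ri_rate, od_rate):
    """Cheapest number of instances to reserve for this profile."""
    return min(range(max(demand_by_hour) + 1),
               key=lambda n: mixed_cost(demand_by_hour, n, ri_rate, od_rate))

# A day with a baseline of 4 instances and a 6-hour spike to 10:
demand = [4] * 18 + [10] * 6
print(best_reservation(demand, ri_rate=0.06, od_rate=0.10))  # 4
```

Reserving the baseline and bursting for the spike beats both all-reserved and all-on-demand here; real spot pricing would add a third, variable rate to the mix.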

------
ksec
1\. A correction to that post: there aren't _MANY_ providers at around the
same price. He said Hetzner, and that is about the ONLY other provider at the
same price. And in many cases OVH offers better value than Hetzner.

2\. The problem the post mentions about OVH not being elastic: that is simply
true of every other dedicated provider. (Actually, StormOnDemand offers
dedicated at per-minute pricing.) But OVH should have their Public Cloud
ready in October, which means you get a hybrid of cloud and dedicated.

~~~
karli
There are more providers, like
[http://www.redstation.com/](http://www.redstation.com/), but we only have
experience with Hetzner & OVH, so I can't say anything about the other ones.

------
adventured
I've always found AWS's bandwidth to be by far the most financially obnoxious
aspect. $1,200 for just 10TB of bandwidth. You can get far more than that as
standard with any number of tremendous dedicated hosts on a $150 box. Digital
Ocean charges a mere $0.02/GB for overages, by comparison.

I don't mind paying a premium for the easy systems and integration
capabilities that AWS makes possible, but paying such extreme rates for
bandwidth (when Amazon no doubt pays next to nothing per gb of bandwidth), is
a cost too far.

------
icoder
I think these are good points! I've been held back by AWS prices as well;
especially during bootstrapping they are rather high.

The downside you mention at the end, regarding setup time: we use CloudVPS, a
Dutch-based company that keeps upping its service in the direction of AWS
(currently, when your billing status is OK, new VPSes are set up without
human interaction; not milliseconds, but still fast enough for most use
cases, and new customers are running a free trial within a working day or
so).

------
tutacano
AWS was really cool back in 2007, but the truth is their pricing has not come
down in line with the decreasing cost of computing over the years, and now
it's pretty expensive.

------
lazyant
Another comparison between AWS and VPS hosting. AWS is a Lego with many
pieces, if you just use one piece (EC2) you may be better off with the cheaper
alternatives.

~~~
eterm
This isn't even comparing AWS and other VPS, it's comparing EC2 with a
dedicated server.

But actually from what I've seen in the wild, a lot of people just use EC2
without the rest of AWS for just general server hosting, so it's a useful
reminder not to do this unless you don't care about the bottom line. (And who
doesn't?)

------
totallymike
This sounds quite a bit like the way you're _supposed_ to use AWS--you spike
out your services quickly, figure out how and where you need to grow, and then
move to a different service that provides that at a cost-effective level.

I can't imagine building a complete business model around AWS, but using it to
begin the growth period seems reasonable.

------
zerop
I am planning to move from AWS to Linode, mainly because of performance. My
app is CPU-intensive, and I think for such apps you need a high-end EC2
instance. I tried small and medium instances and found them quite slow.

With Linode's 8-core small instances, I could handle 2-3 times the traffic.
However, from a management perspective, AWS rules.

~~~
threeseed
If your app is CPU intensive then why wouldn't you look at dedicated ?

Switching to Linode is always a terrible idea considering how disgraceful
their security and business practices are.

~~~
bloopletech
I use linode as well and, given that I follow the industry at least as much as
the average HN user, I'm very surprised I haven't heard of these 'disgraceful'
practices.

Could you please elaborate?

------
jpalioto
>> ... or move it to your own server as we did.

I'm curious ... have you factored in your power costs? People costs (or
opportunity costs if your existing staff is re-allocated to server admin
tasks)? Additional cost of space for your on-prem setup? Have you factored in
the cost of potential downtime? Single points of failure?

------
LanceH
There is a dead spot between using EC2 on demand and paying for the 3 year
reserved instance, both of which I've found to be practical.

At both ends of that spectrum, however, I've found the pricing to be fairly
reasonable. It just might not work for a startup.

~~~
oijaf888
How is the 3 year reserved instance practical given Amazon tends to cut prices
significantly in a 3 year span? I've seen 1 year terms make sense but never 3
year.
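That trade-off can be sketched numerically; the hourly rates and the annual price-cut percentage below are invented assumptions, not Amazon's actual pricing:

```python
# Compare one 3-year reserved term against three successive 1-year
# terms whose rate drops each year. All numbers are invented.

HOURS_PER_YEAR = 8760

def three_year_total(rate_3yr):
    """Cost of locking in one effective hourly rate for 3 years."""
    return rate_3yr * HOURS_PER_YEAR * 3

def rolling_one_year_total(rate_1yr, annual_cut):
    """Cost of three 1-year terms with the rate cut every year."""
    return sum(rate_1yr * (1 - annual_cut) ** year * HOURS_PER_YEAR
               for year in range(3))

# 3-year effective rate $0.050/hr vs 1-year $0.060/hr with 20% yearly cuts:
print(round(three_year_total(0.050)))              # 1314
print(round(rolling_one_year_total(0.060, 0.20)))  # 1282
```

With aggressive enough price cuts the shorter terms win despite the higher sticker rate, which is the worry here; with smaller cuts (say 10-15%) the 3-year lock-in comes out ahead.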

~~~
vacri
Talking with our account manager, he mentions that the 1-year term is what
most people go for anyway - you won't get caught short with long-term price
drops, and you have more flexibility when business demands change.
Overprovisioned capacity is less painful when there's only 6 months left
rather than 30 months...

------
solvemenow
>there are also downsides when moving it to your server, more system
administration, you have to build your own firewall, take care of security &
backup, et

Startup idea right there. But then if I thought of it so quickly, somebody
probably already does this.

~~~
geichel
Yea, we have a global public IaaS cloud that puts a real Cisco firewall / load
balancer in front of your subnet(s):
[https://nacloud.dimensiondata.com/](https://nacloud.dimensiondata.com/)

------
aquark
Does anyone have any experience with OVH's dedicated cloud offering?

I'm looking at this as an option vs a small AWS deployment. Seems to offer a
lot of the flexibility of virtualization at a much better price/performance
point than AWS.

------
zobzu
When the company gets big, the best deal is.. surprise.. running your own DC
with an AWS-like system for the devs. Much cheaper, also much faster..

Of course, using old school deployment is a mistake (slow, pisses off devs,
etc.)

------
anthony_barker
Does anyone offer the equivalent of AWS Security Groups? Does anyone offer
free intrusion detection scanning? For me, security groups are a killer
feature.

------
ffrryuu
AWS is ridiculously expensive. The startup I was in was spending like $100,000
a month on it...

~~~
mnbvcxza
Way to give us no useful info.

