With that in mind, pure EC2 is a terrible choice for general web application hosting.
If you're using the complete AWS suite (S3, SimpleDB, etc.) then it makes more sense, since things like database hosting can be pushed out to the services designed for them. But if you're gonna fire up a Windows box, stick SQL Server on it and use it as a general web app hosting environment, then it is a terrible choice.
Unfortunately, it's a choice that still appears to be easy for management to justify:
It doesn't require a server admin, and it doesn't require mirroring or backups, because obviously Amazon EBS volumes can't die; they're in the cloud. The extra cost and lower performance are obviously just an acceptable side effect of these benefits.
(Yes, I'm being sarcastic here, but these are all arguments I've seen made.)
As I said, we really miss the simplicity of AWS: one mouse click and you have a load balancer, etc.
PS: trust me, AWS EBS volumes can die, and it's a pain! :)
You might not even need to spawn EC2 instances very often; many sites have daily variations that are too small for it to really be worth it. If your hosting is cheap, spawning EC2 instances for more than 6-8 hours per day may already be less cost-effective than renting more servers on a monthly contract. But just having the ability might make the difference between aiming for a peak utilization of, say, 50% of your servers (to absorb unusual peaks or server failures) and aiming for a peak utilization of 90%+.
That can make a huge difference in cost.
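To make that concrete, here's a back-of-the-envelope comparison; every number below is invented, purely to illustrate the utilization argument:

    # All prices and counts are hypothetical.
    peak_servers = 45          # servers needed at peak load
    dedicated_monthly = 100.0  # $/month per dedicated server
    ec2_hourly = 0.50          # $/hour for a comparable EC2 instance

    # Plan A: dedicated only, sized for 50% peak utilization (2x headroom).
    plan_a = (peak_servers / 0.50) * dedicated_monthly        # 90 boxes -> $9000/mo

    # Plan B: dedicated sized for 90% utilization, plus 10 EC2 instances
    # spawned for the 4 busiest hours each day.
    plan_b = (peak_servers / 0.90) * dedicated_monthly \
             + 10 * ec2_hourly * 4 * 30                       # $5000 + $600 = $5600/mo

    print(plan_a, plan_b)  # 9000.0 5600.0

Even with EC2's per-hour premium, the ability to burst lets the always-on fleet run much hotter.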
"Terrible" is seriously overstating it. There are a lot of advantages to AWS (I understand you said "pure", but that really makes no sense in the context of AWS) that can justify the price premium -- ELB, elastic IPs, the ability to spit out AMI images (and machines from them) at will, the private networks, the firewall, etc. The fantastic network capacity (I am always wary of services like OVH that offer "unlimited" anything, because it is always limited, and unlimited means that your peers will be saturating switches because it's "free").
There are a tremendous number of flexibility-related reasons why EC2 commands a premium, and it's an easy justification in many shops. Even if you aren't using ELB today and won't have to spin out machines, that flexibility has significant value.
I say this having machines at AWS, Digital Ocean and OVH. OVH is very, very bare bones, and you'd better have an escape hatch because the simplest configuration error can leave your machine incapacitated and beyond reach (adding the KVM option is usuriously expensive -- like $350 per month per machine).
Comparing bandwidth between OVH and AWS is a little cheeky. Bandwidth on AWS costs an absolute fortune, not remotely economical for bulk transfer.
The switch saturation problem doesn't necessarily go away even if you impose X TB/month data caps. I would have thought local switches could handle it anyway, given that cheap boxes typically only have 100Mb ports.
Some data centers don't even charge for internal traffic, which means you're still exposed when cheap VPSs and dedis are used as P2P file sharing nodes and are exchanging a lot of traffic within the building.
In any case, I'm just grateful the multi-terabyte range is so affordable; bulk transit costs in the data center have been falling year-on-year for a decade, and lots of hosts don't seem to have passed on the benefits.
Incidentally, OVH do have different SLAs across their server range. The low end stuff is "best effort", the more expensive options are supposed to be "guaranteed". They even tell you what switches they use.
> OVH is very, very bare bones, and you'd better have an escape hatch because the simplest configuration error can leave your machine incapacitated and beyond reach
Their network boot facilities are pretty handy. As long as you use a sensible filesystem, you can always network boot their recovery option and access your files (and I think chroot in?). The lack of KVM is annoying, though, especially when, like me, you're compiling and running custom kernels (but you can network boot one of their kernels as well).
Is $750/month a significant amount of money for the company? In the USA, this is perhaps the cost of one engineer-day, and one could raise a year's worth of this money by successfully applying for a single additional credit card. (Not that I recommend bootstrapping with credit cards. But it has been done.)
Of course, it may be the case that a company could improve customer satisfaction, and therefore revenue, by double-digits by improving performance on optimized hardware. But if this is the case, where is the discussion of that? Where is the data: A/B testing, customer satisfaction, churn rate, monthly revenue? They should be front and center.
† Without getting into the reduced redundancy, the additional complexity of hosting multiple unrelated services on each instance, the "additional maintenance" referred to in the post, the lack of server capacity to cover emergencies and staging and load testing and continuous integration, and the risk involved in switching infrastructure out from under a working business-critical application... any estimate which doesn't include the cost of engineering time is wrong. All changes have engineering costs. Just talking about this idea is costing engineering time.
This is the common disconnect I see when people tout The Cloud as a solution to having system administrators - that somehow that instance of Linux running in EC2 doesn't require the same maintenance as a physical one. It does.
And who could ever claim that AWS requires no maintenance? It takes plenty; I should know. But the problem isn't that Amazon is necessarily less expensive, or more expensive, or more reliable, or less reliable. All of that depends on the context. The problem is that the context is rarely reported in this genre of blog post. These posts tend to fixate on the size of the hosting bill. This is the year 2013, and unless its business model is hopelessly flawed, the hosting bill is one of the smallest problems a new company will ever have.
But maybe I'm wrong about that, so I wish these writeups would provide more context to explain why I'm wrong in this or that particular case, and by how much. Yes, I see the hosting bill is down. But are the savings significant to the business? Did the migration take one engineer-day, or twelve, or thirty-eight? Did it reduce the size of the codebase or increase it, and which modules were affected? Is the time required for testing and reliable deployment up or down, and by how much? How has your planning for various disaster scenarios changed? Are you getting more or fewer alerts in the middle of the night?
It's not true that AWS gets rid of the need for sysadmins, but it's absolutely not true that you do all the same sysadmin tasks on a cloud service.
Do you think $189K per year is an average salary? It isn't.
Also, for the skills implied, have you ever tried to hire a systems administrator who has experience in production environments with all of those aspects of back-end web servers? It's not easy, and it's not cheap.
But Boston is Boston, and SV is SV, and this is estimation, so I'll happily concede a factor of two. Okay. Suppose $750 buys you two engineer-days per month. Same question: Is the $750 important?
My team's time is easily worth $500-600/hr, so we easily wasted $300k. So the fact that my internal datacenter provider can give me a VM that costs 20% of what EC2 charges or disk that is more performant at a similar cost is interesting trivia, but isn't saving money.
Comparing EC2 costs to what sounds like a completely botched project isn't very fair, in other words. Of course there are worse alternatives than EC2 as well.
We colocate at a datacenter and can get cabinets pretty easily. We've done this for over 10 years now. When we aren't growing or shrinking I spend about an extra 4 hours per month because we have physical servers rather than use something like AWS.
12 servers would probably take us about an extra six person-hours to get up and running vs AWS. If we needed a new cabinet it might take a couple of days, but we aren't actively working during that time: we put in a request, and they tell us when it's ready for our use. We don't sit and twiddle our thumbs while this happens, and we do it before the development side of the project is completed.
We've talked about AWS before for the redundancy and convenience but the price and the extra headache of dealing with the inconsistent performance never made sense for our use.
That may be true, but it doesn't seem that uncommon.
In my own case, my company ditched AWS in favour of getting our own rack with about 10 custom servers. We have a full-time sysadmin, so nobody's time was wasted on the transition; whatever the developers (who are also $500-600/hr people) were needed for during that time was valuable, because it forced us to rewrite the deployment system, which would have been required at some point anyway.
What was the "needless BS" you had to do?
Did you sneak an extra zero in there? Even fully realized, I'd say $100-$200/hr tops in a prime market.
TL;DR from today's French blog post:
Our offers were so competitive that too many customers wanted them, and we lose money if we don't keep customers for at least two years. Sadly, they migrate to new offers before that. We're halting dedicated servers until we figure out what to do.
 Link: http://www.ovh.com/fr/a1186.pourquoi_160sold_out160
In summary: their main problem was having no installation fee, meaning the barrier to hopping to a newer server every couple of years just wasn't there. If their new offerings were priced competitively to attract new customers, they would also be priced similarly to how the older hardware was priced when it was sold a couple of years ago, so anyone on the older hardware would jump to the new boxes.
We recently switched to Azure from Rackspace, but we're still evaluating whether it will work for us long term. Azure's issues: you have to request core-count increases, and you can't capture an image of a VM without shutting it down. You also can't just give your VM a regular SSH public key; you have to generate SSL-like certs. Also weird is that a lot of the documentation only covers the Windows side of things, even though you can get some of that stuff to work on Linux, and you do so by installing an SDK, even though you might not be installing an application, just running your own stuff on a VM.
1. Noisy neighbours impact you all the time
2. The staff are really poorly trained and don't know how to troubleshoot.
3. They're expensive.
4. Their control panels are really bad, constantly being updated and migrated, and are just a complete mess.
5. They've had several major network outages that have lasted for quite a long time (hours) that they blame on "upstream routing issues" despite supposedly having multiple redundant upstream carriers.
6. They'll randomly reboot your box without notice. If you open a ticket there's an almost certain chance they'll just reboot your box no matter how much you ask them not to.
7. The IO on the boxes is really bad.
8. They don't proactively monitor any of their servers, and their "new fancy" monitoring product only goes down to 5 minute resolution, so it's worse than Pingdom, for example.
You just can't beat AWS right now for reliability, feature set and speed. We started using them recently and they are a tiny bit more expensive. But it's the difference between fresh air and breathing carbon monoxide.
At least so far.
Maybe we lucked out with who else is sharing the hardware.
We only have 2 mid-sized virtual servers in DFW and things have been working flawlessly for us.
So it is kind of a roll of the dice. Are the other customers on your hardware well behaved? Will they stay that way?
It's a trade-off: you get way better performance if the other virtual hosts on the box are quiet. But if you plan your capacity around those quiet periods, you can be in for quite a shock once the hardware gets busy. I've run critical servers on hosts like this and it can be a headache.
That's why I was asking about the performance with more VMs, I don't use many virtual servers at RS for my day job.
It's great if you want to be able to:
- provision lots of machines without delays
- launch and terminate new instances to cover load spikes (see the sketch after this list)
- do geo-redundant failover (i.e. a datacenter in Europe, Australia, the US, ...)
- have 'plug and play' components like load balancers (ELB), storage (S3), databases (RDS), queueing services, ...
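For instance, covering a load spike programmatically is only a few API calls. A minimal sketch with boto3, the AWS SDK for Python; the AMI ID and instance type are placeholders:

    import boto3  # assumes AWS credentials are already configured

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch two instances from a prepared image to absorb a spike.
    # "ami-12345678" is a placeholder for your own AMI.
    resp = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="m3.medium",
        MinCount=2,
        MaxCount=2,
    )
    ids = [i["InstanceId"] for i in resp["Instances"]]

    # ...and terminate them once the spike has passed.
    ec2.terminate_instances(InstanceIds=ids)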
Amazon provides a lot of things that cheaper solutions will have a hard time achieving (e.g. the backup space redundancy that OVH provides will probably be quite a bit less 'secure' than S3/Glacier).
That being said, these premium features are something that a project might simply not need. We run some of our Jenkins build slaves on OVH. We don't need to launch new ones all that often, and the bang for the buck makes them very much worth considering.
> I need a server that has good uptime and good performance.
Then a single EC2 instance is not a good option for you. Terrible uptime, and terrible performance.
No amount of optimization could eliminate the 100-150ms penalty imposed by the EC2 network vs. our dedicated hardware. The local network was congested and "noisy": ping times were highly variable, packet loss was high, the number of hops to the internet at large was high, and the baseline latency to the world was also high.
As for instance lifespan, we had numerous instances just "disappear" and then need to be recreated. We were running a hundred or so for our test, so YMMV.
Also: HDD performance on "basic" Amazon is slow and RAM is expensive :(
I've used AWS before in corporate work, and I have to say I was very unimpressed with it. The prices for what you get are exorbitantly high. I've heard people say "they are affordable for corporate standards", but my reaction to that is just that their previous hosts were even worse about it. Every hosting solution I have had other than AWS has been cheaper.
More important to me than price, though, is the knowledge. I really don't like that AWS is a "black box" of mystery meat. I don't know how most of the systems are implemented under the hood, which means I can't predict the failure points of what I'm building. The only way I could piece together the capabilities of AWS systems was through anecdotal information in blog posts. We would have servers fail and be given no explanation as to why. And many of the interfaces are proprietary, which means that moving to an alternative is not an option. Not to mention the APIs are not particularly stellar (a lot of XML). The only options for persistent storage are network drives and local disks that go away on shutdown, which is not a great pair of choices.
With OVH, I get a server. I know what a server is, how to back it up, and what its fail points are. If OVH does something I don't agree with, I can move to another company and have exactly the same environment.
I'm not saying AWS is useless (again, I've used it for corporate environments before), but it's hard to justify the high cost when you're on a budget, especially when you can't even determine if the tradeoff is worth it.
I almost get the sense that people are signing up for AWS because, well, I'm not positive about this, but it seems like it's trendy. Possibly some startups don't realize AWS is just providing you with pre-installed systems that you could easily install yourself? I don't think it's necessarily a bad decision, because depending on your size you may not want to devote any time to configuring servers. Maybe some people who have made that choice could set me straight?
My gut is telling me that, for my current situation, the main benefit of AWS - the automatic scaling - will be quite expensive by the time we actually do need to scale. So we will probably be looking elsewhere for hosting at some point in the future, much like the article suggests.
From experience, I have seen that the price of performance on AWS is much higher than at companies that buy their own hardware. Knowing what resources your service needs as a baseline can be helpful when picking which machines should be reserved instances, but you may still be better off just buying your own hardware if you want the best performance/price.
It makes developing so much more efficient when you don't have to make major choices up front, and can buy yourself some breathing room by throwing temporary resources at most performance issues while you review your architecture.
That either stabilizes to a point where you have an architecture that you can implement more cheaply and efficiently using more traditional hosting solutions, or you reach a point where you really need AWS's flexibility.
One caveat though: don't make your architecture too dependent on AWS-specific services until you are 100% sure AWS is the right choice for the long term.
I avoid disk at all costs (using amounts of RAM that are nearly unattainable on PaaS/SaaS); if disks are hit, they must be SSDs. I treat everything immutably, use concurrent/distributed computing, and assume hardware is plentiful (192+ GB ECC, 24+ of the new Xeon cores, etc.). I scale completely differently than most. They really get you on RAM; I can build whole servers for what a month of PaaS/SaaS might cost.
I guess the sweet spot is to use external hosting for your web apps and such and AWS for any large spike-prone batch processing: moving data into S3 is free (though obviously moving data out of wherever else you're hosting probably isn't), use EC2 to process it (possibly on spot instances!) and then move the results (which are much smaller than the raw data for a lot of use cases) back to the 24/7 hosts?
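Something along those lines could look like this. A sketch with boto3; the bucket name, AMI, bid price and instance type are all made up:

    import boto3

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Inbound transfer to S3 is free: push the raw data up from the 24/7 hosts.
    # "my-batch-bucket" is a placeholder name.
    s3.upload_file("raw_data.bin", "my-batch-bucket", "jobs/raw_data.bin")

    # Bid on cheap spot capacity to chew through the batch.
    ec2.request_spot_instances(
        SpotPrice="0.10",              # max bid in $/hour (invented)
        InstanceCount=4,
        LaunchSpecification={
            "ImageId": "ami-12345678", # placeholder worker AMI
            "InstanceType": "c3.xlarge",
        },
    )
    # The workers write their (much smaller) results back to S3, and the
    # 24/7 hosts download them from there.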
Though my question still remains: where do HNers recommend hosting these servers, knowing that AWS will be used to pick up the slack and handle irregular/unpredictable workloads?
Over time it's certainly more expensive to rent, but you get to cancel and move on to better hardware when it comes out, without having to worry about re-purposing or selling old servers.
As for re-purposing, I have tons of uses for older hardware to do background computation or other jobs. I suspect I can extend the lifetime to 5+ years on most of it, which is quite good in my opinion. You just need to design your system with modularity in mind, which you should be doing regardless of your hosting choices.
1) easy to deploy, migrate and update (using standard deployment technologies)
2) least dependent on a specific vendor (GAE ;)
And that in a nutshell explains why AWS is a safer choice.
That being said, OVH is notorious for lack of support, and my experience so far (6 months) suggests that using them is not without risk. So at the moment I'm automating everything, so that if an OVH engineer does decide to accidentally pull the plug on my server(s), I can fail over in an hour or two.
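The watchdog part of that automation is roughly this shape. A sketch only; the health URL is a TEST-NET placeholder and failover() stands in for whatever your config management actually does:

    import time
    import urllib.request

    PRIMARY = "http://203.0.113.10/health"  # placeholder address for the OVH box

    def is_up(url, timeout=5.0):
        """True if the health endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.getcode() == 200
        except OSError:
            return False

    def failover():
        # Hypothetical hook: repoint DNS and provision a standby from
        # your automated configs (Ansible, Puppet, plain shell, ...).
        print("primary down, starting failover")

    failures = 0
    while True:
        failures = 0 if is_up(PRIMARY) else failures + 1
        if failures >= 3:  # three misses in a row: call it dead
            failover()
            break
        time.sleep(60)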
Actually, there is a win to be had there too. If you can spin your instances up and down with load in an intelligent way, you can save A LOT of money using a combination of reserved instances and on-demand instances.
However, if you had a program that was smart enough about dealing with load and spinning up/down instances and managing cost relative to reserved instances, on demand instances, and spot instances, that could save a ton of money.
That kind of optimization is tricky so it's a lot easier to just switch providers like the OP.
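A toy version of that decision logic, with invented prices and thresholds, might look like:

    # Cover forecast load with the cheapest mix of capacity.
    # All numbers are invented for illustration.
    RESERVED = 10           # instances already paid for up front
    ON_DEMAND_PRICE = 0.50  # $/hour
    SPOT_PRICE = 0.15       # $/hour, but capacity can vanish at any time

    def plan_capacity(needed, spike_is_brief):
        """Fill demand from reserved capacity first, then spot or on-demand."""
        plan = {"reserved": min(needed, RESERVED), "spot": 0, "on_demand": 0}
        extra = max(0, needed - RESERVED)
        if spike_is_brief:
            plan["spot"] = extra       # brief spikes tolerate spot interruptions
        else:
            plan["on_demand"] = extra  # sustained extra load goes on-demand
        return plan

    def hourly_cost(plan):
        # Reserved hours are treated as sunk cost here.
        return plan["spot"] * SPOT_PRICE + plan["on_demand"] * ON_DEMAND_PRICE

    plan = plan_capacity(14, spike_is_brief=True)
    print(plan, hourly_cost(plan))  # 4 spot instances -> $0.60/h vs $2.00/h on-demand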
2. The problem the post mentions about OVH not being elastic: that's simply true of every other dedicated provider as well. (Actually, StormOnDemand offers dedicated servers at per-minute pricing.) But OVH should have their public cloud ready in October, which means you'd get a hybrid of cloud and dedicated.
I don't mind paying a premium for the easy systems and integration capabilities that AWS makes possible, but paying such extreme rates for bandwidth (when Amazon no doubt pays next to nothing per GB) is a cost too far.
The downside you mention at the end, regarding setup time: we use CloudVPS, a Dutch company that keeps moving its service in the direction of AWS. Currently, when your billing status is OK, new VPSes are set up without human interaction (not milliseconds, but still fast enough for most use cases), and new customers are running a free trial within a working day or so.
But actually from what I've seen in the wild, a lot of people just use EC2 without the rest of AWS for just general server hosting, so it's a useful reminder not to do this unless you don't care about the bottom line. (And who doesn't?)
I can't imagine building a complete business model around AWS, but using it to begin the growth period seems reasonable.
With Linode's 8-core small instances, I could handle 2-3 times the traffic. From a management perspective, however, AWS rules.
To be fair, the negatives I have experienced so far are: Hetzner's management console is pretty poor compared to Linode's (but it gets the job done), and Linode is self-serve with almost instant provisioning, while Hetzner seems to take about 12 hours.
Switching to Linode is always a terrible idea considering how disgraceful their security and business practices are.
Could you please elaborate?
Can you please elaborate on this? I just signed up, so I'm curious.
I'm curious ... have you factored in your power costs? People costs (or opportunity costs if your existing staff is re-allocated to server admin tasks)? Additional cost of space for your on-prem setup? Have you factored in the cost of potential downtime? Single points of failure?
At both ends of that spectrum, however, I've found the pricing to be fairly reasonable. It just might not work for a startup.
Startup idea right there. But then if I thought of it so quickly, somebody probably already does this.
I'm looking at this as an option vs a small AWS deployment. Seems to offer a lot of the flexibility of virtualization at a much better price/performance point than AWS.
Of course, using old school deployment is a mistake (slow, pisses off devs, etc.)