The pros and cons of cloud hosting (supportbee.com)
66 points by prateekdayal on Dec 30, 2011 | 56 comments


Though it may seem silly, it is not always about cost and performance. In many cases, EC2 allows you much more flexibility than a traditional dedicated server. One example: for EBS-backed servers it is possible to clone the entire server with just one API call. This allows you to test upgrades, performance enhancements, etc. without disturbing the production server configuration. And you will be doing so in an exact replica of the machine, minimizing the bugs or issues introduced when a staging or test system has had changes applied. Another one: it is simple to resize your server as needed. You can start with a micro instance during development and then scale to bigger instance types once you are in production. With a dedicated server, it is much more complex to migrate your setup. We take advantage of those and other features at BitNami Cloud Hosting (http://bitnami.org/cloud) and have had a lot of success so far.
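
For anyone curious what that looks like in practice, here is a rough sketch of the clone-and-test flow using the AWS SDK for Python (boto3); the instance ID, names and region are made up:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Snapshot the running (EBS-backed) production server into an AMI.
  # NoReboot=True avoids downtime at the cost of filesystem consistency.
  image = ec2.create_image(
      InstanceId="i-0123456789abcdef0",   # hypothetical production instance
      Name="prod-clone-for-upgrade-test",
      NoReboot=True,
  )
  ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

  # Boot an exact replica to test the upgrade against, on a bigger
  # instance type if you want to try a resize at the same time.
  ec2.run_instances(
      ImageId=image["ImageId"],
      InstanceType="m1.large",
      MinCount=1,
      MaxCount=1,
  )

Once the upgrade has been verified you just terminate the replica and apply the same steps to production.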

Finally, hosting on Amazon is not only about EC2, it is about the whole ecosystem. You can take advantage of many other services, such as their offerings for MySQL (RDS), memcached (ElastiCache), CDN (CloudFront), monitoring (CloudWatch), etc. Like any other technology, they have their shortcomings, but they can save a significant amount of time and effort vs. doing it yourself, and they are way ahead of anybody else in the space (especially traditional hosting companies).

As a side note, EC2 costs can be significantly reduced with reserved instances if you are willing to commit to 1 or 3 year terms.


> And you will be doing so in an exact replica of the machine, minimizing the bugs or issues introduced when a staging or test system has had changes applied. Another one: it is simple to resize your server as needed. You can start with a micro instance during development and then scale to bigger instance types once you are in production

VMware? Assuming you are limited to on-premise options, isn't this kind of flexibility addressed with in-house virtualized instances? At a previous employer (who was too paranoid about the cloud) I could easily pull a snapshot of prod into a test/dev environment with a few clicks and a coffee break's worth of lag time (likely possible from the command line too), then use my deployment process to push code/metadata changes back out to prod if need be.

The only downside to this approach is that you need IT ops that's up to snuff on virtualization (not guaranteed)... with the cloud you cut this out, and your devops guy can manage everything.


Yes, I was referring to 'traditional' hosting. The VMware vCloud offering is actually quite interesting in the ease with which you can migrate from on-premise to public vCloud providers. Still way more expensive than just going the Amazon route for most people, though.


I totally agree with you: AWS is a platform with a range of services and a lot of flexibility. Depending on requirements, sometimes using a PaaS like Heroku (and plugins) makes sense, sometimes AWS, sometimes naked hosted servers.

I think that very high-level PaaS providers like Heroku, DotCloud, and CloudFoundry are the future, at least for what I would like to do. If I am working for customers, spending effort on plain AWS or hosted servers is OK if that is what they want, but for my own projects I prefer to spend a little more money and save a lot of my own time.


And I really agree with you too.

Adding to that, AWS does have its own PaaS: Elastic Beanstalk. We've been using it for several months now without a single hiccup. It's Java only, but it just works: load-balancing out-of-the-box, auto-scales beautifully and replaces dead instances in a couple of minutes. You can easily launch new environments for testing new features and app updates are a breeze. And you're still close to all the AWS services like S3 and CloudFront which adds a lot of value to the package.
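To give a flavour of how simple the app updates are, here is a minimal sketch with the AWS SDK for Python (boto3); the application, environment, bucket and version names are all invented:

  import boto3

  eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

  # Register a new version of the app from a WAR already uploaded to S3...
  eb.create_application_version(
      ApplicationName="my-java-app",            # hypothetical names throughout
      VersionLabel="v42",
      SourceBundle={"S3Bucket": "my-deploys", "S3Key": "my-java-app-v42.war"},
  )

  # ...and roll the running environment onto it. Beanstalk handles the
  # load balancer and instance replacement behind the scenes.
  eb.update_environment(EnvironmentName="my-java-app-prod", VersionLabel="v42")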

This sounds almost like a commercial but, yes, being used to managing our own servers, this was one of the best moves we made. We still have a server hosted at Hetzner (mentioned by the OP), but have moved pretty much everything to the cloud. It's good to know that updating MongoDB is now just a matter of sending an e-mail to the guys at MongoHQ (who also run on AWS) and they do it in a few seconds.

It may not work the same for everyone but from our experience it does pay off even if we're burning a few more dollars every month, as we're not spending long unexpected hours on server management anymore. And for a small team trying to focus on product development, that's gold!


Those providers do, in turn, run on top of AWS, so in a way you are still running on Amazon's platform. At some point that layer will be standardized as well (and at the speed AWS is going, sooner rather than later...)


Exactly my point. As I mention in the post, if you are using the APIs to provision or free up resources, then the cloud is definitely for you. Ultimately if you are using the flexibility and power of the cloud to your business advantage, cost stops being a concern.


You can resize machines or take snapshots from the AWS console, no need to use the API. Some people prefer to simplify it further and use something like BitNami to manage it, but there are alternatives to the API. Our infrastructure runs on both dedicated and cloud servers, and we have found that the more services we move to the cloud, the easier things are to manage.
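
If you do want to script a resize rather than click through the console, it only takes a handful of calls; a rough sketch with boto3, where the instance ID and target type are placeholders:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")
  instance = "i-0123456789abcdef0"   # placeholder

  # Resizing an EBS-backed instance requires a stop/start cycle.
  ec2.stop_instances(InstanceIds=[instance])
  ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance])

  ec2.modify_instance_attribute(InstanceId=instance,
                                InstanceType={"Value": "m1.xlarge"})

  ec2.start_instances(InstanceIds=[instance])
  ec2.get_waiter("instance_running").wait(InstanceIds=[instance])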


There's a sweet spot where virtualized solutions make sense, but it's really easy to find reasons why owning and colocating your own metal is more economical. I have a 1/3rd private rack and a 100mbit uncapped unlimited dedicated port, for $570 a month. I have five servers in there, which I have cobbled together for pretty cheap (for the most part). I would be paying $3000 a month for the equivalent cloud solution....

For anything requiring real IO performance or tons of memory, stick with your own hardware.


The guy who runs prgmr.net (http://news.ycombinator.com/user?id=lsc) more or less says the same, despite running a virtual-server company:

Prgmr.com is no longer offering co-location ... the margin on co-location is such that I can't justify setting aside time and space that could otherwise be used for my xen VPS hosting business at this time. (note what this implies about the relative pricing of these two commodities; quite often, owning your own hardware and co-locating it saves you money over renting virtual servers, especially once you start approaching 16 or 32GiB ram)

That said, I rent a VPS from him because I just need a small, always-on server in the cloud that I don't have to administer.


We've experienced location-wide outages due to natural disasters in host centers; putting all our servers in one place is no longer an option, so we're looking at a combo of colo and cloud. We're scaling up to 5 locations in the next few months; thanks to all the great replication tools we have at our disposal now, it's a fairly simple task, much simpler than it would have been 5 years ago.


Which replication tools are you using?


There are actually 3 major alternatives here, and the article ignores the third:

1) Run on dedicated hardware.

2) Run on EC2, or another "Infrastructure as a Service" provider.

3) Run on Heroku, or another "Platform as a Service" provider.

For smaller companies, it really comes down to a few questions: Who's worrying about your database backups? Who handles security patches? What happens when a critical machine fails on Christmas week?

Many smaller companies will be happiest with option (3), because somebody else worries about backups, security, and machine failure for you. Sure, it's expensive. But it's a lot nicer than calling your senior programmer back from vacation because of a catastrophic RAID failure.

Option (1) certainly looks cheaper on paper. But many small companies are skimping on something critical, and they'll get burnt within the next 5 years.


Oh, please. That's just your list, which you haven't provided any evidence for. PaaS is still tiny in comparison to the overall market for commercial hosting.


That is quite a bit of hardware for $51 a month. Anyone know of servers in the US that are priced even close?


They tend to be out of stock, but VolumeDrive comes pretty close: http://volumedrive.com/vdrive/?a=dedicated The other thing you have to consider is the 150 euro setup fee on the Hetzner server, though that becomes less of a factor if you keep the server for an extended period of time.


That is pretty good, although being sold out and taking 1-2 weeks before they come online makes it a tougher sell.

Excellent point about the 150 euro fee.

The RAM and disk seem very cheap. I haven't seen RAID-1 3TB and 16GB RAM in the States for anything less than $150+ a month.

They may be a good DR site.

Another perk is that incoming bandwidth is not tracked.


Damn, their VPS packages cost less than I'm paying for shared hosting at Dreamhost. Unfortunately, a cursory search shows mixed reviews. Have you used them personally?


Sorry for the late reply, but yes, I have used them. They are OK, you really do get what you pay for. I've had some trouble with their default Debian/Ubuntu images being messed up and my VPS has been suspiciously slow at times. Their support is prompt but they aren't willing to do much (which is fair as it is unmanaged).


I used it for about half a year. Never had any issue.


It's closer to $65 a month, with current euro->USD conversion.


If you are not in Europe, you get a 19% discount (basically the prices include VAT and you don't pay VAT). The discount is automatically applied.


If you are in Europe (especially the UK), there is almost no excuse for not registering for VAT. As long as you are a 'business' (and this definition includes freelancers), then there is nothing stopping you from doing so.

Once you're VAT registered, you can claim back VAT on allowable purchases (e.g. servers, hardware, software, etc.). You will have to charge VAT on services you provide, but seeing as most businesses you sell to are also VAT registered, it makes little difference to them.


Most countries offer low turnover businesses the option of opting out of VAT (i.e., they don't charge VAT on products and they can't claim it back on supplies), which means that, as small, labour-intensive businesses, most freelancers opt out of VAT.


Most freelancers I know have registered for VAT, even if they don't have to.

In the UK, you have to register if your turnover is more than £73,000, and the benefits of registering even if you're under that threshold are compelling (except if you almost exclusively provide services to non-registered entities).


OK, then this is fair enough. My experience here in Germany is different.


Well, for the Hetzner server there is a 149 € activation fee on top of that 49 € per month, so it is a bit more than the article is saying.

The only thing I've ever found close to hetzner.de in the US with good customer feedback is (though their servers aren't that close): http://joesdatacenter.com/Dedicated_Servers.html


Thanks for the JoesDataCenter link. Definitely a great find! I'll give them a shot one day. I could only find great reviews on Google, but I'd love to hear from fellow HNers.


They are OK, they have good ticket response times. However, they did 2 things that annoyed me:

1. The default Debian OS load I specified had their custom /etc/apt/sources.list. So when I spec'ed Debian 5 (for compatibility with specific stuff I had to run), then ran "apt-get update && apt-get upgrade", it automatically upgraded everything to Debian 6. Not cool.

2. They buy bandwidth from cheaper providers like HE.net, Cogent, and I think some other provider. They may not be as low-latency and as well-connected as you'd like.

I have servers in the Northeast on Level3; bandwidth between JoesDC and this location was unusable (under 1Mbps). However, I did open a ticket and gave them traceroutes, pings, etc., and a few days later it was fixed; I can now get over 30Mbps between the two locations.

But, you might never know what response times are for your clients, as you can't measure speed from their end.


Thanks a lot!


Why do people always compare EC2 _On-Demand Instances_ cost to classic hosting cost?

The actual cost for a planned _Reserved Instance_ on EC2 is much lower and is a much more realistic scenario for hosting: it costs about $27/month for a 3-year reserved small instance ($425 / 36 months + $14.64/month), not $60/month.
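
For anyone who wants to check the arithmetic, the effective monthly figure works out like this (prices as quoted above):

  # Effective monthly cost of a 3-year reserved small instance,
  # using the 2011-era figures quoted in this comment.
  upfront = 425.00          # one-time reservation fee, USD
  months = 36
  usage_per_month = 14.64   # discounted usage fees, USD per month

  monthly = upfront / months + usage_per_month
  print(round(monthly, 2))  # ~26.45, i.e. about $27 vs. ~$60 on-demand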

No one who knows they will use an instance 100% of the time should opt for an On-Demand instance, because yeah, it costs a lot.

https://aws.amazon.com/ec2/#pricing http://calculator.s3.amazonaws.com/calc5.html


You need to compare costs on the comparable commitment time-frame. You are assuming a 3 year commitment with Amazon. Classic hosting has a monthly commitment.


I think dedicated hosting often demands a setup fee of $50+ or a 1-year commitment.

And even without taking into account the setup fee and the commitment, at $27/month you get a lot of power for your buck compared to the cheapest dedicated server available at maybe something like $60/month.

The cheapest dedicated box I could quickly find is $59/month, and even with a 2-year commitment you get "only" a 25% reduction, which puts your price at ~$45/month:

http://iweb.com/dedicated/clearance


The article itself contains a much better example. Setup fee of $65 and then $51/mo for 8GB RAM, 750 GB HD and i7. http://www.hetzner.de/en/hosting/produkte_rootserver/eq4


Yes, but that dedicated box (even an older/cheaper one) is going to beat the pants off EC2's I/O. Apples and oranges.


You don't always -NEED- fast I/O, btw.


While the article is a useful comparison of straight EC2 vs dedicated servers, it doesn't touch on the cloud PaaS options such as Heroku that eliminate so many complications of installing and tuning and maintaining your frameworks. IMO comparing "cloud vs dedicated" without reference to PaaS options is akin to comparing "combustion engines to bicycles" without mentioning motorcycles.


Thanks for the comment. I thought about it but wanted to keep the post short.

I agree that PaaS is a different story altogether. It frees you up from doing most tasks. However, a vanilla EC2 instance and a dedicated server are almost the same in that respect, but quite different pricing- and performance-wise.


We use PaaS because it allows us to focus our energy on the product that the customer sees, rather than on the backend stuff that, done right, never affects the customer. When our small team grows, we can afford to concentrate on our own hardware and make cost-saving decisions. But we are very light on system administration experience, and our scarcest commodity is time, not money.


It's better, in my opinion, to think of various cloud providers as just another endpoint in the evolving infrastructural API layer available to people and companies.

The breadth of options becoming available is fantastic; it's not that long ago that hosting options were:

  * Sharing a single physical machine with a group of unknown other customers
  * Renting one or more single physical machines with preselected hardware/OS
  * Purchasing your own hardware and co-locating it in a data center
  * Building a data center

Right now, PaaS providers are taking advantage of all this newly available, ephemeral, programmable computing power to build abstracted services, allowing other developers/companies to take advantage of pooled expertise and resources.

I think, if I were building the infrastructure for a company today (which I am helping with, for a lot of companies), I'd definitely eat the additional cost (can be offset quite a bit by reserved instances) and idiosyncrasies (instance degradation, unexpected performance characteristics) of Amazon.

I spent years toiling over hardware quirks, flaky SCSI adapters, power outages, and failing or aging machines. If you rent boxes, you're relying on SLAs (if you can get them) and sometimes insane costs for an onsite engineer to fix something you broke (I mucked up a firewall config once on leased dedicated hardware. Don't do that. Ouch.)

There are similar outages with cloud providers, but depending on which rung of the abstraction you're on, (IaaS, PaaS, etc) you might be in a much better position to redeploy your infrastructure elsewhere if it's a real disaster.

The bigger your product/service gets, the more expensive your downtime is (and the more you're spending on engineers to make sure it doesn't go down. Oh, and your hardware has quirks, and your engineers know them - if they leave, they take that knowledge with them).

Of course, there are situations where you'll want to minimize financial outlay, run something that's not a "web app", don't mind getting your hands dirty, are willing to risk hardware failures, etc. Hopefully PaaS providers will continue to bridge the gap for most people.


Your analysis of dedicated servers is somewhat dated. For instance, most higher-end dedicated servers these days come with IPMI. You can use it as a KVM in case you muck something up. It has its own IP address (on a different subnet) and browser-based console access. For the servers we rent, IPMI is a requirement.

Of course, you can also build and colo your own servers with IPMI--pretty much every brand supports it.


Great discussion - would be an interesting poll to see exactly what hardware setups HN users run. When looking at options for our bootstrapped startup, I was very surprised by the cost of Amazon compared to dedicated boxes, or even a couple of dedicated VPS instances. It seems so many people I have spoken to who use AWS do not even automatically spin up instances when they need them, so they have to react manually and then bring them up... And another startup that did automatically scale up instances ended up with massive bills when someone found a loophole and distributed manga art via their service.


There's one other drawback that made us switch from AWS to dedicated hosting (among others such as those mentioned in the post): latency

At least in Europe, Amazon has only one datacenter, somewhat on the outskirts, in Ireland. We could save some 20% in load time by moving to Germany, where our customers are!


This brings up a question I've also wondered when looking at moving to EC2: it simply costs more to run when you have a few servers that use up a lot of bandwidth. At what scale does it become more efficient?


EC2 instances are not reliable: they degrade all the time. The solution, recommended by Amazon, is to have redundancy. So, plan on having at least two of every type of server or using their services like RDS.

Really, it is a rabbit hole that can lead to thousands of dollars each month very quickly. EC2 becomes efficient when talking about hundreds of thousands of customers if not more.

You can end up with an impressively robust system but at a large upfront cost: for startups probably not worth the ROI.


> for startups probably not worth the ROI.

I definitely agree. That said, I think people seem to forget something.

Amazon says: "your instance may go down and if it does we may just terminate it, reclaiming your ephemeral disk. All data not on EBS will be lost."

Every other host (at least implicitly) says: "your instance/machine may go down, we will attempt to recover your data. The ability and speed depends largely on why the machine went down and whether we run RAID/etc on the machine."

I've waited weeks for a disk repair from very well known/regarded non-Amazon hosts. Redundancy is needed everywhere, not just on Amazon. Bare metal still dies, controllers flake out, networks go down.


This is why I'm sticking with EC2 for now as well. Though one of my instances might go down or become unreliable, all I have to do to fix it is Stop and Start it!

No messing with support tickets or waiting (sometimes hours) for someone to walk out to the data center to troubleshoot it. It's just fixed.
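
For what it's worth, even that fix can be scripted; a rough sketch with boto3 (the instance ID is a placeholder), with the caveat that anything on the ephemeral instance store is lost across the stop/start:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")
  instance = "i-0123456789abcdef0"   # placeholder

  # Check Amazon's own health checks for the instance.
  status = ec2.describe_instance_status(
      InstanceIds=[instance], IncludeAllInstances=True
  )["InstanceStatuses"][0]

  if status["SystemStatus"]["Status"] != "ok":
      # Stop/start (not reboot) moves the instance onto healthy hardware.
      # EBS volumes survive this; instance-store data does not.
      ec2.stop_instances(InstanceIds=[instance])
      ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance])
      ec2.start_instances(InstanceIds=[instance])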


> EC2 instances are not reliable: they degrade all the time. The solution, recommended by Amazon, is to have redundancy. So, plan on having at least two of every type of server or using their services like RDS.

With EC2 my approach has always been that if you've got enough traffic to justify it, you should move rather quickly to a load-balanced solution. This really forces you to stop relying on what's stored on any particular server, and focus your energy on making sure you can boot new ones and get them seamlessly into the rotation.

Obviously this doesn't work for stuff like databases (for which I definitely recommend RDS or SimpleDB if they suit your needs), but even there, if you're hosting your own and make sure that the data goes on to EBS volumes instead of the instance stores, you can set things up so that you snapshot those volumes regularly and can boot up new database servers without trouble.
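The snapshot part really is one call per volume; a rough sketch (the volume ID is made up), assuming something like cron triggers it nightly:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Nightly snapshot of the EBS volume holding the database files.
  # For a consistent snapshot you'd typically freeze the filesystem or
  # lock/flush the database around this call.
  snap = ec2.create_snapshot(
      VolumeId="vol-0123456789abcdef0",          # made-up volume ID
      Description="nightly db snapshot",
  )
  print(snap["SnapshotId"])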

Services like RightScale can help manage stuff like this pretty easily, too, though they're probably a bit expensive for startups (and you've got to be willing to put in a little setup time; I've always found that their templates needed pretty substantial customization to suit my needs); in any case, Amazon is starting to pull more of that type of functionality into AWS itself, with stuff like CloudFormation and Elastic Beanstalk, so if I were starting now, I'd probably give those a closer look.


> EC2 instances are not reliable: they degrade all the time

You have an interesting definition of "all the time". In my experience, EC2 instances fail less often than dedicated servers.


I didn't define it. You could have asked for clarification.

We have had three instances degrade within a week. So far, we have had five instances degrade in total. At the current degradation rate, nearly 100% of our production servers would degrade every year.

It took a lot of work to try to get the expected degradation rates of their servers out of Amazon support. Their response was as I described: implement redundancy. They never did tell me their expected degradation rate.

Admittedly, we have not seen any degradation in a few months. However, this doesn't make me feel any better.


From a user's perspective, what can EC2 do that a PC in a closet hanging off a T1 can't do?


Allow for an easy path to recovery, with near-zero downtime, when that PC-in-a-closet inevitably has some sort of failure.

The nice thing about the fact that EC2 instances are at least to some degree ephemeral is that it forces you, from the start, to have a solid backup/restore plan. And when an EC2 instance goes down, you can have another one booted within minutes, compared to the days/weeks that it might take to get another physical machine set up.

That's not even to mention scaling issues; right now I'm managing a stack of over 50 EC2 instances, and it's easily manageable by a single person (most of those are in a load balanced array, and they'll come up and go down automatically as load dictates). I have no idea what I would do if I had to physically set up servers to handle this type of shifting load...
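
To make "come up and go down automatically as load dictates" concrete, the setup boils down to roughly the following; a sketch with boto3 where every name, AMI and size is invented, and the scaling policy would be wired to a CloudWatch alarm in practice:

  import boto3

  asg = boto3.client("autoscaling", region_name="us-east-1")

  # Template for the app servers in the load-balanced array.
  asg.create_launch_configuration(
      LaunchConfigurationName="app-servers-v1",   # invented names throughout
      ImageId="ami-0123456789abcdef0",
      InstanceType="m1.small",
  )

  # The array itself: instances register with the (classic) ELB
  # and are replaced automatically if they die.
  asg.create_auto_scaling_group(
      AutoScalingGroupName="app-servers",
      LaunchConfigurationName="app-servers-v1",
      MinSize=2,
      MaxSize=50,
      AvailabilityZones=["us-east-1a", "us-east-1b"],
      LoadBalancerNames=["app-elb"],
  )

  # A simple policy to add capacity when the alarm fires.
  asg.put_scaling_policy(
      AutoScalingGroupName="app-servers",
      PolicyName="scale-out",
      AdjustmentType="ChangeInCapacity",
      ScalingAdjustment=2,
  )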


> compared to the days/weeks that it might take to get another physical machine set up.

Whaaaa?

What do 50 EC2 instances correspond to in the hardware world, app-wise (not environment)? Each of my Rails/Apache processes is about 75MB (unoptimized, natch), which means 50 will sit OK in 4GB of RAM. Since that is laptop-caliber hardware capacity, I'm still not seeing the benefit in the server world (or even on an 8GB PC). From where I sit it seems to involve a lot of domain knowledge that is not relevant once you stop using EC2.

Naturally, EC2 is totally great for spinning up and prototyping, but from my research its most compelling benefit is geographic dispersal, which can also be accomplished otherwise.


More than 1.5Mbps for starters... appearing at the drop of an API call is another... you can fill in the blanks from here.


I'll see your PC in a closet on a T1, and raise you an API-spawned, auto-scaling cluster of VMs that can be imaged, re-sized, and restored with the tiniest of delays.


When's the last time you had to do that?



