The cloud vs. dedicated servers (screamingatmyscreen.com)
40 points by fallenhitokiri on Dec 12, 2012 | 59 comments



I think if you are spending more than $100 a month on VMs, you should seriously consider co-locating if you have the skills to support it.

For my side-projects, personal websites and general purpose "whatever" I'm using an inexpensive colo provider (Colo@). For $50 I get 10 Mbit/s @ 95% (basically, burstable to 100 Mbps for up to 5% of a month). That's about 3 TB of data transfer, which alone would cost hundreds of dollars at EC2. Of course, it is also way more data than most people would use.
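
Rough math on that, as a quick Python sketch (the ~$0.12/GB EC2 egress rate is an assumption; check current pricing for your region):

    # 10 Mbit/s sustained for a month, vs. what that egress would cost
    # at an assumed ~$0.12/GB EC2 outbound tier.
    seconds_per_month = 30 * 24 * 3600         # ~2.59M seconds
    megabits = 10 * seconds_per_month          # megabits transferred
    terabytes = megabits / 8 / 1e6             # -> ~3.24 TB
    ec2_egress_cost = terabytes * 1000 * 0.12  # -> ~$390/month
    print("%.2f TB/month, roughly $%.0f in EC2 egress fees"
          % (terabytes, ec2_egress_cost))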

The server I bought used on eBay for $365. It's a dual Xeon L5420 (8 hardware cores) and has 24 GB of RAM. I run seven or eight VMs under KVM on it presently. These images are pretty portable, and a couple of them I back up regularly to S3; I could recover to an EC2 instance if I lost the box.
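
The backup loop is nothing fancy; a minimal Python sketch with boto3, assuming the guests are shut down or snapshotted so the images are consistent (paths and bucket name are made up):

    import subprocess
    import boto3

    IMAGES = ["/var/lib/libvirt/images/web.qcow2",
              "/var/lib/libvirt/images/db.qcow2"]   # hypothetical paths
    BUCKET = "my-vm-backups"                         # hypothetical bucket

    s3 = boto3.client("s3")  # credentials come from env/instance profile

    for image in IMAGES:
        compressed = image + ".gz"
        # Compress first; mostly-empty qcow2 images shrink a lot.
        subprocess.check_call("gzip -c %s > %s" % (image, compressed),
                              shell=True)
        # upload_file transparently switches to multipart for big objects.
        s3.upload_file(compressed, BUCKET, compressed.split("/")[-1])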

I monitor this with an EC2 micro instance and have not had any network outages in 6 months there. If I wanted to run a production site there I would need at least a second machine for redundancy; that would be another $30-40 a month. I'd probably also replicate in real time to a small EC2 instance, so that would cost a little (though incoming bandwidth to EC2 is free) - I don't do that now as I don't have real "production" data.
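
The monitoring doesn't need to be fancy either; a cron job on the micro instance along these lines would do (URL and address are placeholders, and it assumes a local MTA for mail):

    import smtplib
    import urllib.request
    from email.mime.text import MIMEText

    URL = "http://example.com/health"   # hypothetical endpoint
    ALERT = "me@example.com"            # hypothetical address

    try:
        urllib.request.urlopen(URL, timeout=10)  # non-2xx raises HTTPError
    except Exception as exc:
        msg = MIMEText("Health check failed: %s" % exc)
        msg["Subject"] = "colo box down?"
        msg["From"] = ALERT
        msg["To"] = ALERT
        with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
            smtp.send_message(msg)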

Not everyone should do this, but if you like servers you should consider it. Another advantage here is that I own the server. If I get into a billing dispute or other issue with my provider they can take me off the network, but they cannot hold my server hostage. Also they cannot log in to the box - any attempt at social hacking is pretty well doomed.

On the other hand, on the two occasions I've needed remote hands and the one time I needed a KVM they responded in less than fifteen minutes. It is mind-blowing the level of support you can get with the right provider.


$100/month doesn't really matter to a real business. By the time you are paying employees, renting offices, buying equipment, paying for bookkeeping and tax services, and other things, you stop caring about $100/month. What you do care about is starting a new project like moving to a colo, which will take up time and money and distract you from your mission.

I think it would be better to put it this way - if you care about $100/month, you should consider co-locating. Otherwise, you need other reasons.


I agree completely, but the OP is talking about a handful of Linodes and he's looking at cost, so I would think he fits the bill there. I don't think a start-up should spend their time on colo. A small business, maybe; the question should be how big a part of your total expenses hosting and bandwidth are. For some businesses, like a very busy blog, hosting may be a very big part of their total costs (and a meaningful number in absolute terms).


I agree with you. I think this is a good time to (again) point to Pinboard's "The Five Stages of Hosting": http://blog.pinboard.in/2012/01/the_five_stages_of_hosting/

You should move up the hosting chain as your needs demand it, and not before. Yes, you'll pay more for cloud-hosted solutions, but if one of the racked servers has a power supply failure or a motherboard failure or bad RAM or a failed drive, that's their problem to deal with, not yours -- and at the moment, most of the cloud providers seem to be very good at dealing with it.

All of those things are a distraction, and worse still, if you don't have multiple layers of redundancy -- which will add to your costs -- then they can result in down time for your service, which is bad.

I, personally, would not move to a dedicated server until the costs for required resources became so egregious in the cloud that I could no longer afford not to use a dedicated server.


Well, firstly, the OP would be crazy to go with Linode.

Secondly, there is actually little difference between the amount of time spent managing a VPS versus a dedicated or colo box. Hardware failures aren't going to be your biggest concern. It's going to be services going down, databases misbehaving, OS misconfigurations, general scaling issues, the network dropping out, etc. Witness all of the AWS/Linode outages, almost all of which are due to DC power/networking.

And the fact is that my Mac Mini posts Unixbench and disk-test numbers equivalent to Linode instances and an AWS m1.xlarge. That gives you an idea of how much better performance and value you will get from a dedicated or colo box.


You need to qualify your first statement: "the OP would be crazy to go with Linode [because I disagree with how they handled a particular security-related incident]."

You should be as transparent as you expect other entities to be.


I completely agree, having co-located a few projects in my day as well as using dedicated HW through Rackspace.

One of your key points is what turns some people towards more managed services: "skills to support it". I'd add time to that factor as well.

If I had the support staff, I would co-locate in a second. Heck, one of our previous locations was next-door to a Level 3 co-location facility. It was pretty nice to be able to walk 10 feet and access your hardware.


Great point about the time factor. To be clear, I'm not really recommending this to a startup - in a startup your time is probably too valuable for this. For myself the work is just as enjoyable as programming, so it makes sense for my side-projects. I'm on the fence about whether I would actually use it in production. But it's been pretty hands-free since I initially set it up. It's a pretty small sample size, time-wise; so far it hasn't happened, but I could easily have an issue that blows out a whole weekend.


Practically none of the AWS/Heroku-type startups most publicized on HN are spending money hiring those skills.


There's a distinction to be made between AWS and Heroku. If you're hosting on AWS you still need to have someone who's able to maintain a server. With Heroku, you don't. So the advantage of AWS over colo is mainly scalability and the reduced need for expensive hardware. Depending on your app's load behavior, spikes can require that you keep 10 times as much hardware as you'd need on average; that's where AWS really shines. But in the end, the instances you buy at AWS are virtual servers that need a real admin.
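
To make the elasticity point concrete: scaling for a spike is a single API call against an existing Auto Scaling group. A minimal boto3 sketch, with made-up group name and fleet sizes:

    import boto3

    autoscaling = boto3.client("autoscaling")

    def scale_for_spike(spike):
        # Keep a small baseline fleet and jump to 10x during a spike,
        # instead of owning 10x the hardware year-round.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName="web-fleet",  # hypothetical group
            DesiredCapacity=20 if spike else 2,
            HonorCooldown=False,
        )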


It's a distinction without much of a difference when most are deploying via Rubber, Vagrant, Chef/Puppet/cfengine, etc. to maintain a policy of programmer-deployers. Of course this isn't a rule, but it's prevalent.


No, not true. Server maintenance is a job that requires that you track what services are deployed on a server and which ones need security updates, plus knowledge of how to correctly configure a firewall and lots of other stuff. Deployment via Rubber/Puppet/Chef etc. only changes how you get the needed packages and configuration onto the server. It doesn't magically tell you what configuration you need on a system.

Nitpick: Vagrant is not a deployment system. It's an awesome tool, but it falls back to puppet/chef for the actual configuration.


I know what server maintenance is.


Then I don't get your point. One of the major advantages of Heroku over raw AWS is that you don't need to do the server maintenance - it's all done for you. And yet you say that the distinction blurs when people use Puppet - which is not true.


Author here. Co-location sounds nice if you or one of your employees is always near the datacenter. This alone is reason enough for me not to consider it.

For $100 I could get a Xeon E3-1245 with 32GB ECC RAM and 2x 3TB HDs from Hetzner. One year later? Better system, same price. Upgrade costs? One to two days of work migrating.

I believe co-location has its place, but not if we are talking about "only" $100 and pretty simple needs. There is always a time factor that outweighs the gains.

"Hostage situations" and incompetent personal is something, I believe, that is hard to compare. Everyone near your system, no matter what or where, can cause problems.

What if they refuse to hand over your server until all payment problems are solved? Maybe this depends on the country, but getting the server back if they refuse to hand it over will likely consume more time than just paying whatever they want. Also, if you planned for this situation with some EC2 instances, just switching the domain over and settling the dispute should be doable.


A good compromise is renting a dedicated server. I usually prefer Linode, but I'm planning on taking this route for a current project that requires a disproportionately large amount of disk. I'm currently looking at gorillaservers.com and ubservers.com. (Advice welcome.)


SoftLayer.



I don't intend to sound harsh, but comparisons like these are absolutely useless. It's simply incorrect to make blanket statements on the pros and cons for each service without some context. The benefits and drawbacks are going to change depending on the characteristics, purpose, and needs of the application. This post makes a "one-size-fits-most" generalization, which makes it almost entirely useless.

What kind of application are we trying to deploy? What is your budget? What is the traffic level? Is performance a top priority? How many sysadmins do you have at your disposal, and how many are you willing to add? What kind of sensitive data are we storing/transmitting?

The answers to these questions drive the selection process, and end up altering the importance of each pro and con the author mentioned. Depending on your application, some pros and cons are eliminated, and new ones added.

Please please PLEASE, for the love of all things good, don't use an article like this as the sole basis for selecting providers. Think about what you need, ask questions, and craft your search to your purpose. Don't go pick method X because other people say it's great (for their purpose).


Author here. You are right that I should have given a better overview of what exactly I am doing (the introduction + requirements are a bit short).

Some of the pros and cons can change; others, like availability of features and support, don't. But you are right that there is no "one solution fits everyone" plan.


> Some of the pros and cons can change; others, like availability of features and support, don't.

But they do change. What features are important depends on your application. The amount of support you need, and from whom, changes as well.

What I'd love to see is a series of articles that helps walk people through the platform selection process from the perspective of a few sample applications/organizations. I feel that would be really constructive.


This issue is near the top of my list at the moment.

I currently spend $100/month on 4 Linodes (3 x 512MB, 1 x 1GB). I love Linode -- efficient support, and their London datacentre has been utterly rock-solid for me for several years -- but I'm beginning to think that, for me, it's the worst of both worlds.

On the one hand, I could move all 4 servers to a dedicated Hetzner box (EX6 or EX6S) running Xen, for a small setup fee and similar monthly cost, and get 4 or 8GB ECC memory on each one. This has a slightly higher sysadmin burden (5 servers to administer instead of 4, slightly higher risk of disk failure), but not that much. And the move is relatively painless, because I can directly transfer the disk images with dd over SSH.
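
(The transfer really is just piping the block device through SSH; a rough Python wrapper around the same dd pipeline, with placeholder device paths and hostname:)

    import subprocess

    SRC = "/dev/vg0/guest1"                # placeholder source volume
    DEST_HOST = "root@newbox.example.com"  # placeholder target host
    DEST = "/dev/vg0/guest1"               # placeholder target volume

    # Equivalent of: dd if=$SRC bs=4M | ssh $DEST_HOST "dd of=$DEST bs=4M"
    reader = subprocess.Popen(["dd", "if=" + SRC, "bs=4M"],
                              stdout=subprocess.PIPE)
    writer = subprocess.Popen(["ssh", DEST_HOST, "dd of=%s bs=4M" % DEST],
                              stdin=reader.stdout)
    reader.stdout.close()  # so the writer sees EOF when dd finishes
    writer.communicate()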

On the other, I could move the services to Heroku, probably pay a bit more, and essentially stop doing any sysadmin. This is superficially attractive... but moving a load of old things to Heroku isn't straightforward, and that probably rules this option out.


For non-production usage (e.g. staging / development / hobby sites), you can also look at Hetzner's "Server Bidding". It's where they (Dutch) auction off their slightly older servers, and there are some great deals there - currently, for example, there's a 12GB i7 with dual 1.5TB drives for €44 a month (including VAT), which I think is a great price.

The downside to Hetzner is that their support is fairly brutal when it's outside their normal parameters: obvious hardware failures are replaced reasonably quickly (e.g. within an hour) - anything non-obvious and you'll batter your head against their support, or you'll have to pay extra to change hardware if you can't prove that it's their problem.

I had an EX4S and after a month I terminated it because it kept failing on a minimal Ubuntu install and they refused to do anything without proof or money, but have been very happy with a cheap 8GB dual core Athlon server I got through the Server Bidding process.


I am quite fond of Heroku. It solves a lot of things for you and makes it more obvious that you should decouple things and use other services from the get-go.

The time you spend doing sysadmin things could be spent writing code or otherwise growing the business.

Also, as you grow you aren't going to need to hire a bunch of sysadmins just to build, manage, and deploy more infrastructure. Saving even one $60,000 a year employee buys you what, $5,000 a month in infrastructure? That's quite a bit on Heroku + various addons.


Sure. But something to remember is that a dedicated/colocation server is likely to be at minimum 10x faster than Heroku for the same price (considering AWS is around 5x slower for me). Maybe even approaching 20x.


"I could move all 4 servers to a dedicated Hetzner box "

How do you deal with hardware failure when you have a dedicated box at Hetzner? Specifically if/when something fails are there spares etc?


This is a really common oversight. The thing that posts like this fail to address is whether, in their theoretical application's case, downtime is a ballbreaker or a minor annoyance.

There is no one-size-fits-all or simple, formulaic solution to choosing cloud vs. bare metal. It's hugely dependent upon the organization and the application.


"How do you deal with hardware failure when you have a dedicated box at Hetzner? Specifically if/when something fails are there spares etc?"

Just to play devil's advocate, it's not unknown for large VPS providers to have major issues. This is something you need to consider regardless of whether you're using dedicated or VPS.


"This is something you need to consider regardless"

I'm not talking, though, about a separate issue, which is proper backup procedures. I mean: if you have a server racked at Hetzner (or elsewhere), what is your "plan B" when there is a hardware failure?

In the case of a server that I just racked somewhere, I will purchase a supply of the parts most likely to fail (fan, hard drive, board, power supply, etc.) so that the parts are available and can be replaced quickly. I know that some providers take care of this for you, so if the hardware fails (your hardware) you are back up and running quickly.

In the case of a VPS, by contrast, it can generally be assumed (it would be nice if there were a way to verify this, but I'm guessing there isn't) that they have taken care of and planned for hardware issues and spares and have a strategy. Of course if they haven't, you have a big problem. At least with your own hardware you can plan accordingly and ensure a better outcome.

There are other issues as well. If the colo place is close to you, do you keep the spare parts yourself or do you leave them with the facility? The answer depends on many factors, such as whether there is security where the parts are kept and who has access to them. If not, better to keep the spares and drive over with them yourself, even in the middle of the night.


You're talking about colo issues, not dedicated servers as originally mentioned (the Hetzner EX6 package). It's simpler and sometimes even cheaper to just rent the hardware and let the hosting company deal with all those issues, while of course taking care of redundancy yourself.


That's specifically why we're moving to Linode + AWS / Rackspace for backups. Hardware is something we're not willing to deal with. My cofounder is good at / likes to deal with server software, but hardware is just a dealbreaker for us. We'd rather view the extra spend as an insurance policy against bad hardware.


Hetzner is known to be very strict if one of your hosts misbehaves. Your network is cut off immediately, and getting the box online again takes quite some time. I just witnessed a case where that took 5 VMs on one host offline because one got hacked.


Whilst I don't like Linode because of their security issues, I definitely wouldn't choose them if I were based in London. Why wouldn't you go with Hetzner and use the massive price saving to move to SSDs?


As a Linode user, I would really appreciate any specific comments you can make about security issues you have observed.


As I'm looking at setting up a blog, website, and company, my inner nerd keeps nagging me: "You could build it and host it all yourself". But I know I don't need to.

I nearly majored in economics, and I've worked in a datacenter, so I know it's simply more efficient to depend on hosted services. Yet I still want to set up the whole stack. For me, it's a question of letting go and trusting the services that others host and others use. And it's foregoing the pride of "doing it all myself".

There simply isn't enough time to build everything from scratch -- if you build your own servers, you're sourcing HDDs and motherboards and power supplies and other components. If you make motherboards, you're sourcing copper and other raw materials. No single human can pull copper ore from the ground, pull silicon from sand, and move far enough up the stack to self-produce a tablet or PC. Currently this takes several thousand humans.


Don't forget hybrid solutions. I've done things in the past with:

a) co-location for the main DB servers (allows you to be very specific about hardware choices: for RAID cards and SSDs, not just the preferred manufacturer but the exact model) and backup machines (needed higher-density HDDs than could be supplied by the hosting provider's choice of dedicated servers)

b) some unmanaged dedicated servers for the core servers that don't rely upon specific hardware requirements (HTTP servers, memcached, Varnish). Also easier to slowly ramp up the number of these month on month.

c) virtual boxes spun up when required to handle spikes in the load and then canned when it goes quieter again

Even better if your hosting provider provides all 3 and can arrange a private VPN between the sets of hosts so you don't get billed for your 'internal' bandwidth.


...and these kinds of issues, which I've faced myself many times, are why I'm building Uptano. "Cloud" vs. "dedicated" vs. "co-located" are issues that were created by the artificial separation of a few good ideas.

There's no reason you shouldn't be able to have dedicated hardware performance, instant deployability, and on-demand usage-based billing, at costs close to, or better than, co-locating it yourself - as I'm working to prove with Uptano (https://uptano.com).

I really think server hosting is going to look very different in a few years. We've not come very far in the past 5 years.


In fairness, bare-metal clouds have existed for a while, e.g. baremetalcloud.com, stormondemand.com and a few others.

That said, your offering strikes a nice balance in terms of price/performance. What I'm missing is bigger profiles (64GB RAM, please?) and information on what CPUs and hardware you are using (blades?). "Compute units" are a terrible metric; give me a model number so I can look it up on cpubenchmark.net.


Thanks for the feedback. Bigger hardware profiles are definitely coming (some exciting profiles as well). CPUs vary a bit, but I added Passmark numbers. Clarified that the servers are 1U rack-mount machines.


In my experience, Linode is the best roll-your-own, you-are-on-your-own cloud provider. Obviously they are aimed at the savvy, but it's reliable, cheap with easy-to-estimate costs, simple to configure and expand, and has pretty good documentation; plus it doesn't have the learning curve or linguistic peculiarities of Amazon.

Regarding Rackspace, I've had good experience with them when working at mid-size and larger companies. Unfortunately I've had the opposite experience when functioning as a freelancer, working with startups, or as an entrepreneur myself. Rackspace didn't even respond to sales inquiries. Initially I figured this was a strangely repeated fluke, but other small companies and entrepreneurs I've spoken to have reported the exact same thing, where they send an inquiry to Rackspace or ask to speak with a sales engineer, and they get no response. Nothing, zip, nada. I find that very strange, and am speculating RS no longer wants to deal with the growing pains and frequent support requests of startups, but it certainly makes the decision to stick with Linode or EC2 much easier.

I don't have much experience with dedicated anymore, but have repeatedly heard good things about ServInt and SingleHop. Have also heard good things about Firehost as a managed cloud provider. I would love to hear others' opinions and experiences with any of the aforementioned companies though.


I really do not understand why people keep recommending Linode on here. Apart from their woeful and disgraceful security policies, some of their data centres, e.g. Fremont, are very unreliable.

I would recommend http://www.webhostingtalk.com as you will find much better options for your specific needs.


I defended you the last time this came up; this time I think you're being wholly unfair. A single incident -- severe though it was -- does not make "woeful and disgraceful security policies". And the only data center that they have that has occasional issues, as far as I know, is Fremont, and it's worth pointing out that Fremont has had less downtime than AWS this year.

I use their Dallas and Newark data centers currently. I have had zero downtime this year, which puts Linode at the head of the pack in terms of reliability.

So if you don't understand why people keep recommending Linode, it's because:

1. The prices are fair;

2. The service is as reliable as anything else out there, and in some cases, far more reliable;

3. The performance is good;

4. The support is blow-you-out-of-the-water fantastic;

5. The software (their management console) is pretty good;

6. There are very very few complaints overall, other than their handling of the Bitcoin incident.

I agree that they should have handled that incident differently, and that they still haven't taken proper care of it. However, you're being otherwise dishonest in your portrayal of Linode.


Linode also offers native IPv6 support, and they will route you a /64 on request.


Good comparison between the four. Rackspace has come a long way since we evaluated them a few years ago (they wanted something like 24 hours to bring up a new instance/server for us back then, so we ended up going with AWS).

Generally speaking, our biggest challenges with AWS have been storage (making TBs of web content securely available to various autoscaling clusters) and network I/O (especially across VPC/public internet boundaries).

We've actually found that AWS' pricing beats the costs of hosting internally, especially once you look beyond raw server cost and factor in power/cooling/manual labor/datacenter space/etc. And there are lots of different options for monitoring your usage to avoid surprises (we're looking into programmatic usage reports and New Relic for that, though we've been there a couple years now so we have a good idea what our bills are going to run each month).

As far as CDN, we get way better pricing from Level3 and Akamai than we could from CloudFront or Rackspace, but our traffic patterns are more 95th-percentile-friendly than most.


The issue with these comparisons is that they tend to be about VMs and storage only. A modern application requires a lot of moving pieces. Setting up and managing, say, a queue service has costs associated with it, which is where something like SQS becomes a serious value-add.
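
To illustrate the value-add, here is the entire "set up a queue service" step with SQS, as a boto3 sketch (queue name is made up) - versus installing, monitoring, and failing over a message broker yourself:

    import boto3

    sqs = boto3.resource("sqs")
    queue = sqs.create_queue(QueueName="work-items")  # hypothetical name

    # Producer side:
    queue.send_message(MessageBody="resize image 42")

    # Consumer side:
    for msg in queue.receive_messages(WaitTimeSeconds=5):
        print(msg.body)
        msg.delete()  # acknowledge only after processing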


I totally dislike comparisons of dedicated hosting versus cloud, especially when they don't factor in any of the costs of the support contract, hardware replacements, etc. involved in supporting physical hardware.

He also mentions that there is no way to see what your next bill will be in AWS. They offer an "Account Activity" link that shows your charges so far in the current month. That can be helpful when testing things.

I hope people that are new to setting up infrastructure and supporting it do not use comparisons like this to make the decision for them. There are far too many variables not discussed in this article for it to be very valuable to anyone.


> He also mentions that there is no way to see what your next bill will be in AWS. They offer an "Account Activity" link that shows your charges so far in the current month. That can be helpful when testing things.

Good to know, thank you. I was not sure about this feature, and when I asked one of the sales engineers at an AWS event, they told me that this is just not possible, especially if you want some detail.


A good balance I found is to have a dedicated server with a standby AMI in the cloud, and switch over using DNS.

What you pay for in the cloud is convenience and not performance.
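
The DNS switchover itself can be scripted. A rough sketch using boto3's Route 53 API (zone ID, hostname and IP are placeholders; the record's TTL has to be low for this to help):

    import boto3

    ZONE_ID = "Z123EXAMPLE"        # placeholder hosted zone
    STANDBY_IP = "203.0.113.10"    # placeholder standby instance IP

    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 60,  # keep the TTL low so the failover takes fast
                "ResourceRecords": [{"Value": STANDBY_IP}],
            },
        }]},
    )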


I've got a con for the Rackspace list, one that in some ways conflicts with one of its pros. Pricing is simple because there are so few choices for instance performance. I would love to have more choice in instance performance beyond memory-based tiers. I'd kill for a c1.medium analogue on Rackspace.

With that being said, I'm a loyal Rackspace customer and love their cloud offering.


There's a very simple formula for figuring out whether self-hosting or cloud hosting makes more sense.

Add up a month's worth of colocation fees, capital depreciation, and associated labor costs. If the total is less than your monthly cloud hosting bill, then it's time to self-host.

And if you run your own firm and haven't figured out how to calculate capital depreciation yet, it's time to learn. :)
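
As a sketch of that formula in Python, with made-up numbers (straight-line depreciation, i.e. purchase price spread evenly over the hardware's useful life):

    # Straight-line depreciation: spread the capital cost over the
    # hardware's useful life, then compare against the cloud bill.
    server_cost = 3000.0   # capital outlay, e.g. a pair of used servers
    life_months = 36       # depreciate over three years
    colo_fee = 100.0       # per month
    labor = 200.0          # estimated monthly sysadmin time, in dollars

    depreciation = server_cost / life_months   # ~$83/month
    self_host = colo_fee + depreciation + labor

    cloud_bill = 500.0     # current monthly cloud spend
    if self_host < cloud_bill:
        print("self-host: saves $%.0f/month" % (cloud_bill - self_host))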


This is a pretty thin "comparison", with dedicated servers given only a cursory mention and no analysis, and VPSes not covered at all.


How much cheaper is Rackspace vs. Amazon CloudFront? In our experience, Amazon also has more nodes that its CDN pushes files to, and our CDN cost with 100k+ views a month is still under $3/month for each solution.


You can always still use CloudFront from Rackspace. It isn't bound to Amazon EC2 VMs in any way. You get much better transfer speeds when you're interacting from within EC2, but even outside it's not too bad.


I'm not sure on pricing at the moment, but Rackspace Cloud Files uses Akamai for its CDN nodes. I'm also not sure how this directly compares to CloudFront, but I thought it was worth noting.


Author here. I should have elaborated on this a bit more. The pricing statement comes from the fact that I factored in invalidation requests, and they add up on my project.


> Of course there would be the point where I would need help from people who are specialized in database design / sharding / partitioning, etc - likely earlier than going the cloud hosting route

Where does this misconception come from? That is the exact opposite of reality. With the "cloud" route, you are limited to absurdly inadequate servers, which is a large part of what drove the "NoSQL" fad; you need to shard if you are on EC2 because they offered nothing with reasonable I/O. Even now they have an SSD option, but it is a single crappy SSD with barely any RAM. With the dedicated route, you can get a server with 512GB of RAM and a 24-SSD array and not have to worry about sharding until you are in the top 50 sites on the web.


Author here. From what I understand, AWS tries to address the performance problem with RDS - the marketing statement is "this is built to solve this problem". Do you have any experience with RDS? Still the same problem?

It's true that this problem does not exist if you kill it with hardware in the first place. But the road from evaluating an idea, to gaining traction, to hiring people and buying hardware is still a long one.


Unfortunately, RDS is EBS-backed, so it does nothing to solve the issue. And you are stuck with Oracle on top of that.


This basically rules out the last provider I knew of with a hosted DB option. Looks like DBs of a certain size are still a "do it yourself" area :/ Thanks for the info.





