My honest question is why there is this odd loyalty to virtual environments in this community. I realize it may be boring, but you guys are passing up insane savings that can be had by using colocation. All cloud providers are very expensive when you actually do the math and you need more than 10 servers.

Our example may be a bit extreme, but we are just building out a new datacenter at a colocation facility and will recover the entire upfront investment (about $150k; we have the cash, so we don't need leasing) in a bit over half a year.
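
For anyone checking the arithmetic, the implied monthly saving falls straight out of those two figures (reading "a bit over half a year" as roughly seven months is my assumption):

    upfront        = 150000           # one-time build-out cost
    payback_months = 7                # "a bit over half a year"
    print(upfront / payback_months)   # => ~21,400 saved per month vs. cloud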


Dedicated (rented) hardware offers similar performance improvements and cost reductions without the extreme upfront cost (though it obviously costs more over its lifetime as a result). But these days it really seems like a blind spot. Maybe it's just that I never completely jumped on the cloud bandwagon (or was running servers long before it got rolling), but I really haven't found a generic use case for cloud hosting. Planning infrastructure upgrades isn't rocket science; you'll have more than 10 minutes' notice to get a new machine. There are certainly many specific use cases for cloud hosting, but as a generic hosting tier I find it incredibly overused, and companies waste money and put up with unreliable performance (EBS I/O, anyone?) as a result.
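
If you want to see that EBS variance for yourself, one crude way is to time repeated sequential writes and watch the spread between runs. Everything below (mount point, sizes, run count) is a placeholder to adjust:

    # time a few sequential-write runs and print the throughput of each
    import os, time

    PATH  = "/mnt/ebs/iotest.bin"    # hypothetical EBS mount point
    BLOCK = b"\0" * (1 << 20)        # 1 MiB per write
    RUNS, BLOCKS = 5, 256            # 5 runs of 256 MiB each

    for run in range(RUNS):
        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(BLOCKS):
                f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())     # force the data out to the device
        print("run %d: %.1f MB/s" % (run, BLOCKS / (time.time() - start)))
    os.remove(PATH)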


GoGrid lets you mix and match dedicated and cloud servers, which gives you the performance and value of a dedicated baseline plus the flexibility of the cloud. I find it unparalleled.


SoftLayer (my preferred dedicated host for years) does this as well. I agree that it's currently impossible to beat that combination: bare-metal performance and cost efficiency, plus the ability to spin up cloud servers within your own VLAN.


Rackspace just launched something called Cloud Connect, which I believe does the same thing.


I do colocation as well, but if you don't have someone there to babysit your server, it can sometimes be a nightmare. It's nice not to have to worry about a disk failing and then needing someone on-site to swap out one of your RAID spares while you hope he doesn't pull the wrong drive bay.

With colocation companies, everyone is sending their own custom-built servers to the data center, so nobody on-site knows the exact configuration of each server. If a drive fails or a memory chip goes bad, you need someone there who knows the layout of your hardware config to debug the issue. I have a bunch of servers at RippleWeb, but I'm sure the guy there has to deal with 1U servers from Dell, Tyan, Supermicro, and all kinds of other manufacturers.


Agreed. We do colocation for about 50 servers. Up-front costs are higher and long-term costs are MUCH lower, but man, can it be a pain in the ass sometimes when there's a problem. :)


Can you provide some horror stories?


Here's one from a friend of mine: a hard drive failed in the middle of a RAID array. An OCD worker there decided that instead of just swapping out the dead drive, he was going to move all the drives up one slot and put the new drive at the bottom of the array. So instead of one drive failure, the disk controller now saw three drive failures, which ultimately led to data loss and cost him his job.


There's a big difference between the cost to install and operate and the cost to scale. If your hardware utilization characteristics are easy to calibrate for and your capacity requirements are predictably steady, you can certainly optimize your own racks at a colo. However, hardware changes quickly, businesses typically change even faster, and capacity requirements are often highly variable. Unless those conditions are met, it's insanely risky to sign a contract with a colo. Let us know how that one-year investment recovery works out for you.


We also use colocation. I'm not a developer on the team that uses the infrastructure, but we use OnRamp in Austin, TX (http://www.onr.com/), and I have rarely heard of any issues. Most delays come from international CDN requests, which I believe we have also outsourced to speed things up.


I don't think you can compare everyone's case equally, though. In my case it would be a lot more expensive to go with colocation, even ignoring the upfront costs.

Right now I pay $4K a month for ~93GB of RAM available at any given hour. Averaging 2GB per server for web, and a bit more for caches and application servers, we can fit a lot of servers in our 93GB.
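
Rough division on those numbers (ignoring that caches and app servers run heavier, so the real count is somewhat lower):

    monthly_cost   = 4000.0
    total_ram_gb   = 93
    avg_per_server = 2                       # web-tier average
    print(total_ram_gb // avg_per_server)    # => 46 servers at that average
    print(monthly_cost / total_ram_gb)       # => ~$43 per GB of RAM per month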

Could I get a couple of beefy colocated servers, set up Xen, and run my own cloud? Sure, but then I'd have to pay someone to run that cloud, and I'd have to manage and worry about the hardware; those costs are a lot higher for me than a small monthly markup.

There are a few things I don't host in the cloud (master DB and primary load balancers), but even for those I pay a bit more to have the hardware managed.


I have to say, for every provider I've looked at, the cost of disk space on virtual hosts seems insanely excessive.


Can you provide some details on your infrastructure?

