Firstly, get your own IPs from your local RIR. Have your co-lo provider announce your routes, but they will be YOUR IPs. If your co-lo provider sucks, you can move and keep your IP space. (This is vital for email, but I recommend it for everyone.)
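If you want to sanity-check that your space is actually being announced once the provider sets it up, something like this works; note the RIPEstat endpoint and the response field names here are my assumptions, so treat it as a sketch rather than gospel:

```python
# Hypothetical sketch: check whether your prefix is visible in the global
# routing table via RIPEstat's public data API. The endpoint name and the
# exact response fields are assumptions -- check the RIPEstat docs.
import json
import urllib.request

PREFIX = "203.0.113.0/24"  # placeholder: your range from the RIR

url = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp).get("data", {})

# Field names below are guesses based on the documented response shape.
print("announced:", data.get("announced"))
print("origin ASNs:", [o.get("origin") for o in data.get("origins", [])])
```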
Secondly, buy an out-of-band management card with your server (iDRAC for Dell, iLO for HP, etc). These cost fairly little and will save you hours of access / remote hands. They will pay for themselves; you can even boot an ISO from your laptop over the internet. Get your co-lo provider to give you an extra uplink for this and give it a separate IP (use one of the provider's ranges).
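As a rough illustration of what the OOB card gets you day to day (assuming the BMC speaks IPMI over LAN and you have ipmitool installed; the host and credentials below are obviously placeholders):

```python
# Minimal sketch: power control and sensor checks from anywhere,
# no remote hands needed. Wraps ipmitool over the BMC's LAN interface.
import subprocess

BMC_HOST = "10.0.0.50"   # the separate uplink/IP your provider gave the card
BMC_USER = "admin"
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "type", "temperature"))   # board/ambient temperature sensors
# ipmi("chassis", "power", "cycle")         # hard reboot a hung box
```

Booting an ISO over the network is done through the vendor's own virtual-media console (iDRAC/iLO web UI) rather than plain IPMI, but the same dedicated uplink carries it.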
Thirdly, consider Mission Critical support on the servers from a solid vendor (in Australia I consider the enterprise vendors to be Dell, HP, IBM and Acer, and of those I will only use Dell or HP). 4-hour response means you don't need as much spare hardware, and you can have things fixed FAST. I have only lost 2 disks in a rack of servers over 4 years. Both had a replacement in place within 4 hours (once at 1am).
Fourthly, look at a good virtualization solution. We initially went oVirt (the open source version of Red Hat Enterprise Virtualization) but ended up migrating to VMware. VMware Essentials Plus costs us $15K for 3 years at extortionate Australian prices and is worth every cent. It provides backup (vSphere Data Protection), failover, Virtual SAN, live migration and a heap of useful features that save huge amounts of time.
Finally, if you're going to grow, consider getting a rack (or half / a third of a rack). This will likely give you unescorted access to the data centre, and is often not that much more than a few RU of servers (depending on the DC and racking availability).
In terms of picking a good colo, find one that has high security ratings in an area that doesn't have fluctuating power. If they're in Florida, make sure they're Cat 5 Hurricane rated. If they just happen to also have an entire floor dedicated to government hardware, or are on the same power grid as a hospital, and have buried fiber/power lines instead of exposed, even better!
That said, ARIN and others might be different. For example, APNIC will only allow up to a final /22 allocation at the moment, whereas ARIN just allocated a much larger block despite being on final allocation rationing.
Why is this "vital for email"?
I agree that obviously if you can you want your own space.
But if you are planning your move you can handle the new IP space by DNS TTL, which is what we have done since the 90s for moves where we didn't have our own IP block. And yes, it is a huge pain to be avoided, so I'm not disagreeing, just wondering about the "vital for email".
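For what it's worth, the TTL dance looks roughly like this (sketch assumes dnspython and placeholder hostnames; the actual TTL change happens at your DNS provider):

```python
# Rough sketch of lowering TTLs ahead of a move so the old addresses
# age out of resolver caches quickly on cutover day.
import dns.resolver  # pip install dnspython

for name in ["www.example.com", "mail.example.com"]:   # placeholder hostnames
    answer = dns.resolver.resolve(name, "A")
    print(name, [r.address for r in answer], "TTL:", answer.rrset.ttl)
    # If the TTL is still e.g. 86400, lower it to 300 well ahead of the move,
    # wait out the old TTL, then repoint the records and raise it again.
```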
Spammers have gotten good, and anti-spam efforts have had to become more restrictive in response, so reliably getting email out of your network and into another inbox is anything but trivial these days. Sender reputation is tied to your IP addresses: if you own the space, you take that reputation with you when you move, whereas fresh IPs start with no history and often land in spam folders until they've been warmed up.
For smaller projects, the convenience of the cloud is absolutely worth the price. For a larger build - say over $10k a month in infrastructure cost - the cloud starts to make less sense economically, but 'colocating your first server' is not a rite of passage anymore - it's unnecessary and a huge waste of time.
All of the functionality/services you have to provision yourself in colo - redundancy, backup, remote hands, environmental monitoring, hardware maintenance - is just not worth figuring out until there are substantial cost savings to realize.
You're not going to be able to troubleshoot or optimize down to the hardware level in the cloud. If some cloud service doesn't work or is not available, all you can really do is wait and hope they get it working again, while if you manage your own systems you might have a chance of fixing it, or at least finding out what happened and working to prevent it from happening again. Some applications just won't work in the cloud, or exhibit mysterious bugs, failures, or bizarre behavior.
Security and data ownership are pretty much out of your hands when your servers are in the cloud. You can only hope and pray your cloud provider is doing a good job of securing your data and isn't stealing or selling it themselves. You generally have zero visibility into how security is handled by your cloud provider or whether a security compromise has taken place.
And then there's the issue of vendor lock-in, which becomes more and more likely the more unique cloud features you use.
Of course, for maximum control, you wouldn't rely on a colo either, and just host your servers in your own server room(s).
So you don't need to monitor temperature sensors any more with a VM, but most of the above are still costs with cloud - flaky RAM, redundancy, backups, monitoring, etc. There are also the things you previously didn't have to worry about - crappy resource isolation turning your scratch disks into 2kb/sec joys, total ineffectiveness of the CPU cache, managing a now-essential network fabric to tie pieces of your app together where previously it all fit on 2 master/slave machines, etc.
Of course if your application isn't simply some stock PHP/MySQL application, and you want to really "embrace cloud", then the time you saved fighting a subset of hardware problems is replaced by a fixed development cost integrating with someone else's higher tier APIs (S3, Dynamo, etc) you can then never escape even if you wanted to.
I've never seen any realistic numbers comparing the use of traditional hosting facilities, say, providing managed servers, to the new generation VM stuff. Any material I've seen has been sponsored crap involving some multinational.
My own experience is similar to yours - hosting your own hardware is a pain in the ass. However, there is middle ground: there are many colos that will happily provide managed hardware, and on a performance-per-pound basis this still tends to be far cheaper than the equivalent in VMs, and increasingly they're coming with similar APIs to order/replace machines.
However, if you anticipate growth, or need to be able to spin up a test server, then shut it down a day later, etc. then you are better off paying premium for the cloud. Sure, there are economies of scale at play here: AWS has so many servers, they are not paying a person to log into every one of them every so often to run updates, etc. However, make no mistake: everything you would have to do with a server, Amazon has to do too. In fact, they have to do much more to keep all of them running at once. That cost will be passed onto you.
Even with all of that, it's cheaper if you want to be able to spin something up, then shut it down. Another great example is the additional services provided by the likes of AWS: you can get things like load balancers, cache servers, database servers, orchestration services, etc. You can do all of this yourself, but at some point it's cheaper to just pay for something like ELB than to learn how to do it yourself and spend the hours to set it up. Human time is more expensive than that.
Lastly, if you just need a really small machine, there is no beating the cloud. You simply cannot get a dedicated machine for $5/month, and you likely never will.
I found OVH's dedicated server offerings to be so cheap that there was no point in using shared hardware for the flexibility. Then again, I'm not running my entire business on these boxes... but I don't think I'd have a major issue if I wanted to!
However, I disagree about colo'ing not being economical. If you set up a physical box in a colo with a hypervisor (KVM, Xen), then the density you can get out of 1U is amazing, and makes the entire thing much more affordable.
Take this site for example: https://www.ubiquityhosting.com/cloud
16GB RAM, 8 cores for $128 a month. Considering most 2U colo spots I've seen hover around $150 a month, at first glance the colo looks like the worse deal. However, the VPS for $128 a month is a single server. With your $150 a month colo and Xen, you can pack 3-7 VMs in the same space, making the cost per (virtual) server much lower.
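Back-of-the-envelope, using the numbers above (ignoring hardware purchase and power, which you'd want to amortise in):

```python
# Cost-per-VM comparison using the figures from the post above.
vps_monthly = 128.0          # single 16GB/8-core VPS
colo_monthly = 150.0         # 2U colo spot
vms_per_box = range(3, 8)    # 3-7 guests under Xen/KVM, per the estimate above

for n in vms_per_box:
    print(f"{n} VMs: ${colo_monthly / n:.2f} per VM/month vs ${vps_monthly:.2f} for the VPS")
# Hardware purchase and power are ignored here; amortise those in before deciding.
```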
Agree. If there was a question on some test that said "I find colo'ing a new server quite fun and enjoyable" I would give that a 9 or 10 for sure. Have always loved the sound of the machine room. (This dates back to the days of the computer center at school with the tape drives and DEC terminals.)
For sure once you reach above the $10k/month level there's a very good chance that it will make sense to colo. You can fit a LOT of hardware in a single full rack, and they're like $750/month plus bandwidth and power costs (so around $1200-1700 all said and done per month).
Hurricane Electric data center in Fremont, CA, always has specials running.
The opposite is true for dedicated, since they have to compete with US-based players, and OVH got into the East Coast market big time. Prices are dropping.
It isn't just a "good middle ground". It is an amazing middle ground that allows you to benefit from AWS services (simply choose providers close to AWS data centres) whilst getting substantial performance/cost benefits.
I would imagine at minimum you are getting 10x the performance compared to a typical VPS.
I've seen cases where there are charges (as only one example) of $10 per month to be able to do a remote reboot, which of course can be done on any box with IPMI at no charge if it's your box and that's the way you bought it. So my point is that if you don't know what you are doing, you can easily be lured into thinking you are paying a reasonable price while actually paying more, simply because you aren't doing a true comparison of features and benefits.
I was blown away by the fact that I can sit there and watch it reboot over screen sharing from my iPad. I treat it as a cloud desktop (runs the latest vanilla Ubuntu) and so of course it was easy to get apache and PHP and Ruby and a whole web server environment up. I do all my work on it, as well as my play. I use Plex to stream myself media, and OwnCloud and other tools to replace Dropbox and even deploy sites.
I want to sell people on the idea that it's easy to have a cloud desktop you can access from anywhere, that can also be a web server (not selling web servers that can also have a desktop). I want to sell people on the idea that with freely available software, we can each have a private cloud with just our data.
I'm not quite sure how to get started, and I'm not trying to make a killing with profit, I just don't see people trying to make it simpler for the average Joe to have a cloud desktop and not need to pay to use shared cloud services which then become huge targets for data breaches.
I don't prefer building powerful hardware. I prefer reliable and cheap to build, and maxing out the 4U of my rack. I like to think of my servers as "life support for an internet-connected hard drive". My CPUs are fanless Intel Atoms with 2GB of RAM, and I get Mini-ITX motherboards that can be powered by a brick DC power supply. Ultimately, I'll move to flash drives so I'll have no moving parts in my server, but I'm waiting for the price to come down and for reliability to match spinning media.
For the boxes, I have a 1U enclosure holding two Mini-ITX motherboards. Then I have a 1U switch, it's nothing special, cheapest gigabit switch that got good reviews. And then for my load balancing and SSL, I got a Kemp Technologies hardware load balancer. It's got a little ASIC in it that offloads the SSL from the servers. I think I can sustain 200 concurrent SSL requests, which is fine for me right now while I develop my app.
In my 1U holding the servers, each server has a two disk software RAID-1 setup. I can't physically get to the colo but once or twice a year max, so I need to be able to withstand a drive failure here or there.
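Because I can't just drive over, I keep an eye on the arrays remotely; a minimal check along these lines (Linux software RAID, run from cron, alerting left as a stub) does the job:

```python
# Alert on a degraded md array by reading /proc/mdstat (Linux software RAID).
import re
import sys

with open("/proc/mdstat") as f:
    mdstat = f.read()

# A healthy two-disk RAID-1 shows "[UU]"; an underscore means a missing member.
degraded = re.findall(r"\[[U_]*_[U_]*\]", mdstat)
if degraded:
    print("DEGRADED array(s) found:", degraded, file=sys.stderr)
    sys.exit(1)  # hook your notification of choice here
print("all md arrays healthy")
```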
I think I get a drive failure about once every 2 years, and one RAM stick failure so far. I had one motherboard failure, but it was a VIA Technologies chip/board, and since then I have switched fully to the Intel-produced slim Mini-ITX Atom motherboards, and those things are rock solid - just wish I could get 8GB of RAM for them for more memcached goodness. ;)
I don't even know if my dual Mini-ITX server enclosure is sold anymore; it's kind of freakish, especially with heat. Since I built this box I have been investigating "shorty" or "short depth" 1U enclosures. I wanted to be able to pack in the servers, and you're allowed to bolt servers to both sides of the rack. So by transferring my servers to shorty enclosures, I should be able to spread the heat out a little better and max out my space. I think I have space right now for 3 more shorty boxes if I need to expand my cluster.
Edit: for physical access, you just call the support line and either ask for remote hands/smart hands if it's something simple like rebooting a box for you, or you can schedule a time to come in. I've never been turned away from coming in the same day and when I want to. I am usually alone in the server room when I work. With all the cooling equipment and servers, it is very loud in there. Sometimes I just wear earplugs to dampen the noise. They've never charged me for smart hands, but I don't ask very often, once a year maybe.
That said, there are a couple of companies on Webhostingtalk that have good prices for hobbyist colo if you look in their colocation forums and search for "Seattle." Opus is nice ($129 for 4U and 3A of power with 400GB of transit) and there is a company in Seattle--their name escapes me but I've seen them on WHT--that is $35 for 1U and 1A with 500GB of transit.
So, yes, I think you can do better especially if you just want space for a medium-usage 1U.
Another power gotcha comes with the redundant circuits provided. For example if you are allocated 15Amps total that usually means total across both circuits not 15Amps per circuit.
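To make that concrete, here's a rough budget sketch assuming 208V feeds and the usual 80% continuous-load derating; the point is that with A+B circuits you really want either feed to be able to carry everything on its own:

```python
# Quick power-budget sketch for the gotcha above. Assumptions: 208V circuits,
# 80% continuous-load derating, and A+B feeds where either side must be able
# to carry the whole load if the other drops.
volts = 208
allocated_amps = 15          # total across both circuits, per the contract
usable_amps = allocated_amps * 0.8

print(f"continuous budget: {usable_amps:.1f} A -> {usable_amps * volts:.0f} W")
# With dual-corded servers split across A and B, keep each feed under half of
# that so a single-feed failure doesn't trip the surviving breaker.
print(f"per-feed target:   {usable_amps / 2:.1f} A -> {usable_amps / 2 * volts:.0f} W")
```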
IP addresses are another. AWS gives you 5 before you have to start applying for them. We do email marketing, so we need a lot of IPs (we give dedicated IPs to each customer). With APNIC (Asia Pacific) we have a /22 and a /23 range (1536 IPs) for ~$2900 per year.
Legal reasons are a whopper as well: Australia has some tight laws around privacy and liability if an overseas partner leaks your data.
We still host a lot of things on cloud providers (Rackspace & AWS, in both Sydney and the US), but at the end of the day our VMs outperform cloud VMs and are considerably cheaper.
If you can build a redundant system, and don't need the extra 99.999999s of hardware resilience but can live with 99.995% network uptime, dedicated is great.
I have had 4x 1 minute dropouts in the past 18 months from our hosting vendor upgrading their routers always at midnight localtime. They provide diverse data paths and we have not lost connectivity despite major outages. If I thought I needed to I could get a separate connection from one of about 20 providers in the local DC I am located in.
I really believe many people put too much weight on the complexity of managing physical hardware, when you're already doing 90% of the administration anyway on a cloud server. Yes, it is capital intensive, but you will likely make the money back in your first 12-18 months.
I colocate 5x 4U servers with 24 and 36 drive bays, 128GB RAM, and SSDs squished in for the OS, for a total usable space for my project of 375 TB (multiple RAID 5 volumes of 6 disks each).
Power and 2-3 Gbit of bandwidth are included,
as well as remote hands to check up on a server (for example IPMI sometimes does not work) or replace drives (paying extra for enterprise-grade drives will save you a lot of hassle in the long term!)
for... €1500 / month
now go and calculate the cost of that on AWS
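To take the bait myself, a very rough version with assumed, illustrative AWS list prices (these change and vary by region, so check the current pricing pages):

```python
# Rough comparison of the storage alone, using an assumed S3 Standard rate.
usable_tb = 375
colo_eur_month = 1500

s3_usd_per_gb_month = 0.023      # assumed illustrative S3 Standard rate
s3_monthly = usable_tb * 1024 * s3_usd_per_gb_month
print(f"S3 Standard storage alone: ~${s3_monthly:,.0f}/month vs ~EUR {colo_eur_month}/month colo")
# Compute, requests, and egress would come on top; conversely the colo figure
# hides hardware depreciation and your own admin time.
```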
I tend to favour dedicated servers which I own over VPS but for small businesses with extremely constrained budgets ("prove we can make money before we invest in hardware") and startups, flexible virtual servers can be a great way to ramp up in the beginning.
Anyway, it's a one-off cost; the accountant can do all sorts of magic with this.
I've been in business for 7 years and hope to remain around for at least as long again.
I have already saved money compared to renting dedicated servers before. AWS etc. were never an option; the sums simply never work out.
AWS might be great at first when you are starting out, but the costs can cripple a business, especially if you don't have other people's venture capital to burn.
edit: when bandwidth costs are factored in, the difference between my current setup and Amazon is an order of magnitude; at the time of this post I'm pushing 2200 Mbit/s outgoing, 800 incoming.
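Rough egress math on that outgoing figure, again using an assumed illustrative per-GB rate rather than anything quoted:

```python
# Rough monthly egress cost for a sustained 2200 Mbit/s outbound average.
mbit_out = 2200
gb_per_month = mbit_out / 8 / 1000 * 3600 * 24 * 30   # Mbit/s -> GB/month
usd_per_gb_egress = 0.09                               # assumed blended rate

print(f"~{gb_per_month:,.0f} GB/month out -> "
      f"~${gb_per_month * usd_per_gb_egress:,.0f}/month in egress alone")
```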
edit2: my only regret is not colocating earlier; I have spent well into the upper 6 figures on renting over all the years :( AWS etc. weren't around when I was starting off either.
you get no tax relief for opex.
It's a shallow apples vs. oranges comparison, but look at the price of colocating 1/3 of a rack at Hetzner (119 EUR). If you already own the hardware, have no need for any advanced AWS feature, have the skills to manage it, etc., sometimes it makes sense.
For personal and/or SMB needs, it's usually not worth the trouble (especially regarding the skills to maintain it).
So you're right, it's becoming more and more a niche.
You could also do a hybrid approach - colo for core infrastructure and cloud to scale out - but that's more difficult to set up.
At a certain scale of colo or with very heavy security requirements you'll find it's cheaper to have your own datacenter.
AWS had the fastest network, but colocation allowed for cheaper CPU and RAM upgrades. As it turns out, my RAM/CPU needs plateaued at 4x my previous server, so I have twice as much as I need. AWS got steadily cheaper over that same time and the servers got faster.
Upgrading to a new server is pretty disruptive. It would probably be easier to do slowly over time rather than big jumps.
The only thing I didn't see a mention of is DC power, whereas the out-of-the-box power supplies on most OEM equipment are for AC. Most server power supplies nowadays should be able to handle 240V, 208V, and 120V AC on the same unit; when you go DC you want to consider buying a separate AC power supply for setting up the server in your office (unless you drop-ship it to the colo).
Make sure you get a very efficient power supply too, because while you can get the most efficient or power miserly server on the planet, an inefficient power supply will increase the draw significantly. You also want to right-size the power supply, because drawing too little power lowers the efficiency (there's an efficiency curve available for most PSUs that are rated).
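A quick illustration of how much the PSU alone can move the needle (the efficiency numbers are made-up but typical: efficiency peaks around mid-load and sags at low load):

```python
# Wall draw = DC load / efficiency; the difference is pure waste heat
# you also pay to cool.
dc_load_watts = 250   # what the server actually consumes

for label, efficiency in [("cheap PSU at light load", 0.78),
                          ("80 PLUS Gold near its sweet spot", 0.92)]:
    wall_draw = dc_load_watts / efficiency
    print(f"{label}: {wall_draw:.0f} W at the wall "
          f"({wall_draw - dc_load_watts:.0f} W lost as heat)")
```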
Colocation by no means indicates you have multiple servers that need housing nowadays.