I got in touch late on Sunday night, discussed the problem with a couple of their support staff, and by midday on Monday, all was fixed, with a new hard drive in place. Really quite incredible service, especially considering the price.
Their customer support is terrific.
For comparison see: http://www.hetzner.de/en/hosting/produktmatrix/rootserver-pr...
Just make sure you don't have anyone on the server (client, etc) running SSH attacks on their core routers or UDP floods, because that becomes a nightmare!
1. Uses desktop grade hardware (i.e. no ECC, single socket, limited networking, etc)
2. Is located in Germany (i.e. high latency for your US user base).
Don't get me wrong, the pricing Hetzner provides is unbelievable.
I just wish there were a US-based hosting provider that used server-grade components, even at 2x Hetzner's price, because it'd still be a steal.
(For those of you unaware of their pricing, you can get a Xeon E3 with 32 GB of RAM for just 79 euros/mo.)
To be fair to them, they do offer servers with ECC for a (slightly) higher price:
tl;dr, yes, ECC does matter— a lot more than you'd guess!
ECC exists to prevent data corruption so that you don't have to restart your server.
Since I imagine you restart your iMac nearly daily, not having ECC isn't a problem.
Question (since you're the OP): how do you deal with the huge latency to Germany from the USA?
This is more of a physics issue ("speed of light") than anything else.
In some cases the location is actually an asset. Surprisingly, not everyone lives within the US.
More helpfully, the partial solution is to use either the Rackspace or HPCloud CDNs. Both of them are pretty cheap and both use Akamai, which gives you PoPs everywhere that matters. In my case (Australia), Amazon doesn't have a PoP for CloudFront nearby, so using Amazon means I'm stuck with either the West Coast US or (even worse for routing reasons) Singapore.
If you are big enough then you might be able to find yourself a better CDN deal, but most of the cheaper ones don't have a PoP down here.
1. Making fewer HTTP requests by following known best practices (for example, the YSlow recommendations); a toy sketch follows this list
2. When you start growing, moving static assets to a CDN, etc.
3. When you grow even more, move servers to the US :)
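To make the first point concrete, here's a toy Python sketch of the "fewer requests" idea: concatenating several CSS files into one bundle so the page fetches a single asset instead of three. The file names are hypothetical.

    # Toy sketch: bundle several CSS files into one so a page makes
    # fewer HTTP requests. File names are hypothetical.
    from pathlib import Path

    css_files = ["reset.css", "layout.css", "theme.css"]  # hypothetical assets
    bundle = "\n".join(Path(name).read_text() for name in css_files)
    Path("bundle.css").write_text(bundle)  # serve this single file instead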
If you have a cage, it's the datacenter (peering, power, environment, physical security.)
Do you need to care about these things? Probably not. (But maybe you do, and you happen to care less about price, or database write latency/throughput/predictability, or...) Pick whatever set of tradeoffs works for you.
I'm not talking about Hetzner specifically, but generally.
Setting up link failover between switches (you can't bond for 2 Gbps, IIRC, if you are split across two different switches) is sort of kludgy, too.
One's best bet is to just have multiple locations with low latency between them, and then just do it all in software, and leave the n+x redundancy to BGP routes. It's a lot cheaper and works just as well.
Note that this is how the Big Boys do it, as well - but it works for two machines as easily as it does two million.
One way involves Cisco stacking switches, which let you run 802.3ad across two independent 'stacked' switches. You can also use an external PSU to provide redundant power to each switch (giving each switch redundant PSUs, and making each switch itself redundant).
The second involves the Linux bonding driver in balance-rr mode. This has a slight bug with the bridge driver in that it sometimes won't forward ARP packets, but if you're just using the box as a web head or whatever, you don't really care about that.
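If you go the bonding-driver route, a quick way to sanity-check it is to read /proc/net/bonding/<iface>, which the driver exposes. A minimal Python sketch, assuming the bond interface is named bond0:

    # Minimal sketch: report a Linux bond's mode and each slave's link state
    # by parsing /proc/net/bonding/<iface> (interface name bond0 is assumed).
    from pathlib import Path

    def bond_status(iface="bond0"):
        text = Path("/proc/net/bonding/" + iface).read_text()
        slave = None
        for line in text.splitlines():
            if line.startswith("Bonding Mode:"):
                print(line)  # e.g. "Bonding Mode: load balancing (round-robin)"
            elif line.startswith("Slave Interface:"):
                slave = line.split(":", 1)[1].strip()
            elif line.startswith("MII Status:") and slave:
                print(slave + ": " + line.split(":", 1)[1].strip())
                slave = None

    bond_status()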
The 'big boys' do use iBGP etc. internally, but that's for a different reason: at large scale you can't buy a switch with a large enough MAC table (they run out of CAM), so you have routers at the top of your rack that then interlink. You can still connect your routers with redundant switches easily enough with VLANs and such (think router-on-a-stick).
For those not able to afford a full-time sysadmin, that can be a significant expense and bring in unnecessary risk.
Cost: Cheaper, because you're doing the work yourself and only paying for a VPS or two.
Time: A weekend.
If you're running a start-up and you can't hire a sysadmin, yes, managed hosting is a good idea and will net you a reliable system for a decent price. But if you're spinning up test/hobby projects which aren't mission-critical, take the time to build your own stack/servers. It takes a minimal amount of time and energy and will give you valuable experience you can use for the rest of your career.
Sysadmin is something that you can learn by doing, and any competent software developer should be able to pick up enough knowledge to manage the kind of simple deployment that a freshly minted startup needs.
If I was moving a start-up from Heroku to self-managed hosting (which could even just be Linode VMs!) I'd include time to train them on what I was doing, and why, and I'd probably stay on retainer for emergency support.
Personally, I'm also more than happy to chat to local start-ups informally and share my experience. (And if anyone in Scotland, particularly the Edinburgh area, wants to take me up on that, my email's in my profile blurb.)
There are certainly many many more which never had a failure.
The only complaint I have is that the relay they have between France and the US is nearly always congested during US prime time. Because of this, download speeds from my server are really slow around 8 PM EST. Otherwise it's great.
Development box and not in EU? Do it! Production box and not in EU? Maybe go with Hetzner or someone domestic.
I've been having trouble with my existing UK dedicated server provider of late and am looking to move. I could get a lot more bang for my buck with OVH/kimsufi, but wouldn't want to move if they were more unreliable than what I have now.
Also: they are HUGE and growing fast. When I started with them in 2005, they would ONLY speak French, you could not ask for support in another language, you could not pay any way other than via a French bank, and so forth. And machines and the network would be down often. They've improved a lot, so my guess is they will improve more over time, but they are a monster in hosting land.
Thanks for replying.
I went through 5 different providers before settling on Hetzner.
Companies like Amazon and Google no doubt spend a lot of time thinking about the physical locations of servers and how failures might affect them in terms of uptime and data loss, but for your average small application I think it's OK to accept very small risks that may result in downtime, rather than spending massive effort engineering around them.
I also appreciate that services like Heroku handle stuff like this for you, but what I'd be really interested to see is a comparison of the uptime of your average dedicated machines at your average datacenter against a service like Heroku. Because while dedicated machines have failure cases (power outage, networking switch breaks, one of your machines' hardware dies, hosting company has networking issues, etc.), AWS/Heroku have them too (AWS outage, DDoS attack against Heroku, AWS/Heroku engineer makes a mistake, etc.).
See, for example, http://www.last.fm/user/Russ/journal/2008/02/21/zd_postgres_...
In short: pgpool has an old-fashioned Unix architecture (process-based), while pgbouncer is fancy and event-based, so it's usually a bit more performant. Both are reliable, so that shouldn't make any difference.
People have brought up reliability and the fact that they use consumer-grade hardware. This is an issue if you have a SPOF. If you have a fully distributed system (rare these days, for sure), it isn't much of an issue.
My current plan is to use DNS, with each box being a full stack (web app platform on top of Riak, with authoritative DNS on the box). So a web request might look up example.com and get back a list of authoritative name servers, NS1-6.exampledns.com. When the client then queries one of those auth servers, the auth server (which is itself in the cluster) returns the list of other servers in the cluster ranked by load (e.g. a multiple-A-record response for the query). Then, when the client goes to connect to the web server, it will hit the least busy node.
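The "ranked by load" part of that plan is simple enough to sketch. This is only a hypothetical illustration (the node IPs and load figures are made up); the real ordering would come from whatever the auth server knows about the cluster:

    # Hypothetical sketch of the "rank A records by load" idea: order the
    # cluster's IPs from least to most loaded before answering a query.
    nodes = {
        "198.51.100.1": 0.72,  # made-up load figures
        "198.51.100.2": 0.15,
        "198.51.100.3": 0.40,
    }

    def ranked_a_records(nodes):
        return [ip for ip, load in sorted(nodes.items(), key=lambda kv: kv[1])]

    print(ranked_a_records(nodes))
    # ['198.51.100.2', '198.51.100.3', '198.51.100.1']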
I wonder, though: if there are 5 authoritative name servers listed in the root for a given domain, will the root return them in the same order every time, such that my first authoritative DNS server (the one listed first at the domain's registrar) will get most of the DNS load? Or is there a way to have the root name servers randomize the order of the authoritative servers they give back to the client?
(Yes, all this will be open source, eventually. I've learned not to make promises about when; as soon as it's viable outside the lab.)
There are a couple of caveats to your load balancing strategy. With enough headroom, these probably aren't total game breakers, but you should be aware of them. More at http://serverfault.com/questions/60553/why-is-dns-failover-n...
1) You shouldn't expect even or consistent load balancing across servers. Some caching DNS servers (such as those at large ISPs) have very many downstream consumers, and they won't do any randomization. If a large DNS server sees a new ordering of records, it might trigger a synchronous switch of 10% of your customer base from one server to another. This will cause spiky traffic (see the sketch after this list).
2) You can't rely on any kind of sticky sessions. This may or may not be a problem, and many load balancers drop this guarantee as well for performance reasons, but it is certainly possible that a client may see a DNS record's TTL expire and switch to a new IP. If you aren't prepared for that, you may start dropping sessions.
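An easy way to get a feel for caveat 1 is to resolve the name repeatedly through your local resolver and count which address comes back first; if one IP dominates, naive DNS round-robin isn't spreading the load. A small sketch using only the Python standard library (the hostname is a placeholder):

    # Resolve a name repeatedly and count which address is returned first.
    # Heavy skew toward one IP means clients behind that resolver will pile
    # onto a single server. "example.com" is a placeholder hostname.
    import socket
    from collections import Counter

    firsts = Counter()
    for _ in range(20):
        addrs = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
        firsts[addrs[0][4][0]] += 1

    print(firsts)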
You probably want to have an external DNS host returning two IP addresses for an HAProxy or LVS cluster, which you then route into your actual web tier.
I have no idea how authoritative name servers work, but I'm assuming it's a prioritized list. I'd probably have all your authoritative servers provide all the IP addresses in any case.
Hetzner is comparable to Heroku and AWS, except that you have to do your own rack buildouts, private IP subnets, load balancing, redundancy zones, and CDN.
Is that right?