Hacker News

It's scarily expensive. I got a better-specced server on Hetzner for 1/11th the cost of an Azure instance:

http://www.slideshare.net/fullscreen/newmovie/what-istheserv...



Yep, it is expensive, but Hetzner is unreliable, their kit performs poorly, their support sucks, and you can't deploy a SQL Server instance quickly (the latter is quite useful if you use SQL Server).


Hetzner has never failed me: it processes 3k+ C# Redis commands in <1s, which was 8x faster than my previous old Linux server at Leaseweb (which cost 4x more). That old Linux server did quite well, though, with over 480 days of uptime and more than 10M commands processed: http://www.servicestack.net/mythz_blog/?p=838 In cloud-speak, that's more than 99.99999% uptime.

The current Hetzner server is looking good at over 176 days of uptime, having processed more than 9.7M commands: http://www.servicestack.net/RedisAdminUI/AjaxClient/
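To put the "99.99999% uptime" quip in perspective, here is a minimal sketch (my own illustration, not from the thread) of how little downtime each availability target actually permits over a 480-day run like the one described above:

```python
# Downtime budget implied by an availability target over a given window.
# Illustrative sketch only; the 480-day window comes from the comment above.

SECONDS_PER_DAY = 86_400

def allowed_downtime_seconds(availability_pct: float, window_days: float) -> float:
    """Maximum downtime (in seconds) permitted by an availability percentage."""
    window_s = window_days * SECONDS_PER_DAY
    return window_s * (1 - availability_pct / 100)

# Over a 480-day run:
for pct in (99.9, 99.99, 99.999, 99.99999):
    budget = allowed_downtime_seconds(pct, 480)
    print(f"{pct}% over 480 days -> {budget:.2f} s downtime budget")
```

So "seven nines" over 480 days leaves a budget of only about four seconds of downtime, which is the joke: an unmonitored box that simply never went down beats most published cloud SLAs.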

There have been multiple Azure outages within the same time-frame that my Linux servers have been happily chugging along.

It's trivial to set up PostgreSQL/MySQL on Linux. I personally don't deploy my own stuff on expensive SQL Server instances.


Uptime is not the problem. Stuff breaks. Being able to contact someone, get the root cause of an issue, and get help mitigating it is what is important.

In our case, a trial with Hetzner found their support to be somewhat lacking if something went wrong.

The Azure outages didn't affect us at all.

Our load characteristics are somewhat higher than that. We can shift 10 million HTTP hits an hour quite happily.


"Stuff breaks. Being able to contact someone, get a root cause of an issue and get help to mitigate it is what is important."

This.

This reminds me of the article about Heroku's "intelligent routing" and a discussion about scaling/scalability and performance.

We pay more to mitigate and reduce risk. Cloud services, particularly IaaS and PaaS offerings such as Azure, AWS, and GAE, reduce (or are at least supposed to reduce) downtime when crap hits the fan, and also enable more flexibility, speed, and agility in building out solutions.

The interesting part about the Heroku discussion pertained to the following:

Assume we have a straightforward, minimally layered architecture, and that the average response time is 1.45 seconds. Using average response times is a bad idea for determining true performance, but let's continue for purposes of illustration. To get this average, let's say 19 out of 20 requests return in 1 second and 1 out of 20 takes 10 seconds (please correct me if my math is off). If we could add a layer that shortens our long-tailed curve, that would be an improvement. However, adding a layer incurs some overhead. Let's assume, for illustration, that this layer adds 500ms to every request but reduces the worst case from 10 seconds to 5 seconds. So 19 out of 20 take 1.5 seconds and 1 out of 20 takes 5, for an average of 1.675 seconds. The average time is worse! However, the worst case is much better. We've mitigated the cost of the worst case.
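The arithmetic above can be checked with a small sketch (my own worked example of the illustrative numbers in the paragraph, not anything from Heroku):

```python
# Weighted-average latency: most requests are fast, a small tail is slow.
# Numbers match the illustration above; the function name is my own.

def avg_latency(fast_s: float, slow_s: float, slow_fraction: float = 1 / 20) -> float:
    """Average response time when a fraction of requests hit the slow tail."""
    return (1 - slow_fraction) * fast_s + slow_fraction * slow_s

baseline = avg_latency(1.0, 10.0)   # 19/20 at 1 s, 1/20 at 10 s
with_layer = avg_latency(1.5, 5.0)  # +500 ms overhead, tail capped at 5 s

print(f"baseline average:  {baseline:.3f} s")    # 1.450 s
print(f"with extra layer:  {with_layer:.3f} s")  # 1.675 s
```

The averages confirm the point: the mean gets worse (1.45s to 1.675s) even though the worst case is halved, which is exactly the trade the extra layer is buying.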

Using cloud services and vendors that have the expertise and ability to quickly address hardware issues is supposed to reduce the risk and cost when things go bad. Yeah, you might have hiccups here and there that put you down for a few hours, but that is better than being down for days! It's really all about risk mitigation.


It seems that's what's most important to you. Uptime and value are other important metrics, cherished by sites that want to scale reliably and efficiently.



