Do a new benchmark comparison vs. an m1.small and it'd be interesting. I bet the small wins by an absolute mile.
If you can't avoid 502s with the traffic HN can generate, you shouldn't be doing benchmarks about hosting matters.
Serverbear notes that Amazon's 7.5GB Large instances (which cost $180+/month) benchmark at ~650 on UnixBench, with 30 MB/s disk I/O. In comparison, an 8GB VM from Digital Ocean costs only $80/month. I don't have the numbers for the 8GB VM, but the smaller $20/month 2GB instance has a UnixBench score of ~1900 with over 300 MB/s I/O from its solid-state drive.
(I presume the larger instances get more CPU power / scheduling priority as the VM sizes scale up.)
That is half the cost for triple the CPU performance and 10x better disk performance. Other smaller providers, such as RamNode, offer extremely fast I/O with RAID 10 Solid State Drives in their Virtual Private Servers (500+ MB/s).
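The back-of-the-envelope ratios can be sketched like this (using the Serverbear figures quoted above; treat them as ballpark numbers, not current pricing, and note the price comparison uses the $80 8GB droplet while the benchmark numbers come from the $20 2GB droplet):

```python
# Rough price/performance comparison from the figures above.
ec2_large = {"price": 180, "unixbench": 650, "disk_mbs": 30}
do_8gb_price = 80            # $80/month 8GB Digital Ocean droplet
do_2gb = {"unixbench": 1900, "disk_mbs": 300}  # $20/month 2GB droplet

price_ratio = ec2_large["price"] / do_8gb_price
cpu_ratio = do_2gb["unixbench"] / ec2_large["unixbench"]
disk_ratio = do_2gb["disk_mbs"] / ec2_large["disk_mbs"]

print(round(price_ratio, 2))  # 2.25 -> EC2 Large costs over 2x more
print(round(cpu_ratio, 2))    # 2.92 -> ~3x the UnixBench score
print(round(disk_ratio, 1))   # 10.0 -> 10x the disk throughput
```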
Amazon vs Digital Ocean
To be fair, though, Amazon's CPUs are more consistent... consistently bad, but consistent. VPS CPU and I/O performance is affected by neighboring VMs, while Amazon seems to have removed that uncertainty. Nonetheless, in practice you will always get better-performing CPU and I/O from other providers.
And if we compare both to bare metal servers, obviously bare metal wins on price/performance, but it's harder to maintain, so it's hard to do an apples-to-apples comparison. But Digital Ocean VMs can be spun up/down just like Amazon instances... although Amazon has more load balancers and other infrastructure. (Nothing is stopping you from setting up HAProxy on a front-end VM to load-balance a cluster of VMs from Digital Ocean. Even then, other VPS providers like Linode offer load balancers as part of their infrastructure now.)
It's hard for me to see the case for Amazon's cloud offerings. Their price/performance just isn't competitive. At every point on the spectrum, low end to high end, VPS providers such as Digital Ocean offer more vertical scalability at a lower price than Amazon's equivalent offerings.
Unless you need some specialized VM from Amazon (i.e., GPU compute), or are locked into their vendor-specific API (oh, I feel sorry for you), there is no reason to use Amazon's services IMO.
Anyway, there's your reason.
The other reason is that big businesses just don't care. Margins on software are high enough that the extra cost of EC2 over another provider is outweighed by the benefits of existing infrastructure, developer experience, and the risk mitigation of choosing AWS.
And certainly, for the small 2- or 3-server clusters that a small startup uses, Amazon's prices are significantly higher than other providers'.
Anyway, I'd have to check out the latency-based routing thing, and how it differs from the typical GeoDNS or "Anycast" DNS that is offered by a number of providers. My bet is that it's just Amazon marketing-speak for GeoDNS or Anycast technology.
EDIT: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Cre... As far as I can tell, Amazon's "Latency Based Routing" is just GeoDNS with a much better marketing name. It's all about reducing latency, but at the end of the day, it is no different from GeoDNS.
That said, Route53 does seem to be a good DNS service from Amazon. $0.75 per million anycast queries per month + $0.50 per zone is a good price methinks.
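To put those prices in perspective, here's a quick sketch of what a hypothetical monthly bill would look like at the rates quoted above (the zone and query counts are made-up examples):

```python
# Estimate a monthly Route53 bill: $0.50 per hosted zone
# plus $0.75 per million queries (prices as quoted above).
def route53_monthly_cost(zones, queries_millions,
                         per_zone=0.50, per_million=0.75):
    return zones * per_zone + queries_millions * per_million

# e.g. 3 hosted zones serving 10 million queries a month
print(route53_monthly_cost(3, 10))  # 9.0 -> nine dollars a month
```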
So while I'd never use a compute instance at Amazon, I'll definitely keep their Route53 service on my list. Looks pretty nice from what I can tell.
Another thing to consider is the number of mistakes a company has made. While Amazon and Linode have been around for years... Amazon had the Virginia fiasco this past year (the Netflix outage), and Linode had the bitcoin hack. Digital Ocean has only been around for a few months, so their security / reliability is basically untested.
With those caveats in mind, it is then possible to look at the inherently flawed benchmarks and work off of them. Serverbear is a good resource for comparing those things.
Therefore, seeing how Amazon compares to that is an interesting exercise. I was personally floored by how poorly some EC2 instances perform for some types of tasks (Java/Clojure-related things among them).
I quickly decided Amazon was not able to serve my needs within the price-range I was willing to pay.
Why would you use a journaling file system on an SD card?
Actually, I take that back. I was not aware ext4 performance has been improved over ext3 and even ext2,
and apparently in ext4 you can even turn journaling off entirely.
But maybe JFFS2 or similar "flash file systems" would be better depending on the use case; I'm not sure what the current state of those kinds of file systems is.