
Benchmarking AWS, DigitalOcean, Linode, Packet, and Vultr - raiyu
https://goldfirestudios.com/blog/150/Benchmarking-AWS-DigitalOcean-Linode-Packet-and-Vultr
======
wcarron
This is great to see. I love DigitalOcean and they've really stepped up their
game with regard to product offerings.

But I was surprised that DO beat AWS EC2 in most, but not all, of the tests.
Their performance is impressive considering that they're not on the same scale
as AWS, Azure, or GCP.

~~~
pram
EC2 (EBS in particular) has always had lackluster performance from my
experience, compared to the alternatives. To be honest though, relative
performance has never been a factor or even a consideration in most of the
places I've worked at.

I'm not saying that to minimize the issue either, it's just that enterprise
users/management simply don't care.

------
Sohcahtoa82
I used to use DO, but switched off after they decided to disconnect my droplet
for 3 hours when it got DDoS'd. It didn't matter that my node was able to
handle the traffic. I was only using it for a Mumble VOIP server and an IRC
bouncer, so it's not like I was going to lose money by having some business go
offline, but it was still frustrating, and enough to decide that, should I
ever need to run an actual business, I definitely won't use DO for it.

------
notacoward
I did a similar set of benchmarks, except with a bit more of a focus on
storage performance, several years ago. Even included the results in a
presentation at LISA. The most striking thing at the time was not so much the
averages but the _variability_. IIRC Amazon was particularly bad in that
regard, and Vultr particularly good (so kudos to them), but DigitalOcean's
advantage in raw performance was so big that it still won out. Looks like not
much has changed.

------
MotiveMe
I think the AWS failures on iops tests should've been examined more prior to
publication, or at least explained more to the reader.

AWS General Purpose (gp2) EBS volumes scale based on volume size, so a naively
done test on a default AMI could see as little as 24 IOPS (8 GB * 3 IOPS per
GB) once it exhausts its burst IOPS quota. I think it's unfair to compare
apples to oranges here, as you can make these volumes scale to absurd numbers
if you have the cash.
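A rough sketch of the gp2 scaling math described above. The figures are from AWS's gp2 documentation as I recall it (baseline of 3 IOPS per GiB with a 100 IOPS floor and a 16,000 IOPS ceiling, plus a 3,000 IOPS burst for smaller volumes), so an 8 GB volume's steady state would actually bottom out at the 100 IOPS floor rather than 24, but the scaling point stands:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # Baseline: 3 IOPS per GiB, with a 100 IOPS floor and a
    # 16,000 IOPS ceiling (per AWS gp2 docs as I recall them).
    return min(max(100, 3 * size_gib), 16_000)


def gp2_effective_ceiling_iops(size_gib: int) -> int:
    # Small volumes can burst to 3,000 IOPS until their credit
    # bucket drains; past ~1 TiB the baseline alone meets or
    # exceeds the burst ceiling, so bursting stops mattering.
    return max(gp2_baseline_iops(size_gib), 3_000)


if __name__ == "__main__":
    for size in (8, 334, 1_000, 6_000):
        print(size, gp2_baseline_iops(size), gp2_effective_ceiling_iops(size))
```

This is why a default 8 GB root volume looks fine in a short benchmark (bursting at 3,000 IOPS) and then falls off a cliff in a sustained one.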

~~~
nodesocket
Agreed, you need to use a 1TB EBS volume (the smallest size that removes
bursting limits) and an EBS-optimized instance with enhanced networking to be
accurate.
I'll be the first to admit that AWS has a serious problem with
overcomplicating things though. You really shouldn't have all these different
options and gotchas.

------
ac29
Linode, which didn't fare all that well in this test (though it was the
cheapest), actually does offer a dedicated CPU option as of recently:
[https://blog.linode.com/2019/02/05/introducing-linode-
dedica...](https://blog.linode.com/2019/02/05/introducing-linode-dedicated-
cpu-instances/)

Curious how much of a difference it would make.

------
SkyLinx
I have tried/used the providers mentioned and others, and am now with UpCloud,
which really has great performance, better than DO etc. from what I have seen.
The only thing is that they don't offer much more than just servers yet.

------
colvasaur
> The virtualized nature of cloud hosting makes benchmarking over a period of
> time vital to getting the full picture.

It's so nice to see a benchmark of VPSs that takes this into account.

~~~
vegardx
Seems kind of pointless if there's only going to be a single data point from
each provider. That doesn't account for noisy neighbours and other issues.
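A minimal sketch of why one-shot numbers mislead: time the same workload repeatedly and report the spread, not just a single reading. The workload here is a hypothetical stand-in; a real run would also spread samples across hours or days to catch noisy neighbours:

```python
import statistics
import time


def sample_workload(runs: int = 10) -> tuple[float, float]:
    """Time a toy CPU-bound workload several times and summarize the spread.

    A single run hides neighbour-induced variance; mean +/- stdev over
    many runs gives a truer picture of what a VPS actually delivers.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        sum(i * i for i in range(200_000))  # stand-in for a real benchmark
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)


if __name__ == "__main__":
    mean, stdev = sample_workload()
    print(f"mean {mean:.4f}s, stdev {stdev:.4f}s")
```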

------
deedubaya
I love DO, but man, in practice the CPU performance of their machines has been
horrible in my experience. Like 2-3x worse than the same $ spent on EC2.

------
karmakaze
Rubbish. Why is there even a section describing its methodology when it's
comparing $40 and $50 instances against $20 ones? I can see why they might
compare the $62 EC2 instances against other vendors' cheaper ones as that is
the point of their investigation, but the challengers should be on a level
playing field. Seems to me that they wanted DO to 'win'.

~~~
james33
If you read the paragraph directly after the list of instances tested, you
would see this was directly addressed. This test wasn't meant to mislead
and was simply exploring the best options for us. This isn't the same for
everyone, which is why we open-sourced the tool we made so that you can run
your own tests as well.

~~~
karmakaze
> Even though neither Linode nor Vultr offer a CPU optimized tier, we wanted
> to test the options we would actually be using if we went with each
> provider.

This is the part that doesn't make sense. They basically chose one type from
each vendor, _before_ benchmarking. If there are clearly instance types at
twice the cost and still lower than types from other vendors, the results were
stacked. How can you see this any other way?

------
ksec
Missing 2018 in the Title.

