
Object Storage: AWS vs Google Cloud Storage vs Azure Storage vs DigitalOcean - Elect2
https://www.chooseacloud.com/objectstorage
======
dsacco
This is great, but I’m disappointed Backblaze B2 isn’t included. That seems
like an oversight unless someone can point out how B2 doesn’t hold its own
with these options in a glaring way. There are tradeoffs, but B2 seems to be
very competitive overall.

B2 is cheaper than every option here for both storage ($0.005/GB) and egress
($0.01/GB).[1] Their transaction pricing is also cheaper.[2] Despite being
cheaper, it’s still hot storage, so you can immediately download buckets, in
whole or in part. I’ve personally used it to back up (and restore) terabytes of
data for over a year. I doubt it has an SLA like GCP or AWS, but DigitalOcean
doesn’t either, yet it’s listed here. I find the B2 API documentation to be
very readable as well.[3]

I’ve used AWS S3 and Glacier, and GCP Nearline and Coldline. I can’t think of a
specific thing that has disappointed me about B2, and the reliability has been
excellent. The nature of my work is that I have very large datasets, and B2
becomes extremely competitive when you’re backing up tens of terabytes or
more.

_________________________________

1\. [https://www.backblaze.com/b2/cloud-storage-
pricing.html](https://www.backblaze.com/b2/cloud-storage-pricing.html)

2\. [https://www.backblaze.com/b2/b2-transactions-
price.html](https://www.backblaze.com/b2/b2-transactions-price.html)

3\. [https://www.backblaze.com/b2/docs/](https://www.backblaze.com/b2/docs/)

~~~
truetraveller
I really wanted to like B2. I love the docs and super-clean interface. But B2
was just too "weird" when it came to uploading an object. You can't replace an
object key, so you always get a new object key when uploading. This is unlike
any normal object store, where you can just upload to an existing key.

Also, if you upload an object with the same name (e.g. myphoto.png) it creates
a new version, and there's no way to stop this. I don't want or need a new
version!

I have a feeling this restriction is in place because of the underlying
"vault" implementation.

~~~
arghwhat
If you upload an object with the same 'name' to S3, you also get a new
version.

When you say 'key', do you mean the opaque object ID granted to objects in B2,
which object stores like S3 don't have at all? I don't understand the rant
here. S3 operates only by 'name' (key).

I have had no problem integrating B2 side-by-side with S3 and GCP in the
products I have written. Their high-level models are largely compatible.

~~~
mayank
S3 versioning is optional and off by default. Writes (including rewrites) to
S3 are also atomic, so you’ll never see partial writes.

~~~
arghwhat
Ah, fair enough. I have oddly enough never in my life written to the same key
twice, so I hadn't noticed it was off by default for S3.

Just to clarify your statement about atomicity: All writes to S3 and B2 are
'atomic' (and B2 can also verify the hash and reject on failure for an extra
layer of security). The difference you mention is just that you can
superficially "disable" versioning for S3, so that each key only stores one
object version at a given time.

The only difference to the upload when S3 versioning is disabled is what
happens during the metadata update: With versioning, a version is appended.
Without, the version is replaced.

For B2, simulating disabled versioning is two operations: Upload a new object,
and delete the old one. As long as the object is only referenced by name, this
will also be atomic.
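
As a toy illustration of the difference (an in-memory model only, not any real SDK; every name here is invented):

```python
import itertools

class VersionedStore:
    """Toy model of a versioned object store (S3 with versioning on, or B2)."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._versions = {}  # name -> list of (version_id, data), newest last

    def upload(self, name, data):
        # Every upload creates a new version; old versions are kept.
        vid = next(self._ids)
        self._versions.setdefault(name, []).append((vid, data))
        return vid

    def get(self, name):
        # Reads by name always resolve to the newest version.
        return self._versions[name][-1][1]

    def delete_version(self, name, vid):
        self._versions[name] = [v for v in self._versions[name] if v[0] != vid]

    def replace(self, name, data):
        # B2-style simulated "versioning off": upload new, then delete old.
        old = [vid for vid, _ in self._versions.get(name, [])]
        new_vid = self.upload(name, data)
        for vid in old:
            self.delete_version(name, vid)
        return new_vid

store = VersionedStore()
store.upload("myphoto.png", b"v1")
store.upload("myphoto.png", b"v2")   # versioning on: both versions kept
store.replace("myphoto.png", b"v3")  # two operations: upload + delete old
```

Because readers resolve the name to the newest version, the name-based view stays consistent through the upload-then-delete pair.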

------
willow_sp
Shameless Plug :)

I recently wrote a series of articles comparing most of these providers and
explaining how to use them with JavaScript:

\- Amazon S3: [https://medium.com/@javidgon/amazon-s3-pros-cons-and-how-
to-...](https://medium.com/@javidgon/amazon-s3-pros-cons-and-how-to-use-it-
with-javascript-701fffc89154)

\- Google Cloud Storage: [https://medium.com/@javidgon/google-cloud-storage-
pros-cons-...](https://medium.com/@javidgon/google-cloud-storage-pros-cons-
and-how-to-use-it-with-javascript-ea9ce60a94c0)

\- Microsoft Azure Blob Storage: [https://medium.com/@javidgon/microsoft-
azure-blob-storage-pr...](https://medium.com/@javidgon/microsoft-azure-blob-
storage-pros-cons-and-how-to-use-it-with-javascript-ca5aaf5d5ffd)

\- Backblaze B2: [https://itnext.io/backblaze-b2-pros-cons-and-how-to-use-
it-w...](https://itnext.io/backblaze-b2-pros-cons-and-how-to-use-it-with-
javascript-8c2d2a9a69d9)

\- DigitalOcean Spaces: [https://medium.com/dailyjs/digital-ocean-spaces-pros-
cons-an...](https://medium.com/dailyjs/digital-ocean-spaces-pros-cons-and-how-
to-use-it-with-javascript-1802559ce2bd)

\- Wasabi Hot Storage: [https://medium.com/@javidgon/wasabi-pros-cons-and-how-
to-use...](https://medium.com/@javidgon/wasabi-pros-cons-and-how-to-use-with-
javascript-fa528c3779a2)

~~~
jopsen
shameless request :)

Try comparing performance: latency, bandwidth up/down, and scalability in the
face of many concurrent requests. And then do that w.r.t. compute resources
in various locations.

------
truetraveller
I was really excited about DO spaces. I compared every major Object Storage
(OVH,B2,Wasabi,S3,Azure). DO spaces came out much ahead. I did dozens of hours
of research. I was a customer (and I still am). But I am less excited now.

Basically, there are loads of issues with rejected requests because of rate
limiting (it returns a lot of 503 "slow down" responses). I don't recall ever
receiving this from S3. You can check the forums for more in-depth discussion.

The good part: This is a solvable problem, and I hope they relax these limits
very soon.

Another great anecdote: their API is 99% compatible with S3. In fact, the
official recommendation is to use the AWS SDKs on the server, which I am
doing!

~~~
aclelland
S3 will sometimes return a "503 temporary error" response if you start writing
lots of files per second. From my understanding, if they see that your bucket
has a constant high write rate they'll make some configuration changes in the
background to accommodate the higher write rate.

That being said, last month I wrote over 60 million files to S3 and the number
of failed writes was tiny (solved by simply retrying the write).
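
That retry can be as simple as exponential backoff with jitter. A minimal sketch (the `ThrottledError` type and `put_fn` callable are stand-ins, not any real SDK's API):

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for an SDK's 503 SlowDown / "slow down" error."""

def put_with_retry(put_fn, max_attempts=5, base_delay=0.1):
    """Retry a throttled write with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return put_fn()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the 503
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

# Demo: a fake PUT that is throttled twice, then succeeds on the third try.
calls = []
def flaky_put():
    calls.append(1)
    if len(calls) < 3:
        raise ThrottledError("503 Slow Down")
    return "ok"

result = put_with_retry(flaky_put)
```

In practice you'd also cap the total backoff and only retry errors the service marks as transient.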

I've not used DO's Spaces yet. I'd love to know if they guarantee read-after-
write consistency; I know S3 has some issues with that depending on use case.
Maybe it's time for another look at Spaces.

~~~
truetraveller
Yes, DO Spaces is very strict about both GET and PUT. I benchmarked requests
per second (literally just fetching a URL a bunch of times). I got about 180
requests per second, after which all requests failed.
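
For reference, a bare-bones version of that kind of benchmark (the `fetch` callable is a stub here; in a real run it would be an HTTP GET of one object, and you'd count throttled responses as failures):

```python
import concurrent.futures
import time

def measure_rps(fetch, total_requests=200, workers=20):
    """Fire total_requests calls from a thread pool.
    Returns (successful requests per second, successful request count)."""
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda _: fetch(), range(total_requests)))
    elapsed = time.monotonic() - start
    successes = sum(1 for r in results if r)
    return successes / elapsed, successes

rps, successes = measure_rps(lambda: True)  # stub: every "request" succeeds
```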

Amazon S3 is much better in the sense that there IS dynamic scaling if it
notices spikes.

In DO's defense, they are newer and their business model is
"cheap, cheap, cheap", so they can't compete at the same level.

------
cyberferret
(*) for a particular use case.

The example is 200GB storage with 2000GB data transfer (out) every month.
That's a LOT of data going out every month, so I am guessing the scenario is
if you are hosting a photo library and lots of people are downloading every
month.

If, however, you are just using the service as online storage to hold <100GB
of data as backup (i.e. mainly transfer in), then S3 turns out way cheaper
than DO.

Not knocking either service - I actually use both, for different use cases.

~~~
tobias3
Plus the SLA is different. It probably should be compared to at least AWS S3
reduced redundancy storage.

~~~
qaq
Right, and what are the penalties when AWS breaks its SLA?

------
milesward
Cool site, but noticing an error: This compares AWS S3 single region pricing
($0.024/gb) to GCP GCS multi-region pricing ($0.026/gb) rather than GCP GCS
single region pricing ($0.020/gb). Hopefully the creator/author will correct
the discrepancy... Disclosure: I'm a pricing dweeb at Google Cloud

~~~
arghwhat
I sincerely hope your business card says "Pricing Dweeb, Google Cloud
Products".

If not, you need to get this fixed.

~~~
milesward
New Card Order: Submitted.

------
martinald
The bandwidth egress charges on AWS (and GCP/Azure) are way too high. It
almost seems cartel-like.

Bandwidth costs have dropped by a huge factor over the past few years, but
none of this has been passed on.

I really hope backblaze and/or DO manage to cause the big three some hurt on
this and get them to reduce prices significantly; 7c/GB is really high these
days.

~~~
sitepodmatt
What are bandwidth costs? This comes up again and again: people see HE.net /
Cogent / Level3 / Hibernia / NTT or whoever offering 10gig handoff IP transit
at a carrier-neutral colo for X per month and somehow determine the true cost
from that. I suspect the reality is that much of the cost is invested in
routing equipment both internally and externally, evolving SDN and all the
goodness it affords, being able to offer an SLA per instance in terms of mbit
that can be pushed, being able to mitigate DDoS, hiring top network engineers
and researchers, and being fully redundant across multiple providers and
protected circuits rather than just a lonely handoff - or in GCP's case
building out the network themselves with multiple SLA tiers, and being able to
mitigate network issues quicker than a PagerDuty alert at a bandwidth blender.
Granted, I'd say there's still probably cushioning in there, but judging
network costs by comparing to a 10gig handoff at a random carrier-neutral
colo, or a blender, or OVH/Hetzner commodity servers, or Linode/Vultr/DO seems
unfair in my opinion.

~~~
mrep
I met a network engineer from Amazon about a year ago and brought up the
prices, since I had heard about them being expensive here on Hacker News. He
laughed and said it was one of their most profitable departments, with
something like a 90% margin.

~~~
corobo
Gotta be able to negotiate wiggleroom somewhere when the Netflix-types come
knocking

------
hacknat
I work for a company that has to provision hosted products for customers
across all the clouds and the one that has been impressing me the most lately
is Azure. The load balancers are also the gateways (sound network topology),
so there is no need for elastic IPs, NAT Gateways, or proxy protocol. The
other thing I like about Azure is they have storage classes that automatically
cross region replicate. The automatic storage encryption is a bit of an issue,
but I know they were working on it, last I checked.

You can’t beat the offerings of AWS, but there are definitely some compliance
scenarios that are easier to fulfill on Azure.

We rarely get customer requests for Google Cloud. Seems like it’s mostly Azure
and AWS (at least at the enterprise level).

~~~
manigandham
You don't need any of that for load balancing on any cloud. Google Cloud has the
best load balancer with a global anycast address that will route to the
nearest DC with instances and free capacity.

For automatic cross-region replication for storage, are you talking about
Azure's GRS class? That's the same as GCP's multi-regional class, and AWS lets
you set up entire-bucket replication to anywhere else in a few clicks.

------
Brajeshwar
Can we include Wasabi[1] in the comparison? They seem to have a really
compelling offering when it comes to Object Storage and their pricing.

1\. [https://wasabi.com/](https://wasabi.com/)

~~~
thebigjc
Their pricing looks great. Has anyone used them?

~~~
askaboutit
Pricing has recently changed to remove egress costs completely. The speeds
were OK, nothing special, so this is most likely better suited for long-term
data storage. With a CDN in front it would work well as a large media store
for video/image delivery.

------
zbjornson
This uses the pricing for multi-regional GCS, which is geo-redundant across
two or more locations separated by at least 100 miles.

Regional GCS is the storage class equivalent to standard S3 and is $0.02/GB.

~~~
Elect2
Corrected.

~~~
milesward
Woo, thx!

------
manigandham
This is a rather simplistic comparison. These object storage services have
several tiers and features that you need to take into account like zone vs
regional replication, strong-consistency listings, bandwidth and access
depending on where your compute is, integrations like notifications and
functions, etc.

That being said, the clouds are great if your compute is co-located in the
same place because the transfer fees are waived. Otherwise DO or B2 are
probably better options for less usage or more neutral network locations and
egress.

------
alexbilbie
What is the durability of objects stored with DO? S3 offers 11x9s of durability.

Likewise what is the replication story? Can you get event notifications when
objects are uploaded/deleted? Is there versioning? Static website hosting?
Lifecycle management?

~~~
icebraining
_S3 offers 11x9s of durability._

So they say, but the SLA doesn't make any promises about durability.
[https://aws.amazon.com/s3/sla/](https://aws.amazon.com/s3/sla/)

~~~
alexbilbie
The SLA governs availability.

Details of durability here
[https://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurabili...](https://docs.aws.amazon.com/AmazonS3/latest/dev/DataDurability.html)

------
turblety
Sia [1] could be even cheaper, coming out at $1.10 per TB [2], although this
price might not last forever, and I'm still not sure about the reliability.

In theory it should be more reliable, as it's decentralised and your data gets
split among multiple servers around the world. The question remains: what if
the Sia network itself stops being profitable and people all exit at the same
time? Although the same could be said for Amazon.

Sia will actually soon be adding a backend to Minio too [3].

The only thing that has stopped me using Sia is you have to have the
blockchain running on the machine.

1\. [https://sia.tech](https://sia.tech)

2\.
[https://siastats.info/storage_pricing](https://siastats.info/storage_pricing)

3\. [https://blog.sia.tech/introducing-s3-style-file-sharing-
for-...](https://blog.sia.tech/introducing-s3-style-file-sharing-for-sia-
through-the-new-minio-integration-bb880af2366a)

~~~
cyberferret
Curious as to why you edited your post to remove mention of the other service
that you had listed originally?

~~~
turblety
Sorry @cyberferret. I previously listed Storj [1] but did a bit of research
and it didn't actually look cheaper when you consider bandwidth cost.

$0.015 per GB per month

$0.05 per GB downloaded

[1] [https://storj.io/](https://storj.io/)

~~~
cyberferret
Cool thanks! I presumed that it didn't meet some sort of criteria that was
being discussed here. I just wanted to make sure there were no issues with the
service itself per se.

~~~
digianarchist
They're in closed beta. I looked into them, but they're not that good for
long-term storage. Plus their payment system is centralized.

------
stagbeetle
The main cost seems to be bandwidth (and speeds). If you've ever tried Spaces,
you know it's a real slog uploading and downloading (even with good
client-side speeds). Dropbox has similar specs to Spaces and does it for only
$10. OneDrive has the same thing at $7 for 1TB. And there's a Chinese company,
Tencent, that gives you 10TB for free (except it's slow and it's all in
Chinese).

The problem I have with this article (really a very brief table) is that the
author is comparing two "enterprise" solutions to a consumer solution. With
"enterprise" solutions, you get guaranteed uptime and speeds. With Spaces, and
the rest I've mentioned, you don't get any of that - only that your data will
still be there as long as you pay up.

~~~
icebraining
Spaces may be slow (never tried it), but it's still a different offering than
Dropbox/OneDrive, which are not designed for massive public access. For
example, Dropbox has a traffic limit of 200GB/day, which can't be increased.

------
Lunatic666
Is there any open source S3 compatible software? I know Riak Cloud Storage
([http://docs.basho.com/riak/cs/2.1.1/](http://docs.basho.com/riak/cs/2.1.1/)),
but I think it’s not maintained anymore.

~~~
tobias3
OpenStack Swift or Ceph with Ceph Object Gateway. Don't use minio, it's a toy
for testing.

~~~
icebraining
I've never used it in production; in what ways is minio lacking compared to
Swift and Ceph?

~~~
tobias3
S3 is an abstraction that gives you a limited set of operations with limited
guarantees (e.g. eventual consistency, can only replace whole files). What you
get in return is easier scalability and performance. If you use an S3 API to
store files (like Minio does), you give up power and gain nothing, so you're
better off using NFS, Samba, WebDAV, FTP, etc. Additionally, Minio doesn't
seem to sync files to disk, so you can't be sure a file is actually stored
after a PUT operation (AWS S3 and Swift have eventual consistency, and Ceph
has stronger guarantees).

~~~
icebraining
That's curious, because the OpenStack Swift docs say "Objects are stored as
binary files on the filesystem with metadata stored in the file’s extended
attributes (xattrs)".

As far as I know, the advantage comes from exposing a coarse-grained API over
the network, which is generally more efficient and reliable than doing many
small operations. It would be hard to implement Minio's distributed mode using
the filesystem API.

------
SlowBro
Maybe I'm missing something, but where is the Azure data?

~~~
kryptkpr
Also missing the SoftLayer/IBM S3-compatible offering. We use it very lightly
(our IoT firmware builds from CI go up there) and pay nothing; their free
limits are quite generous.

------
blowski
How does Digital Ocean compare in terms of reliability and compatibility with
existing tools? I haven't used it, so genuine question.

~~~
overcast
From their documentation.

Spaces provides a RESTful XML API for programmatically managing the data you
store through the use of standard HTTP requests. The API is interoperable with
Amazon's AWS S3 API allowing you to interact with the service while using the
tools you already know.

------
buryat
Would be nice to add things like:

\- Consistency

\- First byte latency

\- Limits on requests

\- Max upload size

\- Extra features

~~~
robbyt
Yes, for example GCP does not have S3's "eventually consistent" behavior. When
data is written to GCP, it's available to read immediately.

------
erikpukinskis
One of the neat features of Microsoft’s storage service is that you can append
to existing files.

It surprised me that this isn’t standard on other services.

[https://docs.microsoft.com/en-
us/rest/api/storageservices/un...](https://docs.microsoft.com/en-
us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-
blobs)

------
askaboutit
Wasabi storage has unlimited egress now, so Spaces with its rate limiting
doesn't seem like that big a deal.

Storage is something that won't make much money in a few years, I believe. I
think the egress overcharging may finally be seeing decent competition.

[https://wasabi.com/pricing/](https://wasabi.com/pricing/)

~~~
corobo
"unlimited" \- it looks like if you actually try to use unlimited, you get
limited; it's closer to a Dropbox competitor than an S3 competitor

> If your use case creates an unreasonable burden on our infrastructure, we
> reserve the right to limit your egress traffic and/or ask you to switch to
> our Legacy pricing plan.

> Wasabi’s hot cloud storage service is not designed to be used to serve up
> (for example) web pages at a rate where the downloaded data far exceeds the
> stored data or any other use case where a small amount of data is served up
> a large amount of times

~~~
askaboutit
Oh, well that puts a kink in that. They had an example of 1PB costing $90,000
to download from S3; on their old plan it would be $40,000 (quick math). I
think if a company were that large, S3 comes across as a much better offering.

------
squid3
NodeChef's object storage is a very attractive option, especially with no data
transfer charges. Available in two regions.
[https://nodechef.com/s3-compatible-object-
storage](https://nodechef.com/s3-compatible-object-storage)

~~~
manigandham
At $1/GB, it's the most expensive option mentioned on this page by far. Even
with GCP's pricey storage, you would have to download your data 10x for this
to make sense (or 88x with DO), and that's before considering cross-
cloud/transit costs.

Also since you're the cofounder of nodechef, you should add a disclaimer when
mentioning your own product.

------
johnnycarcin
Azure storage is available in way more than 10 regions FYI:
[https://azure.microsoft.com/en-us/global-
infrastructure/serv...](https://azure.microsoft.com/en-us/global-
infrastructure/services/)

------
lyager
.. but with a monthly fee, which kind of breaks the idea of “pay as you go”,
at least for my purposes.

~~~
k__
Haha, yes.

But I have to admit, the $5 fee some of these new offerings have is probably
there to filter out a certain type of customer.

------
driverdan
You can negotiate lower S3 prices if you're a heavy user. I assume the other
companies will as well. By heavy I mean exceeding the lowest price tiers
significantly.

