- Their performance claims are incredibly biased. Amazon S3 has far better write performance than their claims.
- They claim 100% S3 compatibility, but it fails a large number of API calls in Ceph's s3-tests suite. I didn't dig into this too far, but they do claim "No need to change your S3-compatible application," so changing my endpoint and credentials should have been enough. To their credit, PUT, GET, and DELETE did work, but those are only 3 of hundreds of API calls.
- Their durability claims are highly suspect. I would want to see a white paper breaking this down.
- Their first round was debt financing.
Why this business model doesn't work...
Most people don't use S3 alone; S3 is a source for other AWS services. Because of that, Wasabi becomes the more expensive option, since you pay a 4-cent/GB egress fee to get data back to the rest of your AWS infrastructure. The only place Wasabi comes out cheaper is for those using S3 directly and alone, which is a very small subset of S3 usage. AWS is very open about this in white papers, conferences, tech talks, etc.
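A rough back-of-the-envelope model of that point, using only prices quoted in this thread (Wasabi ~$0.0039/GB-month storage plus a ~$0.04/GB egress fee; S3 ~$0.023/GB-month with free transfer to same-region AWS services). These rates are assumptions taken from the comments, not a current price sheet:

```python
# Monthly cost = storage + the cost of reading data back into AWS compute.
# Rates are the ones quoted in this thread and may be stale.
def monthly_cost(gb_stored, gb_read, storage_rate, egress_rate):
    return gb_stored * storage_rate + gb_read * egress_rate

GB = 1024  # 1 TB stored

# Wasabi: cheap storage, but ~$0.04/GB to move data to EC2/Lambda/etc.
wasabi = monthly_cost(GB, gb_read=GB, storage_rate=0.0039, egress_rate=0.04)
# S3: pricier storage, but transfer to same-region AWS services is free.
s3 = monthly_cost(GB, gb_read=GB, storage_rate=0.023, egress_rate=0.0)

print(f"Wasabi: ${wasabi:.2f}/mo, S3: ${s3:.2f}/mo")
```

With these assumed rates, reading your full 1 TB back into AWS just once a month already makes Wasabi roughly twice as expensive as S3.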
Wasabi is an economies-of-scale play that casts far too wide a net. There is opportunity in specific vertical markets to sell a solution (object storage paired with compute), but a pure S3 endpoint will never take substantial market share away from AWS.
Are you suggesting that Wasabi might go away because, maybe, this is not the "traditional" valley model?
Convertible notes were all the rage a few years back, too. It was all over the blogs and HN.
Perhaps in the "dev/ops" world, but S3 as a standalone repository could absolutely work for most enterprises and, quite frankly, for SOHO users as a backup target. That said, I have almost no faith this will survive, and as such wouldn't trust it with my backups.
Validating claims of S3 compatibility is important. The S3 API has corner cases and misfeatures like BitTorrent hosting, but sometimes vendors omit key features like multipart upload and v4 signatures. s3-tests is the best tool we have to evaluate implementations, yet only Ceph and S3Proxy seem to contribute to it. Users should hold vendors' feet to the fire about these claims.
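The v4 signatures mentioned above are one of the commonly omitted features. Per AWS's public SigV4 documentation, the signing key is derived by chaining HMAC-SHA256 over the date, region, service, and the literal string "aws4_request". A minimal sketch of just that derivation step (the credential values are AWS's well-known documentation examples, not real keys):

```python
import hmac
import hashlib

def sigv4_signing_key(secret_key: str, date_stamp: str,
                      region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key: a chain of
    HMAC-SHA256 over date, region, service, and 'aws4_request'."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode(), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                        "20130524", "us-east-1", "s3")
print(key.hex())
```

An S3 clone that skips this derivation (or only implements v2 signatures) will fail any client that defaults to SigV4, which is exactly the kind of gap s3-tests surfaces.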
But S3 originally launched by itself, before EC2.
If Wasabi adds a Lambda-like serverless compute layer that could be powerful.
What if they deploy a silent corruption bug next year?
Is there such a white paper for AWS/Azure/GCP? Or are they running on reputation alone?
UPDATE: Well, egress is cheaper. B2 is $0.005/GB storage with $0.02/GB egress. But one thing to consider is that B2 storage is located within one single datacenter.
(Disclaimer: I am not affiliated, but am in the process of deciding to use B2.)
You will be better off using any standard OpenStack provider. That way you won't have lock-in, and you get great performance (in my testing). E.g. OpenStack Swift on OVH.
Sure it's a tiny bit more expensive. But you get a mature cross-provider API and better performance.
Even for huge chunks of cold data, I wouldn't want to use B2.
That it works so well for me might be connected to the fact that their own products are backup solutions. In any case, I am now paying a significantly lower monthly fee with B2 than I did with Amazon S3, and the setup was much less complex.
It looks really promising, and pricing is very reasonable. Thank you for pointing it out.
How stable is Wasabi?
I'm in the process of moving from one provider to another, and it's pretty tedious keeping everything in sync: I'm manually migrating data while still backing up to the previous provider until the move is complete.
It's going to be a bit of a process over the next few weeks :(
> 7. Your website indicates $.0039 per GB per month but the pricing comparison on the website indicates 1 TB is priced at $3.99 / month (instead of $3.90 / month for 1 TB). Why is that?
> The Wasabi monthly price is $.0039 GB / month. Given that there are 1024 GB in 1 TB (not 1000 GB), the price for 1 TB is $.0039 * 1024 or $3.99 per 1 TB per month.
Come on, you are a digital storage company; let's call things what they are. There are 1000 GB in a TB. There are 1024 GiB in a TiB.
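The quoted discrepancy is exactly the decimal-vs-binary unit mix-up, using the $0.0039 rate from their own answer:

```python
# Why the two numbers differ: Wasabi multiplies by 1024 (GiB per TiB)
# while labeling the units GB/TB.
rate = 0.0039  # $ per GB per month, as quoted

decimal_tb = 1000 * rate   # 1 TB  = 1000 GB  -> $3.90
binary_tib = 1024 * rate   # 1 TiB = 1024 GiB -> $3.9936, shown as $3.99

print(f"1 TB  (decimal): ${decimal_tb:.2f}")
print(f"1 TiB (binary):  ${binary_tib:.4f} -> ${round(binary_tib, 2)}")
```

So both numbers are "right"; they just describe different units, and the page labels the binary one with decimal unit names.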
https://www.ovh.com/us/public-cloud/storage/object-storage/ (S3-comparable performance)
Outgoing traffic: $0.011/GB
https://www.ovh.com/us/public-cloud/storage/cloud-archive/ (archival storage)
Incoming/Outgoing traffic: $0.011/GB
No traffic costs (because it's archival storage)
The main downside is that they are located in one physical area, even though they are labeled as multiple DCs.
But for high traffic uses, honestly, you can just double the storage costs (i.e. OVH CA and OVH France) to get redundancy while saving _massively_ on traffic costs.
On OVH you create an "OpenStack/Cloud" project -> They charge you $40 -> Then you have use of the cloud storage.
I opened my account a while ago via OVH CA in USD.
You'd need to use that to build it out.
(Source: Posting this from Meguro-ku in Tokyo ;) )
Such a startup would require a lot of tech infrastructure and know-how, and their page http://www.wasabisys.com isn't even configured.
It feels like Wasabi is a startup with no physical product, but I might be wrong.
~ dig +short console.wasabisys.com
NetRange: 18.104.22.168 - 22.214.171.124
NetType: Direct Allocation
Organization: PSINet, Inc. (PSI)
Comment: Reassignment information for this block can be found at
Comment: rwhois.cogentco.com 4321
It does make it sound as if it is a market test, or that PSINet has developed large-scale hosting capabilities in the last few years.
In addition, they use the exact same terminology Amazon uses in their console file, including ARN and two policies named after S3:
The policy syntax looks familiar to those coming from S3 as well.
Servers look like they're in VA-US.
In this case Cogent on the IP space is roughly equivalent to seeing Comcast or something on IP space whois.
The end-user is:
network:Street-Address:44060 Digital Loudoun Plaza
Cogent has large scale hosting capability
Or my second question: wait, doesn't this sound an awful lot like Pied Piper's product from the newest season of Silicon Valley?
Presumably Amazon made it up because people kept asking them "how likely are you to lose my data?" and Amazon needed to be able to say something other than "it's impossible to say due to human factors being the dominant likely cause but it's very unlikely."
1. Amazon claims 99.999999999% durability of objects over a year.
2. I store 1EB of data with an object size of 4MB for a year (so 250,000,000,000 objects).
3. I can expect to lose about 2.5 objects in a year (250,000,000,000 × 10⁻¹¹), or roughly 10 MB.
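Spelling out steps 1-3 under the naive model (each object independently has a 10⁻¹¹ chance of being lost per year; note that at 11 nines the expectation works out to ~2.5 objects, not 250):

```python
durability = 0.99999999999          # 11 nines, per year
annual_loss_prob = 1 - durability   # ~1e-11 per object

eb_bytes = 10**18                   # 1 EB (decimal)
object_size = 4 * 10**6             # 4 MB objects
objects = eb_bytes // object_size   # 250,000,000,000 objects

expected_lost = objects * annual_loss_prob
print(f"objects: {objects:,}")
print(f"expected lost per year: {expected_lost:.1f} "
      f"(~{expected_lost * object_size / 1e6:.0f} MB)")
```

This assumes independent per-object loss, which real failure modes (correlated disk/site failures, software bugs) don't respect; it's only the marketing-number arithmetic.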
Now to my experience:
I have stored in excess of that amount of data in S3. I have lost considerably more data -- solely because of losses internal to S3 -- than these numbers would suggest. It was a tolerable amount of data loss, and I didn't curse Amazon's name or swear vengeance, but it was well beyond what the math above predicts.
The standard S3 SLA provides credits only based on uptime. There is no mention of durability whatsoever. That tells you that Amazon is not willing to put their money where their mouth is on their 99.999999999% durability claims. The reality is that the number is a design target, not an operational guarantee.
I didn't use SNS notifications at the time (which might only work for reduced redundancy).
So that left two options: find out when attempting to fetch the object, or run bookkeeping jobs against the object catalog to periodically spider the data and ferret out any objects that are lost.
The second option may be a tad nicer, but it is also more complex and more expensive and the end result is the same either way.
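The second option (a bookkeeping job over the object catalog) can be sketched generically. Here `head` stands in for something like an S3 HEAD request mapped to a boolean, and the catalog is just an iterable of keys from your own records; the names are illustrative, not any real API:

```python
def audit_catalog(keys, head):
    """Walk the object catalog and return keys whose objects are gone.

    `keys` is any iterable of object keys from your own bookkeeping store;
    `head` is a callable returning True if the object still exists
    (e.g. a wrapper around an S3 HEAD request that maps 404 -> False).
    """
    lost = []
    for key in keys:
        if not head(key):
            lost.append(key)
    return lost

# Usage with a fake backend standing in for S3:
surviving = {"a", "b", "d"}
missing = audit_catalog(["a", "b", "c", "d"], head=lambda k: k in surviving)
print(missing)  # ["c"]
```

In practice you would batch this, rate-limit it, and pay per-request charges for every HEAD, which is part of why this option is "more complex and more expensive."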
Even with 100% technical availability (no downtime ever), organizational / legal risks exist. No company can realistically be free of them. Amazon, compared to many smaller companies, may have somewhat lower risks of this sort: they are hard to shut down.
If you care about your data that much, you likely have backups and / or mirror copies of it across several providers, in multiple countries, and a well-tested contingency plan to move a complete copy of your production service to any of 2-3 other providers. (And likely most people don't have your risks, or the amount of money enabling all this.)
(Completely replaced my use of https://rsync.io ;)
S3 IA has a 30-day minimum, like Google Nearline, and Google Coldline is 90-day minimum. These make it very hard to predict and control pricing.
Backblaze B2 may not be as high performance, but their pricing is very low AND very predictable. No gimmicks. I've received many HashBackup customer emails mentioning that they use B2 and have never received complaints about their service.
I often wonder how this works. With the whole Sun lawsuit against Google over the Java API, making a clone of another platform's API sounds dangerous.
I'm curious what HN thinks.
I've wanted to have a "compatibility layer" that mimics my competitors APIs but have been scared of the possible repercussions.
see: https://minio.io/ "implements Amazon S3 v4 APIs. Minio also includes client SDKs and a console utility."
You can't copyright the dimensions of the blade head, since they're just the dimensions of the screw slot, and the only thing the copyright does is harm interoperability between screws and screwdrivers. You can copyright some other aspects of the screwdriver design, and if screwdrivers were a new thing, you could patent the idea of a screwdriver.
This is why the Java case hinged on a former Sun employee literally cutting and pasting method implementations he wrote at Sun into Android's implementation while he worked at Google. /double-face-palm
so the only "catch" is $3.90 a month minimum?
So if you deleted your 1 TB each month and uploaded new data, you'd be paying three times more due to the 90-day minimum charge, so about $12/month in the worst case.
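The worst-case arithmetic, using the $0.0039/GB rate discussed above:

```python
# Under a 90-day minimum, every byte you delete is still billed for
# 3 months, so replacing the full dataset monthly triples the bill.
rate_per_gb = 0.0039
tb_gb = 1024
min_months = 3  # 90-day minimum storage charge

steady_state = tb_gb * rate_per_gb             # keep the same 1 TB: ~$3.99/mo
full_churn = min_months * tb_gb * rate_per_gb  # replace it monthly: ~$11.98/mo
print(f"steady: ${steady_state:.2f}/mo, full monthly churn: ${full_churn:.2f}/mo")
```

Partial churn scales linearly between the two: replacing half your data each month lands halfway between ~$3.99 and ~$11.98.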
Egress: $.05-.09/GB (even lower if you're big)
Sending data out of AWS costs the equivalent of 2-4 months of storage.
Regularly pushing lots of small files to S3 can get expensive: $1 per 200k files ($0.005 per 1,000 PUT requests).
That you can host on your own infrastructure
"Wasabi - a faster, better clone of S3?"
That's why I see others quote the original in the reply.
It's definitely one of the most common codenames in IT, together with Phoenix, Firebird and Panda (and in the enterprise, all Greek/Roman gods).
Sorry to say that we don't also make the fire alarms. That would be cool technology to work on, though!
You can find all of the compatibility testing our PACT team has done here: https://wasabi.com/help/interop-results/
Presently our main data centers are in Massachusetts (home sweet home) and Virginia (similar to AWS East). Having our data located here has advantages in the present cloud ecosystem that we are excited to roll out in the months to come.
Not to mention b2 and several others that are cheaper.
"Wasabi’s durability is 11 x 9s, the same as Amazon S3. To put that in context, if you stored 1 million 1 GB files in Wasabi, you would expect on average to lose one file every 659,000 years"
Can someone walk me through the math here? I'm specifically curious about why the size of the file being 1 GB is relevant to the calculation.
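Under the naive independent-object model, file size is not relevant: durability is quoted per object, so 1 million files at 11 nines gives a mean time to first loss of about 100,000 years regardless of how big each file is. The quoted 659,000 years doesn't fall out of this simple model, so presumably Wasabi is using a different (unpublished) one:

```python
# Naive model: each object independently survives a year with
# probability 0.99999999999 (11 nines). File size never enters.
annual_loss_prob = 1e-11
files = 1_000_000

expected_losses_per_year = files * annual_loss_prob   # 1e-5
years_per_loss = 1 / expected_losses_per_year         # ~100,000 years
print(f"~1 file lost every {years_per_loss:,.0f} years")
```

The only way size enters is indirectly: bigger files mean fewer objects for a fixed dataset, which lowers the expected number of object-loss events (while each event loses more bytes).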
In practice, this is almost certainly not going to be why you lose data. That will be because of a chain of human errors, or a code bug, or because the user accidentally deleted the data, or because earthquakes destroyed your data centers.
I don't know anything about Wasabi other than what's on their web page, but I half suspect that what they did was look at Amazon's durability guarantee and then write that number down as their durability guarantee.
Cloud storage by itself (just like delivery) is a commodity, and if you look at the pricing trends per GB, it's a race to the bottom (will be interesting to see which CDN decides to become "free" first).
So, without a suite of offerings a la AWS - how will they make money in this market?
We agree with you exactly: storage should be a commodity. The goal of building something like Wasabi is to make cloud storage financially feasible for datasets that were too large before. Making petabyte- and exabyte-scale data easily accessible between institutions could be revolutionary, and we are excited to be at the forefront.
> The Wasabi infrastructure has been built using industry best practices for redundancy in data center design.
Sounds too generic. Maybe put in something concrete & technical.
Preferably an option that can do S3 upstream, and support for signed requests with expiry is a must.
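Signed requests with expiry boil down to an HMAC over the resource path plus an expiration timestamp. A simplified stand-in for S3-style presigned URLs (this is a generic sketch, not the real SigV4 query-string protocol; the key and paths are made up):

```python
import hmac
import hashlib
from urllib.parse import urlencode

SECRET = b"not-a-real-key"  # illustrative only

def presign(path: str, expires_at: int) -> str:
    """Sign a path + expiry timestamp; anyone holding the URL can use
    it until it expires. (Simplified stand-in for presigned URLs.)"""
    sig = hmac.new(SECRET, f"{path}\n{expires_at}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires_at, 'sig': sig})}"

def verify(path: str, expires_at: int, sig: str, now: int) -> bool:
    expected = hmac.new(SECRET, f"{path}\n{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < expires_at

url = presign("/bucket/key", expires_at=2_000_000_000)
print(url)
```

Any storage provider claiming S3 compatibility needs the real version of this (presigned URLs with `X-Amz-Expires`), since backup tools and browsers rely on it for direct, credential-free fetches.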
Try pricing it against Google https://cloud.google.com/cdn/pricing and Level3 http://www.level3.com/en/products/content-delivery-network/
As someone mentioned, CloudFlare initially seems cheaper, but you give up some things, like control of your DNS, and they can shut you off or start showing CAPTCHAs at any time unless you pay for their higher plans. That might be worth it; you have to weigh the pros and cons.
CloudFlare is okay as long as you pay $200/month. That's nothing in the CDN world. Amazon wants $600/month just to deploy an SSL cert on CloudFront.
That's a bit misleading: Amazon SSL is free on CloudFront if you are OK with SNI. And almost everyone is now... browsers without SNI are virtually nonexistent. The only browser with any market share that does not support it is IE 8 on Windows XP!
Disclosure: they run on my day-job's systems.
We beat CloudFront pricing by 50-80% in most cases.
(I am a co-founder)
Thanks for your suggestion (honestly) but we won't ever be using CloudFlare so long as I have a say in it.
All software has bugs. They just got unlucky, though their handling of the situation did raise a bunch of red flags.
I also have my DNS with them on the free tier, but I am not running my regular traffic through their network. I just plan to create a subdomain and run JS, CSS, and images through it.
Would this not be recommended? Free CDN does seem too good to be true.
No extra charge to get your data out.
The point is that for individuals it might make more sense to consider "friendlier" storage options. These types of services don't seem to be tailored to storing "family albums".
Of note we are not cold storage: you can get your data back instantly whenever you want it.
I use S3 because of convenience. Build something more convenient, I'll switch.
Presently we accept all credit cards (via Stripe, of course), and are rolling out invoices for ACH / etc. soon.