Wasabi – Simple storage solution (wasabi.com)
311 points by gglanzani on July 28, 2017 | 155 comments



Some quick observations:

- Their performance claims are incredibly biased. Amazon S3 has far better write performance than their claims.

- They claim 100% S3 compatibility, but it fails a large number of API calls using Ceph’s s3-tests. I didn’t dig into this too far, but they do claim “No need to change your S3-compatible application” so changing my endpoint + credentials should have worked. To their credit - PUT, GET and DELETE did work, but those are only 3 of hundreds of API calls.

- Their durability claims are highly suspect. I would want to see a white paper breaking this down.

- Their first round was debt financing.

Why this business model doesn’t work...

Most people don’t use S3 alone; S3 is a source for other AWS services. Because of that, Wasabi becomes the more expensive option: you pay a 4-cent-per-GB egress fee to access data from the rest of your AWS infrastructure. The only place Wasabi becomes cheaper is for those using S3 direct/alone, which is a very small subset of S3 usage. AWS is very open about this in white papers, conferences, tech talks, etc.

Wasabi is an economies-of-scale play that casts far too wide a net. There is opportunity in specific vertical markets to sell a solution (object storage paired with compute), but a pure S3 endpoint will never take substantial market share away from AWS.


>Their first round was debt financing.

Are you suggesting that Wasabi might go away because, maybe, this is not the "traditional" valley model?


Many, many startups in the valley have raised on convertible notes (aka debt) first.


Indeed. That's what I don't understand. Nothing about the funding is notable.

Convertible notes were all the rage a few years back, too. It was all over the blogs and HN.


>but a pure S3 endpoint will never take substantial market share away from AWS.

Perhaps in the "dev/ops" world, but S3 as a standalone repository could absolutely work for most enterprises and, quite frankly, SOHO users as a backup target. That being said, I have almost no faith this will survive and as such wouldn't trust it with my backups.


> They claim 100% S3 compatibility, but it fails a large number of API calls using Ceph’s s3-tests. I didn’t dig into this too far, but they do claim “No need to change your S3-compatible application” so changing my endpoint + credentials should have worked. To their credit - PUT, GET and DELETE did work, but those are only 3 of hundreds of API calls.

Validating claims of S3 compatibility is important. The S3 API has corner cases and misfeatures like BitTorrent hosting, but sometimes vendors omit key features like multi-part upload and v4 signatures. s3-tests[1] is the best way we have to evaluate implementations, yet only Ceph and S3Proxy seem to contribute to it. Users should hold vendors' feet to the fire about these claims.

[1] https://github.com/ceph/s3-tests
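
For anyone who wants a quick sanity check short of running the full s3-tests suite, here's a minimal smoke-test sketch with boto3; the endpoint, credentials and bucket name are placeholders, and it only exercises a handful of calls (including multipart, which vendors often skip):

    # Minimal S3-compatibility smoke test: PUT, GET, multipart, DELETE.
    # Endpoint and credentials are placeholders -- point them at the vendor under test.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example-vendor.com",  # hypothetical endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    bucket, key = "compat-test-bucket", "hello.txt"
    s3.create_bucket(Bucket=bucket)
    s3.put_object(Bucket=bucket, Key=key, Body=b"hello")
    assert s3.get_object(Bucket=bucket, Key=key)["Body"].read() == b"hello"

    # Multipart upload is one of the features vendors commonly get wrong.
    mpu = s3.create_multipart_upload(Bucket=bucket, Key="big.bin")
    part = s3.upload_part(Bucket=bucket, Key="big.bin", PartNumber=1,
                          UploadId=mpu["UploadId"], Body=b"x" * (5 * 1024 * 1024))
    s3.complete_multipart_upload(
        Bucket=bucket, Key="big.bin", UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
    )

    s3.delete_object(Bucket=bucket, Key=key)
    s3.delete_object(Bucket=bucket, Key="big.bin")
    s3.delete_bucket(Bucket=bucket)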


Yes, it's hard to find use cases where Wasabi storage could compete without compute.

But S3 originally launched by itself, before EC2.

If Wasabi adds a Lambda-like serverless compute layer, that could be powerful.


It would be a great destination for cloud backup, where one of the concerns is loss of all your backed-up data due to malicious action - look at the trouble the hackers had to go to in Mr Robot to take out their offsite tape backups. I'd be more concerned, though, with the durability of the company than with their disk systems over the long term.


Use a separate AWS account with write-only permission, and push backups of S3 objects from production out to that backup account. Enable versioning and Glacier lifecycle rules in the backup account. Lock down the backup account credentials appropriately. (And add alerting and periodic fire drills, of course.)
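
A rough sketch of that setup with boto3, run against the backup account; the account ID, bucket name and 30-day Glacier transition are placeholder assumptions, not specific recommendations:

    # Sketch: backup bucket that a production account can write to but not read/delete from.
    # Account ID, bucket name, and lifecycle timing are placeholders.
    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-backup-bucket"
    prod_account = "111111111111"  # hypothetical production account ID

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowWriteOnlyFromProduction",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{prod_account}:root"},
            "Action": ["s3:PutObject"],          # no Get/Delete/List
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

    # Versioning keeps prior copies even if an object is overwritten.
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

    # Move backups to Glacier after 30 days (placeholder timing) to keep costs down.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [{
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]},
    )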


Amazon is still a single point of failure when you backup S3 to S3.

What if they deploy a silent corruption bug next year?


Also, you can configure your S3 bucket to enable multi-factor delete, which essentially makes it immutable unless you delete using a 2FA device, which shouldn't really happen accidentally.
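
For reference, enabling MFA Delete looks roughly like this with boto3; the bucket name, MFA device serial and code are placeholders, and as far as I know the call has to be made with the bucket owner's root credentials:

    # Sketch: turn on MFA Delete alongside versioning.
    # The MFA value is "<device serial ARN> <current code>"; both are placeholders.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket="example-backup-bucket",
        MFA="arn:aws:iam::111111111111:mfa/root-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )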


There would still be a lot of tooling needed around just a Lambda-like serverless compute layer.


I've found that most people / companies actually do use S3 alone. Adoption of S3 on its own is far greater than the rest of their services, at least in my experience.


> - Their durability claims are highly suspect. I would want to see a white paper breaking this down.

Is there such a white paper for AWS/Azure/GCP? Or are they running on reputation alone?


So would it be a fair assessment to say Wasabi is to S3, as Flask is to Django?


It would map better to a Django app than to Flask.


There's also B2 (https://www.backblaze.com/b2/cloud-storage.html), which is I think the cheapest of them all.

UPDATE: Well, egress is cheaper. B2 is $0.005/GB storage with $0.02/GB egress. But one thing to consider is that B2 storage is located within one single datacenter.

(Disclaimer: I am not affiliated, but am in the process of deciding to use B2.)


B2 is horribly unstable, has limits left and right and most of all, the latency is horrendous.

You will be better off using any standard OpenStack provider. That way you avoid lock-in and get great performance (in my testing). E.g. OpenStack Swift on OVH.

Sure it's a tiny bit more expensive. But you get a mature cross-provider API and better performance.

Even for huge chunks of cold data, I wouldn't want to use B2.


I am uploading encrypted backups from servers to BackBlaze B2 Cloud Storage using duply/duplicity. This works pretty well for me. Latency is not important for this use case and I have not noticed any limits. There were a few upload failures initially, but I just increased the upload retries, which sufficiently remedied this problem.

That it works so well for me might be connected to the fact that their own products are backup solutions. In any case, I am now paying a significantly lower monthly fee for B2 than on Amazon S3 before, and the setup was much less complex.


I took a look at OVH's Object Storage (which you mentioned) https://www.ovh.com/us/public-cloud/storage/object-storage/

It looks really promising, and pricing is very reasonable. Thank you for pointing it out.


Thanks for sharing. This is news to me. I haven't ever seen a bad review of B2 over the internets. Can you elaborate further?

How stable is Wasabi?


What are some good OpenStack providers? And are there any that can offer 11 nines of durability?


I know at least that Rackspace has a hosted Swift storage service. Of the storage integrations I've worked on, Swift was probably the most error-prone to get your integration stable, though. I guess that's just a credit to the robustness of the S3 client APIs and services that copy it (Ceph, GCS).


OVH is pretty good and very cheap.


We just added B2 support to Arq Backup. It's been working great for us so far.


Thanks for Arq, it's a fantastic product. Any chance of an option to migrate from one source to another?

I'm in the process of moving from one provider to another, and it's pretty tedious getting it all in sync while I manually move from one to the other, still backing up to the previous provider until the move is complete.

It's going to be a bit of a process over the next few weeks :(


Any plans to add OneDrive Business? Office365 accounts come with 1TB and most people don't know what to do with it.


How is B2 cheaper at .005/GB than Wasabi at .0039/GB? Am I missing something here?


Sorry, clarified that only B2 egress is cheaper. Wasabi storage is cheaper at $0.0039 vs $0.005.


I haven't used B2 in a production setting. But I've been using it for backing up my NAS and have been quite happy. Thinking about switching my other personal machines from crashplan to backblaze.


What are you using for backup? I have Arq (to Google) and Crashplan.


I did some research myself to decide which storage solution to use for backups and B2 is the cheapest of all, at least for backups. I hope that Backblaze will also offer some kind of redundancy option (replicate customer data in 2+ datacenters).


B2 is awesome for when you have a local backup. For example, B2 is ideal to back up a NAS. I wouldn't want to use it for set-and-forget cold storage, because of the single datacenter problem.


From the FAQ:

> 7. Your website indicates $.0039 per GB per month but the pricing comparison on the website indicates 1 TB is priced at $3.99 / month (instead of $3.90 / month for 1 TB). Why is that?

> The Wasabi monthly price is $.0039 GB / month. Given that there are 1024 GB in 1 TB (not 1000 GB), the price for 1 TB is $.0039 * 1024 or $3.99 per 1 TB per month.

Come on, you are a digital storage company; let's call things what they are. There are 1000 GB in a TB. There are 1024 GiB in a TiB.


If the goal is price reduction:

https://www.ovh.com/us/public-cloud/storage/object-storage/ (S3-comparable performance)

$40/year minimum

Outgoing traffic: $0.011/GB Storage: $0.0112/month/GB

https://www.ovh.com/us/public-cloud/storage/cloud-archive/ (archival storage)

Incoming/Outgoing traffic: $0.011/GB Storage: $0.0023/month/GB

https://www.online.net/en/c14#pricing Storage: €0.005/Month

No traffic costs (because its archival storage)

The main downside is they are located in one physical area even though they are labeled as multiple DCs.

But for high traffic uses, honestly, you can just double the storage costs (i.e. OVH CA and OVH France) to get redundancy while saving _massively_ on traffic costs.


Where did you see the $40 min with OVH?


Experience. If you dig into the billing information and/or actually attempt to create a cloud project, it'll tell you there is a $40/year minimum.

On OVH you create an "OpenStack/Cloud" project -> They charge you $40 -> Then you have use of the cloud storage.


It was 10€ in prepaid credit for me just yesterday, which is valid for 13 months IIRC.


Ah perhaps it changed then.

I opened my account a while ago via OVH CA in USD.


Does this work with django-storages?


No. It uses the OpenStack API.

https://github.com/openstack/python-openstacksdk

You'd need to use that to build it out.


Probably no need to do this on their own. Found two already existing implementations, and suspect there are more:

https://github.com/dennisv/django-storage-swift

https://django-cumulus.readthedocs.io/en/latest/


All you would need is to add a Django storage backend to talk to it.


Wouldn't it be funny if this was just a market test/exercise, using actual S3 as a backend, just to see if it gets any traction before building own HW/SW solution?


Kind of like how most wasabi in the U.S. isn't really wasabi, it's just horseradish? lol


Not just in the US. Real wasabi is very expensive. Even in Japan, unless you're seeing them grate it for you at the table, it's still just green horseradish.

(Source: Posting this from Meguro-ku in Tokyo ;) )


Well if this service were called horseradish I would definitely sign up.


That's what I thought, given that I didn't see any datacenter photos, tech blog, setup, and that their team is so goddamn small.

Such a startup would require a lot of tech infrastructure and know-how, and their page http://www.wasabisys.com isn't even configured.

It feels like Wasabi is a startup with no physical product, but I might be wrong.


Could explain the 1 TB minimum. Hoping most people don't get close to it.


Intriguing idea. With the 1 TB minimum, it's conceivable that you could actually turn a profit doing that.


Their IP addresses belong to a company named PSINet:

    ~ dig +short console.wasabisys.com
  38.27.106.13
  38.27.106.12
  38.27.106.11

  NetRange:       38.0.0.0 - 38.255.255.255
  CIDR:           38.0.0.0/8
  NetName:        COGENT-A
  NetHandle:      NET-38-0-0-0-1
  Parent:          ()
  NetType:        Direct Allocation
  OriginAS:       AS174
  Organization:   PSINet, Inc. (PSI)
  RegDate:        1991-04-16
  Updated:        2011-05-20
  Comment:        Reassignment information for this block can be found at
  Comment:        rwhois.cogentco.com 4321
  Ref:            https://whois.arin.net/rest/net/NET-38-0-0-0-1

There's not much to be found about PSINet or Cogent, who own the block, but they don't seem to offer hosting services. The only relevant piece of information I could find as to what PSINet does these days is this one:

https://forums.digitalpoint.com/threads/lots-of-weird-traffi...

It does make it sound as if it is a market test, or that PSINet have developed large scale hosting capabilities in the last few years.

// Edit

In addition, they use the exact same terminology Amazon uses in their console file, including ARN and two policies named after S3:

AmazonS3FullAccess AmazonS3ReadOnlyAccess

The policy syntax looks familiar to those coming from S3 as well.


Cogent/PSINet are a transit provider.

Servers look like they're in VA-US.

In this case Cogent on the IP space is roughly equivalent to seeing Comcast or something on IP space whois.

The end-user is:

    network:IP-Network:38.27.106.0/24
    network:Org-Name:Bluearchive, Inc
    network:Street-Address:44060 Digital Loudoun Plaza
    network:City:Ashburn
    network:State:VA
    network:Country:US
    network:Postal-Code:20147


Cogent bought PSINET some time ago. http://www.cogentco.com/en/news/press-releases/279-cogent-co...

Cogent has large-scale hosting capability.


I'm surprised the FAQ doesn't answer the question that immediately came to mind: why should I risk my data with an untested startup when the only benefit is claimed performance/price?

Or my second question: wait, doesn't this sound an awful lot like Pied Piper's product from the newest season of Silicon Valley?


Exactly. Performance means nothing when the company runs out of funding and goes belly up... along with all your data.


They're claiming the same 11 9s durability that S3 does. I'd be pretty suspicious of that claim without a track record, but it looks like Wasabi's founders come from Carbonite. Bring on the competition; commoditization of fundamental building blocks is great for everyone except people trying to make startup-scale returns on them.


Those durability numbers for both Amazon and Wasabi are pure marketing and don't really mean anything even remotely important. Durability of data stored with a single company, even a company like Amazon, is actually very low; you should be scared of how low it really is. You could get kicked out of the service, lose data because of a bug or an operational mistake, be prevented from using the service by your government for political reasons, and so on.


You are absolutely correct. The probability of all of the company's data centers being simultaneously destroyed by meteors is far higher than the one-in-10^11 annual loss rate that 11 9s implies. It's complete marketing fluff that is not targeted at engineers.

Presumably Amazon made it up because people kept asking them "how likely are you to lose my data?" and Amazon needed to be able to say something other than "it's impossible to say due to human factors being the dominant likely cause but it's very unlikely."


Do you have any evidence for these claims of low durability? (None of those issues, except for bugs in the service, would count against them, by the way.)


I think factors beyond the storage algorithm are pretty important to consider when thinking about storing data that's important to your business. To your specific point though:

1. Amazon claims 99.999999999% durability of objects over a year.

2. I store 1EB of data with an object size of 4MB for a year (so 250,000,000,000 objects).

3. I can expect to lose about 2.5 objects in a year, or roughly 10 MB (see the quick check below).
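
A quick check of that arithmetic, using the 4 MB object size assumed in point 2:

    # Expected annual loss at 11 nines of durability for 1 EB of 4 MB objects.
    objects = 1e18 / 4e6                  # 2.5e11 objects (250 billion)
    annual_loss_prob = 1 - 0.99999999999  # 1e-11 per object per year
    print(objects * annual_loss_prob)     # ~2.5 objects/year, i.e. roughly 10 MB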

Now to my experience:

I have stored in excess of that amount of data in S3. I have lost considerably more data -- solely because of data losses internal to S3 -- than these numbers would suggest. It was a tolerable amount of data loss, I didn't curse Amazon's name or swear vengeance, but it was well beyond what the math above predicts.

The standard S3 SLA provides credits only based on uptime. There is no mention of durability whatsoever. That tells you that Amazon is not willing to put their money where their mouth is on their 99.999999999% durability claims. The reality is the number is a design target, not an operational guarantee.


AFAIK, S3 only provides notification when reduced redundancy objects are lost, not regular objects. How did you detect your data loss?


All of my objects used standard redundancy. My recollection is that regardless of object class, you will get a 405 error if you try to fetch an object that has been lost.

I didn't use SNS notifications at the time (which might only work for reduced redundancy).

So that left two options: find out when attempting to fetch the object, or run bookkeeping jobs against the object catalog to periodically spider the data and ferret out any objects that are lost.

The second option may be a tad nicer, but it is also more complex and more expensive and the end result is the same either way.
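
For what it's worth, the second option doesn't need much code; a sketch of the idea with boto3 (the bucket name is a placeholder, and the per-request charges add up fast at large object counts):

    # Sketch: walk the bucket listing and HEAD each key to surface unreadable objects.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "example-bucket"  # placeholder

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            try:
                s3.head_object(Bucket=bucket, Key=obj["Key"])
            except ClientError as err:
                print("possible loss:", obj["Key"], err.response["Error"]["Code"])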


You're mixing up technical availability, as in the amount of downtime, and organizational availability, as in your contract being abruptly terminated.

Even with 100% technical availability (no downtime ever), organizational / legal risks exist. No company can realistically be free of them. Amazon, compared to many smaller companies, may have somewhat lower risks of this sort: they are hard to shut down.

If you really care about your data, you likely have backups and / or mirror copies of it across several providers, in multiple countries, and have a well-tested contingency plan to move a complete copy of your production service to any of 2-3 other providers. (And most people likely don't have your risks, or the amount of money to enable all of this.)


It doesn't make sense to me to think of durability for cloud storage services as anything other than the probability of data retention for a client, which is impacted by your contract being abruptly terminated too, although not by the amount of downtime.


I wonder how this compares to rsync.net, especially with their HN discount and http://rsync.net/products/attic.html if you're doing the kind of backups I'd imagine Glacier is used for.


After reading about https://rsync.net for years on this site, I signed up last week. Great service, and almost pain-free. I've only bought 100 GB but that's enough for my virtual machines.

(Completely replaced the use of https://rsync.io ;)


You should make rsync.io redirect to rsync.net with your coupon code and get a bunch of storage :)


Rsync.net has more features but it's also 10x the price, even with the Attic discount. Egress is free on rsync.net but of limited usefulness since the only supported protocol is SSH.


They have a gimmick in their pricing: 90-day minimum storage. So for objects that don't have 90-day lifetimes, you can end up paying WAY more than S3.

S3 IA has a 30-day minimum, like Google Nearline, and Google Coldline is 90-day minimum. These make it very hard to predict and control pricing.

Backblaze B2 may not be as high performance, but their pricing is very low AND very predictable. No gimmicks. I've received many HashBackup customer emails mentioning that they use B2 and have never received complaints about their service.


Has anyone here done a migration from S3 to Wasabi, and successfully realized the lower total cost Wasabi is claiming?


> Wasabi is built to be 100% AWS S3 bit-compatible (same AWS API constructs for storage & identity management). No need to change your S3-compatible application when using Wasabi

I often wonder how this works. With the whole Sun/Oracle lawsuit against Google over the Java API, making a clone of another platform's API sounds dangerous.

I'm curious what HN thinks.

I've wanted to have a "compatibility layer" that mimics my competitors' APIs but have been scared of the possible repercussions.


Google Cloud Storage does this (not sure about % of API completeness, but I've tested it with basic use cases like using Transmit)

https://cloud.google.com/storage/docs/interoperability


Minio also has an S3 "compatibility layer"

see: https://minio.io/ "implements Amazon S3 v4 APIs. Minio also includes client SDKs and a console utility."



It wouldn't keep me up at night. Just do it.


This. Would love to hear more about this.


You can't copyright a software API in the US because APIs are considered purely functional (in the mechanical sense). Take flathead screwdrivers as an example.

You can't copyright the dimensions of the blade head, since they're just the dimensions of the screw slot, and the only thing the copyright does is harm interoperability between screws and screwdrivers. You can copyright some other aspects of the screwdriver design, and if screwdrivers were a new thing, you could patent the idea of a screwdriver.

This is why the Java case hinged on a former Sun employee literally cutting and pasting method implementations he wrote at Sun into Android's implementation while he worked at Google. /double-face-palm


> Wasabi storage costs a flat $.0039/GB/Month with a 1 TB minimum usage.

so the only "catch" is $3.90 a month minimum?


Not only. There is also a minimum 90-day charge for objects. https://wasabi.com/pricing/pricing-faqs/


According to that, there is an extra "9" they didn't show, so $3.99 rather than $3.90.

So then if you deleted your 1TB each month, and uploaded new data, you'd be paying 3 times more due to the 90 day minimum charge, so about $12/month in the worst case.


That is to be expected for bottom-barrel prices, to make up for the cost of maintenance (GC / compaction / rebalancing / etc of data). They might just wait 90 days before doing the first compaction, or whatever their system does to delete data.


They're using 1TB as in 1024GB. 1024*$0.0039 = $3.9936


Traffic is horribly expensive :|


Lacking a comparison with their closest competitor, B2 ($0.005/GB, cheaper outbound at $0.02/GB). Also no information on DC locations and zones.


They claim to have multiple data centers. It sounds fishy to me. I would like to see some pics / info of their setup before I trust them with 1 byte of my data.


My guess is that their data centers look exactly like this

https://aws.amazon.com/about-aws/global-infrastructure/


Storage is cheap on any cloud, the network egress is expensive.

1. Wasabi: Storage: $.0039/GB/Month Egress: $.04/GB

2. AWS: Storage: $0.023/GB/Month Egress: $.05-.09/GB (even lower if you're big)

Sending data out of AWS costs the equivalent of 2-4 months of storage.
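
A rough way to see where that leaves you, plugging the list prices above into a made-up workload (my own numbers, not anyone's official calculator):

    # Rough monthly cost model using the list prices quoted above.
    def monthly_cost(storage_gb, egress_gb, storage_rate, egress_rate):
        return storage_gb * storage_rate + egress_gb * egress_rate

    storage_gb, egress_gb = 10_000, 2_000  # example workload: 10 TB stored, 2 TB out
    wasabi = monthly_cost(storage_gb, egress_gb, 0.0039, 0.04)
    s3 = monthly_cost(storage_gb, egress_gb, 0.023, 0.09)
    print(f"Wasabi ~${wasabi:.0f}/mo vs S3 ~${s3:.0f}/mo")  # ~$119 vs ~$410
    # Caveat: S3-to-EC2 traffic in the same region is free, so for data consumed
    # inside AWS the S3 egress term drops to zero; at these rates Wasabi then
    # costs more once you pull out more than roughly half of what you store
    # each month.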


The benefit of S3 is that traffic is free within the same region. If you're hosting on AWS that alone is going to save you money versus the competitors.


An often-overlooked cost is the per-operation charges.

Regularly pushing lots of small files on S3 can get expensive. $1/200k files ($0.005 per 1k PUT requests).


Not to mention the minimum file size charge, which I think was a few kb per file in infrequent access S3.


IA is only applicable for items that are 128 KB or larger in size. Also, Glacier adds 32 KB of Glacier data to each object as well as 8 KB of standard-tier storage per object. S3 is really inefficient for huge numbers of small files.


Thanks, I'd not considered that one before, I'll make sure to keep an eye on that.


LeoFS: A stable and scalable S3 clone with NFS support?

That you can host on your own infrastructure

https://leo-project.net/leofs/


Well, they edited the title, so this isn't as tongue-in-cheek as it was meant to be. The original title was something like:

"Wasabi - a faster, better clone of S3?"

That's why I see others quote the original in the reply.


Not affiliated, but just found out that it's compatible with Arq, and when I saw the prices I was stunned.


Same here though I'm leaning towards backblaze since there's no minimum.


75 MB / 5 sec benchmark results (using the internal AWS network to pull from S3!) sound dubious. You can get 4 Gbps+ down from S3 within the same region in my experience; that's roughly 30x faster than these numbers.


On the same page on HN there's another Wasabi that's a fire alarm for deaf people.


And I'm old enough to remember Wasabi as the proprietary VB-to-PHP compiler that Fog Creek developed for a few years. And I think there was something related to OpenBSD or NetBSD, some company called Wasabi Systems...?

It's definitely one of the most common codenames in IT, together with Phoenix, Firebird and Panda (and in the enterprise, all greek/roman gods).


NetBSD, and also storage related[0], confusingly enough. Wasabi brought journaling (WAPBL[1]) to NetBSD.

[0] http://www.wasabisystems.com/

[1] http://netbsd.gw.com/cgi-bin/man-cgi?wapbl++NetBSD-current


This is Richard: a product manager with Wasabi

Sorry to say that we don't also make the fire alarms. That would be cool technology to work on, though!


Sometimes people make these as copycat posts, which is not the best kind of post, but I figure HN can handle a lot of wasabi.


This looks great. Are there any client libraries for access, perhaps similar to AWS's `boto`? Having a hard time finding that on your website ..


Since it claims to be "100% bit compatible with Amazon S3", I would assume that you can use boto if you manually configure their endpoints.
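
Presumably something along these lines; the endpoint URL here is a guess based on the wasabisys.com domain mentioned elsewhere in the thread, so check their docs for the real one:

    # Sketch: pointing boto3 at a non-AWS, S3-compatible endpoint.
    # The endpoint URL is an assumption -- confirm it against Wasabi's documentation.
    import boto3

    wasabi = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",
        aws_access_key_id="WASABI_ACCESS_KEY",
        aws_secret_access_key="WASABI_SECRET_KEY",
    )
    wasabi.put_object(Bucket="my-bucket", Key="test.txt", Body=b"hello")
    print(wasabi.get_object(Bucket="my-bucket", Key="test.txt")["Body"].read())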


Would be nice to see a note about that somewhere


This is Richard: a product manager at Wasabi. Yes we work with the boto SDK.

You can find all of the compatibility testing our PACT team has done here: https://wasabi.com/help/interop-results/


Depending on your access patterns, Backblaze B2 might be cheaper at $0.005/GB stored and $0.02/GB downloaded.


Curious .. where (region) is the data stored?


This is Richard: a product manager with Wasabi

Presently our main data centers are in Massachusetts (home sweet home) and Virginia (similar to that of AWS East). Having our data located here has advantages in the present cloud ecosystem that we are excited to roll out in the months to come.


Neat job, keep up the great work! There is also OVH as a competitor. Their panel and documentation are not the best, but once you integrate, it works like a charm, and I guess they are the cheapest object storage service out there in the market: https://www.ovh.com/us/public-cloud/storage/object-storage/


Why would you say they are the cheapest object storage when it is significantly more than the very service you responded to?

Not to mention b2 and several others that are cheaper.


Worth noting the opportunity cost of not being in S3. Something like Athena won't be yours for the asking. It's been saving my butt lately. Nice to be able to actually see what's in your S3 sometimes :) http://blog.ratelim.it/blog/log-aggregation-at-scale-for-che...


Would love to hear the perspective of someone who actually moved current infrastructure over to this provider. Would love a write-up and pros/cons after transitioning.


There is also NodeChef object storage. NodeChef charges only by the storage size of your instance. No Data transfer charges. No additional charges for PUT, GET, COPY, or other operations. https://www.nodechef.com/s3-compatible-object-storage


The article states:

"Wasabi’s durability is 11 x 9s, the same as Amazon S3. To put that in context, if you stored 1 million 1 GB files in Wasabi, you would expect on average to lose one file every 659,000 years"

Can someone walk me through the math here? I'm specifically curious about why the size of the file being 1 GB is relevant to the calculation.
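
The naive version of the math, assuming each object independently survives a year with probability 0.99999999999, would look like the sketch below. It gives roughly 100,000 years rather than 659,000, and the file size never enters into it, so Wasabi is presumably using a different model (or extra assumptions) for that figure:

    # Naive expected time to lose one object out of 1,000,000 at 11 nines durability.
    objects = 1_000_000
    annual_loss_prob = 1 - 0.99999999999           # 1e-11 per object per year
    losses_per_year = objects * annual_loss_prob   # 1e-5 objects/year
    print(1 / losses_per_year)                     # ~100,000 years between lost objects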


It doesn't matter. This is marketing fluff. What companies like Amazon do is count how many hard drives would need to simultaneously fail to lose data, calculate the odds of a hard drive losing data, and then multiplying. This gets you to 11 9's.

In practice, this is almost certainly not going to be why you lose data. That will be because of a chain of human errors, or a code bug, or because the user accidentally deleted the data, or because earthquakes destroyed your data centers.

I don't know anything about Wasabi other than what's on their web page, but I half suspect that what they did was look at Amazon's durability guarantee and then write that number down as their durability guarantee.


Yes, I think it is disingenuous to call it "data" durability. "Disk" durability would be more accurate.


I am curious what their strategy is.

Cloud storage by itself (just like delivery) is a commodity, and if you look at the pricing trends per GB, it's a race to the bottom (it will be interesting to see which CDN decides to become "free" first).

So, without a suite of offerings a la AWS - how will they make money in this market?


This is Richard: a product manager with Wasabi

We agree with you exactly: storage should be a commodity. The goal of making something like Wasabi is to be able to do cloud storage of datasets that were too large to be financially feasible before. Allowing data on the Petabyte and Exabyte scale to be easily accessible between institutions could be revolutionary and we are excited to be on the forefront.


Thank you.


Just came here to say I think the design of the website looks great, colors, spacing, type etc.


For me, when choosing an object storage service, the most important question is WHERE my data is stored. If I cannot choose where my data is stored, I won't use the service. Why? Because my clients will ask me the same question for their audits.


We just added Wasabi as a destination option in Arq Backup. Seems to do that job well.


> 12. How reliable is Wasabi?

> The Wasabi infrastructure has been built using industry best practices for redundancy in data center design.

Sounds too generic. Maybe put in something concrete & technical.


Isn't Wasabi a programming language from Fog Creek Software?


It was an in-house compiler that they're not using anymore. https://blog.fogcreek.com/killing-off-wasabi-part-1/


Does anyone have a suggestion for a cheaper CloudFront, not S3?

Preferably an option that can do S3 upstream, and support for signed requests with expiry is a must.


I know this doesn't add value, but if you think CloudFront is expensive you should see how much CDNs used to cost before CloudFront came around. The fact that it doesn't charge you for the bandwidth between S3 and CloudFront is key too.

Try pricing it against Google https://cloud.google.com/cdn/pricing and Level3 http://www.level3.com/en/products/content-delivery-network/

As someone mentioned, CloudFlare initially seems cheaper, but you give up some stuff like control of your DNS, and they can shut you off or start showing CAPTCHAs at any time unless you pay for their higher plans. Which might be worth it; you have to weigh the pros and cons.


CloudFront is one of the slowest CDNs out there.

CloudFlare is okay as long as you pay $200/month. That's nothing in the CDN world. Amazon wants $600/month just to deploy a SSL cert on CloudFront.


> Amazon wants $600/month just to deploy a SSL cert on CloudFront

That's a bit misleading: Amazon SSL is free on CloudFront if you are OK with SNI. And almost everyone is now... browsers without SNI are virtually nonexistent. The only browser with any market share that does not support it is IE 8 on Windows XP!


Have a look at edgemesh[1]. They are doing an interesting take on CDN.

Disclosure: they run on my day-job's systems.

[1] https://edgemesh.com/


If you need to do video, check out https://advection.net/

We beat CloudFront pricing by 50-80% in most cases.

(I am a co-founder)


The cheapest cdn I'm aware of is OVH's. https://www.ovh.ie/cdn/


CloudFront gets much cheaper at volume; you have to commit to usage, though. But it's a huge cost savings over list price.


Check out KeyCDN.


I second KeyCDN, we switched to them a while back because CloudFront had a mystery problem with SSL certs that bit us. KeyCDN has been superb.


Cloudflare?


I know this isn't going to be a popular opinion on HN, but I had the misfortune of using CloudFlare when they first started and experienced firsthand how amateurish the team behind it was. I tried them again three years ago and didn't see anything to change my opinion. The security incident last year cemented it for me.

Thanks for your suggestion (honestly) but we won't ever be using CloudFlare so long as I have a say in it.


They've definitely improved since then. You need to be on the $200/month plan and basically turn off the WAF (which is craptastic), then things are pretty good.

All software has bugs. They just got unlucky, though their handling of the situation did raise a bunch of red flags.


I was considering using cloudflare as a free static asset CDN. On several of the speed test rankings they show cloudflare at or near the top.

I also have my DNS with them on the free tier. But I am not running my regular traffic through their network. I just plan to create a subdomain and run JS, CSS, and images through it.

Would this not be recommended? Free CDN does seem too good to be true.


They explicitly forbid that use case in their ToS, so no, it is not recommended.


Oh, that is good to know. I had no idea that wasn't allowed.


Am I doing the math wrong or is this $4/mo for me to dump 1TB of copies of my family albums and whatnot into for long term cold storage?


Can you make do with storing only 50 GB? That's $0.99/month from Apple. Can you make do with storing only 200 GB? That's $2.99/month from Apple.

No extra charge to get your data out.

The point is that for individuals it might make more sense to consider "friendlier" storage options. These types of services don't seem to be tailored to storing "family albums".

https://support.apple.com/en-us/HT201238


Agreed. But I wouldn't trust Apple with my data; iCloud has lost some of my iWork files and has failed to sync important Apple Notes when changing phones. This was during the start of iCloud, so reliability might be better now.


Sounds right if they stick around long enough for you to get your content back in a few years. Don't forget to price in that risk.


I'm a product manager at Wasabi and your math is accurate!

Of note, we are not cold storage: you can get your data back instantly whenever you want it.


If I am using Arq, will I have additional costs except '$.0039 per GB per month'?


The biggest reason why I use S3 is not price or performance. Competing with them on price or performance is not going to work.

I use S3 because of convenience. Build something more convenient, I'll switch.


If it is an S3 API clone, doesn't that make it equally convenient?


What payment methods do they accept?


This is Richard: a product manager with Wasabi

Presently we accept all credit cards (via Stripe, of course), and are rolling out invoicing for ACH / etc. soon.


Is there a web UI?


BUT WHERE IS THE SUSHI?


CTRL+F "availability". 0 results found.



