rclone supports multithreaded upload, and even has experimental support for FUSE mounting. However, the sync command gets you Dropbox-like behavior and can be cronned: https://rclone.org/commands/rclone_sync/
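For a concrete illustration, a crontab entry along these lines (paths and bucket name are hypothetical) would run the one-way sync nightly:

```
# run the one-way photo sync to B2 at 03:00 every night
0 3 * * * /usr/bin/rclone sync /home/me/photos b2:my-bucket/photos --log-file=/home/me/rclone.log
```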
I really like the price of B2, I hope it stays low :-)
Encryption raises the barrier for both third-parties and family. In case something happens to me, I want the technical barrier to be low enough for my family to discover the backup. Another reason is that in my experience encrypted data is more sensitive to bit rot and bugs than unencrypted data. I'm backing up important work stuff with Arq Backup for example and I've had my archive corrupted once. Not sure if it was the software's fault or the storage.
My rule of thumb is ... if the data should be discovered by my family in case I'm not around, then I won't encrypt it. Photos are not worth encrypting anyway, since a lot of them end up being shared on Facebook, Flickr, Instagram, etc, as photos are meant to be shared, at least with your family.
That said I still expect Backblaze or Dropbox to keep my data private. Not secret, but private and there is a difference.
Where do you draw the line around data that needs to be discovered? I'm thinking about instructions to access things like bank accounts or such that they may or may not already have access to, where I'd want them encrypted but accessible. Not that I've got secret Cayman accounts or anything, but financials are usually things i want heavily encrypted, but do want family access to in case of the worst.
I know people who were arrested for fraud because their identity was stolen and someone else was committing financial crimes in their name. They always have to carry official police reports with them in case they get arrested again.
Can you shed some light on how you share the photos with non-technical family and friends, given that B2 has no app as such?
I have some experience with AWS/Azure and both of them do not support folders, and the workaround is to have slashes in the filename to create a virtual directory. Is it the same with B2?
I keep my photos on Dropbox too, which is how I share them with family, besides sending files over WhatsApp, which is popular these days. But they only provide the history of changes for 1 month, or 3 months for Pro. As has been said before, solutions like Dropbox are not reliable for doing backups without specialized software like Rclone or Arq Backup, that can keep a version history.
My archive is currently less than 150 GB, so B2 is really cheap. I also have an offline backup on a portable hard drive. The idea with backups is that if you have data you care about, then it's a good idea to have at least 2 backups in different locations, made via different software.
> I have some experience with AWS/Azure and both of them do not support folders, and the workaround is to have slashes in the filename to create a virtual directory. Is it the same with B2?
B2 has folders; you can navigate them in the online interface. That said, the service doesn't have polished apps available, being a platform like S3. It has no desktop or mobile apps currently, although if they survive, given the price, I'm sure apps will happen at some point.
The online interface simply assumes that a slash in the filename should be represented as a folder; and they encourage apps to do the same. I believe they also enforce a max distance between slashes that is smaller than the max filename length.
What this means is that there is no way to, for instance, query what the root directories are, short of listing all files.
If you have a directory, you can list its contents using a prefix search (although the prefix need not be a directory, and this will not just list the top-level elements).
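To make the prefix behavior concrete, here's a small self-contained sketch of how a UI can derive "folders" from flat names containing slashes; no real bucket is involved, and the function name is mine:

```python
# Derive the immediate children of a "virtual directory" from a flat
# listing of slash-delimited names, the way B2/S3-style web UIs do.
def list_dir(names, prefix=""):
    children = set()
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        slash = rest.find("/")
        if slash == -1:
            children.add(rest)              # a "file" at this level
        else:
            children.add(rest[:slash + 1])  # a "subfolder"
    return sorted(children)
```

With `["photos/2018/a.jpg", "photos/b.jpg", "docs/c.txt"]`, `list_dir(names)` gives the top-level entries and `list_dir(names, "photos/")` gives the entries under photos/.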
>there is no way to, for instance, query what the root directories are, short of listing all files
This is not true! Try this from the b2.py command line:
b2.py ls <bucketName>
That would list all the top level folders. The APIs are designed to support two things: 1) listing all files, or 2) navigating and listing the contents of each folder.
Yes, but you're paying for that storage. If you sync 100 GB of photos, then locally make a small EXIF data change to all of them and sync again, you're now paying for 200 GB of storage. B2 has Lifecycle Rules to help keep versions from getting out of control, and the API has methods for handling versions for clients like rclone to use.
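As a concrete example, a lifecycle rule that deletes a superseded (hidden) version one day after it's hidden looks roughly like this in B2's JSON format; the field names come from the B2 docs, and the prefix is hypothetical:

```json
{
  "fileNamePrefix": "photos/",
  "daysFromUploadingToHiding": null,
  "daysFromHidingToDeleting": 1
}
```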
B2 doesn't have its own desktop app, but 3rd-party desktop apps like Cyberduck use the API to work with B2.
I wrote up a walk-thru some time back. The changes basically include replacing all the "fatal logs" with "error logs". I keep merging the upstream back regularly.
I'm currently backing up about 1.7TB of pictures to B2 from my Qnap NAS. Qnap has a backup app called Hybrid Backup Sync.
The problem is that while doing the one-way upload sync, the Qnap app downloaded a lot of data as well. I was confused about why the Backblaze reports page was showing a lot of 'b2_download_file_by_name' API calls (a 600 GB upload resulted in 700 GB of download calls).
I've also opened a thread on the forum - https://forum.qnap.com/viewtopic.php?p=673557
Contacted Qnap support and they said a little bit of download is normal but this looks abnormal. Logs are all fine on the Qnap so they suggested I contact Backblaze.
I wonder if anyone else has faced this.
Cheap and easy: buy a 2 TB drive and keep it at home. If some disaster affects your home -- flood, fire, burglary -- it can take out your data and its backup.
Cheap and reliable: buy a 2 TB hard drive and keep it somewhere else. Keeping the backup up-to-date means regularly bringing the drive home, updating it, and putting it back.
Easy and reliable: pay for a service like Backblaze that automatically backs up all your files to a remote server.
There are other benefits to services like B2 especially, namely being able to access your backed-up files from any device or location, or being able to link people to your files on a high-speed server.
You put the 2 TB drive somewhere else (at a relative's) and keep it updated regularly via network.
That's my set up (but with a bigger drive).
At home, I have the master copy of the data on my file server.
Then I have backup #1 that is in the same location and backup #2 that is in a different location.
Both #1 and #2 get updated at night with a "timemachine-like" backup system based on rsnapshot. The network traffic goes over ssh.
Remote backup system #2 cost a UPS, a Raspberry Pi, and an 8 TB drive, about $250-$300 total.
The initial sync is best done locally of course, but deltas can generally easily go over network at night.
Cheap, reliable, and (relatively) easy (if you're a geek, that is).
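The hardlink trick behind "timemachine-like" tools such as rsnapshot can be sketched in a few lines. This is a toy illustration, not rsnapshot itself (which delegates the heavy lifting to rsync): each snapshot starts as a hardlink copy of the previous one, and only changed files get fresh copies, so unchanged data exists once on disk.

```python
# Toy snapshot rotation: hardlink-copy the newest snapshot, then
# overlay only the files whose content actually changed.
import os
import shutil

def take_snapshot(src, snap_root):
    snaps = sorted(os.listdir(snap_root))
    new = os.path.join(snap_root, "snap.%03d" % len(snaps))
    if snaps:
        # Hardlink-copy the newest snapshot: fast, shares disk blocks.
        shutil.copytree(os.path.join(snap_root, snaps[-1]), new,
                        copy_function=os.link)
    else:
        os.makedirs(new)
    for name in os.listdir(src):
        s, d = os.path.join(src, name), os.path.join(new, name)
        with open(s, "rb") as f:
            data = f.read()
        if os.path.exists(d):
            with open(d, "rb") as f:
                if f.read() == data:
                    continue      # unchanged: keep the shared hardlink
            os.remove(d)          # changed: break the link first
        with open(d, "wb") as f:
            f.write(data)
    return new
```

It only handles a flat directory of files; the real tools recurse and use rsync for the diffing.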
The name escapes me right now, but basically you had to add "friends" in the software, then dedicate a certain amount of HDD space to it. It would then back up your files to your friends' computers, and theirs to yours. Backups were encrypted so your friends wouldn't be able to see your files.
It was a super neat idea, I wish I could remember what it was called so I could see if they're still around...
Both use rsync-style deltas to only send changes, but they use a content-addressable scheme like git so renames are a small metadata change record.
Also, both offer ftp and fuse interfaces if you need to access an older backup.
I'm looking into using the incremental diff/snapshot feature of btrfs to implement a more efficient solution :P
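The content-addressable idea mentioned above can be shown with a minimal in-memory sketch (my own stand-in, not any particular tool's on-disk format): blobs are keyed by a hash of their content, the "tree" is just a path-to-hash map, so a rename touches only metadata and duplicate content is never re-uploaded.

```python
# Minimal content-addressable store: blobs keyed by SHA-256 of content,
# tree is a {path: hash} map, so renames are metadata-only.
import hashlib

class CasStore:
    def __init__(self):
        self.blobs = {}   # hash -> bytes (stand-in for remote storage)
        self.tree = {}    # path -> hash (small metadata record)
        self.uploads = 0

    def put(self, path, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = data
            self.uploads += 1     # only new content costs a transfer
        self.tree[path] = digest

    def rename(self, old, new):
        self.tree[new] = self.tree.pop(old)   # no data transfer at all
```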
> Why not use $60 you're paying per year for 1Tb of online storage to buy a 2Tb hard drive and use it for the backup?
We HIGHLY recommend both. There is a philosophy called the 3-2-1 philosophy of backups. You should always have three copies of the data, two onsite, and one remote. https://www.backblaze.com/blog/the-3-2-1-backup-strategy/
Local backup is cheap and fast, and you should do it too. But it doesn't provide geographic redundancy.
But realistically, isn’t it worth $60 a year to have a constantly backed up hard drive? The alternative is to take the hard drive out of the safe every so often and do a backup and put it back.
Modern higher-density drives are probably less resilient, and who knows how flash drives will fare after 17 years in the closet, but my experience so far is that HDDs trump backup tapes on every measure, including cost, except at extreme sizes (at this point in time, into the petabytes).
Your point is valid and constant, seamless backup is indeed a good thing to have. Whether it's worth $60/year is one's own decision.
Care to share what kind of procedures you use?
I've recovered some data for friends and employers who want it back but aren't prepared to pay > USD1000 for it but if I cannot connect to the disk I'm lost.
(My tricks: tilting the disk, freezing the disk, leaving SSDs powered on but SATA unconnected, and even before that, photorec and ddrescue, etc.)
Note: don't do any of the above if data needs to be recovered at any cost, in that case just contact a data recovery company.
It is proven that if you introduce friction into a process, over time that process will be followed less.
A B2 based backup solution costs more but you don’t have those limitations.
If you have an externally attached drive on your computer and it isn’t connected in 30 days and your computer is online, they erase that backup of the external drive.
If you reconnect your computer after a month and you don’t have the external drive connected, they erase your backup.
I perform hourly backups of my VPS and personal computers, storing it all into a giant repo on OVH and B2. If my house goes up in flames, I have to redo, at worst, 1 hour of work.
Additionally I won't have to deal with expanding to a 4TB drive eventually.
WRT your other comment: Yes, there's some small level of incremental security risk but there's so little that's genuinely sensitive in my storage, I'm willing to take that risk. And, yes, it's probably overkill but for the cost, there are a lot of things I spend money on that are probably unnecessary :-)
> I do onsite backups using Time Machine, and also Backblaze.
You are doing everything correctly. You are following the 3-2-1 backup philosophy, which is: "3 copies of the data, 2 copies locally, 1 copy remote". Here is a blog post we wrote about it: https://www.backblaze.com/blog/the-3-2-1-backup-strategy/
Not sure why that matters, or why it's an attack. They have your data anyway; that's the whole point of the service, to store your data on their hard drives. Why go through the trouble of sending it elsewhere? To play games with your data for giggles?
Backblaze's object storage product, B2, is priced per GB-month, so you pay for what you use. Fair enough. Because it is charged this way, it is open for whatever creative use developers can come up with.
I use B2 because I'm locked out of using Backblaze Online Backup - and that's fine with me, because it's the right product for the job.
Your interface is rsync/scp/ssh.
They give you ZFS snapshots, and you can use s3cmd from their machines, so you can delegate uploads to S3 via rsync.net.
Our prior backup setup was duplicity with GPG hitting S3, and this sometimes was flaky for listing the current keys.
Glad I read HN; that's how I heard about rsync.net. They even have/had an HN discount. You should use the search functionality to find other threads.
So it's still 6x the price of B2, which made me go with Backblaze.
The ZFS-created snapshots of your filesystem are disabled - it is assumed that you will handle your retention/point-in-times with the borg tool itself (we don't like doing snapshots of snapshots ...) Also, while you get full technical support for the use of rsync.net in general we offer no technical support for your use of borg.
The assumption is that borg users know what they are doing - and that assumption has proved to be correct.
I wish I had a TB for $50 so I didn't have to be so judicious with my photos, but the ability to use Borg is so fantastic that I can't complain.
We lowered the borg rate to 2c this past month and, as is our policy, existing accounts get enlarged to match ...
So in the near future, your 150 GB account will become ~208 GB in size ...
(straight link: https://www.time4vps.eu/storage-servers/)
AWS Glacier, hardly a VC-backed startup, charges a literal tenth of the cost. Given that most people are going to be holding on to their backups rather than retrieving them regularly, the pricing math works out better even though it's a bit more complicated.
Say you push 2TB up to Rsync, AWS Glacier, and Backblaze B2, and you need that data back a year later.
Rsync will cost you $80x12: $960, bottom line.
Glacier will cost you $8.00x12: $96 for the storage, plus $0.01 for a thousand retrieval requests, plus $0.01 per gigabyte retrieved, plus $0.09 per gigabyte transferred.
$96 + $0.01 + $20 + $180 = $296.01
Backblaze B2: $10x12 = $120 for the storage, plus $0.01 per gigabyte retrieved:
$120 + $20 = $140
I'm guessing the "startup" dig was directed at Backblaze, but they're actually charging more for the plain storage than AWS, where you're paying more for the bandwidth!
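Working those numbers through as a sketch (2 TB taken as a flat 2000 GB; real billing uses binary units and tiered rates, so actual invoices will differ slightly):

```python
# Year-long cost of storing 2 TB and then retrieving it once, using the
# per-unit prices quoted above.
GB = 2000

rsync_net = 80 * 12                               # flat monthly rate
glacier = 8 * 12 + 0.01 + 0.01 * GB + 0.09 * GB   # storage + requests + retrieval + transfer
b2 = 10 * 12 + 0.01 * GB                          # storage + download

print(rsync_net, round(glacier, 2), round(b2, 2))   # 960 296.01 140.0
```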
> I'm guessing the "startup" dig was directed at Backblaze
And ironically, Backblaze is 99% self-funded, with no VC funding and no deep pockets. We're profitable, which is the only way to stay in business without VC funding.
(Note: we did have a tiny "friends and family" round in 2009 which was 9 years ago. Plus we sold a small percentage of the company to a silent investor who didn't even get a board seat, no votes, no control. 100% of the board of directors are founders of Backblaze.)
Amazon Glacier and Google Nearline are not comparable products. What we offer at rsync.net is a live, online, random access filesystem - so the appropriate comparison is with Amazon S3.
I believe our current pricing is reasonably comparable to S3 - and at larger quantities is actually cheaper. Also, the borg pricing (2 cents) is cheaper at any quantity.
If you hadn't told me this, I'd never have known about it, short of calling the phone number listed on your cloud storage page (why a phone number? that's an immediate turnoff) or reading the "open platform" page, which sounds less like a tech page and more like a marketing page.
It takes your plaintext files and directories, chops them into gpg-encrypted chunks with encrypted, random filenames, and will upload (and maintain) them, with an efficient, changes-only update, to any SFTP/SSH capable server.
My understanding is that the reason people are using borg instead of duplicity is that duplicity forces you to re-upload your entire backup set every month or two or three, depending on how often you update ... and borg just lets you keep updating the remote copy forever.
Hetzner's robot marketplace changed my (operations) life.
ECC hyperbole much?
When you've decided to put your personal data somewhere in a cloud on the other side of the internet, this kind of stuff should probably be absolutely on the bottom of the list of things you need to worry about.
I had previously thought that dedicated servers were doomed to be too expensive/heavy weight for me. I also felt like most VPS providers charged too much (especially true in the case of AWS -- $10/mo for a t2.micro is ridiculous).
I first found INIZ (http://iniz.com/) and was super happy with them; then someone introduced me to the Hetzner Robot Marketplace and I was blown away by the affordable prices (+/- setup fee), and I've had one ever since. Hetzner also has a cloud offering that is also pretty great: slight limits on operating system choice and some other features, but very competitively priced machines in a more cloud-friendly fire-up-and-go format.
I wrote about the revelation here (including link to HN thread where it happened): https://vadosware.io/post/fresh-dedicated-server-to-single-n...
Now I have a ~6 Core (12 vCore/hyper-thread) 24GB RAM monster that I can run experiments with for a decent monthly price.
If you go to other providers like Packet, OVH or Amazon, you're going to see way higher prices. I don't have too many requirements, so Hetzner worked for me.
https://billing.time4vps.eu/?affid=1881 (affiliate link)
https://www.time4vps.eu/storage-servers/ (straight link)
I had to go through a manual send-a-photocopy-of-my-ID process, etc.
This is often for tax reasons -- specifically, whether VAT can be waived.
"Your files on Storage Boxes are safeguarded with a RAID configuration which can withstand several drive failures. Therefore, there is a relatively small chance of data being lost. Please note, however, that you are responsible for your data and there is no guarantee from Hetzner against potential loss of data. The data is not mirrored onto other servers."
"Backblaze will make commercially reasonable efforts to ensure that B2 Cloud Storage is available and able to successfully process requests during at minimum 99.9% of each calendar month."
What if, for argument's sake, Amazon's secret setup is exactly the same as Hetzner's hardware-wise, with Amazon merely putting a number against the reliability that setup offers?
"The data is not mirrored onto other servers." says it all.
It's like renting some shared space on a dedicated server with auto-monitoring.
Can you elaborate on this? I mean, besides if the data center burns down? They offer "snapshots". If the drive fails, can they not recreate the data from the last snapshot?
As these are used internally in their CLI, there's probably a higher chance that they'll continue to work in the future.
PS: They also don't even use their own library in their code examples so I don't think they meant it to be used in that fashion.
Regarding feature requests I'd love to see a well-maintained B2 Django Storage. I'm currently using an existing implementation, but it's not that well maintained:
Then again, just for backup, I could use S3 Glacier for $0.004/GB. That's cheaper than B2, and I get multiple-AZ storage. The data charges would be higher, but it's backup. If catastrophe struck and I lost my primary and my local backups, getting my data fast is the last thing I would worry about.
Having done that in the past, I have to say that's just a million times less practical than basic S3-like storage. And if you want to automate that setup, Glacier is even worse.
I could see using something like rsync + Cloudberry (maps S3 and make it look like a network drive). Set it up to use one zone infrequent access, and then after x days use a lifecycle policy to move it to Glacier.
My use case for backups is solely for movies and music. For source code I use hosted git repos, pictures Google photos, and for regular office documents, they are either on Google docs or One Drive.
You had to upload pre-prepared "tapes" for backups. You couldn't mutate an existing backup, you had to create a new one. And frequently fetching and/or deleting existing "tapes" (backups) would cost you money (more so than the original cost of the backup).
That meant you couldn't just ZIP it all up, back up the latest version, and then delete the previous one to avoid being doubly charged for storage either.
Basically at time of archiving you needed to determine what was already archived and create a new bundle with only what's new, and archive that only. In the same spirit, restore meant piecing together multiple such tapes into a full restore-set.
Absolutely terrible. It was like having traditional backup-software constraints, but none of the software-support.
If Amazon has improved on that now, good for them, but I figured they probably had to if they wanted any users at all.
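In code, the bookkeeping that model forced on you looks something like this sketch (names and structure are mine, purely illustrative): diff the current file set against a manifest of what's already archived, and bundle only the new or changed files into the next "tape".

```python
# Plan the next archive bundle: only files that are new or changed
# since the last run get included.
import hashlib

def plan_archive(files, manifest):
    """files: {name: bytes}; manifest: {name: sha256 of archived copy}."""
    bundle = {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if manifest.get(name) != digest:
            bundle[name] = data       # new or changed since last tape
            manifest[name] = digest   # record for the next run
    return bundle
```

A restore then means fetching every bundle that still holds the latest copy of some file, which is exactly the "piecing together multiple tapes" pain described above.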
My offsite backup would only be accessed in the case of catastrophic failure - my primary and local backup data is unavailable. Data transfer does cost more but if I had that type of catastrophe, worrying about getting my movies back for my Plex server would be of little concern. Everything that I would care about - source code, photos, documents etc are stored other places.
That’s another strike against Backblaze backups (not B2 based backups). When we were in between residences last year - we left our apartment when the lease was up and stayed in an extended stay waiting for our house to be built, my main computer was offline for 5 months. One more month and my Backblaze backup would have been deleted. I forgot about it and I restarted my computer before I reconnected my external drive - so my backup from my external drive was erased from Backblaze as soon as I came back online. It wasn’t catastrophic but irritating. Luckily I have gigabit upload.
Could you do a summary of your evaluation for others that didn't test most services?
S3 - not much to say, fast, durable, expensive...the gold standard. Given limitations of below, we use for rotating nightly backups despite cost.
Glacier - great for cold storage/archive, but has 90 day minimum
OVH hot - OpenStack-based, cheaper than S3 but not absurdly cheap; charged for egress even intra-DC, which is absurd and kills many use cases. They have crippled OpenStack permission management (i.e. no write-only keys with lifetime management per bucket, which is necessary for doing backups securely)
OVH cold - charges for ingress but then storage is crazy cheap, and egress not as bad as Glacier. This is our preferred archival option.
C14 - not object storage, more like a "cold" ftp dump
B2 - pricing is epic; the S3 incompatibility is a pain, as is the lack of Backblaze-sponsored libraries (the library in the python b2 cli is not a proper API). We've been working on adding B2 to WAL-E. However, their permission/user management doesn't cut it.
Wasabi - S3 compatible, great pricing if not for 90 day minimum, which they hide in the fine print
> B2 - their permission/user management doesn't cut it
Have you seen the new "Multiple Application Keys" APIs we have published docs for (and the release coming in a week or two)? I'm curious if they satisfy your permission needs. The docs are here: https://www.backblaze.com/b2/docs/application_keys.html
A screenshot of the web GUI to these keys is here: https://i.imgur.com/RdlgdAs.jpg
(NOTE: the web GUI does not expose the full power of the multiple application keys; it is meant to be easy to use and hopefully satisfy 95% of customers' needs.)
> Q: How do safe-deposit boxes work?
> A: The safe-deposit box is a free temporary storage space
> that lets you upload your files before creating an archive.
> The safe-deposit box can be accessed for free using Rsync,
> FTP, SFTP, SCP protocols for a period of 7 days and
> supports up to 40TB.
> After 7 days or when you archive your safe-deposit box,
> your data are permanently stored on C14.
> When unarchiving, your data are delivered untouched,
> including file metadata.
Also, AWS costs a lot just in traffic. A lot of people store things on S3 and then make that publicly available.
AWS is "da cloud" for a lot of people. So they ride that wave high and mighty, charging a lot for everything they can easily measure. People will just pay it and will try to [post-]rationalize how it's cheaper than other providers, because AWS is better.
Edit: removed incorrect stuff.
> If you use the large_file API (needed for multithreaded uploads)
We recommend for small files that you use multi-threaded where each thread sends a totally separate file. So if you have to upload both cat.jpg and dog.jpg, you upload cat.jpg in one thread and dog.jpg in another thread.
Based on the Backblaze architecture, that means cat.jpg will be sent to one "vault" in the Backblaze datacenter with one thread, and dog.jpg will be sent to a totally different "vault" in the Backblaze datacenter with another thread. This scales incredibly well, in that it should be twice as fast for two files, and 20 times as fast for 20 files if you do it correctly.
Source: I wrote a lot of the Backblaze Personal Backup client, which uses this philosophy.
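The one-file-per-thread pattern described above can be sketched like this; `upload_one` is a hypothetical stand-in for a real single-file B2 upload call, since the point is the fan-out, not the transfer:

```python
# Upload many small files in parallel, one complete file per thread,
# so each upload can land on a different B2 vault.
from concurrent.futures import ThreadPoolExecutor

def upload_one(name):
    # hypothetical placeholder for a real single-file upload
    return (name, "ok")

def upload_all(names, threads=20):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return dict(pool.map(upload_one, names))
```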
Maybe it also has a strategic advantage? Now every product has to announce they support B2 whereas nobody has to announce they support Wasabi, because they support any S3 compatible storage such as AWS S3, Google Cloud Storage or Wasabi.
Backblaze is in a different market. They may be finding out that there's overlap and allowing that use. But they are not the same and probably aren't prepared for developers to start using b2 en masse.
I think it makes business sense. You want to save some money? Do a little extra work for the cheaper product. Want to save even more money? Roll your own with Riak CS. Cloud services all work along the same spectrum where you pay more for convenience and ease of use, and you pay less up front if you're willing to pay in developer or devops or infrastructure costs. I think this fits in nicely on that spectrum.
Have you used Backblaze B2? How was your experience?
Bandwidth is limited since they aren't connected like the major clouds, but it's workable if you don't need gigabit speeds. There's a single API key for permissions, and it lacks all the other features like events, object lifecycle, etc. Reporting is basic but shows bucket size in real time, which is nice.
API can be annoying because it requires a request to "start" an upload (to get the address of where to upload), then doing the actual upload itself, but this can be automated away. Only single region for now (with multiple datacenters that aren't visible to you) so no global replication for extra durability or locality.
They have a partnership with https://www.packet.net (cloud bare metal) for free interconnect between their servers and B2 so you can do processing on your data without the public internet bottleneck and fees. Allows for an interesting data lake/warehouse option.
Use Cyberduck for a decent GUI client. If you just need personal computer backup, then use their actual backup offering, which has unlimited storage and an auto-uploading background app.
A few considerations: their web UI cannot handle large amounts of files (support said that after a few million, the file browser will not work). Sometimes when making a large number of deletions at once, the API may serve 500 errors and the web UI gives Java Servlet errors (this only happened a few times and resolved itself in an hour or two). As another user noted, the per-file/fragment upload speed isn't fantastic, but I could max out my gigabit fiber with many concurrent downloads/uploads. The API has no concept of folders, only file path strings (which is mildly annoying to work with). Lastly, I think all the data is currently housed in one geographic area, but they are working on a DC in Phoenix.
Overall a pretty smooth experience but I was mostly using it for cold data.
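The "start an upload, then do the upload" two-step mentioned above looks roughly like this. The endpoint and header names follow the B2 API docs; the HTTP transport is injected here so the flow can be shown (and exercised) without a live bucket:

```python
# Two-step B2 upload: b2_get_upload_url first, then POST the bytes to
# the returned vault-specific URL with its own auth token.
import hashlib

def b2_upload(transport, api_url, auth_token, bucket_id, name, data):
    # Step 1: ask the API server where to upload; each call may hand
    # back a different storage vault.
    resp = transport.post(
        api_url + "/b2api/v2/b2_get_upload_url",
        headers={"Authorization": auth_token},
        json={"bucketId": bucket_id},
    )
    # Step 2: POST the file bytes to the returned upload URL.
    return transport.post(
        resp["uploadUrl"],
        headers={
            "Authorization": resp["authorizationToken"],
            "X-Bz-File-Name": name,
            "X-Bz-Content-Sha1": hashlib.sha1(data).hexdigest(),
        },
        data=data,
    )
```

In a real client the `transport` would be something like a `requests.Session`, and the upload URL can be cached and reused across files on the same thread.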
As I am using this for backups only, I went back to Google Drive where I can max out my Gbit upload.
It's 100% european (with Swiss and German regions), and priced pretty aggressively. https://www.exoscale.com/object-storage/
Disclaimer: I now work for exoscale, but was a happy user before that.
I use it to back up my NAS, where all my other computers are backed up. I set it up with duplicacy-cli, rate-limited the upload to 700 KByte/sec (the internet stays usable that way), and the script that launches duplicacy checks that it is not already running.
Since I never upload more than 60 GB per day on average to my NAS, I don't have any issues.
> From Europe it is way too slow.
We are opening a European datacenter in 2018, so stay tuned!
For now, we recommend you use multiple threads and you should be able to saturate any network connection, including yours in Europe. However, we do realize not all programmers or applications are capable of using threads and it would be more convenient to have lower latencies, thus the European datacenter in 2018. :-)
That's why we're still using AWS S3 currently in spite of the price, and consider moving to Digital Ocean's Spaces: B2 location is a non-starter.
> B2 location is a non-starter.
The Java SDK is OK. So I plan on eventually having my application talk to Minio which will allow me to use the S3 API.
Edit: lack of webhooks or something similar for doing follow-up after successful uploads is also irritating.
> You still have no way to tag a snapshot, put in any notes, anything.
We totally agree, and the project is fully spec'ed, just waiting for an available engineer to implement it! On a side note, we also have open reqs for engineers. :-)
It helps reduce the blast radius of a compromised server.
In the case where the server is operated by a third party (as is the case with the B2 API server), there can be many compliance implications if that third-party-operated server has access to an internal network.
We don't accept it when SSH clients or web browsers have the ability to do things they shouldn't based on instructions sent by the server they connect to.
Why would we suddenly have lower expectations of our file storage API clients? (or any other network/HTTP clients for that matter)
At the moment, you're probably still more at risk of downloading a malicious library from PyPi or npm but this is sure to turn up in a CTF at some point - even curl is technically vulnerable.
Have you talked to anyone from Backblaze about this?
Thankfully, command-line curl won't follow redirects unless you pass it the -L/--location flag. If you do need it to follow redirects, I'm not sure what the best way is to restrict the range of redirects that it will follow.
This issue was part of a broader coordinated disclosure and was only published today. I've gotten in touch with B2 support & I'm hoping my support ticket will make it to the correct people.
Are you sure? http://gaul.org/object-store-comparison/ says there are cheaper options
EDIT: Edited the link away from https://wasabi.com/pricing/. Wasabi seems cheaper than B2 and claims to be a hot storage solution
EDIT: Response to your edit, Wasabi also has 90 day minimum storage policy.
edit: Woah, you edited your comment. What made you change from Wasabi?
Borg was a runner-up, but Duplicati had built-in B2 support, provided scheduling, and a web interface, which makes navigating a tree for specific files easy when needed.
Crashplan had a nice Linux client, but it was a black box / closed source, so problems came up from time to time that were hard to debug. So it's nice to have more control of my data as well.
> uploading large amounts of data to [Backblaze B2] is very slow
All reports we (Backblaze) hear is that if you only use one thread, Backblaze B2 is slightly slower than S3 (like maybe 90% of the performance). If somebody has better numbers I would LOVE to see them!
If clients use multiple threads, this issue goes away entirely. Using 500 threads can provably be 500 times as fast with Backblaze, because the Backblaze B2 architecture has no "choke points" like Amazon S3 has. Each thread will most likely be talking to a completely different "Backblaze Vault", maybe even in a completely separate Backblaze datacenter. Since they don't share any network switches or load balancers, there is no way they will slow down.
But again, I would love any measurements or reproducible tests showing differences so we can chase them down and improve Backblaze B2!