Backblaze Hard Drive Stats Q1 2020 (backblaze.com)
266 points by gnabgib 13 days ago | 96 comments





I always love the Backblaze hard drive stats, even if I'm not in the market.

> During this quarter 4 (four) drive models, from 3 (three) manufacturers, had 0 (zero) drive failures. None of the Toshiba 4TB and Seagate 16TB drives failed in Q1, but both drives had less than 10,000 drive days during the quarter. As a consequence, the AFR can range widely from a small change in drive failures. For example, if just one Seagate 16TB drive had failed, the AFR would be 7.25% for the quarter. Similarly, the Toshiba 4TB drive AFR would be 4.05% with just one failure in the quarter.

Backblaze should consider adding interval estimates in addition to point estimates. It might help the reader understand the uncertainty of the point estimate.
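
For context, the numbers in the quote are easy to reproduce: AFR is just failures per drive-year of exposure, and an exact Poisson interval gives the kind of uncertainty estimate being suggested here. A minimal sketch in Python (the ~5,035 drive-day figure is back-calculated from the 7.25% example, not taken from the report):

    from scipy import stats

    def afr(failures: int, drive_days: float) -> float:
        """Annualized failure rate: failures per drive-year of exposure."""
        return failures / (drive_days / 365.0)

    def afr_interval(failures: int, drive_days: float, conf: float = 0.95):
        """Exact Poisson (Garwood) confidence interval for the AFR."""
        drive_years = drive_days / 365.0
        lo = 0.0 if failures == 0 else stats.chi2.ppf((1 - conf) / 2, 2 * failures) / (2 * drive_years)
        hi = stats.chi2.ppf(1 - (1 - conf) / 2, 2 * (failures + 1)) / (2 * drive_years)
        return lo, hi

    print(afr(1, 5035))           # ~0.0725, i.e. the 7.25% quoted above
    print(afr_interval(0, 5035))  # zero failures still leaves a wide upper bound (~27% AFR)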


More important than interval estimates would be survival curves to know how the drives fail as they age, which could give you a sense of when they should be replaced.

I look forward to these reports with the same zeal and enthusiasm as I used to reserve for comic strips in Sunday newspapers.

This data is significant and appreciated. Thank you so much, and please keep up the good work.


While I don't _quite_ share your enthusiasm, I sure do share your appreciation. Nobody else at their level appears to be sharing this data. This is not something they had any obligation to share with us.

It certainly is useful and interesting.


Every time I read through their update articles I'm reminded that Backblaze is a service I could possibly buy! I only buy a hard drive once every few years, but I enjoy seeing the data. The time it takes to write up and share what they are no doubt tracking anyway is probably quite worthwhile; I'd guess this is an effective piece of marketing. I'd be quite interested to see how many new sign-ups they get each time they post these stats.

Disclaimer: I work at Backblaze.

> I'd guess this is an effective piece of marketing.

Yes, it really has been good to us. :-)

It is data we would collect internally for our own decision making and tracking even if we didn't release it, and it is only a small amount of work for us (mainly for Andy, the author of that blog post) to format it up every 3 months and write some observations down. Since it isn't what we "sell", it would have just gone to waste being hidden. And it results in our name getting "out there", and inevitably every quarter somebody asks "hey, what do these people do that requires 130,000 hard drives anyway?" and then we get a little bump in customers. Cost vs. benefit has been WELL worth it to us.

Along the way (unrelated to making money) it makes us happy that people get some use out of it. When we find out somebody did their PhD thesis on the raw SMART numbers of the drives or something, we enjoy hearing about it.

Until my dying day I will never understand why Google Storage and Amazon S3 and Microsoft Azure don't release their drive failure statistics. I just don't get it. Those companies have SO MANY DRIVES and they employ people with PhDs in statistics, they MUST have the same info internally but at 100x the scale. I don't get why they don't release the numbers?! But hey, if they want to continue to give us this marketing gift of exclusivity, I'll take it. Maybe they like us and are just doing us a solid.


A long time ago, WANOPS put a couple racks of machines in the parking lot of the (former) Building 11 on Microsoft Campus.

We had a lower failure rate across the board, for all components. I think the only protection that was in place was some blue tarps and a chain link fence.

I can't find a public reference, sorry.


>they MUST have the same info internally but at 100x the scale. I don't get why they don't release the numbers

Could be that at that scale they're dealing more directly with manufacturers, and they fear sharing failure data would affect those relationships negatively?


Great work! I wish you had an Asia region DC.

...it is, however, very good, cheap marketing

Some argue this data has mainly historical value. By the time it's available, vendors may have changed many aspects of the respective models, from geometry to controller chip to motors, let alone firmware. You might end up buying a very different product under the same SKU, to which this data has no relation at all.

At this point, Backblaze is a serious "influencer", and HDD companies would be smart to make sure they get the best possible drives, even if the rest of us get junk.

I mean "doing it well for an influencer and shafting everyone else" is far cheaper and easier than just making a solid and reliable product consistently. That's such old-fashioned thinking these days. :)


> At this point, Backblaze is a serious "influencer", and HDD companies would be smart to make sure they get the best possible drives, even if the rest of us get junk.

I don't know if they're as budget-conscious as they used to be, but Backblaze used to "shuck" drives from externals because they're cheaper [1].

If this is still the case, then their stats should be similar to hobbyists who are doing this too.

[1] https://www.backblaze.com/blog/backblaze_drive_farming/


I think part of their shucking came from the floods in Thailand in 2011, which put a huge constraint on OEM drives. There were plenty of external drives in the channel, so Backblaze ordered a ton and shucked them. While the shucked drives might not have had the warranties or even the specs of the OEM drives they might have otherwise ordered, the aggregate nature of their storage meant they were a good replacement.

Disclaimer: I work at Backblaze and was there during the Thailand flooding.

> While the shucked drives might not have had the warranties

We haven't shucked drives in a while. The main advantage came from the artificial price difference during the Thailand drive crisis, where a "raw drive" cost $400, but if it was put inside a $7 USB enclosure the same drive, enclosure included, suddenly cost only $200. At the time, our business model literally wouldn't survive the cost increase; it was shuck drives or go out of business. :-) We assume the issue was that mostly "rich companies" bought "raw drives" and could afford the increase in price, but "poor consumers" bought drives in USB enclosures and couldn't afford the increase.

We still watch the prices though. You can think of it this way: if 3% of the drives fail but don't have a warranty because they were shucked, then we have to save more than 3% (plus the cost of the "shucking" process) to make it worth it. That rarely happens anymore. I think that's partly because, as the volume of drives we purchase has gone up, we have the ability to get better "bulk discounts" than we had in the earliest days. Also, the process of "drive farming", where the retail stores would limit it to "2 per customer" or whatever, would get much harder at our current scale.
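
The break-even reasoning in that last paragraph is easy to sketch: shucking only pays if the retail discount beats the expected cost of warranty-less failures plus the labor of shucking. A toy example with made-up prices (none of these are Backblaze's actual figures):

    def shucking_worth_it(raw_price, external_price, shuck_labor_cost, afr, warranty_years=3):
        """Rough break-even check for the 'save more than the failure rate' rule of thumb.
        A shucked drive that fails is a total loss (no warranty replacement), so the
        expected extra cost is roughly AFR * warranty_years * drive price.
        All numbers are illustrative placeholders."""
        expected_warranty_loss = afr * warranty_years * external_price
        shucked_total = external_price + shuck_labor_cost + expected_warranty_loss
        return shucked_total < raw_price

    # Thailand-crisis-era gap: $400 raw vs. $200 in a USB enclosure
    print(shucking_worth_it(400, 200, 5, afr=0.03))   # True: the gap dwarfs the risk
    # A more typical modern gap: shucking rarely pays
    print(shucking_worth_it(310, 300, 5, afr=0.03))   # False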


> You might end up buying a very different product under the same SKU, to which this data has no relation at all.

Exactly, as we just saw with the Western Digital debacle.

The unfortunate reality is that only a fortune teller can say what the best hard drive to buy right now is.


I don't think there were any non-DMSMR WD Red disks that had the same SKU as the DMSMR Reds. It was just a particular consumer SKU that didn't mention whether it was DMSMR or not (and it was). But anyone ordering that SKU was going to get a DMSMR drive. They weren't sending enterprise customers ordering that SKU one drive and sending consumers ordering the same SKU something else. Enterprise customers had their own catalogue of enterprisey SKUs to order from, which had no overlap with the consumer-level SKUs.

The best hard drive is the one with a backup :)

I agree. I circulate it every quarter in my company's Slack channel to show the importance of publishing open data and reports, not just as a steward of technology, but because it is actually useful marketing.

I concur - I used to work with a set of clusters totaling ~45k drives and it was always fun to compare failure rates (especially on a per model basis).

Did you end up with significantly different values sometimes?

Our failure rates were consistently much higher compared to Backblaze, usually 25-30% higher. I never did any detailed analysis, but I'd chalk it up to a couple of items. First, the stock of drives was not rotated out as frequently as I understand Backblaze does. The overall workload on the clusters was extremely heavy, but not consistent across all drives in the cluster, resulting in localized 'bursts' of activity. Finally, the datacenter where the clusters lived used evaporative cooling and was located in a hot, dry environment surrounded by very fine soil. During the summer the machine room was Georgia-level humid, and powering on any piece of hardware would usually yield a small cloud of dust.

Disclaimer: I work at Backblaze.

> Our failure rates were consistently much higher compared to Backblaze

That's interesting.

> The overall workload on the clusters was extremely heavy

That is the most likely explanation. We see higher failure rates when we are writing to the drives, for example as a new vault fills with customer data. Backup (our oldest business) has a fairly easy workload; it is not as punishing as a database that is pummeling the drives.

In certain circumstances, when we are down more than one drive in a 20-drive Reed-Solomon group, we stop putting new data on that drive group until the parity is restored, explicitly because this lowers the chances of an additional drive failing in the group. That gives us more time to rebuild the parity with less stress in our lives. When that last parity drive fails and one more drive failure means customers lose data, the fun drains right out of this job. Red alerts are thrown, pagers go off in the middle of the night, datacenter techs start driving towards the datacenter at 3am to replace drives. We prefer the world nice and calm and relaxed with a good night's sleep.
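
A hedged sketch of the write-pause rule described above, assuming the 20-drive group splits into data and parity shards (Backblaze has described a 17+3 layout elsewhere; treat the numbers and thresholds as an illustration, not their actual code):

    from dataclasses import dataclass

    @dataclass
    class DriveGroup:
        total_drives: int = 20    # one Reed-Solomon group
        parity_shards: int = 3    # assumed 17 data + 3 parity layout
        failed_drives: int = 0

        def failures_to_data_loss(self) -> int:
            """How many more drive failures this group can absorb."""
            return self.parity_shards - self.failed_drives

        def accept_new_writes(self) -> bool:
            """Stop placing new data once more than one drive is down, to reduce
            load (and the chance of another failure) while parity is rebuilt."""
            return self.failed_drives <= 1

        def red_alert(self) -> bool:
            """One more failure would mean customer data loss."""
            return self.failures_to_data_loss() == 0

    group = DriveGroup(failed_drives=2)
    print(group.accept_new_writes())  # False: rebuild parity before taking new data
    print(group.red_alert())          # False: one more failure is still survivable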


> We prefer the world nice and calm and relaxed with a good night's sleep.

/me goes and checks BackBlaze's careers page...

(I suspect driving time to any of their datacenters from Sydney puts me out of the running... At least their 3am emergencies would be a much more reasonable 8pm emergency from here.)


How fast can you restore such an enormous dataset anyway? Drives have become so big that a 20-drive group could hold almost a quarter of a petabyte; it must take a long time to read the data, recalculate parity, and write.

Turns out that some outdoor environmental dust is ferromagnetic.

A particular vertical-axis wind turbine project was destroyed by buildup of magnetic dust on the generator magnets.

I'm not saying that's what was killing your drives directly. Normally HDDs are fully sealed. But the amount of dust you mention is awfully suspicious. It might change your investigative perspective a bit when you consider that some small percentage of all that dust is ferromagnetic.


Another Backblaze stats release, another time I wish someone on the inside at Seagate would drop in and tell us what the deal is with their drives.

I have a lot of old Hitachis that still work, but every last one of my Seagates died years ago. Yes, they've gotten better since then, but they're still reliably trounced by Hitachi.

Where are the failures coming from? What did they cheap out on? Has the C-suite decided it's not financially worth increasing reliability? What are the internal feelings on being the outlier, year in and year out, in these stats? Does anyone care?


They will get sued if they say why. Hence they will never admit why.

They sell a lot of drives; do they have to care? Someone has a spreadsheet that tells them when it is time to care.

Interesting data. Overall surprised at how few failures they saw.

From an interview last year, it sounds like they don't use SMR drives[1]. I would be interested if there is any good source for failure rates of those.

[1] https://www.backblaze.com/blog/how-backblaze-buys-hard-drive...


Yev from Backblaze here -> Yea, we've tested them but found they didn't play nice so we aren't deploying them in droves. If you find a good place for data on SMRs send it our way, would be fun to read up on it!

Did you try host-managed SMR? Seems more promising, although significantly more effort.

Not sure on that one actually. I think one of the main things for us was the rewrite performance, since we're constantly moving/copying/deleting/changing data we need that to be fairly performant - but not sure about the host-managed drives!

SMR performs like crap and doesn't save much money on density, so it's a poor play for anything except tape-replacement.

Personal anecdote: F2FS copes so well with SMR that for consumer use, there's not much of a difference (assuming you can run Linux, etc). No write pauses observed here.

Any articles or writeups anywhere on this? Is F2FS suitable for production use?

> Is F2FS suitable for production use?

It's widely deployed on Android, if that counts.


I recently bought 4x WD HGST WUH721414ALE6L4 512e 14TB. The 4Kn's were the same price but on lengthy backorder/dropship from WD, so that wasn't going to work. Also, I absolutely refuse to buy the WD Gold (WD141KRYZ) that is effectively the same product but at a much higher price ($480 vs. $346). Marketing people can take a long walk off a short pier.

I'm (unfortunately) boycotting WD and their subsidiaries; https://news.ycombinator.com/item?id=22935563

Who are you buying from? Seagate also had undisclosed SMR, and I believe Toshiba did as well.

Seagate and Toshiba are sane compared to WD because they never adopted SMR on their NAS drives.

I was actually looking at WD Gold for the next workstation. Do you have any good resource / introduction page about their HDDs' price/performance/misc (e.g. which drives are just rebranded)? Very hard to compare specs for us uninitialized =)

Sorry, nope. Do your own research using open sources, i.e., forums, review sites, and reviews, before you buy. Caveat emptor.

Still wondering what happened to those HDD roadmaps. HAMR and MAMR and those 40TB drives promised by 2023. And as far as I am aware, even the 18TB and 20TB drives coming in late 2020 / early 2021 are still CMR and SMR.

If it weren't for helium-sealed tech packing more platters into HDDs, we would have had zero capacity improvement in the past 4-5 years.


I'm expecting a resurgence of Quantum Bigfoot-style drives.

So, while the idea was ridiculed in the press, the advent of SSDs and the use case for hard drives these days actually mean it makes a lot of sense, given that the area (and therefore the capacity) increases with the square of the diameter. Combine that with slower spin rates also increasing the density, and you get a device which is far more generally useful for bulk storage than SMR.


I miss my full-height Micropolis drives (said nobody, ever)

If you consider a 2x area increase going from 3.5" to 5.25", and another 4x going from a 3.5" HH to a 5.25" FH, you're talking an 8x capacity increase. That is ballpark over 100TB in a 5.25" drive bay. Five of those standing vertically would fit in a 19" rack, for ~600TB in the front panel. Of course, 3U is exactly 5.25", so you would have to modify the original form factor a bit to make it fit, or use 4U and waste a half inch.

Bottom line, assuming 20T 3.5" drives, you have doubled the front capacity of a 3U case.

OTOH, if you really wanted to play games consider how much capacity could be stored on a 19" platter like the old IBM mainframe disk units.
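
For what it's worth, the arithmetic above can be written out directly (all factors are the rough 2x/4x ballparks from the parent comment, not measured figures):

    # Rough scaling for a hypothetical 5.25" full-height drive, using the
    # parent comment's ballpark factors.
    base_capacity_tb = 16      # a current-ish 3.5" drive
    area_factor = 2            # 3.5" -> 5.25" platters (area scales ~ diameter squared)
    height_factor = 4          # standard 3.5" height -> 5.25" full height (more platters)

    fh_capacity_tb = base_capacity_tb * area_factor * height_factor
    print(fh_capacity_tb)      # 128 TB, the "ballpark over 100TB" per drive

    drives_across_19in = 5     # full-height units standing vertically across a 19" rack
    print(drives_across_19in * fh_capacity_tb)  # 640 TB, roughly the ~600TB front panel above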


> or use 4U and waste a half inch.

... or use 4U so you can use the half inch clearance to try and help cool them enough...

:-)


The reason I asked is that we are not driving down cost. And judging from how the HDD makers react, they are going to milk this for as long as possible, all while our data usage and requirements continue to grow rapidly. I.e., our total cost for data is actually increasing.

NAND prices fluctuate up and down and the current lows are not very sustainable.

It seems the days of computing getting cheaper every two or three years are gone.


I've heard they are actually closer to production, but yeah.. HAMR's been "a couple years away" for over a decade at this point, I think.

I'm thankful for these reports as well, despite them being trailing edge/etc. For a while they mirrored some problems we were having at work with a particular vendor (and helped to justify switching products a year or two into the nightmare).

What I really wish is that they would make an effort to go beyond just reporting their experiences and see what they are doing as also providing a service in the form of model reliability data. AKA toss a few pods of WD's/etc in there even if they are slightly more expensive/whatever. If the data were broken out by production location and manufacture date, it's likely it would be something that they could sell on the open market for a small fee. I know I would have gotten the company I worked for to pay such a fee for a somewhat scientific look at the failure rates of certain models/etc.

AKA, pay a bit more for a broader set of drives, summarize the data for free, and then get people to pay for the detailed data. If you work for a company buying a thousand or so drives a year, avoiding a 10% AFR is going to be worth a lot of money. AKA too small to have a good view of the state of things, but big enough that buying 1k bad drives and fighting daily RAID rebuilds for the next three years is a real nightmare. Think of it as a bit of insurance, or at least validation of a problem when things start to go south.


Disclaimer: I work at Backblaze.

> AKA toss a few pods of WD's/etc in there even if they are slightly more expensive/whatever.

If you see "low numbers" of one particular drive model in the stats, that is usually us trying out that drive model because at some point in the future the price might drop making it worth purchasing it in bulk worth it, or the price is ALREADY worth it but we're being careful in the rollout to make sure that drive model performs in our particular application - nobody else's application, just for us.

> make an effort to go beyond just reporting their experiences

We are VERY careful to report what we are seeing in our datacenter for our particular application, and no more. This isn't a scientific study, we aren't Consumer Reports, we're just publishing data we would collect whether or not we released it. We have a core business we are super happy focusing on; somebody else is WELCOME to sell drive testing and drive predictions, and we won't compete with them or get in their way. Heck, we would totally subscribe to that service!


When there were still stats being published for WD, they were always significantly worse than HGST, even long after WD bought HGST.

I'm wondering why they keep maintaining two product lines that are sufficiently different to result in one basically being consistently the best, the other being consistently the worst, in terms of reliability.


I actually got to know the company from those reports. I migrated from S3 to B2; it's not the same feature set, but good enough for me.

For the features they do share, I find B2 to be way simpler and cheaper.

Thanks


Yev from Backblaze here -> Nice! Welcome aboard :)

Edit -> what are the feature's your missing most? We're collecting feedback at: b2feedback@backblaze.com if you want to send some notes over!


Edit -> My grammar was terrible in the above sentence. Yikes.

It's surprising to me that they use actual boot drives and don't just boot their machines off of USB sticks, or via PXE, or from those fancy Dell dual redundant internal SD card modules.

But perhaps the boot drives are just leftovers that are too small to be used any more for storage?


Yev here -> The boot drives are helpful for log storage/collection so the extra capacity is nice - plus they're not "large" drives so the costs associated are pretty minimal!

Is the output of what you'd normally write to these boot drives significant enough to impact the network? I'm surprised you can't just stream it elsewhere to consolidate the cost.

Interesting question! I don't know but I'll ask the dev team.

*Edit -> asked around, here's what's up -> we do currently stream some logs/data off the device, but we also like it to be written to disk - there's something about having multiple copies that we like ;-)


> there's something about having multiple copies that we like ;-)

With an attitude like that, you should consider running a backup as a service company! ;-)


I guess that's fair. If anything happens to the network, you don't want to lose your logs in the mean time. :)

local logs also make troubleshooting much easier, especially if you have to bring a system up offline for forensic analysis.

That's fair; I can totally understand wanting to have an on-machine bodybag for certain kinds of failures. I know there are a lot of differences, but speaking from the point of view of having spent ten years working with fleets of mobile robots, there's no way that exclusively off-board logging would be sufficient for what I do.

Might just be convenient for logs and other random junk

Once again Seagate is putting on a poor show reliability-wise - although, with only three companies really making hard drives these days, I guess Backblaze doesn't have much choice but to also include their drives in their service.

Why doesn't Backblaze then switch to more of the other companies' models? It almost seems like the more of a particular model they have, the higher the associated failure rate.

My understanding is that though they're less reliable, it's cheaper to buy more of them and expect some to fail instead.

Disclaimer: I work at Backblaze.

> though they're less reliable, it's cheaper to buy more of them

Exactly. Each month our buyers go out and get bids for more drives. The cost is input into a little spreadsheet, and the SPREADSHEET tells us which drive to buy. It isn't about picking the most reliable drive - we have a software layer and redundancy for that! It is about picking the least expensive drive.

The spreadsheet isn't complicated. If drive model X fails 1% less but costs 2% more, then we don't buy it that month. It might change the next month. If you want to win our business, just look at the failure rates and underbid the competitors by the failure rate of your drive.

There are some other things in the spreadsheet I should mention. If a drive is twice the density (let's say a 16 TByte drive vs an 8 TByte drive), it is still the same physical size, and we pay for physical space rental in datacenters. And another thing is that drives that are twice as dense use approximately the same amount of electricity, and power costs are an ENORMOUS amount of our overall cost of operation. So the spreadsheet will choose a more dense drive even if it is slightly more expensive per TByte, just because of the other savings it implies. Again, the spreadsheet isn't complicated, but the spreadsheet tells us which drive to buy.
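
A toy version of that spreadsheet: score each candidate drive by all-in cost per TB-year, folding in expected failures, rack space, and power. Every constant below is an invented placeholder, not a Backblaze number.

    def cost_per_tb_year(price, capacity_tb, afr, watts,
                         power_cost_per_kwh=0.10, rack_cost_per_drive_year=15.0,
                         service_years=5):
        """All-in cost per TB-year for one drive slot. Purely illustrative."""
        expected_replacements = afr * service_years * price   # failed drives re-bought at cost
        power_cost = watts / 1000.0 * 24 * 365 * service_years * power_cost_per_kwh
        space_cost = rack_cost_per_drive_year * service_years
        total = price + expected_replacements + power_cost + space_cost
        return total / (capacity_tb * service_years)

    candidates = {
        "8TB, cheaper per TB":  cost_per_tb_year(price=150, capacity_tb=8,  afr=0.02, watts=7),
        "16TB, pricier per TB": cost_per_tb_year(price=330, capacity_tb=16, afr=0.01, watts=8),
    }
    # The denser drive can win on $/TB-year even at a higher sticker $/TB,
    # because rack space and power are roughly fixed costs per slot.
    print(min(candidates, key=candidates.get), candidates)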

If you are only going to purchase one drive, and you aren't going to back it up, you should sort by reliability of that drive. You are also insane -> always back up your drives!! And as long as your drive is backed up and you trust the backup, then who cares if the failure rate is 1% or 2%?


> And as long as your drive is backed up and you trust the backup, then who cares if the failure rate is 1% or 2%?

I would choose to pay a premium to reduce the chance of me needing to restore from backup as often. Not a big premium, but one that makes sense at the one or two drive purchase scale that might make a whole lot less sense at a one or two thousand drive purchase.

I'd pay an extra $20 or $30 on a ~$300 drive to drop from a 2% to 1% failure rate... (As it turns out, for home I pretty much buy drives in pairs and mirror them for all my not-in-a-laptop storage - so I guess I pay a 100% premium for reliability...)


But even a restore would cost you work/energy, right? So should that not introduce a supra-linear premium for more reliable drives, rather than just the linear failure percentage?

(I realize in practice you probably do, but in the explanation here you said "just look at the failure rates and under bid the competitors by the failure rate of your drive")


A little off-topic, but does the new S3 integration work for you guys?

I tried to connect with s3cmd but it wasn't working. Unfortunately their support was not very helpful either...


Yev here -> we're working on it! We've heard a lot from folks using s3cmd and I believe we have some changes coming shortly. If you want to chat with the PM team directly, we're grabbing feedback at: b2feedback@backblaze.com!

Where should I keep an eye out to know when you've resolved this? Will you (or Gleb) announce this on your blog? Or is there somewhere else I should look?

Hey! Best thing to do would be to write to that email address and ask for an update when there is one! We're trying to get to everyone there! :)

I love these. It'd be quite nice if they made some attempt to publish confidence intervals.

Andy at Backblaze here: Good to know. We used to do this with the lifetime stats, but that got lost somewhere along the way. I'll look at getting them back. Thanks.

Can you publish the survival (Kaplan-Meier) curves? I think it's more useful to analyze hard drive failure rate as a function of drive age and/or lifetime hours rather than how many hours the drive was used this year. The annual failure rate is different between a new drive and a 10-year-old drive.
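
For anyone who wants to build those curves themselves from the raw data Backblaze publishes, a minimal Kaplan-Meier estimator over per-drive (age, failed?) pairs looks roughly like this (reducing the raw SMART CSVs to those pairs is left out):

    def kaplan_meier(lifetimes):
        """lifetimes: list of (age_days, failed) pairs; failed is True if the drive
        died at that age, False if it was censored (still running, migrated out, etc.).
        Returns a list of (age_days, survival_probability) steps."""
        lifetimes = sorted(lifetimes)
        n_at_risk = len(lifetimes)
        survival = 1.0
        curve = []
        i = 0
        while i < len(lifetimes):
            age = lifetimes[i][0]
            deaths = censored = 0
            while i < len(lifetimes) and lifetimes[i][0] == age:
                if lifetimes[i][1]:
                    deaths += 1
                else:
                    censored += 1
                i += 1
            if deaths:
                survival *= 1.0 - deaths / n_at_risk
                curve.append((age, survival))
            n_at_risk -= deaths + censored
        return curve

    # Toy data: (age in days, failed?)
    print(kaplan_meier([(100, True), (200, False), (300, True), (400, False)]))
    # -> [(100, 0.75), (300, 0.375)]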

I really appreciate it. Thanks, Andy!

Unrelated, but anyone use B2 in SEA? How is the dl speed? I'm in the market to try to move away from CloudFront + S3 (I use CDN77 in front of CloudFront now, but have like 10% cache misses in a key area every day that would theoretically hit B2).

For anyone that comes across this: they only have 4 datacenters, and the first one outside of the US was in Europe as of the end of August 2019 [0].

[0] https://www.backblaze.com/blog/announcing-our-first-european...


I wonder how long it will be before backblaze moves to SSDs. Obviously a lot more expensive right now, but with the lower power consumption/heat and ability to cram way more in, might be sooner than we think?

Andy at Backblaze here. We use SSDs in our core servers and more recently in boot drives, as they both need the speed. For storing data, in our case we don't need the speed, so it's not worth it yet. Given the amount of data growth, most predictions still have HDDs with about 50% of the storage market in 2025.

As a thought experiment disregarding the cost: How much SSD based storage could you effectively pack in a single pod (including wiring/distribution/power/cooling etc.) today?

Seems SSD has potential for higher density though, no?

Though the cross-over point is still some time away - a decade or so?


In terms of volumetric density, flash is already much higher. A 4TB m.2 2280 SSD is ~70 times smaller than a 16TB 3.5" HDD.
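
That figure is easy to sanity-check from nominal form-factor dimensions (heights are approximate and vary by model):

    # Approximate form-factor volumes in cm^3; the m.2 height assumes a
    # double-sided 2280 module at roughly 3.5mm.
    hdd_35_volume = 14.7 * 10.16 * 2.61   # 3.5" drive: 147 x 101.6 x 26.1 mm
    m2_2280_volume = 2.2 * 8.0 * 0.35     # 22 x 80 x ~3.5 mm

    print(hdd_35_volume / m2_2280_volume)               # ~63x smaller by volume
    print((4 / m2_2280_volume) / (16 / hdd_35_volume))  # ~16x more TB per cm^3 for the SSD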

Yes, I was imprecise; I meant at roughly the same cost.

It just seems that the spinning rust guys have to jump through some pretty extreme hoops to get density up, while the flash guys "just" have to reduce cost, which gives them more angles of attack (i.e. either further density increases or process optimization).


When SSDs are comparable in price to HDDs. Which is... not soon.

It's more than just $/B of drives. It's $/B all-inclusive.

Backblaze deploys their storage as storage pods (a server holding drives) in their vault architecture (many pods form a vault, with data sharded across pods in a vault).[0]

So you would want to look at $/B of a vault. A vault includes the rest of the server hardware. The cost of the vault includes not just the server hardware, but also floor space in a data center and electricity to run the hardware. Flash storage is denser than HDDs and more power efficient, so you should expect a monetary breakeven at some point before SSDs are priced at similar $/B as HDDs.

[0] https://www.backblaze.com/blog/vault-cloud-storage-architect...
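
Put differently, the breakeven is on total vault cost, not raw media cost. A rough sketch of that crossover, with every number a placeholder chosen only to illustrate the shape of the comparison:

    def vault_cost_per_tb(media_cost_per_tb, capacity_tb, watts_per_drive,
                          drives_per_rack_unit, rack_unit_cost_per_year=300.0,
                          power_cost_per_kwh=0.10, years=5):
        """All-in $/TB over the service life for one drive slot:
        media + power + floor space. Server hardware is omitted for brevity."""
        media = media_cost_per_tb * capacity_tb
        power = watts_per_drive / 1000.0 * 24 * 365 * years * power_cost_per_kwh
        space = rack_unit_cost_per_year * years / drives_per_rack_unit
        return (media + power + space) / capacity_tb

    # Placeholder figures: 16TB drives, HDD media at $18/TB, and flash that packs
    # 4x as many drives per rack unit at less than half the power.
    hdd = vault_cost_per_tb(18, 16, watts_per_drive=8, drives_per_rack_unit=15)

    # Find the flash media price at which the all-in costs cross over.
    flash_price = 18.0
    while vault_cost_per_tb(flash_price, 16, watts_per_drive=3,
                            drives_per_rack_unit=60) < hdd:
        flash_price += 0.5
    print(hdd, flash_price)   # crossover lands above $18/TB, i.e. before raw price parity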


Also, I wonder about the availability of off-the-shelf hardware for non-HDD alternatives.

It's reasonably well understood how to plug 45 hard drives into a server; I wonder if you can buy hardware to drive 45 m.2 2280 SSDs from one server/motherboard as easily?


You could probably see benefits with 2.5" drives, just due to the form factor and lower power draw compared to 3.5" HDDs. If you wanted to put in a bit more work, you could shuck the 2.5" enclosures, as the PCB space of an SSD is usually significantly smaller than the volume of the enclosure.

I see Seagate still tops the list of failures after all these years. I wonder, if we put all those stats together, which company would actually come out as the worst.

One company will always be worst. Using the worst drives often makes sense for a business. It all depends on price, availability, and deployment plans.

Seagate's drive longevity has improved a lot. Compare today's report to Backblaze's 2015Q1 report:

https://www.backblaze.com/blog/hard-drive-reliability-q1-201...


Is there a SSD version of this sort of report?

And if so, would the TL;DR version of it still be, "Buy Intel"?

For a while, Samsung was ahead.


