What is the best hard drive? (backblaze.com)
446 points by tolien on Jan 21, 2015 | 140 comments



Always love reading HDD reliability stats from Backblaze -- but this demonstrates one of the reasons why dating posts is so important, especially when the information in the post is time-sensitive. Nowhere on the page does it say that the post was published today, unless you click the "latest posts" tab by the author below.

I had originally thought it was a repost of one of the many older articles from Backblaze until seeing a reference to Dec 31, 2014. While not terribly ambiguous now, the ambiguity will only grow as the year marches on.

If someone from Backblaze happens to see this: you don't need to put it in your URL, but please date your post near the top or bottom of the text.


Yev from Backblaze here -> it's an internal debate as to whether we should put dates on everything. It used to be that they were part of the URL (because of the way our blog was designed) but that is no longer the case. We decided to leave them off for a while to see if that made posts more "evergreen", but we definitely see where it can lead to some confusion. We'll keep chatting about it internally, there's likely a good middle-ground.


As a general rule, I get really annoyed when I come across a web page and I can't tell how old it is, especially if it is information that is likely to age rapidly. I don't care if putting a date on the page makes it less "evergreen", hurts someones SEO stats or makes the flowers wilt, I just want to know if I can trust what I'm reading.


>> "makes it less "evergreen""

The evergreen no date thing is fine with me - provided the content itself is actually evergreen. That's the bit people seem to forget. Not dating content doesn't magically make it last forever. If the information is unlikely to expire then I don't need a date. If there is a chance that the information will go out of date, especially if the dated information will not be useful to the reader, then it should be dated.


In five years, will the failure rate of 3-4TB drives harvested from external enclosures matter?


I have no idea why this is downvoted at all, let alone downvoted so heavily. It's correct. The content will be out of date in five years, so it's not evergreen and should be dated.


A thousand up-votes for you. Date of information is one of the most important contexts in IT. I can't count the times somebody has said "This says this and that about such and such", and I have to say "Yeah bro, when was that written? Oh, three years ago? What's the story now?".


Agreed. FWIW I basically don't read articles where I can't find the date, unless it's clearly something that doesn't need one (which is rare). Especially anything technology related, truth turns to myth in no time at all.


Yes, dates are really important.

If anybody thinks readers will skip their article if the date is too easy to find, why not sign the article with the author's name and let the author decide whether a date should be added? Like "joe - Nov. 1, 2014" or just "joe".

This article certainly needs a date.


When I'm reading articles about tech, I almost always skip the ones without a date, unless they're the last or only source of knowledge on the topic I'm looking for.

An article without a date is about as trustworthy as a scribble on a bathroom stall.


I agree. I actually submitted a bug report to DuckDuckGo based on this. Perhaps a slight abuse of the reporting system, but I feel one of the primary reasons I end up typing !google at the end of my searches is that I don't see dates on search results by default in DDG.


This. No debate necessary, just add the dates.


My goodness! This comment upped my karma points by about 50%! I guess I hit a nerve... Thanks for all the up votes.


Well you inspired change as well ;-)


BREAKING NEWS -> There are now dates on all of the individual blog posts. The landing page is "date-free" but is in chronological order, if you open a post, the date will be below the title...AS NATURE INTENDED!


That's amazing - I'm reading the post right now (as in, 11:28 AM Pacific) - and I switched back to the tab, and it doesn't show the date. But I opened it less than 10 minutes ago. They couldn't have changed it in real time, could they?

Hit Refresh. Lo and behold - there is the date.

Now that's an agile organization. Thanks very much - I really appreciate the date on these posts as well.


Our pleasure :)


That's WordPress for you ;)


It certainly helped :)


Backblaze is an excellent service and your blog posts are a favour to the world. Thanks for being awesome. Also, thanks for adding dates to your blog posts :)


Evergreen, now there's a euphemism. :-)

I appreciate that, in the end, you have to do what gets you the most views. From my perspective, though, leaving out the date means you prefer to waste my time, since I have to scan the article until I get a feel for how old it is.

I would read anything written within 6 months and would consider up to 12 months if the information was high quality. It's not like we're overburdened with quality, independent, information.

If you're going to be business-minded, why not have the date on for the first few months and remove it when the first flush of green has gone? ;-)


>Evergreen, now there's a euphemism. :-)

It's not a euphemism. It's a standard term in web content for articles that are not time-sensitive. An article on how Quicksort works is an evergreen article. One on predictions for tomorrow's stock market moves is not.


More to the point, it shows that Backblaze cares more about content-marketing SEO than HDD reliability.


Yev from Backblaze here -> We've actually done a lot of experimenting, and content/SEO marketing has helped us grow and expand quite a bit over the last year. As for HDD reliability...our software works around HDD failures, so they don't affect us too much, but we do like reporting what we've seen in our environment. Hopefully other companies will do it too.


It'd be better if they put the date on, but I don't see Amazon or Facebook or Google sharing their reliability data. As long as Backblaze is more open than the industry standard, I think that's a good thing regardless of their motives.


They aren't "evergreen" by nature though, right? Many of the models you list will probably be gone in a handful of years, and the reliability numbers on different companies and different disk sizes could have done a complete 180. Why, just contrast the great numbers out of Hitachi with the Hitachi DeathStar of old...


I'm not sure if it's been edited, but the current post seems fine to me. It doesn't have a date on the post per se, but it's very clear up front when the data is from: the table is labeled "Failure Rates Through December 31, 2014" in bold in the table header, which leaves little room for confusion.


We haven't edited the current post! It does state that a year ago we published the first set, but that assumes you read last-year's post :)


Update -> We have NOW edited the posts, so they all have dates :)


You definitely should put a date on everything. Without a date, your post loses much of its context: http://www.observationalhazard.com/2014/09/put-date-on-your-...


Hey Yev, how about something secure like TLS 1.2 on the blog? RC4 and 3DES are great if you're playing hackmes at DEF CON, letting people break your 'secure' connections on the fly using laptop GPUs, but not on the blog of a supposedly secure cloud storage company.


One minor point since I see a lot of people get this wrong: 3DES is, to the best of anyone's (public) knowledge over the last like 35 years, secure to its intended security level (112 bits, which is basically fine). The only real problem with it is that it's ridiculously slow; there are plenty of faster stream ciphers with higher security levels. 1DES, meanwhile, is weak, but only because a 56-bit security level is useless today.

That said, yes, TLS 1.0, RC4, and SHA1 are all starting to smell, and it's probably time to toss them down the drain.
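For context on that 112-bit figure: three independent 56-bit DES keys give a 168-bit keyspace on paper, but the classic meet-in-the-middle attack cuts the effective work to roughly 2^112. A rough sketch of the orders of magnitude involved (the keys-per-second rate is an illustrative assumption, not a benchmark):

```python
DES_KEY_BITS = 56

single_des_keys = 2 ** DES_KEY_BITS           # ~7.2e16: brute-forceable today
triple_des_paper = 2 ** (3 * DES_KEY_BITS)    # 168-bit keyspace on paper
triple_des_mitm = 2 ** (2 * DES_KEY_BITS)     # ~2^112 after meet-in-the-middle

# Assume an optimistic, purely illustrative trillion keys per second:
KEYS_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3600 * 24 * 365

print(f"1DES: {single_des_keys / KEYS_PER_SECOND / SECONDS_PER_YEAR:.4f} years")
print(f"3DES: {triple_des_mitm / KEYS_PER_SECOND / SECONDS_PER_YEAR:.2e} years")
```

Even with the meet-in-the-middle shortcut, 2^112 is far beyond any feasible brute force, which is why 3DES's real problem is speed rather than key strength.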


Why does it matter here? Are you so worried that someone will eavesdrop on your blog reading?


Please put dates on things that are published. It can be important information for readers. Leaving off dates is frustrating and doesn't make the content any fresher in any way.


"... Yev from Backblaze here -> it's an internal debate as to whether we should put dates on everything. ..."

@Yev, writing about specific technology is time-bound and doesn't exactly have the half life of DNA.

Having said that, I appreciate all that Backblaze writes about hardware. It's interesting and easy to read. This takes time and thought, you (at Backblaze) must be doing something right.


Yea, for posts like this it certainly made sense to put dates on them, since the data has a shelf-life. But for other "entrepreneurship-centered" posts we figured it would be best if they seemed "fresh" forever: even if they'd get chronologically moved down on our own site, search traffic would still have the posts appear "newish". We've since added dates to all the posts...as nature intended ;-)


Websites don't go yellow as they get older; it's really common for a page from 10 years ago to look like it was posted yesterday.

If you don't date your articles, they're of casual interest only, because I can't be sure they're current enough to use them as the basis for decisions.


One of my preferred middle-grounds (what I use for my own writing) is to add a "Last updated: <date>" at the bottom or in faded font at the top of my posts. This way the information is there, and useful, but that doesn't mean the post cannot still be relevant today.


This (and I'd argue most tech posts) is not at all evergreen content though.


You could do:

- Show the last update date on all posts;
- Update the evergreen posts every 3-6 months to keep them updated and still green;
- The non-evergreen posts can be considered not useful when old, so let them be old.


Perhaps a good middle ground is to add a date to these types of posts - but put the date in an image, so it's user-readable but most likely skipped by search engines.


Hopefully the end-goal of all these posts and pages is somewhat real-time (maybe updated daily or weekly) data on hard drive reliability. Then the blog posts just serve as a layman's write-up of the interpretation of the data.


Unless they just added the date all over the first couple of paragraphs, the content itself makes it immediately clear exactly what time period it covers, and that's not hard to see at all.


It would be cool to have a browser plugin that could overlay the posting date onto a page. The approximate post date should be inferable from archive.org or Google cache.
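For what it's worth, archive.org already exposes an availability API that such a plugin could query. A minimal sketch (error handling omitted; note the earliest-snapshot heuristic only approximates the publish date, since archiving can lag publication):

```python
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def parse_snapshot_timestamp(payload):
    """Pull the capture timestamp out of an availability-API response dict."""
    closest = payload.get("archived_snapshots", {}).get("closest")
    return closest["timestamp"] if closest else None

def earliest_snapshot_timestamp(url):
    """Ask archive.org for the snapshot closest to 1990, i.e. roughly the
    earliest capture of the page."""
    query = urllib.parse.urlencode({"url": url, "timestamp": "19900101"})
    with urllib.request.urlopen(f"{WAYBACK_API}?{query}") as resp:
        return parse_snapshot_timestamp(json.load(resp))
```

A plugin would then render the returned YYYYMMDDhhmmss timestamp as an overlay on the page.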


It is just under the title at the top of the article: " What is the Best Hard Drive? January 21st, 2015 "


Important News Bulletin -> Dates are now back on the individual blog posts!


Is it only because they did a similar post last year that you were confused? Maybe this just needs "2015" in the title?


They recently did an article on 6TB drives. My first thought was that it could be a misnamed repost, and I started by searching for a date.


I've been listening to the discussions of data recovery technicians for many years. As such, I took a guess at the results before the page loaded.

And I was right. Seagate bad. Hitachi best. WD almost as reliable as Hitachi.

I can recall one or two bad WD models over recent years, the one at the top of my memory is the 500GByte WD5000AAKS. There was a flaw in the WD10EACS that made it park too frequently, but at least it had a parking ramp.

Hitachi has been a good vendor since the "deathstar" glass platter fiasco blew over. Now: Deskstar 7K's forever!

On the other hand, Seagate seems to ship flaw after flaw. Lately they tend to have spindle bearings that die if the drive gets bumped. If that doesn't happen, surely a head will go bad.


It's funny how things can turn around so completely. Around the time of the Deathstar fiasco, you couldn't pay me to use a Hitachi. Seagate was the brand to buy. Now it's the complete opposite.


I still cringe a little bit when I see "Deskstar", but apparently they've turned it around in a big way. I'm still irrationally scared, though.


Me, too. I put one in my RAID; part of my strategy is to use a variety of manufacturers and drive types so the drives don't fail all at once. This one is 50x the capacity of my cursed 75GXP...

http://www.computerhistory.org/groups/storagesig/media/docs/...


Same here. I'm surprised they haven't rebranded the line to try to shake the association (like Microsoft seems to be doing with IE).


I'm going to be mad forever at WD for pioneering the approach to distributing drive firmware & tuning parameters over both stripe 0 and the drive's on-board ROM, making board swaps impossible for small shops.

But I'm really disappointed at what's happened to Seagate. I've gone from total confidence in recommending their products to actively recommending against the brand in just a couple of years, and our experiences roughly match Backblaze's. (Though on a much much smaller scale.)


In the older days some of these disks had a discrete ROM for this, but recently it has been integrated into a larger chip. It seems that if you have the rework skills, you can transfer the chip with integrated memory to the new board. It's certainly gross when the spindle motor driver on one of these boards explodes.

However if you take the lid off a WD drive, the head arm pivot goes out of alignment, since it is secured in place by one of the lid screws. So, if you have to swap internal parts for data recovery, you should own or make a jig to hold the top of the pivot and make fine adjustments, or you will never be successful. Depending on your skills, this may present more difficulty than reworking a board for a swap.

Luckily I have never had to try a DIY data recovery.


> However if you take the lid off a WD drive, the head arm pivot goes out of alignment, since it is secured in place by one of the lid screws.

Oh god, how did I forget about that? Yeah, the guys on hddguru had things to say about that a few years back. A few folks made some really good side money making a tool specifically for realigning the heads on those WD drives.

I used to do the very occasional DIY data recovery for people that had the need but not the money for a professional recovery. My last one was well over 5 years ago, WD and the others made it impossible for people like me to do it anymore. (We still can do recoveries on failing drives under very limited circumstances using a custom server/software we patched together.)

You really have to pay for expensive yearly training and piles of equipment to do data recoveries now.


The hddguru article was most memorable for this picture: http://hddguru.com/articles/2006.02.17-Changing-headstack-Q-...

Edit: article itself is here if you're curious about the context of that picture: http://hddguru.com/articles/2006.02.17-Changing-headstack-Q-...


Seagate started tanking, not surprisingly, shortly after they acquired Maxtor. Generally the last good stretch of Seagate drives was just before 750GB was the top size.

When the drives started reaching 1TB capacity... There were models where they started throwing Seagate branded drives into Maxtor external cases... And that was when it all went to shit. The infamous 7200.11 series.

Dealing with the 7200.11 drives was like the capacitor plague. They seemed to work well long enough to get other people to buy loads... then the firmware problems... then the sudden head crash problems... then the sudden won't power up anymore after a reboot problems...

All the same kind of shit that people saw with Maxtor post 2GB that caused people to stop using those drives en masse. Those old 2GB Maxtor drives were reliable as fuck, though.

I'd normally end something like this with a bit about how history repeats itself and I fear for the next drive company that will get bought out by Seagate, but when you look at it... We're down to two, with a small smattering of subsidiaries that no one can really be sure are rebrands or properly made drives.

Case in point. Hitachi is owned by WD and Samsung HD is owned by Seagate now. Toshiba owns some WD assets and bought out Fujitsu's HD division. Beyond that, not much left.


And they put out a ridiculous amount of heat too. I found one that was too hot to touch in a 24-drive array; it heated up the adjacent drives to the point that they too refused to work. So what looked like an extremely unlikely double drive failure ended up being only one.


> There was a flaw in the WD10EACS that made it park too frequently,

I don't know that specific model, but frequent parking is not a bug; it's a (highly annoying) feature of (most?) 2.5" and green 3.5" WD drives from some years.


In fairness to Hitachi, the "Deathstar" was developed on IBM's watch, no?


Let me start off by saying I am a Backblaze customer and I LOVE these posts and use them as my bible for buying drives for my NAS/Media Server.

With that said, this is going to sound a little nitpicky, but why isn't the publication date anywhere on this article? I've noticed a growing number of blogs not putting the publish date anywhere on the page, which makes it really hard to know whether what they say is still valid. In this case you can infer from the dates of the drives they tested that it was published in 2015, but a lot of blogs don't have those clues to let you know when a post went up. Normally you can look at the URL to see the date, but BB uses slugs off /blog/, so you can't get the date from there either.


Backblaze engineer here, the blog folks are trying to fix it right now. I also hate "evergreen" blog posts that don't include a date, I wasn't aware we were doing that. :-(


Backblaze blog folk here -> change is hard :-(


Thanks! You guys rock, I love your posts and product.


Important News Bulletin -> There are now dates on the individual blog posts.


I'm feeling foolish for not reading this before purchasing four Seagate Barracuda 7200.14s.

I've had no failures so far, but with a 50% failure rate it's bound to happen soon.


As someone with 8x3TB Seagates, I feel your pain. My new build will only use 4TB+ Seagates or HGST (but they are pricey).


I love the data, but I'm a little mad that Hitachi had to die as a company and got sold off in pieces.

I'm not sure if I trust what you guys did, calling the old Hitachi Drives as HGST. Technically, the 3.5" drives from Hitachi got sold to Toshiba.

For example, the Toshiba DT01ACA300 is allegedly the same design as the Hitachi 7k3000.

In the following link, note that the "HDS723030BLE640" is the model number to the 7k3000 with ridiculously low failure rates in the Backblaze study.

http://goughlui.com/2013/02/26/toshiba-dt01aca300-aka-hitach...

I don't know if HGST drives are related to the old Hitachi models at all.

Again, I hate to criticize such awesome research that you've given away for free on your blog. But I'd definitely like some research and clarification on the Hitachi -> Toshiba or Hitachi -> HGST situation.


Backblaze employee here - we're also going to release the raw data in a few weeks, which I believe will have every last drive serial number (that uniquely identifies the hard drive in the universe) plus a metric ton of other stuff, so hopefully you can mine that for the info you want.


> I'm not sure if I trust what you guys did, calling the old Hitachi Drives as HGST.

> I don't know if HGST drives are related to the old Hitachi models at all.

FTA:

> Some of the HGST drives listed were manufactured under their previous brand, Hitachi. We’ve been asked to use the HGST name and we have honored that request.


>We’ve been asked to use the HGST name and we have honored that request. //

Intriguing - that suggests some sort of advertising deal? Which calls into question the impartiality of the report.

Who asked them? In what context? Why accede to a third party's interference in the report?

Are Backblaze getting reduced price HGST drives now?


I'm a little confused why we expect people to be hostile to friendly requests in the absence of back-room dealing... what's the harm in saying "sure" and including that footnote? And don't we expect enough human decency for them to mention if there were financial considerations in the reporting?


The "harm" is in misinformation, and damage to the brand.

The Hitachi 5K3000 and Hitachi 7K3000 are amongst the most reliable hard drives in the report. Those drives are 3 years or more old and yet have some of the lowest failure rates in the study.

Unfortunately, Hitachi doesn't exist anymore. Hitachi as a company was split and sold in pieces. It is either HGST (owned by WD) or Toshiba.

I want to buy the 7k3000 today. Which hard drive manufacturer is making the modern equivalent? Is it the Toshiba DT01ACA300, or is it the "HGST Deskstar" ??

Despite sharing the same branding, I don't think the HGST Deskstar is in fact from the same factory as the Hitachi Deskstars. When you look at the HGST Deskstars, they have WD technology in them ("CoolSpin" RPM).

The Toshiba DT01ACA300 has very similar performance characteristics to the old Hitachi 7k3000 drive.

So it's a confusing situation, which is why I'd like clarification. I bet the Toshiba DT01ACA300 is actually the super-reliable hard drive that I want, but I admit I'm a bit ignorant on this front.


Switching factories could certainly happen without a corporate structure change, though, right?

If it's important to track this information, I think it's relevant to mention it specifically, and I'd expect an honest review to discuss important changes like this (if they're aware of them), regardless of the brand names mentioned. If HGST requested that the review not mention a change in factory or technology, or refused to answer questions about it, then yes, I'd suspect malice somewhere.


The manufacturer of the hard drives was listed incorrectly by Backblaze. It's as simple as that.

Hitachi (owned by Toshiba) and HGST (owned by WD) are two different companies. You can't say a drive was manufactured by Hitachi/Toshiba when it was actually an HGST drive.


If the drive has|had the Hitachi brand on it you absolutely can say that's what brand of drive it is|was.

As I understand: WD bought HGST (Hitachi Global Storage Technologies), then sold on the manufacturing of 3.5" drives to Toshiba to comply with EU and FTC requirements (http://www.anandtech.com/show/5635/western-digital-to-sell-h...). WD retained the 2.5" side of the HGST manufacture.

I doubt anyone wants to buy a WD HGST branded drive on the strength of the past manufacture in what are now Toshiba owned factories. WD would want the past drives, sold as Hitachi, to be called HGST as it boosts that brand - but to some extent that would be misinformation.

So back to this:

>Some of the HGST drives listed were manufactured under their previous brand, Hitachi. //

Those Hitachi brand drives were, it seems, made in facilities that Toshiba now owns. HGST is now a WD brand (though they were not quite one company as of Dec 2014); unless the "Hitachi" drives were made in WD facilities, calling them HGST gives quite the wrong impression.

It's quite convoluted and I don't really follow HDD manufacture news that much; please correct me if I'm wrong.


This is my understanding as well, but as you note, it's very convoluted.

Which is why I'm sure the HGST label on the graphs is an honest mistake. The hard drive world was severely shaken up when Hitachi got bought out by WD, and then the FTC / EU split that company up on monopoly grounds.

There's no need to use a conspiracy theory. Nonetheless, it seems like the super-reliable "Hitachi 7k3000" of old is in fact a Toshiba drive today.


That is exactly how I understand it as well. HGST, i.e. all the HDDs from Hitachi, are now basically under Toshiba. The 2.5" HDD capacity that was sold to WD is all under the WD brand.


I do realize that is what was asked of them. But that doesn't necessarily mean that was the correct way to go about it.

The Serial Number / Model Number information on the other hand should work if enough details are provided.


The winner of this post, HGST (formerly Hitachi Global Storage Technologies), is a wholly owned subsidiary of Western Digital. Their NAS drives are well rated on Amazon and Newegg.

Personally, I'm more concerned about the lifetime of laptop HDDs (500GB range). I wonder what the time frame is around which I should swap mine out (and what brand to get).


Given that you can get 512GB M.2 SSDs now, I'd say go for that if you're worried about reliability. From what I've read, SSDs are more reliable unless you're trying very hard to wear out their endurance (on the order of 2PB according to http://techreport.com/review/27436/the-ssd-endurance-experim... )
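To put that ~2PB figure in perspective, a back-of-the-envelope calculation (the 50GB/day write rate is an assumed heavy-desktop workload, not a measured one):

```python
ENDURANCE_BYTES = 2e15   # ~2 PB, the figure from the TechReport endurance test
DAILY_WRITES = 50e9      # assumed heavy desktop workload: 50 GB written per day

days = ENDURANCE_BYTES / DAILY_WRITES
print(f"{days / 365:.0f} years to exhaust write endurance")  # -> "110 years ..."
```

In other words, for typical desktop use the flash wears out long after the rest of the machine is obsolete; controller and power-loss failures are the more realistic concerns.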


Early SSDs used to die quickly from power failures but then the manufacturers improved them by using better capacitors.


But there can be some interesting not-a-failure modes. I know because I suffered one, and the recovery was a nightmare due to an overly protective UEFI. Fortunately, anything on my local SSD is only cache and not worth worrying about ;) (and I managed to put the UEFI in dumb mode so it wouldn't interfere)


For the "not-a-failure" modes, are you referring to things like flipped bits? I had that happen on a USB3 high-speed thumb drive, where several of my source code files had random capitalization throughout the file (bit 6 was flipped on) and other weird corruption that indicated specific bits got either flipped on or off. But the drive was at 98% full when this happened too.
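That pattern makes sense: in ASCII, upper and lower case differ only in the 0x20 bit, so a single flipped bit shows up as a case change rather than obvious garbage. A small illustration (the sample string and corrupted positions are made up):

```python
# A flipped 0x20 bit in ASCII toggles letter case: 'A' (0x41) XOR 0x20 = 'a' (0x61).
# This is why single-bit corruption in a text file often appears as random
# capitalization changes instead of unreadable bytes.

def flip_case_bit(text, positions):
    """Simulate 0x20-bit corruption at the given character positions."""
    return "".join(
        chr(ord(c) ^ 0x20) if i in positions else c
        for i, c in enumerate(text)
    )

corrupted = flip_case_bit("int main(void)", {0, 4})
print(corrupted)  # -> "Int Main(void)"
```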


SSDs also have a wide variance in failure rates by model, the highest I've heard is 8%.


Oof. I've got a NAS with 4 3TB Seagate Barracudas, same model number as the drives with the 46% failure rate. One's already failed, and I replaced it with the same model drive.

Anyone have a recommendation for how to replace all the drives with more reliable models without wrecking the RAID? Downtime is okay, but I don't have spare space for the contents to make a giant backup. It's currently set up as a "Synology Hybrid RAID" that looks an awful lot like a RAID 5.


Without a backup, you need a temporary, identical NAS device with the 4 new disks. Then do a file-level copy. Then put the 4 new disks in the old NAS and import the array. Do you know of a store with a very lenient returns policy?

An online capacity expansion (replacing one drive at a time) will mean a very long time in a degraded state, with the possibility of multiple disk failures and increased chances of data loss (power going off, etc.)
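The degraded-window risk is easy to ballpark. A rough sketch assuming independent drives and a constant hazard rate derived from the AFR (the 46% AFR echoes the figure cited elsewhere in the thread for this model; the 24-hour rebuild window is an assumption, and real rebuilds of 3TB drives can take much longer):

```python
import math

def p_any_failure(n_drives, afr, hours):
    """Probability that at least one of n drives fails during the window,
    assuming a constant hazard rate derived from the annual failure rate."""
    hazard = -math.log(1 - afr)                      # per-year hazard rate
    p_one = 1 - math.exp(-hazard * hours / (24 * 365))
    return 1 - (1 - p_one) ** n_drives

# RAID 5 rebuild: 3 surviving drives at 46% AFR, 24-hour rebuild window.
print(f"{p_any_failure(3, 0.46, 24):.2%}")  # -> "0.51%"
```

Half a percent per rebuild day adds up quickly if you replace four drives one at a time, which is the parent's point.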


Are you looking to replace all 4 disks at once?

If you have a spare PC lying around, check out the XPenology forums [1] for the software (specifically, NanoBoot or XPEnoboot) required to create your own Synology clone.

You'll be able to create a new SHR volume with new disks. As suggested in another post, do a file level copy to the new volume. You should then be able to replace all the disks in your actual Synology NAS with your new disks at once, and perform a migration [2] so everything works with your Synology hardware.

(This is possibly a better solution than replacing the disks one by one, since there's the risk of another disk failing while the RAID is being rebuilt.)

[1] http://xpenology.com/forum/viewforum.php?f=2

[2] https://www.synology.com/en-us/knowledgebase/tutorials/484


HGST drives are slightly larger than Seagate of the same advertised capacity. Replacing Seagate drives with HGST ones is therefore easy, while it's often impossible the other way around (fortunately :)


How full is your NAS? If you're not that full and your data could fit on one non-RAID hard drive, you can add a 4th drive as a second disk group, copy folders manually in Diskstation to the 4th drive, then pull the other three, create a new 3-disk disk group there, copy back once that's built, then wipe the 4th and add that. That's convoluted but for me was the easiest and most reliable way. With the RAID max_speed tweaks, it was all done in about 72 hours end to end.


If the NAS uses Linux RAID, you just need drives that are AS LARGE or LARGER than the current drives (precisely, not what's marked on the box). If you want to be on the super safe side and are uncertain, get somebody to help you dd (or dcfldd) each old drive onto a new drive, then just pop the whole new set into the NAS and let it rip.

It would also be a good idea to check the NAS vendors forums.


While I love these recurring blog posts, it would be even nicer if Backblaze could wrap a public API and/or pretty interface around their live monitoring.

I'd happily pay a small fee for access to an up-to-date snapshot when I'm in the market for harddrives.


Yev from Backblaze here -> Not a bad idea, though that might take a lot of work on our end. In the mean time, we're going to be releasing some raw data in the coming weeks, so stay tuned for that!


> The Seagate Barracuda 7200.14 3 TB drives are another story. We’ll cover how we handled their failure rates in a future blog post.

I'm really looking forward to this... could it be a similar problem to the one that caused the huge amount of 7200.11 failures a few years ago?


Probably just more sensitive to vibration in long term use. 40% AFR in normal conditions would be insane, and I doubt we'd need Backblaze to point it out.

I have six ST3000DM001's and seven ST31500341AS's, all with about 2 years on the clock (~20,000 power-on hours). With 40% and 25% AFRs respectively, the odds of all thirteen surviving are:

1 / (0.6^12 * 0.75^14) = 1 in 25,782

I doubt I'm that lucky.
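The arithmetic above checks out; a one-liner to verify it, assuming each drive independently survives each year with probability (1 - AFR):

```python
# Six drives at 40% AFR and seven at 25% AFR, each observed for two years.
p_all_survive = (1 - 0.40) ** (6 * 2) * (1 - 0.25) ** (7 * 2)
print(f"1 in {1 / p_all_survive:,.0f}")  # -> "1 in 25,783"
```

(The comment's 25,782 is the same number, truncated rather than rounded.)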


Are you referring to the botched firmware ones (I think the firmware revision was SD15)?

If so, at the time (~2008) I had an HTPC with 10 1TB-1.5TB drives with 6 of 10 affected by the firmware issues. I was able to successfully flash all my drives at the time with no data loss.


I built a PC for a family friend around that time... I'd spec'd it with a different drive, but they showed it to the IT guy at their work, who recommended Seagate. 6 months later the drive stopped working, and it had to be shipped to the other side of the world to be reflashed.


Sadly you could only flash them if they were still working; you had to ship them off if they bricked. I saw the bulletin and promptly flashed all of mine before failure. Big PITA though.


I've got a sample size of 1 on those things. Glad it's not more, because it made for an unhappy Thanksgiving when it failed.


I have five Seagate branded 3TB external disks, so have no real idea what's inside.

Three of them are dead. I stopped using the others.


Agreed. I think I had around 7 different Barracuda 7200s... not a single one remains operable; all failed. They did survive >1 yr, but they just didn't have the longevity of the WDs or Hitachis that ran on the same infra (and, in most cases, continue to run).


Same here: one failure from the one drive I bought.


Looks like Seagate drives are at a very good price point for them, enough to compensate for the higher failure rates.

It looks like Backblaze managed to build a system with good fault tolerance, such that they can control the price/failure-rate ratio as they need.

In summary:

- HGST: good
- WD: expensive (in bulk)
- Seagate: we buy them despite failures


I think there has never been a more personal piece of technology than a hard drive full of your important data, especially in the pre-cloud era; now people just don't worry as much about dying hardware.

But back in the old days, every one of us lost very personal data to hardware failure at some point. And it's interesting how opinions about brands are formed.

For example, my first drive that died was a Seagate. Surprise surprise, my opinion of said company is very low now. But in large part it's a very biased opinion, especially given that I have never owned another Seagate since the crash. I love WD and have never had one fail; I currently have about 20, full of data. But I've met a bunch of folks who hate WD, whose WD drives crashed and lost important data. And they love Seagate.

It's interesting how opinions about hard drives are formed throughout our lives.


I had the opposite issue. Bought WD Black for my first PC build. It died within ~3 weeks. Bought another to replace it. Again dead within a month. Have about 8 Seagate drives now and have not had an issue at all. Will avoid WD at all costs now.


I can't load this site because Backblaze only supports 2 TLS ciphers: TLS_RSA_WITH_RC4_128_SHA and TLS_RSA_WITH_3DES_EDE_CBC_SHA.


https://www.ssllabs.com/ssltest/analyze.html?d=https%3A%2F%2...

RC4 and 3DES, top of the line secure Cloud storage service indeed.

I browse with old insecure ciphers disabled, and I seem to remember having no problem reading the Backblaze blog before. Either they downgraded their security (the Oculus Rift website used to downgrade to RC4 whenever you wanted to pay them any money, but that was before the FB buyout), or I disabled RC4 later than I thought.
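For what it's worth, disabling those ciphers client-side is a one-liner. A sketch with Python's `ssl` module (the cipher-string syntax is OpenSSL's; modern OpenSSL builds may already exclude RC4 and 3DES from DEFAULT):

```python
import ssl

# Build a client context that refuses RC4 and 3DES, mimicking a browser
# configured with old, insecure ciphers disabled. A handshake against a
# server offering only those two suites would then fail.
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:!RC4:!3DES")

enabled = [c["name"] for c in ctx.get_ciphers()]
print(len(enabled), "ciphers enabled; RC4 present:",
      any("RC4" in name for name in enabled))
```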


Those are old ciphers, so they should be easy to enable in your browser.


Have you tried using Google Chrome?


I think his point is that neither of those are secure.


I'll try to find the URL again, but in the meantime: I've read a comment on a thread from a veteran technician saying that hard drive failures were mostly due to heat fluctuations (material expansion), and that without this information, failure rates aren't valuable enough.


Love and appreciate that Backblaze presents this data, but there are some obvious issues: comparing the annual failure rate of a pool where avg. age is 1 year to a pool where avg. age is 3 years isn't a great comparison.

Rather than annual rates, I'd really prefer to see failure rate after six months, one year, two years, etc. This would make it easier to answer questions like "was drive X always bad? Or just when new? Or just when old?"
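A simple life-table sketch of that kind of by-age analysis (with made-up per-drive records, since the raw data isn't out yet): drives that survived into each age bucket form the denominator, and failures within the bucket form the numerator.

```python
# Hypothetical per-drive records: (model, age in years at failure or at
# last observation, whether it failed). Not real Backblaze data.
drives = [
    ("ST3000DM001", 0.4, True),     ("ST3000DM001", 2.1, False),
    ("ST3000DM001", 1.7, True),     ("HDS5C3030ALA630", 2.6, False),
    ("HDS5C3030ALA630", 0.9, True), ("HDS5C3030ALA630", 3.0, False),
]

# A drive is "at risk" in a bucket if it survived to the bucket's start;
# it counts as a failure there if it failed before the bucket's end.
rates = {}
for start, end in [(0, 1), (1, 2), (2, 3)]:
    at_risk = [d for d in drives if d[1] >= start]
    failed = [d for d in at_risk if d[2] and d[1] < end]
    rates[(start, end)] = (len(failed), len(at_risk))
    print(f"{start}-{end} yr: {len(failed)}/{len(at_risk)} failed")
```

This answers "was drive X bad when new, or only when old?" directly, which a single annualized rate averages away.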


Backblaze employee here - we're going to release the raw data in a couple weeks, so everybody can do their own analysis.


Can't wait for this, but it'd be great if you considered having a "real-time"-ish data feed for everyone to hook into. Then we don't have to wait for the annual blog post ;-)


The top two rows seem to be contradictory. How can you have a smaller annual failure rate AND a smaller average age?

  Name/Model                              Size    Drives  Avg Age  AFR   95% CI
  HGST Deskstar 7K2000 (HDS722020ALA330)  2.0 TB   4,641   3.9 yr  1.1%  0.8% – 1.4%
  HGST Deskstar 5K3000 (HDS5C3030ALA630)  3.0 TB   4,595   2.6 yr  0.6%  0.4% – 0.9%


Reading the model numbers and sizes, the top row is 2TB drives, the second row is 3TB; the 3TB were launched more recently. So, they have a smaller average age. And, apparently, they also fail at a lower rate.


Interesting. I built a home FreeNAS a few years ago with Seagate 3TB drives (7200.12 IIRC) and saw 3 of them (out of 12) fail well under 2 years... at this point, I've had so many issues it's pretty much sitting there. I put 4x4TB WD Reds into my older Synology NAS and have been using that without issue.


It's a shame single-platter 1TB drives were not in that test.

I bet they are the most reliable because of the single platter.


This sums up my experience with Seagate. I bought my first Barracuda 3TB two months ago; it died 20 days later with the notorious two-click sound.

I can't even be bothered to pursue a warranty replacement, terrible. Sticking with WD from now on.


This same URL was submitted an hour earlier and got 4 upvotes. This one has 222 as I write this.

https://news.ycombinator.com/item?id=8923016

Timing is everything, I guess.


Much like comedy!


Dup of https://news.ycombinator.com/item?id=8922996 submitted almost two hours earlier.


You've posted a broken URL: "Post not available".


I guess I have Backblaze to thank for that. It was a valid URL when I posted it. sigh


Yev from Backblaze here -> we saw that and couldn't track it down or figure out where it came from, the post itself didn't change since we published it this morning. Sorry Notacoward :(


I'm super excited for the new Seagate 8tb drives @$260/each.

Please order a carton of them and let us know ASAP how they work for you!


The data is not accurate. The stats are raw counts, which means these guys aren't mentioning how many hard drives each vendor sold; they're just comparing the numbers of faulty drives. These stats or demographics can only be considered reasonable if you take into account the number of HGST and Seagate units sold.

Punchline: this review or analysis could be biased.


They're not just comparing the faulty drives numbers. Maybe you misunderstood the way they presented the data: in the first table, the "Number of Drives" column refers to the total number of drives they studied for each model, not just the ones that failed. So they give the sample size and the rate of failure for each model.

Or are you saying that the difference in worldwide sales for each drive model somehow renders Backblaze's failure rates invalid? The size of the population is irrelevant unless the size of the sample exceeds a few percent of the total population you are examining.
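Indeed, the confidence intervals in the post can be roughly reproduced from the sample alone, with no reference to worldwide sales. A sketch using a normal approximation to a Poisson failure count (the failure count and drive-years below are back-of-envelope guesses, not figures from the post):

```python
import math

def afr_with_ci(failures, drive_years, z=1.96):
    """Annual failure rate plus a normal-approximation 95% CI, treating
    the failure count as Poisson over the observed drive-years."""
    rate = failures / drive_years
    half_width = z * math.sqrt(failures) / drive_years
    return rate, max(rate - half_width, 0.0), rate + half_width

# Roughly 51 failures over ~4,641 drive-years (hypothetical numbers)
rate, lo, hi = afr_with_ci(51, 4641)
print(f"AFR {rate:.1%}, 95% CI {lo:.1%} - {hi:.1%}")
```

Plugging in plausible numbers for the HGST 7K2000 row lands on an interval close to the published 0.8% – 1.4%, which is consistent with the CI depending only on the sample.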


By definition, it's going to be a true analysis of the drives when obtained the way Backblaze obtains them and when used in the environment Backblaze uses them. Both of those may be different from Joe Random consumer.


Is there any idea which drives were shucked or not shucked from external enclosures?


Does anyone know if HGST's 1TB hard drives follow this reliability trend?


7K1000's are fantastic in my experience


Hum... I think this explains my one dying drive at home. (4 Seagate drives)


The Western Digital Velociraptor is pretty tough to beat. Near SSD performance (depending on what you are doing) and reliable. For compiling code it's definitely the best choice.


Ehh; not yet trusting SSDs (at least not back in 2011 when I did my last computer builds), I prefer 15,000 RPM SAS enterprise drives: 50% greater rotational speed, SAS should be faster than SATA, and even Seagate (true) enterprise disks should be more reliable than WD consumer-grade disks.

Now/next time I'll seriously consider SSDs.


Unless you could use an SSD? (most people who compile code are valuable enough that their time warrants the fastest thing available, no?)


Is "compiling code" an IO intensive (IO bound) task?


It certainly can be. Theoretically your OS should be pretty good about caching files in memory, but in practice I've measured significant differences when compiling large C++ projects on a fast SSD.


So, on a given day or in a given week, how many drives do you replace?



