
How long do disk drives last? - vayan
http://blog.backblaze.com/2013/11/12/how-long-do-disk-drives-last/
======
wazoox
There are extremely large differences in reliability between different drive
makes and models. Here are a couple of numbers (we build storage servers,
running 24/7 in various environments):

* My company installed about 4500 Seagate Barracuda ES2 drives (500 GB, 750 GB and mostly 1 TB) between 2008 and 2010. These drives are utter crap: in 5 years we had about 1500 failures, much worse than the previous generation (250 GB to 750 GB Barracuda ES).

* After replacing several hundred drives, we decided to jump ship in 2010 and went with Hitachi (nowadays HGST). Of the roughly 3000 Hitachi/HGST drives used in the past 3 years, we had about 20 failures. Only one of the 200 Hitachi drives shipped between 2007 and 2009 failed. Most of the failed drives were 3 TB drives, ergo the 3 TB HGST HUA drives are less reliable than the 2 TB, themselves less reliable than the 1 TB model (which is, by all measures, absolutely rock solid).

* Of the few WD drives we installed, we replaced about 10% in the past 3 years. Not exactly impressive, but the sample isn't significant either.

* We replaced a number of Seagate Barracudas with Constellations, and these seem to be reliable so far; however, the numbers aren't significant enough yet (only about 120 used in the past 2 years).

* About SSDs: SSDs are quite a hit-and-miss game. We started back in 2008 with M-Tron (now dead). M-Tron drives were horribly expensive, but my main compilation server still runs on a bunch of them. Of all the M-Tron SSDs we had (from 16 GB to 128 GB), not one has ever failed. They are 5 years old now, and still fast.

We've tried some other brands: Intel, SuperTalent... Some SuperTalent SSDs had
terrible firmware, and the drives would crash under heavy load! They
disappeared from the bus when stressed, but came back OK after a power cycle.
Oh my...

So far, unfortunately, SSDs seem to be about as reliable as spinning rust. The
latest generations fare better, and may actually best current hard drives
(we'll see in a few years how they hold up in retrospect).

~~~
atYevP
Yev from Backblaze -> We love Hitachi drives. They make us really happy.
Unfortunately they are also more expensive than WD and Seagate drives, and
since our #1 factor in drive purchasing is price, we don't get them very
often :(

~~~
pavs
But isn't it cheaper in the long run if your drives fail less often?

~~~
atYevP
Technically yes, especially from a manpower perspective (it takes folks to
replace the drives), and it gets factored into our "what are we willing to
pay" model, but so far cheaper drives still win out over longevity. Now, this
is different from other people with large data farms; we operate a bit
differently, but thus far cheaper drives make more business sense. If that
ever changes, we'll switch to the good stuff :)

------
drzaiusapelord
Sysadmin here. My experience:

1\. Infant mortality. Drives fail after a couple of months of use.

2\. 3 year mark. This is where failures begin for typical workloads.

3\. 4-6 year mark. This is when you can expect the drives that haven't failed
earlier to fail. By this point, we're looking at a 33% failure rate.

Interesting that my experiences roughly match up with Chart 1.

My experience is with 10k and 15k RPM SAS drives. Slower-moving 7200 RPM
drives? No idea. Haven't used them in servers in a while. They seem more of a
crapshoot to me. SSDs, thus far, are even more of a crapshoot; we don't use
them in servers, and only hesitantly (and only Intel) in desktops/laptops.

~~~
rsync
Agreed RE: SSD drives ...

It is very disappointing how flaky and unreliable SSD devices have been when
their promise was just the opposite, due to lack of moving parts.

Back in 1999/2000 I had a habit of building some personal as well as
commercial servers in datacenters with compact flash parts (plain old consumer
CF drives) as boot devices with the goal of fault tolerance in mind. There was
a price to be paid in that these devices needed to be mounted, and run, read-
only.

But they ran forever. I never had one part fail. Just plain old CF drives
mated directly to the IDE interface.

Now fast forward to 2013 and new servers we deploy for rsync.net have a boot
mirror made of two SSDs ... things have gone well, but our general experience
and anecdotal evidence from other parties gives us pause.

One thought: an SSD mirror, if it fails from some weird device bug or strange
"wear" pattern, would fail entirely, since both members of the mirror are
getting the exact same treatment. For that reason, when we build SSD boot
mirrors, we do so with two different parts - either one current-gen and one
previous-gen Intel part, or one Intel part and one Samsung part. That way, if
there _is_ some strange behavior or defect or wear issue, they won't both
experience it.

~~~
baruch
They get the same writes but not the same reads, so depending on the source of
the bug it may not hit both at the same time. The read pattern itself may
affect the way writes are performed to the flash (delaying or speeding up
writes pending commit), so it may have a butterfly effect on the rest of the
behavior and take the disks out of sync with regard to firmware bugs.

If you still followed up on your idea of using a read-only root like you did
with the CF cards, and found a safe place for the logs, you could use the
SSDs in the same mode. Why not go that route?

~~~
rsync
Yes, depending on the bug source. But the bug source might be related to
reads. Nobody knows. Splitting the risk across two different vendors /
implementations seems to be good insurance.

~~~
baruch
I mostly handle server appliances, and a read-only boot disk is bread-and-
butter for anything I do. Bonus points for using an initramfs and never
hitting the boot disk after the initial boot is completed.

But if you stick with boot SSDs that are both read and written to, using
different makes sounds like a good strategy.

~~~
e12e
It would of course be hard to avoid read-only flash no matter what you did -
both the BIOS and the PXE ROM on the Ethernet card would presumably be
read-only flash today (that is, writeable, but in practice only used for
reading).

------
velodrome
Hard disk drive quality has dropped over the last few years.

* Most consumer drives over 2 TB have extremely poor reliability. Just check any Amazon or Newegg reviews (DOA and early mortality show up with more frequency). Yes, I know reviews are not accurate, but since there is no public information on drive failure rates, there is not really much else to go on.

* The reduction of manufacturer warranties since the Thailand floods. Surprise: they never changed them back to the original 3-year warranty.

If you have a large array of disks, there is nothing really to worry about.
If you have a small set of drives, spend a little extra and get the "Black" or
RE drives with a 5-year warranty. Avoid any "Green" drive.

~~~
pedrocr
Why avoid the Green drives? I assumed those were less power hungry and,
spinning slower, more reliable. I've been ordering them for RAID5 arrays and
haven't had too many issues yet.

~~~
jws
Greens have had problems with aggressive head parking. If you have an idle set
of them, you can go through their design limit of head parks in a couple of
months and start to get failures shortly after. Done that.

Check your S.M.A.R.T. data. Look at the head park number (load cycle count, I
think it is called; I can't look it up now). If it is a six-digit number, you
are in trouble. For a server you want it to be on the same order as the number
of power-ups. Anything else and you have to ask yourself "why?"
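
A rough sketch of that check (my own, assuming smartmontools is installed and
the script can read the device; the exact attribute names can vary by vendor):

    #!/usr/bin/env python3
    # Compare the SMART head-park (load cycle) count to the power-up count.
    import re, subprocess, sys

    def smart_raw(device, attribute):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if attribute in line:
                return int(re.split(r"\s+", line.strip())[9])  # RAW_VALUE column
        return None

    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    loads = smart_raw(dev, "Load_Cycle_Count")
    powerups = smart_raw(dev, "Power_Cycle_Count")
    print(dev, "load cycles:", loads, "power-ups:", powerups)
    if loads is not None and loads >= 100000:   # six digits: trouble
        print("aggressive head parking suspected")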

Edit: adding. The 1 TB and smaller Greens were disasters. I ruined a lot of
them. I was told all of the 2 TB and up Greens didn't have head park issues,
but I spent part of last week replacing a storage unit populated with 2 TB
Greens when a spindle failed (>200 unrecoverable blocks) and found that some
of the 2 TB Greens were load cycling into the 200,000 range while others
weren't running up. They were all identical models purchased at the same time.
Maybe they had different firmware? I replaced them with Reds. They aren't
supposed to park, and they won't try to recover a bad sector for more than a
few seconds, so they don't hang your RAID when they get bad sectors.

~~~
nisa
As someone who inherited 240 WD Greens running 24/7:
[http://idle3-tools.sourceforge.net/](http://idle3-tools.sourceforge.net/)
works fine, but disabling the timer has a negative performance impact. 3000
seconds is fine, though. But you need a complete power cycle before the
changes take effect. No more parking. It does make a difference in longevity,
in my not very scientific opinion.

I can second the >200 bad blocks. Sometimes they still work fine after running
badblocks -w on them a few times and raising the timer.
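
A rough sketch of how I drive it from a script (assuming idle3ctl is
installed; the -s argument is the drive's raw timer value, not seconds - see
the idle3-tools docs for the encoding - and the drive needs a full power cycle
afterwards):

    # Read the current WD idle3 (head parking) timer, and optionally raise or
    # disable it. Needs enough privileges to talk to the drive.
    import subprocess, sys

    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    subprocess.run(["idle3ctl", "-g", dev], check=True)   # show current raw value
    # subprocess.run(["idle3ctl", "-s", "<raw value>", dev], check=True)  # raise the timer
    # subprocess.run(["idle3ctl", "-d", dev], check=True)  # or disable it entirely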

~~~
teach
Good to know. I JUST bought a WD Green drive (still in transit from Amazon),
so my future self thanks you.

------
j45
Managing hard drives, especially in redundant setups, can be helped in one
small way if you're sure to:

1) select the make and model of drive you want

2) buy the same model of drive from multiple vendors, so the drives have
different serial and build numbers; even if you're buying two drives, buy each
from a separate location or vendor.

3) mix up the drives to make sure they don't all die together. Place stickers
with the purchase date and invoice number on each drive to keep them straight.

All this because when one drive goes due to a defect or hitting a similar
MTBF, other ones with nearby serial or build numbers tend to die around the
same time for similar reasons.

From owning hard drives over 8 or 9 generations of replacing or upgrading
since the '90s, on all types of servers, desktops and laptops: the day you buy
a new piece of equipment is the day you buy its death. Manage the death
proactively, as it gets more and more tiring to deal with each time.

------
smoyer
Backblaze may not know, because they are "a company that keeps more than
25,000 disk drives spinning all the time". After 3-5 years, you'd better have
a backup of any drive you choose to spin down. Every drive I've lost (in the
last 10-15 years, and ignoring two failed controllers subjected to a close
lightning strike) failed to start back up when I had powered the machine off
for maintenance.

~~~
barrkel
I've bought perhaps 50 drives in the past 20 years, and maybe 10 of them died,
the others mostly becoming obsolete. I only started keeping serious logs about
6 years ago.

Drives have died for me both in 24/7 powered systems and through power cycles.
Drives have reported intermittent failures for many months, but still lived
for years without any actual data loss. The oldest drive I still have spinning
is a 200 GB IDE disk containing the OS for my old OpenSolaris ZFS NAS; it must
be getting on for 9 years.

I advise having a backup of every drive you own, preferably two. I built a
new NAS last week, 12x 4 TB drives in a raidz2 configuration; with ZFS
snapshots, it fulfills 2 of the 3 requirements for backup (redundancy and
versioning), while I use CrashPlan for cloud backup (distribution, the third
requirement). The nice thing about CrashPlan is that my PCs can back up to my
NAS as well, so restores are nice and quick; pulling from the internet is a
last resort.

~~~
mikevm
The one thing to know about cheap consumer cloud backup solutions like
CrashPlan and Backblaze is that they only have one copy of your data. So if
their RAID array where your data is stored dies and cannot be rebuilt, it's
all gone. You can Google for a few disaster stories about both companies.

~~~
barrkel
Like I said, they are only one third of my backup strategy. My house burning
down, or someone breaking in and going into my attic to steal my 30kg 4U
server, should be the only two realistic scenarios in which I will need to
rely on CrashPlan.

~~~
Sami_Lehtinen
I do same, but in reverse. I use cloud servers, with versioning backups, but
still beam additonal backups back to office. Just to survive total data center
destruction disaster.

------
nknighthb
These numbers line up nicely with what I've experienced on much smaller scales
(I've never personally cared about more than a few hundred spinning drives at
once), which is that in a nice mix of old, middle-aged, and new drives, 5-10%
go kaput each year.

Incidentally, about "consumer-grade drives", the last time I looked into this,
I was led to believe that if it's SATA and 7200RPM (or less), there's no
hardware distinction. It's just firmware. Consumer drives try very hard to
recover data from a bad sector, while Enterprise/RAID drives have a recovery
time limit to prevent them being unnecessarily dropped from an array (which
will have its own recovery mechanisms). That's it.
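
If you want to poke at that knob yourself, a rough sketch (assuming
smartmontools and a drive that exposes SCT Error Recovery Control; plenty of
consumer drives won't let you change it):

    # Query and cap the error recovery timeout (SCT ERC, a.k.a. TLER/CCTL).
    # The value is in tenths of a second, so 70 means 7.0 seconds.
    import subprocess

    def show_erc(device):
        subprocess.run(["smartctl", "-l", "scterc", device], check=True)

    def cap_erc(device, deciseconds=70):
        subprocess.run(["smartctl", "-l",
                        "scterc,%d,%d" % (deciseconds, deciseconds), device],
                       check=True)

    show_erc("/dev/sda")   # hypothetical device path
    cap_erc("/dev/sda")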

~~~
jve
Well, Intel tells us a different story[1] that promises 1\. more performance,
2\. less vibration (which improves performance), 3\. ECC memory.

There is a long feature reference that mentions things like higher RPM, higher
quality, larger magnets, air turbulence control, dual processors, etc.

I'm not an expert in hard drives; I just remember reading this stuff when
trying to figure out whether I needed it. In the end, for my small-scale
corporate file server, I chose ZFS raidz with consumer-grade disk drives.

[1] Enterprise-class versus Desktop-class Hard Drives:
[http://download.intel.com/support/motherboards/server/sb/ent...](http://download.intel.com/support/motherboards/server/sb/enterprise_class_versus_desktop_class_hard_drives_.pdf)

~~~
nknighthb
A _marketing_ team at Intel tells us a vague story about what is either a very
vague or very specific set of drives, or may be about an entirely hypothetical
set of drives. It's not clear.

They even admit to the problem themselves at the end:

"Some hard drive manufactures may differentiate enterprise from desktop drives
by not testing certain enterprise-class features, validate the drives with
different test criteria, or disable enterprise-class features on a desktop
class hard drives so they can market and price them accordingly. Other
manufacturers have different architectures and designs for the two drive
classes. It can be difficult to get detailed information and specifications on
different drive modes."

That PDF tells me nothing interesting. It's marketing crap for clueless
executives, not a technical analysis. (Given their absurd obsession with
"Higher RPM" as some sort of defining characteristic, it's not even relevant
to the statement I made in the first place.)

------
brongondwana
I wonder how many will die before they age out of usefulness. If you're still
spinning 250 GB hard disks which are using the same power and space that a
4 TB drive could be using - it might not actually be economically sensible to
keep running them.

Certainly the old 9.1 GB SCSI disks that were so popular 10 years ago are well
past being justifiable to give power to now.

~~~
outworlder
That may be true for Backblaze.

But these drives will still be useful. What about, say, shipping them to NGOs
located in Africa?

~~~
skriticos2
That's a horrible idea! The logistics of collecting and shipping the old tech
to Africa alone would probably cost more than just buying in bulk from China
and shipping in one batch.

But there are other considerations:

* This would also result in a big pile of waste in Africa, as their recycling infrastructure is limited.

* They need food, shelter, stable politics and functional education before they can make any use of computers.

* They have a limited energy supply. Low-powered tablets / laptops are much more useful.

~~~
JoeAltmaier
Agreed. Better yet, keep them in a Google data center. It's far more efficient
to make cloud storage available at a reasonable price. How about a diskless
laptop that boots from cloud storage? That would be a sweet spot economically
(depending on network costs, I guess).

~~~
ancarda
>How about a diskless laptop that boots from cloud storage?

While this is a cool idea, how much bandwidth would you need to boot at
roughly the same speed you do today? Some SSDs have 500 MB/s (~4 Gbit) read.
You'd need to have gigabit networking with almost zero latency for that to
perform well.

I suppose a smaller OS like Chrome OS would be perfect for that. Even if this
worked on fiber, how would you boot over a cellular network? Aside from
costing you a ton of money, it would take forever to download.

~~~
JoeAltmaier
Time is a fungible commodity. E.g. boot time may affect bottom line (rhyme!)
in the EU; it can be different in Africa.

------
mgraczyk
Microsoft did a meta-analysis of general hardware failure based on the error
reports sent by literally millions of consumer PCs. Although the results
weren't particularly interesting (hard drives fail the most, at rates
consistent with what Backblaze observed in the posted link), I was impressed
by the sheer volume of data available to the study.

[http://research.microsoft.com/pubs/144888/eurosys84-nighting...](http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf)

~~~
ComputerGuru
Thank you, this is a real gem. I'm very grateful for MS Research in general,
as they do some very interesting things that are only possible when you're the
size of Microsoft et al., but I really do wish more academic papers came out
of huge institutions. This knowledge really is worth sharing!

------
mavhc
tl;dr: Here's some statistically significant data on how 25000 drives have
worked over 4 years, please now comment on how the 3 drives you've owned died.

~~~
atYevP
I enjoyed this comment.

------
confluence
You can use the default warranty information to figure out a lower bound on
useful life. Companies price the value of the warranty into the product and
perform statistical QA to ensure that 95%-99% of all products released will
work correctly for the length of the standard warranty. Also, extended
warranties aren't worth the cost. Just replace the product when it breaks.

~~~
JoeAltmaier
Agreed. They pretty much assume you won't return a failed drive either. Since
drives last about as long as the warranty, you have only a few weeks at best
to remember to send one in for replacement.

I've been hit-and-miss: I've gotten a few drives replaced and had a few
warranties expire. But pretty much every disk drive fails eventually.

Think about it - it's a commodity. If it lasted much longer than the warranty,
they spent too much on robustness for the price.

------
toyg
_If some of them live a long, long time, it makes it hard to compute the
average. Also, a few outliers can throw off the average and make it less
useful._

Proper statistical analysis would help you there.

~~~
marcosdumay
> Proper statistical analysis would help you there.

Yes, if you know the probability distribution. If you don't know the
distribution, you can not calculate your confidence, and thus can not do a
proper statistical analysis.

And, guess what, nobody knows the probability distribution of hard drive
failures. That's exactly what they are trying to find out.

~~~
xixi77
There are actually many methods in survival analysis -- just as in the rest of
statistics -- that do not impose strict distributional assumptions, and
account for the fact that many drives are still operational. But as someone
else mentioned, the median is also a good statistic to report :)
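
A minimal sketch of the idea with the Python lifelines package (made-up
numbers; drives still running when observation stops are treated as
right-censored):

    from lifelines import KaplanMeierFitter

    # Hypothetical ages in years, and whether each drive failed (1) or was
    # still running when we stopped looking (0, i.e. censored).
    ages_years = [0.3, 1.2, 2.5, 3.1, 3.8, 4.0, 4.0, 4.0, 4.0, 4.0]
    failed     = [1,   1,   0,   1,   0,   0,   0,   0,   0,   0]

    kmf = KaplanMeierFitter()
    kmf.fit(ages_years, event_observed=failed)
    print(kmf.survival_function_)      # estimated fraction surviving vs. age
    print(kmf.median_survival_time_)   # stays infinite until >50% have failed

No distributional assumption needed, and the still-alive drives are used
properly instead of being thrown away or counted as failures.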

------
thatthatis
Articles like this one are the reason I went with backblaze over carbonite. It
may not mean their tech is any better, but it does 1) increase my confidence
in them and 2) teach me something interesting each time. Both of those are, in
my book, good reasons for giving them my money.

~~~
atYevP
And we love your money! Please tell more people to give us your money ;-)
Seriously though, we're glad we can entertain you and help you back up. We
figure being open about this stuff leads to awesome discussion and sometimes,
like in the case of our storage pods, we learn a thing or two from the world
at large! It's a win/win :)

------
JoeAltmaier
It's complicated. Here's a link to a paper modeling disk drive failure in data
centers. tl;dr: it's about half a percent per 1000 hours of operation.

[http://www.ssrc.ucsc.edu/Papers/xin-
mascots05.pdf](http://www.ssrc.ucsc.edu/Papers/xin-mascots05.pdf)
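
Back-of-the-envelope (my arithmetic, not the paper's), that's roughly 4-5% of
drives per year if they spin 24/7:

    # 0.5% per 1000 hours of operation, annualized for drives running 24/7.
    rate_per_1000h = 0.005
    hours_per_year = 24 * 365          # 8760
    annual = rate_per_1000h * hours_per_year / 1000
    print("roughly {:.1%} of drives per year".format(annual))   # ~4.4%

which is in the same ballpark as the annual rates people report elsewhere in
this thread.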

------
jankey
Approximately 40 PB of raw storage in our datacentres here, half of it
Supermicro servers with whatever disks they came with, half HP ProLiant with
$$$ HP enterprise-class disks, all < 5 years old, so quite comparable to the
Backblaze situation.

80% of drives surviving after 5 years seems right; this is what we're seeing
as well. The hardware is decommissioned faster than the drives fail.

------
nether
Does ANYONE know what hard drives Google, Facebook, and Dropbox use at their
datacenters? This 2006 article says Google buys Seagate:
[http://tech.fortune.cnn.com/2006/11/16/seagate-ceo-google-
we...](http://tech.fortune.cnn.com/2006/11/16/seagate-ceo-google-web-service-
giants-upgrade-storage-almost-yearly/).

~~~
wmf
There are only three disk vendors now, and I would assume that large customers
buy from at least two of them. But knowing this information won't help you,
because a new model from company X may be a dud even if the company's previous
models were reliable. And by the time any model has accumulated enough
reliability data to tell whether it's reliable or not, it's obsolete and you
don't want to buy it.

------
ck2
Part of the problem is the move by manufacturers to have consumers basically
burn-in test their products for them as a cost reduction, shifting the expense
to retailers.

~~~
svantana
Well, ultimately it's the consumer/user who does the work. I've considered
buying second-hand disks just to get around this problem, although that
usually comes with other issues...

------
WatchDog
Do they plan on sharing any data on which vendors and models have the highest
failure rates?

~~~
nisa
Backblaze wrote about it here (2011):
[http://blog.backblaze.com/2011/07/20/petabytes-on-a-
budget-v...](http://blog.backblaze.com/2011/07/20/petabytes-on-a-
budget-v2-0revealing-more-secrets/) and here (2013):
[http://blog.backblaze.com/2013/02/20/180tb-of-good-
vibration...](http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-
storage-pod-3-0/)

Quoting the first (2011) article:

_We are constantly looking at new hard drives, evaluating them for reliability
and power consumption. The Hitachi 3TB drive (Hitachi Deskstar 5K3000
HDS5C3030ALA630) is our current favorite for both its low power demand and
astounding reliability. The Western Digital and Seagate equivalents we tested
saw much higher rates of popping out of RAID arrays and drive failure. Even
the Western Digital Enterprise Hard Drives had the same high failure rates.
The Hitachi drives, on the other hand, perform wonderfully._

In the second article they say the WD Red is 2nd in reliability (the WD Red
did not exist in 2011). I'm happy that I've got a cheap Hitachi Ultrastar. But
who knows.

As a personal anecdote: WD Green failure rates are huge here. 240 drives in
desktop machines running 24/7, and I've replaced at least 20 drives in the
last 12 months.

~~~
chemmail
While WD has taken over Hitachi, they had to give Hitachi's desktop 3.5" HDD
operations to Toshiba as part of the deal in which they took over Toshiba's
(TSDT) Thailand factory. So if you want Hitachi's good 'ol reliability, buy
Toshiba desktop drives.

------
nsxwolf
My oldest drive currently in use is the original 20MB disk on my IBM PS/2
Model 30.

------
cmer
Does anybody know how long floppy disks, diskettes and their respective drives
last?

Odd question, but I've always wondered. These things just seem to last
forever.

~~~
lgeek
According to this article[0], the data from old floppy disks is pretty much
gone.

[0]
[http://ascii.textfiles.com/archives/3191](http://ascii.textfiles.com/archives/3191)

~~~
stormbrew
I recently went through all the old 3.5" floppies I could find and most of
them were still quite readable. This seems like a bit of hysteria to me. I
didn't preserve as many of my 5.25" floppies, but I also don't have a drive
for them. My 3.5" drive is an old USB one that came from a Toshiba laptop
circa 2000. It was in a drive bay that was interchangeable with a CD-ROM
drive.

------
jonlucc
Here's Google's 2007 survey.

[http://static.googleusercontent.com/external_content/untrust...](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf)

------
dmourati
Ugh. Backblaze is one of those companies with an extraordinarily poor design
that they flaunt and "open source" as if anyone would follow their lead. Take
a look at the physical design of their system and combine that with the
published data. Consider that removing any hard drive from their setup
requires physically removing a 4U rackmount storage pod from the rack.
[http://blog.backblaze.com/2011/07/20/petabytes-on-a-
budget-v...](http://blog.backblaze.com/2011/07/20/petabytes-on-a-
budget-v2-0revealing-more-secrets/)

Also, no hardware raid, battery, or cap.

Source: worked at Eye-Fi, built 2PB storage

~~~
brianwski
Disclaimer: I work at Backblaze but I'm on the software side, I barely ever
touch the storage pods anymore.

It is not true that the pod team must remove the 4U server from the rack. It
slides out like a drawer (no tools required, takes maybe 10 seconds). The
drive or motherboard is then replaced, and you slide the drawer back in. So
the 4U server must slide 18 inches one way, but zero cables have to be
unplugged or replugged when done. This takes only one technician and no
"server lift"; the drawer supports all the weight.

I'm not defending this design, just correcting a mistake. Backblaze frankly
"makes do" with this design because nobody will step up and make anything that
fits our needs better. The number 1 criterion is total system cost over the
lifetime of the system, INCLUDING all the time spent on salaries of datacenter
techs dealing with the pods. "Raw I/O performance" is not that important for
backup, so trying to sell us an awesome EMC or NetApp that costs 10x as much
and has 10x the raw I/O performance is not very compelling to us. But if you
came up with a design that makes it faster for our datacenter technicians to
replace a drive while not significantly increasing overall costs in another
area, we SURELY would listen.

~~~
dmourati
Thanks for the clarification. That the pods are on rails was never made clear
to me. Still, I count that as "physically removing a 4U rackmount storage
pod." Those suckers cannot be light. 10 seconds sounds rather fast; I don't
imagine you could do it that fast for any of the upper pods.

While I don't recommend them outright, we settled on 3U boxes from SuperMicro.
[http://www.supermicro.com/products/chassis/3u/837/sc837e26-r...](http://www.supermicro.com/products/chassis/3u/837/sc837e26-rjbod1.cfm)

We somewhat affectionately dubbed them "mullets" as in business in the front,
party in the rear.

They make 4U devices as well. Cost was about $1000. We added LSI MegaRAID 9280
controllers, about another $1500, and ran mini-SAS back to a controller node
responsible for 4 JBODs.

~~~
secabeen
It's a different trade-off. The Supermicro boxes use drive trays, so swapping
a hard drive requires a datacenter tech to handle mounting and unmounting the
tray. The pods just drop drives right in. They've traded tray-mounting work
for chassis sliding.

------
pyGuru
Just out of curiosity, I wonder what the savings would amount to if this
company used something like SpinRite to fix / recover failing drives? Although
I've never used it, from what I hear it's pretty good at saving drives...

~~~
jlgaddis
[http://attrition.org/errata/charlatan/steve_gibson/](http://attrition.org/errata/charlatan/steve_gibson/)

------
maratd
It depends on the warranty period of the drive. Your hard drive manufacturer
knows precisely how long the drive will last and sets their warranty period to
expire right before your drive gives up the ghost.

~~~
gambiting
Yeah, if that were true it would be completely against EU regulations, so you
could sue the manufacturers and win millions. So if you have any proof that
they are doing that, you are practically a millionaire; I don't know why you
are not on your way to court yet.

~~~
driverdan
How so? That is how warranties work. They're designed to protect you against
premature failure. Unless it's a necessary competitive advantage, no company
is going to warranty something past its expected lifetime. And in that case
it's going to be included in the price.

~~~
gambiting
Designing a device to fail after a pre-determined amount of time is very much
against the law. Expected lifetime is different, but that is not what the
previous poster was suggesting.

------
headgasket
We don't know, but Backblaze sure knows how to: 1\. print Backblaze's brand as
often as linguistically possible in the same article; 2\. get to the top of HN
over a 50-year-old technology's failure rate without clustering by brand, spin
speed, density, etc. (read: not much).

Am I unaware that there are now paid spots on the first page of HN? (It would
make sense, I guess, from a business perspective.)

TIA to anyone who can be of help on this. Cheerio (and good luck to Backblaze,
backblaze a path to a backblazing success!)

~~~
atYevP
Yev from Backblaze | One might say that we are "Backblazing a trail"? To my
knowledge we've never paid for a top HN article. People tend to like our pod-
and storage-related stuff and we're always thrilled to see it on here, so we
come and chat about it along with everyone. The discussions are awesome. I'll
chat with the writing team to see if we can take one or two Backblazes out of
future posts ;-)

~~~
headgasket
Hey, it was tongue-in-cheek. Your service looks awesome, and as an awareness
piece it was bang on. Although I don't recall hearing about backblaze before,
the name has been ringing in my ears all day. Keep up the good work and good
luck!

------
Ellipsis753
Damn, my last two hard drives have each failed at almost exactly the 3-year
mark. Did I have bad luck, or am I being too mean to them? My computer is on
mostly all the time and is often reading files throughout the night (for slow
uploads, for example). Does it make a difference how often you read/write a
drive, or only how long it's spun up? One died suddenly without any warning in
the SMART data, and the other developed bad blocks and started to struggle
reading data.

~~~
cbhl
The data presented in the article only makes sense if you buy a large quantity
of hard drives; if you have only had a handful, then you were just unlucky.

I suspect the reason why people do "burn in" tests on hard drives is to make
drives that suffer from early failure ("infant mortality" as described in the
article) fail early enough that you can RMA them with the manufacturer. Apart
from that, I don't think there's much you can do to improve your chances.

------
theandrewbailey
Extremely useful info here. Most of my HDs have been running for years and
still work fine, but you go online, and all you read about is that HDs are
horrendously unreliable and all fail after a year or two. (manufacturer
propaganda, anyone?)

The optical drives I've had, on the other hand, are actually unreliable. They
all seem to break down after about four years, and I don't use them all that
often!

~~~
baruch
Your disks are going fine, do you feel the urge to report about it in the
reviews on amazon or newegg or where you bought it from?

There is an inherent bias in the reviews. Which makes the backblaze report so
interesting, they have less of a bias though they do not report actual disk
vendors and models to really draw direct inference only the general trend.

------
leapius
I think the floods have probably been a factor in reduced reliability. It took
forever for prices to come back down to where they were, and the manufacturers
are probably cutting corners everywhere to save costs. Why ramp up factories
when the tech itself is on the way out?

I think this is most evident in the reduced warranty periods, compared to
before, when 5 years was quite normal.

~~~
atYevP
Yev from Backblaze | Yes, prices still haven't come back down to pre-flood
levels unfortunately. They used to drop about 4% per month in price (over
time) but now it's going down more slowly.

------
Semaphor
One pure-storage disk I use has been alive with top S.M.A.R.T. values for over
8 years now. The one with more regular reads & writes is 5 years old now. And
while I have local backups, I'm now finally in the process of uploading 800 GB
of (semi-)important data to Backblaze. I'll probably be done in another month.

------
bobbles
Since there seem to be Backblaze staff posting in the thread: is there any
way, as a "personal" (non-business) user, to have multiple PCs configured on
one account? Think of something like a family plan.

I can't seem to find relevant information on the website anywhere for this.

------
wrongc0ntinent
It'd be very useful to have more detailed information about read/write volume
and capacity; MTBF should vary a lot depending on those. Until then I'll keep
being paranoid.

------
ffrryuu
Has Backblaze budgeted for the failure-rate cliff that is coming?

~~~
crpatino
Based on the linear extrapolation they make from the third-year failure data
into the future (to calculate an estimated half-life of roughly 6 years), I
would say they probably have not.

A little statistics is a dangerous thing.

~~~
ffrryuu
Well, as long as they IPO before that happens...

~~~
atYevP
Yev from Backblaze | I know, right?

~~~
ffrryuu
I'm just jaded, you can ignore me :)

------
aaronz8
I was going to say, I learned about this in class! Then I read the article and
the link to "CMU’s study." ... My professor was one of the authors. Go Garth
Gibson!

------
mrottenkolber
Spot on. I have seen my drives either fail quite soon or "never". I am still
running one of my very early 40 GB hard disks. It must be about 8 years old.

------
kenneth_reitz
Not long.

~~~
yen223
Surprisingly long, all things considered.

------
kimonos
Wow! Great info in here!

