- These were in 8 different machines in 8 different locations.
- 7 were in desktops, 1 was in a laptop
- All were running Windows
- All purchased from NewEgg
- Most of these were gifts for other people.
After this saga, I've concluded that there are probably two root causes leading to high failure rates:
(1) Something about installing an after-market SSD in a desktop, probably related to power fluctuations, increases the likelihood of failure.
(2) NewEgg only offers a 30-day warranty on their SSDs. You can't even purchase a third-party extended warranty. So I suspect that you'll see a higher failure rate from NewEgg purchases than from a merchant who offers a 2- or 3-year warranty. (But NewEgg's prices are so damn good!)
I should also say that the SSDs that I have purchased with laptops - 2 from Dell (2008, 2009) and 1 from IBM (2009) - have not failed.
Other than that, there's no difference whether the SSD is plugged in by "a trained professional" or by you.
If you're not sure whether your PSU is good, pick it up and see how heavy it is. If it has a fair bit of heft, it's probably good. If it feels light, it's definitely bad.
You could even write a script that analyzes whether the waveform is too distorted and, if so, stores the samples surrounding the power glitches. You could do that with a BeagleBoard DSP, almost for free.
Maybe you shut down or power on a lot of computers at the same time, start engines, whatever. Your SSD failure rate does not make sense otherwise (15 days, come on!!).
So something specific to my environment is not a factor.
Most SSD crashes are because the firmware managed to corrupt something, and thus you can fix most failed SSDs by reflashing their firmware, assuming the drive has a method of doing this without special equipment. (Doing so would, of course, erase the entire drive.)
It's too much to hope for that Windows is the common factor here, isn't it?
If you have an SSD on OS X: I discovered that in many cases you can "reset" the drive and regain performance. Basically, if your drive implements the "security erase" command, you can use the following procedure: take the cover off the back of your MacBook, remove the hard drive, leave the SATA cable within reach, and boot the GParted Live CD.
The tricky part: you have to plug in the SSD >after< your machine boots, so it's not in the "frozen" state. The other tricky part is that you first have to set a security password to do the erase. The secure erase then also erases the password.
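For reference, here's roughly what that looks like with hdparm on the GParted Live CD. The device name and password are just examples; check that hdparm -I reports the drive as "not frozen" before you proceed:

    # verify the drive's security state; it must say "not frozen"
    hdparm -I /dev/sdb
    # set a temporary user password (required before an erase is allowed)
    hdparm --user-master u --security-set-pass p /dev/sdb
    # issue the secure erase; this wipes the drive and clears the password
    hdparm --user-master u --security-erase p /dev/sdb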
I know first-hand that this works on Crucial C300 RealSSD and I've seen lots of reports that this works with OCZ drives. This is also useful for doing new installs on an SSD if you've gotten a new machine. (My situation)
WARNING: DO NOT try to use a Crucial C300 SSD with a 2011 i5 or i7 13" MacBook Pro. The problem is in the hardware, namely electromagnetic interference in the SATA cable when trying to run 6 Gb/s SATA. Wrapping it with aluminum foil doesn't always work. Buying an upgraded SATA cable doesn't always work either. Buying a 2010 MacBook Pro instead always works. You have been warned.
I don't care how fast a computer is if I have to reinstall it from scratch every few months.
Sure, all my working files are safely on Dropbox (documents) or GitHub (code), but that doesn't help when I have to reinstall my operating system and software every three months!
(A complete Time Machine restore of my laptop over Firewire takes 2-3 hours. I sleep better at night, knowing this.)
Such a backup might not be especially fresh, but once it's up you can sync over files from a fresher, automated backup.
Anything important is part of a pool.
Rebuilding with Cobbler + Puppet should take about 10 minutes, depending on your hardware and network, plus the time to restore app files.
This is all trivial when weighed against the speed and, to a lesser extent, the power savings you get with SSDs.
2. Restoring a Windows machine with a lot of software is very annoying and takes a lot of time (clicking dumb "I agree" buttons, "reboot the computer now", entering serials, etc.), but with Ubuntu or OS X it's a less annoying experience. A good package manager calculates the dependencies for your software. The only exception is when you need to install some proprietary software (less frequently than on Windows) or you need a bleeding-edge version.
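For what it's worth, on Ubuntu/Debian you can make that restore almost hands-off by saving and replaying the package list (a rough sketch; the file name is just an example):

    # on the old system: save the list of installed packages
    dpkg --get-selections > packages.list
    # on the fresh install: mark the same packages for installation
    sudo dpkg --set-selections < packages.list
    sudo apt-get dselect-upgrade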
1. I only tend to update it after running it and finding that something I was expecting to be installed wasn't (really I should switch to using puppet/cdist/similar for this)
2. I have to set up Firefox manually (downloading extensions etc)
Actually, come to think of it, what is the failure mode for a drive when it's all worn out? Is it catastrophic data loss or is it just an unwritable drive?
MLC flash is also around 5K erase cycles (and about the same number of reads until you need to do an erase-and-rewrite). Multiplied out and properly wear-leveled this is not a big deal.
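Back of the envelope, taking those numbers at face value: an 80 GB drive with perfect wear leveling can absorb roughly 80 GB × 5,000 = 400 TB of writes, which at 20 GB of writes per day is over 50 years.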
We might be seeing crappy wear leveling, or badly written firmware, or the need for more ECC bits (yes, MLC has these, and they are /not/ optional) than the EE types think they can get away with.
I've been writing flash file systems since 1991. They're fun as hell to work with.
Could you recommend any good resources for someone new to file system technology that is interested in their inner-workings? Thanks!
Windows NT File System Internals (Rajeev Nagar) -- you can find this used. I believe his web site had a PDF copy for a while, sans pictures.
My favorite book on transactions is Bernstein's Principles of Transaction Processing (may also be available as a PDF somewhere, from the author).
Read the v6 Unix file system code. That will date me, but it's /simple/ and it works. You can move up from there.
btrfs is neat. I haven't looked at the code.
If you run smartctl -A on your SSD device then, for e.g. Intel, you can see attribute 0xE9 / 233 -- the media wearout indicator, starting at 100 (mine's at 98 after 14 months, so this hopefully indicates 2% of the write wear-out). You can also see how many reserved blocks it has used (a reserved block is consumed when a cell fails to erase and rewrite).
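For example (the device name is whatever your OS assigns; the attribute labels below are how smartctl reports them on Intel drives -- other vendors use different names):

    # dump the vendor-specific SMART attributes
    smartctl -A /dev/sda
    # on Intel drives, look for these two lines in the output:
    #   233 Media_Wearout_Indicator  (counts down from 100)
    #   232 Available_Reservd_Space  (counts down as reserve blocks are used)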
Even if you can avoid catastrophe with regular, automated backups (I use two Time Machine drives myself), what about bitrot? If the SSD you've been diligently backing up over months has been slowly rotting, can you have any faith at all in the backups?
PS: If you want to get fancy you can set up a bootable partition on your HDD and then back up the SSD to that partition.
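A minimal sketch of that, assuming Linux and example device/path names:

    # image the whole SSD onto the HDD partition (run from a live CD or with
    # the SSD unmounted, so the image is consistent)
    dd if=/dev/sda of=/mnt/hdd-backup/ssd.img bs=1M
    # later, restore the image onto a replacement drive
    dd if=/mnt/hdd-backup/ssd.img of=/dev/sda bs=1M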
I keep regular Time Machine backups, Backblaze off-site backups, and much of my "current" work is done in Dropbox, so it's synced quickly, but a hard drive loss still means downtime and the loss of whatever I was working on. Of all the computing users I know in everyday life, I am the most backed up amongst them. SSDs are rapidly going mainstream, and the impact of hardware failure for the "mainstream" user isn't something that should be marginalized.
An earlier failure may even be an advantage here because that way you have less time to accumulate important data before learning your lesson...
To be counted, the return had to be made directly through the retailer, which is not always the case since it is possible to return a drive directly to the manufacturer; such returns, however, represent a minority in the first year.
- Maxtor 1.04% (previously 1.73%)
- Western Digital 1.45% (previously 0.99%)
- Seagate 2.13% (previously 2.58%)
- Samsung 2.47% (previously 1.93%)
- Hitachi 3.39% (previously 0.92%)
Hitachi, which was first in the previous ranking, has plummeted! Western Digital retains second place despite an increased failure rate, while Maxtor now occupies first place.
More specifically the failure rate for 1TB drives:
- 5.76%: Hitachi Deskstar 7K1000.B
- 5.20%: Hitachi Deskstar 7K1000.C
- 3.68%: Seagate Barracuda 7200.11
- 3.37%: Samsung SpinPoint F1
- 2.51%: Seagate Barracuda 7200.12
- 2.37%: WD Caviar Green WD10EARS
- 2.10%: Seagate Barracuda LP
- 1.57%: Samsung SpinPoint F3
- 1.55%: WD Caviar Green WD10EADS
- 1.35%: WD Caviar Black WD1001FALS
- 1.24%: Maxtor DiamondMax 23
Hitachi is logically the worst placed, with two separate product lines in the ranking! What about the 2 TB versions?
- 9.71%: WD Caviar Black WD2001FASS
- 6.87%: Hitachi Deskstar 7K2000
- 4.83%: WD Caviar Green WD20EARS
- 4.35%: Seagate Barracuda LP
- 4.17%: Samsung EcoGreen F3
- 2.90%: WD Caviar Green WD20EADS
Overall, the recorded failure rates are bad. They don't exactly inspire confidence in entrusting 2 TB of data to one of these disks alone: mirroring would not be overkill to secure your data. Logically, the 7200 rpm disks are less reliable than the 5400/5900 rpm ones, with almost 10% for the Western Digital model!
For the first time, SSDs are also included in this type of article. The failure rates recorded, by manufacturer:
- Intel 0.59%
- Corsair 2.17%
- Crucial 2.25%
- Kingston 2.39%
- OCZ 2.93%
Intel stands out here with the most flattering failure rate. Among the models that sold more than 100 units, none shows a return rate above 5%.
Intel 0.3%, Kingston 1.2%, Crucial 1.9%, Corsair 2.7%, OCZ 3.5%.
Edit: and on the last page of the previous link there is a list of the current models sold between 10/01/2010 and 04/01/2011 with the worst reliability track record:
6.7%: OCZ Agility 2 120 GB; 3.7%: OCZ Agility 2 60 GB; 3.6%: OCZ Agility 2 40 GB; 3.5%: OCZ Agility 2 90 GB; 3.5%: OCZ Vertex 2 240 GB.
What this tells me is that there are probably some differences in the environment that you use your SSD in that can have disproportionate effects on its lifetime.
I admit that I tend to skim Jeff Atwood because his occasional tasty flakes of insight are thickly coated with delicious but useless fluff.
Here, there's none of that context. Drives, like women, are irreparably either hot or crazy and the only question left to users (or men) is whether they're "worth it". That's just sexist, sorry. The only people who have an excuse for thinking it's not are the ones who get the sitcom reference.
If a woman wrote this about men, would it be sexist? (I expect you'll say yes.)
If a lesbian woman wrote this about other women, would it be sexist? (I feel like you're forced to say yes since you are committing to the idea being sexist independent of the sex of the speaker.)
Is it possible for a man like Barney to be honestly and accurately analyzing trends of the women in his life, given that he only ranks women shallowly?
Careful now. It's not necessarily sexist for Barney to _only care about looks,_ or at least: since we all care about looks to some degree, it is dangerous to imply that caring about looks is sexist. And if his analysis of his desires is based on his decision to only try for attractiveness, how is that analysis sexist instead of revealing the frailty of being so shallow?
So the comparison is basically that there are two orthogonal traits, one negative and one positive. It is not "Drives, like women"; it is "Drives have orthogonal traits, and evaluation of them therefore proceeds along the Barney Analysis."
It's easy for a typical man in our society to shrug off a comment equating his looks with his worth. It's much harder for a woman. "Should" it be? Of course not, but that's sort of the point: let's try laying off things for a while before demanding logical equality, OK?
The problem I have with it is that it is then used to marginalize _other_ local subjectivities like r/mensrights (which admittedly has its share of ludicrous opinions, do _not_ admit you don't care you were circumcised) for the purpose of serving the "most important" 'ism.
In reality there are a whole bunch of inequalities in a whole bunch of different directions, and it doesn't make sense just to target the one that _some_ people have decided is the most important. (In all honesty, for instance, I think racial inequality is a much bigger problem than sexism in our society now that women are becoming much better educated, whereas we still have a lot of 'bad part[s] of town.')
In other words, we are not escaping heteronormative forms until we can actually work with logical equality, and the quickest way through the forest is the straightest. Why not just admit the problem is with beauty and our valuation of it--and yes, men might be guilty of this more by percentage, but that's not the source of the problem--instead of with gender?
The way I see it, Barney only judges women by how hot/crazy they are (or he judges them by more than that, but those are the two primary factors in his judgement). Given that everyone judges everyone based on some traits, and these traits have different weights attached to them, why is it sexist for Barney to do that? He's just a shallow person, but not qualitatively different than anyone else (only quantitatively).
Also, he is not saying "this is how you should judge women", only "this is how I judge women". I would be fine with a woman saying that about a guy, or a woman about a woman. It wouldn't be a woman I'd like to date, but I don't consider it an attack on my gender as a whole.
It was objectifying a human, reducing her to a series of inputs and outputs, no better than a microwave oven. It would not have been acceptable whether the victim was male or female.
Imagine how the article would read if the quote was from a fictional female or gay character on the topic of what type of male they were attracted to. Just one data point, but it would make me wonder why that's there at all and what it adds to the discussion of SSD reliability.
Then I stumbled upon this in the Apple store when configuring the disk options: "Your MacBook Pro comes standard with a 5400-rpm Serial ATA hard drive. Or you can choose a solid-state drive that offers enhanced durability."
So, why do they say that? Which SSD do they build in? Is it a particularly special SSD with enhanced durability? Even more durable than regular hard disks (as they are claiming)?
What I describe above is just an excuse they can (and probably do) use. The reality is that it's just marketing text to up-sell you something. I bet it works in many cases.
SSDs have long been marketed as being more durable than mechanical HDDs, and are quite often used in environments subject to sudden shock or vibration.
That particular marketing angle goes all the way back to the classic "solid state" marketing and the removal of vacuum tubes (valves) from computing decades ago, and this irrespective of the acceleration sensors and other schemes intended to reduce the number of head crashes in HDDs.
With LCDs, SD/CF/flash/downloads and now SSDs, the remaining non-solid-state devices are being removed from computing, following the same path as cathode ray tubes (CRTs), floppies, CD/DVD drives and HDDs. As rugged as CRTs and HDDs have become, vacuum tubes and flying heads still do not deal with shock and vibration particularly well.
I wonder if, as flash memory gets cheaper, we'll see SLC SSDs getting more popular.
My experience with regular consumer-grade hard disks indicates that regular backups of any data I want to keep are always a good idea. The X25-M drives mentioned in the article use MLC flash, and so might some of the other SSDs he has worn out. Other potential factors could be heat, malfunctioning hardware, etc. -- there is not enough data given to even guess what the problem could be.
If excessive writes are the problem, an idea I have had is to protect a large MLC disk by using a union filesystem to direct all writes to a smaller SLC disk. I have no idea how much overhead the union filesystem itself introduces, but it appears to be a good way to combine SLC and MLC drives, keep costs down, and not rely too much on "magic" MLC firmware.
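A rough sketch of the idea with unionfs-fuse, assuming example mount points (I haven't measured the overhead either):

    # all new writes land on the small SLC drive; the big MLC drive stays read-only
    unionfs-fuse -o cow /mnt/slc=RW:/mnt/mlc=RO /mnt/storage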
I do make regular (image) backups of all my SSDs, so when one fails I can quickly replace it and restore the image. Apart from that, I keep all my source code and work-related stuff in a version control system hosted in a datacenter.
Capacity: 80.03 GB (80,026,361,856 bytes)
Model: INTEL SSDSA2M080G2GC
Serial Number: CVPO0042012S080BGN
smartctl 5.40 2010-10-16 r3189 [FreeBSD 8.1-RELEASE i386]
Model Family: Intel X18-M/X25-M/X25-V G2 SSDs
Device Model: INTEL SSDSA2M080G2GC
Serial Number: CVPO0054037B080BGN
Firmware Version: 2CV102HD
User Capacity: 80,026,361,856 bytes
Device Model: INTEL SSDSA2M080G2GC
Serial Number: CVPO951000ZJ080BGN
Firmware Version: 2CV102HA
User Capacity: 80,026,361,856 bytes
- Brand does matter - the failure rate on Intel drives is a lot lower than other brands.
- The system you put it in also makes a difference. For example, certain generations of MacBook Pros are just not happy with certain SSDs. We've had customers who had two or three failures with a certain brand/model/chipset, and after changing to a different SSD they haven't had problems.
- The failure rate for SSDs overall is higher than that for HDDs, but lower than the failure rate for some other products (such as Graphics cards).
- Every SSD we have sold went into a custom built system or was installed afterwards into a laptop, so I don't buy the argument that problems are caused because it's installed by an end user and not at the factory.
- They are blindingly fast. In my opinion, it's better for 90% of users to put a new Intel SSD into their existing system, than it is to buy a new system.
Long answer is long and I'm most certainly not an expert. It's just a consequence of how NAND memory works. You can read more about this in the links below (one is a review of a particular SSD, but a couple of sections explain how SSDs work in great detail):
Also note that mechanical hard drives fail a lot, too.
And if a hard drive starts to go, sometimes you can hear it failing. I once backed up a 30 GB hard drive that I had to wobble up and down while it was being backed up.
While the speed of these things can vary a lot, so does reliability - and that won't show up in a benchmark. I noticed that a lot of the mentioned SSDs are G.Skill. Apparently those are cheap and unreliable.
(probably still won't be very useful, but at least it will be another data point)
Swapped it out, been fine since (about a year and a half now). Also no problems in my Mac.
But both machines run regular backups so that should an SSD fail, it's just a matter of imaging the new drive and replacing it.
I'd think a reason for quick wear out may be swap memory. I've never put a swap file/partition on mine.
That's very interesting. I've had a swap file on all of my SSD systems. What makes you think that could be a culprit?
I have a swap partition on my SSD but I also have swappiness=0, so that it only writes to disk if there's actual memory pressure.
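For anyone wanting to try the same on Linux (vm.swappiness is a standard sysctl):

    # check the current value
    sysctl vm.swappiness
    # set it to 0 for the current boot
    sudo sysctl vm.swappiness=0
    # make it persist across reboots
    echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf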
More info: http://en.wikipedia.org/wiki/Wear_leveling
Went to the Vertex 3 page on Newegg; there were at least 5 reviews on the very first page about it dying within 0 to 24 hours.
My cheap SSD in my 3-year-old Eee PC 1000 is still kicking. It's now serving games to my Wii via a $5 enclosure that does NOT need external power. And the Eee PC has a bigger and faster one that's also been working for 1yr+.
...too bad Asus used a mini PCIe interface that is as odd as the records you listened to when you were 15.