
Are Solid State Drives Worth the Money? - patrickk
http://lifehacker.com/5616023/are-solid+state-drives-worth-the-money
======
macemoneta
It really depends on the platform and requirements.

On a desktop, a three-drive RAID0 provides about the same performance and
gives you nearly 40x the storage at a given price point.

On a mobile platform the physical space, vibration/motion, and power
constraints coupled with the increased performance may make SSDs worthwhile.

~~~
moe
_On a desktop, a three-drive RAID0 provides about the same performance, and
gives you nearly 40x the storage for a given price-point._

Quite a terrible idea unless you enjoy backup recovery sessions.

If the noise of 3 drives doesn't bother you, then you can just as well get 4
and set up a RAID10, which is nearly as fast but much less likely to fail.

~~~
macemoneta
I back up several times a day, automatically. I've run RAID arrays for years
with no problems, but all drives eventually fail. An SSD is internally a RAID0
that can be up to 16 wide (that's how they get their performance). Used and
managed properly (including proper backup and recovery), RAID0 works very well
with good quality drives. If you have noise problems, then you likely have
vibration issues that will shorten the life of your drives. I use anti-
vibration mounts for all hard drives, and I hear my (quiet, anti-vibration
mounted) fans more than I hear my drives. I also spin down the backup drives
when they are not in use, so they add no additional noise.

~~~
moe
_An SSD is internally a RAID0 that can be up to 16 wide (that's how they get
their performance)._

Erm. Actually they get their performance because they write to NAND instead of
spinning platters. SSD controllers are also quite a bit smarter than plain
RAID0.

 _RAID0 works very well with good quality drives._

A RAID0 over three disks has about 1/3 the MTBF of a single disk.

That can still be a worthwhile trade-off if you need the extra capacity, but
if you're mostly after performance and reliability then a pair of SSDs, or
even a single SSD, is the better choice.

~~~
macemoneta
You'll probably find this interesting reading on the internal architecture of
SSDs:

<http://www.denali.com/wordpress/index.php/dmr/2010/02/02/ssd-interfaces-and-performance-effects>
Also, while RAID0 reduces the MTBF, it's not linear. Drive life is not
magically shortened as a result of the drive being in a RAID array (if you
take care to isolate synchronous vibration). The life of the array is equal to
the shortest drive life. In other words, if a drive would have failed after
25,000 hours in standalone operation, it will still fail in 25,000 hours in an
array. The other drives may run to 100,000 hours, but it's a "weakest link"
failure mode.

~~~
moe
_Also, while RAID0 reduces the MTBF, it's not linear._

Well, it is inversely proportional.

 _The life of the array is equal to the shortest drive life._

Erm. To be clear: Your risk of having a RAID0-set (over 3 disks) fail during a
given timespan is 3 times higher than having a single-disk-"set" fail in the
same timespan.

 _In other words, if a drive would have failed after 25,000 hours in
standalone operation, it will still fail in 25,000 hours in an array._

That calculation makes no sense. If you have a single drive then that will
fail, on average, after 25k hours. If you stripe over three of these drives
then your array will, on average, fail after 8333 hours.

~~~
macemoneta
While the probability of failure scales roughly with the number of drives,
the MTBF/MTTF calculations do not work that way.

For example, if there were a probability of 5% that the disk would fail within
three years, in a three disk RAID0 array, that probability of failure would
be:

P = 1 - (1 - 0.05)^3 = 0.142625

In other words, 14.3% probability of failure within three years. That doesn't
mean it will fail in that time frame. It means if you have a large population
of that configuration, that is the rate you would be dealing with for drive
replacement planning.

The MTBF and MTTF calculations apply to populations of drives (e.g. a given
model) not to a given drive. The values provide no predictability for the
failure of any specific drive. Using the values for that purpose is a common
misapplication. A drive with a MTTF of 1,000,000 power-on hours can fail in 15
minutes or never during its useful life.

As a result, a three drive array will have a higher probability of failure
over a given interval, but the MTTF/MTBF of the drives is essentially
unchanged.

Think of it this way... The probability of winning the lottery is one in
20,000,000. The probability that someone (anyone) will win the lottery in a
given week may be one out of ten - 10%. In other words, some person wins the
lottery, on average, one time in ten weeks. That doesn't mean that your
probability of winning the lottery is 10%. It also doesn't mean that the
average probability of winning the lottery is 10%. It also doesn't change the
probability of winning the lottery; it's still one in 20,000,000, even if
three people win in a 10 week interval.
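The lottery analogy can be put in the same terms (the 2,000,000
tickets-per-week figure here is invented purely to make the population-level
odds land near the 10% used above):

```python
# Your personal odds never change, but the chance that *someone* wins
# grows with the number of independent tickets in play.
p_single = 1 / 20_000_000      # odds for any one ticket (from the analogy)
tickets_per_week = 2_000_000   # invented figure for illustration

p_someone = 1 - (1 - p_single) ** tickets_per_week
print(round(p_someone, 3))     # ~0.095: someone wins roughly one week in ten
```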

~~~
moe
Hm. Thanks for repeating what I just said, I guess. But what was your point
again?

~~~
macemoneta
tl;dr: For RAID0 arrays there is a non-linear increase in the probability of
failure, but the MTTF/MTBF doesn't change much.

~~~
moe
Could it be you're just arguing for argument's sake?

My original point was: A RAID0 over 3 disks is about 3 times more likely to
fail than a single disk running standalone. Fail means "total data loss". You
confirm that point with your own math, yet still _seem_ to be trying to argue
that there was no difference. Sorry, that makes no sense to me.

~~~
macemoneta
Your statement was:

"A RAID0 over three disks has about 1/3 the MTBF of a single disk."

This is incorrect; the MTTF and MTBF are not significantly changed. Assuming
you meant failure probability, my issue is with the linear relationship you
imply.

If the variation were linear, a RAID array composed of drives with a 5%
failure probability would reach certainty of failure (1.00 probability)
within the interval at just 20 drives. In actuality, the probability only
approaches 1.00 asymptotically; even at 225 drives it still falls just short
of certainty.

The difference is a real world consideration for capacity management. What it
means is that RAID0 arrays are not as failure prone as people think they are.
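The non-linearity is easy to see side by side (a sketch using the same
illustrative 5% per-drive probability):

```python
# Naive linear scaling vs. the actual compound failure probability.
p = 0.05  # per-drive failure probability within the interval

for n in (3, 20, 225):
    linear = min(n * p, 1.0)   # linear extrapolation, capped at 1.0
    actual = 1 - (1 - p) ** n  # chance that at least one of n drives fails
    print(f"{n:3d} drives  linear: {linear:.2f}  actual: {actual:.5f}")
```

Note that the actual probability only asymptotically approaches 1.0, no
matter how many drives you add.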

~~~
moe
_> "A RAID0 over three disks has about 1/3 the MTBF of a single disk." This is
incorrect, the MTTF and MTBF are not significantly changed._

Wikipedia disagrees;
[http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0_fai...](http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0_failure_rate)

array_MTTF = avg(drive_MTTF) / number_of_drives

~~~
macemoneta
Which is at odds with the (correct) definition of MTTF as a rate-based
calculation:

<http://en.wikipedia.org/wiki/Failure_rate>

The person that wrote the Wikipedia article you referenced read the same
mythology you did; repeating it doesn't make it true. The plural of anecdote
is not fact.

Think about it yourself for a moment. If two cars are traveling 50mph, does
that make their average speed 25mph (50/2)? Applying a divisor to a failure
_rate_ based on the number of devices is nonsensical.

~~~
moe
If you are so convinced then why don't you correct the wikipedia article?

Perhaps also call up LSI and Adaptec, who use the same formula in their
documentation.

<http://storageadvisors.adaptec.com/2005/11/01/raid-reliability-calculations/>

But what do they know, they only build RAID controllers...

~~~
macemoneta
You're right, there's no reason to try to correct the 20% of the population
that believes the Sun revolves around the Earth. It's a lost cause; you win.

~~~
moe
_You're right, there's no reason to try to correct the 20% of the population_

Erm wait, didn't I just suggest the exact opposite?

If you really think everyone has been wrong about this all the time then
please, by all means, correct wikipedia or write a blog post about the matter.

This "false" formula has been out there for quite some time and you find it in
pretty much every write-up on the topic, including those from RAID-vendors who
(I'd hope) have spent some thought on these things.

On the flip-side I haven't found a single source to support _your_ thesis.
Thus I'd say the burden of proof is on you.

------
AndrewDucker
The Momentus XT seems to be a good compromise (4GB SSD that caches frequently
used parts of a normal HD).

I know a couple of friends with them, and they say that applications load
vastly faster, and their machines also boot faster.

It doesn't help with sustained throughput, but that's not normally a problem.

------
trustfundbaby
Totally worth it with one major caveat.

SSDs don't actually delete data when you 'delete' a file; the OS just marks
that space as free to write over. Normal hard drives work this way too; the
problem comes when you actually go to write stuff to the space that is
'marked for deletion'.

Unlike normal hard drives, which can simply overwrite old data in place, SSDs
write in small pages but can only erase in larger groups of pages called
'blocks'. So once the drive is filled up with stuff (or you've been using it
for some time), it has to read the whole block, erase it, and rewrite it with
your new data merged in ... which is really slow.
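A toy model makes the cost difference concrete (the page count and latencies
below are made-up illustrative numbers, not the specs of any real drive):

```python
# Toy model: programming a clean page vs. reclaiming a dirty block.
# An SSD can program an empty page directly, but to reuse a dirty block it
# must read the surviving pages, erase the whole block, then rewrite it.
PAGES_PER_BLOCK = 128                          # illustrative
READ_US, PROGRAM_US, ERASE_US = 25, 250, 2000  # illustrative latencies (us)

def write_clean_page_us():
    return PROGRAM_US  # one page program and done

def write_into_dirty_block_us():
    # read-modify-erase-write of the whole block, for one page of new data
    return PAGES_PER_BLOCK * READ_US + ERASE_US + PAGES_PER_BLOCK * PROGRAM_US

print(write_clean_page_us(), write_into_dirty_block_us())  # 250 vs 37200 us
```

With these made-up numbers, a write that lands in a dirty block is over 100x
slower than one that lands in a clean page.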

This manifests itself in your system basically freezing on you from time to
time once the drive has seen some use. It's fully explained here:
<http://www.anandtech.com/show/2738/8>

It can be REALLY frustrating, but even with that ... I would NEVER NEVER go
back to a normal hard drive. Why? My Photoshop opens in 7 seconds, Netbeans
(a bloated Java IDE) in 10, Windows XP boots in Boot Camp in 2 minutes, and
OS X is fully loaded in 30 seconds. So yeah ... you can pry my SSD from my
cold, dead, mutilated fingers.

There is an OS-level command called TRIM that lets your OS clean out the
space on your drive that is marked for deletion while you're not using the
computer, so that this problem doesn't occur, but OS X does not support it
... Windows 7 and Linux are the only OSes I know of that do.
<http://www.anandtech.com/show/2738/10>

The other thing to consider is that SSDs are very fast on sustained writes
... e.g. copying a 1GB file from one location to another. However, modern
OSes issue frequent but small writes in normal operation ... and certain
drives that look good on paper stink up the joint in this department (things
have changed recently, though, since Anandtech called them out on it).

What you want to do when you're looking at specs is find out what the write
throughput (MB/s) is on 1KB and 4KB pieces of data, and compare it to the
sustained write speed to see how big the difference is. If you want a
shortcut ... buy an Intel SSD ... and thank me later.

------
chrisbolt
I'm spoiled by SSDs now. I've had an X25-M in my primary computer for almost a
year now, and other computers feel so slow that I can't imagine going back.
Even in servers, one X25-M can replace 4 15k RPM disks in RAID 10 for I/O
load, not to mention the power savings.

------
ahoyhere
I would never go back to regular HDDs. Quiet, cool, sipping battery life… go
go gadget SSD!

