
Best SSDs For The Money: August 2011 - nesbot
http://www.tomshardware.com/reviews/ssd-solid-state-nand-reliability,2998.html
======
cowmixtoo
I have a lot of personal success stories with SSDs, but here's my current
favorite.

A few months ago I had to help one of our scientists (the company is called
5AM Solutions... they're awesome) run a bioinformatics job written in Perl and
R. As it turned out, for long stretches of the processing the job required
around 20 GB of memory. The one server that had all the required dependencies
installed had only 8 GB at the time.

When I let the job run the first time, it started paging memory out to the
hard disk. The job ran for about four days, was only about 25% complete, and
during that time the server was unusable for anything else. Pretty much
everything came to a grinding halt.

Between that first run and the time our new RAM would be installed, just for
grins, I gave the system 30 GB of swap space on the locally attached SSD. With
that configuration the job finished in 19 hours, and during that time the
server was still responsive to other tasks.
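
(For the curious: on Linux the whole trick amounts to something like the
sketch below. The mount point and file name are illustrative, not our exact
setup, and it needs root.)

    import subprocess

    # Sketch: create and enable a 30 GB swap file on an SSD-backed
    # filesystem mounted at /mnt/ssd (illustrative path).
    SWAPFILE = "/mnt/ssd/swapfile"

    # Preallocate 30 GB of zeros (30720 MiB); dd is the safe, portable route.
    subprocess.run(["dd", "if=/dev/zero", f"of={SWAPFILE}",
                    "bs=1M", "count=30720"], check=True)
    subprocess.run(["chmod", "600", SWAPFILE], check=True)  # swap must be private
    subprocess.run(["mkswap", SWAPFILE], check=True)        # format as swap
    subprocess.run(["swapon", SWAPFILE], check=True)        # enable it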

When we finally added the appropriate amount of physical RAM the job took only
15 hours to complete.

It is the first time I have ever seen virtual memory be useful.

~~~
scott_s
"Virtual memory" is not a synonym for swap space:
<http://en.wikipedia.org/wiki/Virtual_memory>

Virtual memory is what lets us write programs pretending that we own the
entire address space, and it is _very_ useful.

Swapping pages to disk, though, has been useful for a very long time. Yes,
once your high-performance application starts swapping all the time, your
performance is going to suffer by several orders of magnitude. But
occasionally swapping pages to and from disk is part of what makes modern
operating systems useful. You left a large PowerPoint presentation open for
several days, but never got around to working on it? Not a problem: if the OS
needs that memory, it will just swap out the pages. Without that ability, the
OS would need to go around killing processes. (Which it will do if it has to,
but it's a rare event precisely because it can swap out pages.)

~~~
cowmixtoo
Hmm... I still think it's valid to use "virtual memory" and "swap space"
interchangeably. Swap space is just where your "virtual memory" lives, right?

~~~
scott_s
On modern systems, there are two kinds of addresses: "virtual" addresses and
physical addresses. Virtual addresses are tracked by the operating system, and
they can span the entire addressable space. So, on a 32-bit system that isn't
playing any high-memory tricks, that's 0 to 2^32, or 0 to 4 GB.

But your system may not have 4 GB. So the operating system has a data
structure called a _page table_ that has the virtual to physical mapping for
each process. The processor accesses this table (it caches it in something
called a TLB) so that it can convert the virtual address to the physical
address.

An example using small numbers: your program has a pointer to data, and that
pointer may have the value 800. Let's assume that the memory on your system
only spans 0 to 400. So the processor has to convert the value 800 to a value
between 0 and 400, and it's the operating system's job to maintain that valid
mapping.

Why does this matter, and why is it so tied up with paging to and from disk?
Let's say the OS pages out the page containing that data. Then, later, it's
paged back in, but to a _different_ physical location in memory. Your program
still has the pointer value 800, but it works correctly because the operating
system keeps track of where in physical memory 800 maps to for your process.
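
A toy model of that bookkeeping, using the same small numbers (real hardware
uses power-of-two page sizes and does the lookup in the MMU/TLB, but the idea
is the same):

    # Page size of 100 "bytes", so pointer value 800 falls on virtual
    # page 8 at offset 0. Physical memory spans 0-400 (frames 0-3).
    PAGE_SIZE = 100

    # Per-process page table: virtual page number -> physical frame number.
    page_table = {8: 3}  # virtual page 8 currently lives in frame 3

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in page_table:
            raise RuntimeError("page fault: the OS pages the data back in")
        return page_table[vpn] * PAGE_SIZE + offset

    print(translate(800))  # -> 300

    # The OS pages the data out, then back in at a different spot; the
    # program's pointer (800) never changes, only the mapping does.
    page_table[8] = 1
    print(translate(800))  # -> 100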

People in the Windows world often say "virtual memory" when they mean "swap
space", because Windows would call the amount of swap space "virtual memory
size." But virtual memory is the technique described above. Read the Wikipedia
entry linked above, or an operating systems textbook, for a full discussion of
it.

~~~
pdubs
That's not entirely correct. The MMU generally handles virtual-to-physical
address translation, and the OS is only ever involved if there is a page
fault. Outside of OS internals and some very specific, intentional
applications, virtual/physical memory is completely transparent. When I hear
"virtual memory" I assume it refers to swap space unless otherwise noted,
because the technical meaning has such a specific domain.

~~~
scott_s
That's why I noted that the CPU caches the mappings in the TLB. On modern
processors, the MMU is integrated with the rest of the processor, so I didn't
see the need to introduce another TLA. It's a part of the processor just as
much as, say, the floating point unit is. The whole point of my discussion
with small pointer values was to demonstrate that the virtual to physical
mapping is transparent.

When I hear "virtual memory," I think of the computer science meaning.
However, I am a researcher in high performance computing systems.

------
lawnchair_larry
Just a warning: a lot of these SSDs "cheat" by using compression. I bought the
best drive that I could find for my MacBook Pro, an OCZ Vertex 3 Max IOPS, and
was rather disappointed to find out that the posted speeds are based on
benchmarks with compressible data. This is an issue because if you use disk
encryption, like you should be doing, the encrypted data is not compressible.
As a result, my speeds are a third to a half of what is advertised, and it was
not worth the extra money.
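
You can see why with any general-purpose compressor: well-encrypted data is
indistinguishable from random bytes, and random bytes don't compress (a quick
illustration, not a benchmark of the drive's controller):

    import os, zlib

    text = b"the quick brown fox jumps over the lazy dog " * 1000
    random_bytes = os.urandom(len(text))  # stands in for ciphertext

    print(len(text), len(zlib.compress(text)))  # shrinks dramatically
    print(len(random_bytes), len(zlib.compress(random_bytes)))  # slightly grows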

~~~
0x12
> if you use disk encryption like you should be doing

I've never yet found a really good reason to encrypt a drive, so I'm kind of
surprised to see a suggestion that this is the way things are done.

Why should you be encrypting your disks?

~~~
colonelxc
The OP specifically mentioned an MBP, a laptop, which is an easy, high-value
target for theft. If my laptop were stolen, I would be relieved to know that
my personal data was safe.

There are additional reasons for full disk encryption too, like ensuring that
important system files have not been tampered with. Whether or not you want to
go that far depends entirely on your level of paranoia.

For a home desktop, the cost/benefit may be a bit different, because the
computer is exposed to fewer places and people. As with many things in
security, you need to weigh what is an acceptable risk to you against the cost
of mitigating that risk.

------
angrycoder
I am surprised to see the OCZ drives recommended so frequently in the article.
So far I've owned two from Crucial, one from Intel, and one from OCZ. The
first SSD I purchased was a Crucial, and it is still running like a champ
three years later in a MacBook Pro. The OCZ drive failed within 90 days. That
wouldn't be a big deal in and of itself; things break. However, their customer
service is terrible. I got nothing but a two-week-long runaround when I tried
to RMA it.

~~~
zzuser
I am also very surprised to see OCZ drives recommended. I have 10 OCZ Vertex 3
SSDs in a single server. In the past two months, 6 of these 10 Vertex 3 drives
have failed. OCZ is utter and bitter crap.

~~~
sbierwagen

      bitter crap
    

That particular colloquialism does not appear to translate into English very
well.

------
acangiano
I recently bought a Crucial M4 256GB SSD, and I have been extremely satisfied.
Blazingly fast and no issues whatsoever. Even better, on my late 2008 MacBook
Pro, I get SATA II speeds (3 Gbps). Most other drives (e.g., the OCZ Agility)
will only be recognized as SATA I (1.5 Gbps) on my Mac. This makes the Crucial
drive literally twice as fast in best-case scenarios.

As an example of the effects it has had on my computer performance, building
my upcoming book (which invokes rake and JRuby) used to take 1m 30s on a 7200
RPM drive. Now it takes 15 seconds. Also, productivity apps like Office open
in a split second.

~~~
niels_olson
Upgraded my Cr-48 to a 40 GB Intel drive over the weekend. An SSD with room to
spare is a marvelous thing. So, I have a late 2008 MacBook (white) with what
looks like a very similar processor (2.4 GHz Core 2 Duo) and 4 GB RAM, running
Lion. I've been looking at the Crucial 256, or maybe the Intel 160. How long
have you been running this drive, and are you running Lion? How did the
upgrade go? Did you need anything besides Time Machine? Do you use MacPorts,
PostgreSQL, or Apache, and if so, any problems after the upgrade?

Also, have you looked into 3G inside your MacBook? Is there any way to make
that work? A dual-function WiFi/3G mini PCIe card?

~~~
acangiano
> How long have you been running this drive,

About a month.

> and are you running Lion?

Yes.

> How did the upgrade go? Did you need anything besides Time Machine?

I did a clean install and then moved over my data. It was time for me to do a
clean up anyway, so I took the opportunity to do so.

> Do you use MacPorts, PostgreSQL, or Apache, and if so, any problems after
> the upgrade?

I use all the typical dev tools with no problems, but I have not installed
MacPorts since the switch to this new drive. (I use Homebrew instead.)

> Also, have you looked into 3g inside your macbook?

I haven't.

~~~
niels_olson
thanks!

------
sciurus
We use the 600 GB Intel 320s as second drives in MacBook Pros for running
virtual machines. They're the highest-capacity 2.5" SSDs available, and they
aren't cheap. However, we can put multiple virtual machines with 100 GB+
databases on them, travel to countries with poor internet connectivity, and
teach workshops where the students run intensive queries. It's as if we've
shrunk a $100,000 storage array and stuck it in our carry-on luggage.

~~~
cowmixtoo
I don't understand why SSDs are not deployed in EVERY RDBMS server right now.
The benefits are truly out of this world.

~~~
riobard
'Cause SSDs are not as reliable as they appear to be, and can't necessarily
withstand server-side workloads. Enterprise-grade SSDs are significantly more
expensive than consumer-grade ones: you are not looking at $1-2/GB, but
$10-20/GB. Given the capacity required for most use cases, SSDs are hardly a
good choice as primary storage for critical servers.
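
Back-of-the-envelope, using those per-GB figures (illustrative numbers, not
quotes):

    # Say a critical database server needs 2 TB of primary storage.
    capacity_gb = 2000
    print(capacity_gb * 1.5)  # consumer SSD at ~$1.50/GB:  ~$3,000
    print(capacity_gb * 15)   # enterprise SSD at ~$15/GB: ~$30,000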

In addition, most RDBMSes are optimized for mechanical disks. Optimizing for
SSDs has only become interesting recently, as SSD prices have dropped to
barely reasonable levels.

However, SSDs absolutely rock as a big cache.

~~~
masklinn
> In addition, most RDBMSes are optimized for mechanical disks.

Since SSDs blow the hell out of platters no matter what the workload or access
pattern is, you'll still get significantly improved performance, even without
SSD-specific optimizations.

The one "optimization" I'd like to see out of SSDs' rise is deoptimization:
since access patterns become less important (or at least naive access patterns
become less costly), I'd like to see systems simplified and "optimizations"
removed rather than new optimizations added.

~~~
jamwt
We (bu.mp) use a lot of SSDs in our datacenter... we've probably used ~100
64 GB X25-Es, and recently we have added 20+ Micron P300 disks.

The first thing we used to do was try to convince the hardware RAID controller
not to do anything clever, like readahead, etc., because seek times are
practically meaningless. Despite our efforts at disabling every optimization
we could control that was tailored for rotational platters, we still found
that software RAID (Linux md) outperformed a classically great hardware
controller, perhaps by virtue of being "stupider".

So that is our go-to configuration now: Micron P300 SLC, 200 GB drives, with
md RAID.
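
(A minimal sketch of that kind of setup; the device names, RAID level, and
drive count below are placeholders for illustration, not bu.mp's actual
layout.)

    import subprocess

    # Plain Linux software RAID (md) across SSDs, with no hardware
    # controller's cleverness in the way. Run as root; this destroys
    # any data on the listed devices.
    devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]
    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=10",  # assumed RAID level; the comment doesn't say
         f"--raid-devices={len(devices)}"] + devices,
        check=True,
    )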

------
peteforde
I'm surprised that I'm not seeing even a casual mention of the OWC SSDs in
this review. From everything (else) I've ever read, they tend to be way ahead
of the curve in terms of innovation and performance. Sure, things change
quickly stats-wise... but they aren't even on the chart.

[http://eshop.macsales.com/shop/SSD/OWC/Mercury_Extreme_Pro_6...](http://eshop.macsales.com/shop/SSD/OWC/Mercury_Extreme_Pro_6G)

Subjectively, I would describe my OWC 120GB drive as "blisteringly fast".
Previously I had a 100GB OWC with extra redundancy for server loads (overkill
in my 2010 iMac), and the first time it booted it was like being personally
greeted by the Flying Spaghetti Monster.

~~~
wmf
OWC = SandForce = OCZ

[http://www.anandtech.com/show/4604/the-sandforce-roundup-corsair-patriot-ocz-owc-memoright-ssds-compared](http://www.anandtech.com/show/4604/the-sandforce-roundup-corsair-patriot-ocz-owc-memoright-ssds-compared)

------
nesbot
If you use your computer for extended periods of time every day, then the
performance payoff outweighs the cost. They have come down quite a bit in
price lately as well. As the article discusses, there are many options
available today, from just getting your OS onto a smaller, speedy boot drive
to housing everything on a larger one.

I initially picked up a 60 GB OCZ Vertex 1 a while ago, and then about 10
months ago moved up to a 120 GB Vertex 2. I will never look back.

------
rektide
Judging SSDs by their headline performance numbers is a kind of amusing
endeavor: the latest SandForce already saturates a 6 Gbps SATA III link, and
others are catching up fast. This pretty standard unit of measure is hitting
the limits of the interface, not the drive.

What other criteria are there? GB/$, performance/watt, watts at idle, IOPS,
and warranty or lifecycle costs. Personally, I find something "big enough",
ignore power consumption and IOPS (neither is going to make a big enough
difference for me to concern myself), and then get whatever I can find that
has the longest warranty.

~~~
sliverstorm
_the latest SandForce already saturates a 6Gbps SATA III link_

Considering the dinosaur pace of new interfaces, and the fact that SATA III
isn't even fully "rolled out" yet, I wonder if we are going to see a new wave
of hackish custom solutions by manufacturers. Dual SATA ports on your drive,
anyone?

~~~
bho
Or a direct plug-in to PCIe, like the RevoDrive.

------
saturdaysaint
I'm pretty happy with the 256 GB drive in my MacBook Air. I would have
considered this limiting until recently, but wireless networked storage is
cheap and easy to implement (I basically just connected a cheap 3-terabyte
drive to my router), and Thunderbolt or USB 3 storage gives us a lot of
options when more high-performance storage is necessary. Also, cloud services
(Facebook/Flickr for photos, Rdio for music) have made me much less of a data
packrat. So I increasingly consider the hard disk "working storage" for
applications and the most crucial files.

------
mxavier
I knew this would happen. I bought a ThinkPad T420s, which has a non-standard
(or new-standard?) 7 mm hard drive caddy instead of the much more common
9.5 mm. I bought a 64 GB Crucial M4 because it could be modded to 7 mm and the
price was right. I'm happy with the performance; it feels noticeably faster
than a standard magnetic drive, but the 64 GB model is in tier 10 of this
comparison. I guess as long as I'm happy with the performance it doesn't
really matter. I wish this article had been around two weeks ago.

------
saturn
I recently grabbed a 512 GB Crucial M4 for $730 delivered, from
<http://www.bhphotovideo.com/> (no affiliation; the price has risen slightly
since). For some reason I'd been putting it off, but seeing it there for less
than $1.50 per gig, at the capacity I wanted for my MacBook, suddenly seemed
like a no-brainer. Hell, I remember paying $100 per gig back in the '90s.
Somehow my mental model of "reasonable prices to pay for storage" has just
been totally biased by years and years of dirt-cheap HDDs.

Frankly it hasn't been the jaw-dropping, entering-hyperdrive performance boost
I had kind of hoped for (I'm a Rails dev). While it's a definite improvement,
it seems that for many of my most common tasks (read: tests) I have merely
pushed the bottleneck back onto the CPU. But while it hasn't sped things up
all that much, it _never_ slows down, which you don't notice at first but
which over time has a subtle confidence-building effect. Application launch
speeds are much improved, for those who spend a good part of their day
launching apps, which is not me; I tend to launch a few and then use them for
the next two weeks before I restart. I also like how the drive does not make
whining sounds when I move the computer before it's gone to sleep.

Recommended, anyway; they're cheap enough now that it's not a luxury, even if,
like me, you use most of it for your work music collection.

~~~
rawsyntax
From what I read at the time (January 2011), SandForce-based SSDs got much
better performance on Mac systems. I think this was due to OS X's lack of
support for TRIM.

On the other hand, right after I upgraded my SSD, I went to Ruby 1.9.2 and
Rails 3, which are much slower due in part to a bug that makes requiring files
take forever.

~~~
r00fus
Not sure about others, but in my case ('10 MBP 13" with a Vertex 2 in an
OptiBay), I had occasional stalls (anywhere from 2 to 30 seconds) until Trim
Enabler (<http://www.groths.org/?p=308>) showed up and I installed it (OS X
Lion supports TRIM on this drive natively, it seems).

I think this is because, although SandForce drives have native garbage
collection, it can sometimes happen at a very inopportune moment (like, say, a
20-second "pause" during critical moments of an SC2 game).

~~~
ugh
By the way, _do not_ use that Trim Enabler. If you want to, and are aware of
the risks, use this one instead, especially if you plan on using it with Lion:
<http://digitaldj.net/2011/07/21/trim-enabler-for-lion/>

What this one does is actually pretty simple: it replaces the string “Apple
SSD” (used to identify which drives TRIM is turned on for) in a file in the
relevant kernel extension with zeros. It also creates a backup copy of the
original file.
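
In other words, something like this sketch (the kext binary path is a
placeholder, not the real location, and patching system files this way is
exactly the kind of risk mentioned above):

    import shutil

    KEXT_BINARY = "/path/to/storage-driver-binary"  # placeholder path
    MATCH = b"Apple SSD"

    # Back up the original file first, as the real tool does.
    shutil.copy2(KEXT_BINARY, KEXT_BINARY + ".original")

    with open(KEXT_BINARY, "rb") as f:
        data = f.read()

    # Zero out the model-match string so the TRIM check passes for any
    # drive, keeping the file length unchanged.
    with open(KEXT_BINARY, "wb") as f:
        f.write(data.replace(MATCH, b"\x00" * len(MATCH)))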

The Trim Enabler you linked to replaces a whole kernel extension (meaning you
might end up with an older version of that kernel extension), which is
obviously monumentally stupid.

------
pointyhat
There's a lot of snake oil in SSDs until you hit the $400 mark. I'm not going
near them until the lies and half-arsed chipsets stop.

------
sid0
I've been using an Intel 320. It's a dream. No more hitches _at all_.

