I wish I'd read this before buying my SSD for my workstation, an impulse buy while at Fry's with my boss. I've largely avoided the write stuttering that plagues these low-end MLC SSDs by keeping all my data on my old HDD and using the SSD only for the OS and programs, so writes to the SSD are fairly rare; it still kicks the shit out of the old HDD.
For a database server, however, buying a non-Intel MLC SSD would have been a huge mistake, one I'm now well informed enough to avoid.
Nobody's going to use it in their workstation, much less one running Windows, and there's no way to stuff it in a laptop.
Many shops will be able to fit the most frequently used 20% of their toolset onto a small disk drive like this. If that 20% accounts for 80% of their development wait time, this is still a big win.
* Offer two different lines with the same hardware but differing firmwares optimized for each.
* Sell the IOPS-targeted one as 'Enterprise Grade' for 50% more.
* Release regular firmware updates for both, and ensure that only a trivial (but *WARRANTY VOIDING*) modification is needed to flash your consumer drive with the 'enterprise' firmware.
* Bask in the glow of heavily-dugg tutorials about how to 'mod' your consumer drive for low latencies.
* Have vendors purchase your cheaper-than-Intel 'Enterprise' drive for CYA reasons.
To get around the simplistic marketing of MB/s, that leaves me either paying 50% more or voiding the warranty.
I'd rather they fought simplistic marketing with honesty rather than profiteering.
Being able to extract more money from PHBs is just gravy.
However, I bought a 160 GB Intel X25-M in its place. Going back to a spinning disk was unacceptable (almost 10x slower). The drive f*$&ing rocks, even if it is unreliable (hopefully that was a fluke).
The graceful failure mode is when there are no free sectors left for wear levelling or bad block remapping. When it fails this way, you should still be able to read your data off it.
EDIT: I've just received an anecdotal comment from a friend who works at Intel that high temperatures rapidly degrade MLC flash life. I can't find any data to back this up.
If you create an imaginary baseline SSD with the X25-M's 80 GB and the Vertex's 9836 PCMarks and then compare both drives against the baseline you get:
Intel: 20% faster
OCZ: 50% larger
Both cost $350. Given that the slow one is faster than the fastest desktop HD but the big one is only 1/3 the size of a standard laptop HD, I think I prefer the extra space.
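That baseline arithmetic can be sketched in a few lines. The Vertex's 9836 PCMarks and the X25-M's 80 GB are from the comparison above; the Intel score and the Vertex capacity are assumed values, chosen only so they match the stated "20% faster" and "50% larger" figures:

```python
# Baseline: the X25-M's capacity paired with the Vertex's PCMark score.
baseline = {"capacity_gb": 80, "pcmark": 9836}

intel = {"capacity_gb": 80, "pcmark": 11803}   # assumed: ~20% above baseline
vertex = {"capacity_gb": 120, "pcmark": 9836}  # assumed: 120 GB model

intel_speedup = intel["pcmark"] / baseline["pcmark"] - 1
vertex_extra_space = vertex["capacity_gb"] / baseline["capacity_gb"] - 1

print(f"Intel: {intel_speedup:.0%} faster")    # → Intel: 20% faster
print(f"OCZ: {vertex_extra_space:.0%} larger") # → OCZ: 50% larger
```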
The only non-Intel drive that's not awful at random writes is the OCZ Vertex, and it's a few times better than an HD. The X25s are still an order of magnitude quicker than the OCZ Vertex once the drives are in a used state.
Even if the X25 were overall 100% faster than the Vertex (and according to Anand it isn't), I would still trade that to have 120 GB rather than 80 GB.
This is Anand's conclusion:
"with the Vertex I do believe we have a true value alternative to the X25-M. The Intel drive is still the best, but it comes at a high cost. The Vertex can give you a similar experience, definitely one superior to even the fastest hard drives, but at a lower price."
Similar experience. I can't find the part about massively better.
-Performance had to be equivalent to or better than a WD Velociraptor across the board (the OCZ Vertex achieves that with firmware updates; the Intel is slightly slower on max sequential writes, but makes up for it in every other benchmark).
-It had to be at least 120GB for my personal laptop.
-It had to be around $300.
I assume any SSD purchase made now is going to be replaced in 1-2 years, so purchasing anything more than necessary is hard to justify.
I guess the best thing to do right now is to have your OS (or multiple OSes) on a 32 GB SSD, for instance, and a big HDD as a second drive for multimedia and general storage. Right?
Edit: Like other people say, thanks for linking to print version :-)
I switched to Ubuntu and most of my problems went away.
Nice to hear that it worked as it should with Ubuntu
Just as a completely useless data point: I suffered two different disk crashes in rapid succession a couple of months back, and out of frustration got one of the Transcend SSDs that were cheap and available at the time (ATA, for an old PBG4).
It works great, I love it, and I no longer have to fear data loss when stupid software drives me to banging my fists on the desk.
Keep some free space on the drive. When you need to write, write into that pre-erased free space. Then merge in the still-valid pages from a different block, and erase that block.
Wouldn't this ensure that you always write to a pre-erased area, and avoid the slowdown completely?
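The scheme above can be sketched as a toy flash translation layer: writes always land in a pre-erased block (so no erase sits on the write path), and a background merge step relocates live pages and returns the reclaimed block to the erased pool. All names here are made up for illustration; real FTLs are far more involved:

```python
PAGES_PER_BLOCK = 4

class ToyFTL:
    """Illustrative only: always write into pre-erased space, reclaim later."""
    def __init__(self, num_blocks):
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.erased = list(range(num_blocks))  # blocks that are fully erased
        self.map = {}                          # logical page -> (block, slot)

    def write(self, lpage, data):
        b = self.erased[0]                     # a pre-erased block
        slot = self.blocks[b].index(None)      # next free slot: no erase needed
        self.blocks[b][slot] = (lpage, data)
        self.map[lpage] = (b, slot)            # any old copy is now stale
        if None not in self.blocks[b]:
            self.erased.pop(0)                 # block is full, retire from pool

    def read(self, lpage):
        b, slot = self.map[lpage]
        return self.blocks[b][slot][1]

    def gc(self, victim):
        """Merge the victim's live pages elsewhere, then erase it."""
        for slot, entry in enumerate(self.blocks[victim]):
            # Relocate only pages the map still points at (i.e. not stale).
            if entry and self.map.get(entry[0]) == (victim, slot):
                self.write(*entry)
        self.blocks[victim] = [None] * PAGES_PER_BLOCK
        self.erased.append(victim)             # back into the pre-erased pool
```

Note the catch: the `erased` pool only stays non-empty if the drive keeps spare capacity (over-provisioning) and runs `gc` while idle; once the pool runs dry, an erase lands on the write path and you get exactly the stutter being discussed.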
From what is described, it sounds like the problem is with the poor block provisioning algorithms that the drive controller uses. Where is the operating system in this? Why can't frequently updated files be put on blocks of their own? Why can't block deletes be done online? Why must file modification modify the block(s) where the file currently resides and block for delete instead of writing to empty space and deferring delete until the controller is idle?
Assuming the "10,000 writes" commonly claimed for SSDs is accurate, a 5-year lifespan works out to 4.38 hours between writes on any one page.
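The back-of-the-envelope figure above is easy to check: spread 10,000 write cycles evenly across 5 years and you get the allowed interval between writes to a single page:

```python
cycles = 10_000                 # claimed write endurance per page
hours_in_5_years = 5 * 365 * 24  # 43,800 hours
interval = hours_in_5_years / cycles
print(f"{interval:.2f} hours between writes")  # → 4.38 hours between writes
```

This assumes perfect wear levelling, i.e. the controller spreads writes evenly over every page; any hot spot shortens the effective lifespan.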
There's no fucking way I'm going to link to something that requires 62 clicks to page through the article.
Because Super Talent put a bunch of benchmarks in their whitepaper, comparing very favorably to the Intel X25-M.
Not sure if it's FUD or not, as I haven't experienced any problems, but it's a potential red flag.