

IoDrive, Changing The Way You Code - jconley
http://www.jdconley.com/blog/archive/2009/03/12/iodrive-changing-the-way-you-code.aspx

======
jconley
My main point was not really this particular device, but parallel SSDs in
general. They're changing the way we think about persistent storage for
write-often data.

------
cdr
From a quick google, it looks like the 80GB model starts at $2400. Hard to
call that competitive with SSDs, except solely on performance.

Should be very interesting for "dream machine" desktops once you can boot from
one.

~~~
aaronblohowiak
No, it is competitive with the _24_ SSD monster. That is, it is competitive
with putting 24 of the SSDs in RAID, which is estimated to be... $12k IIRC.
Granted, you get less storage space, but this is for pure IO goodness.

Edit: From the Tom's Hardware article at
[http://www.tomshardware.com/picturestory/493-8-x25-e-fusion-io-iodrive.html](http://www.tomshardware.com/picturestory/493-8-x25-e-fusion-io-iodrive.html)

The database I/O benchmark results are quite interesting. Intel’s X25-E
reaches 6,500 to 10,000 I/O operations per second at small command queue
depths, but drops to a little more than 4,000 I/Os at deep command queues. The
ioDrive is different. While it reaches 2.5x the performance of the X25-E at
executing individual commands, it drops to a bit more than the Intel drive’s
performance at longer command queues. Switching the ioDrive to one of the
faster write modes, which reduces capacity, more than doubles I/O
performance. In such a case, a RAID array of Intel X25-E SSDs could not catch
up for the same cost as an ioDrive.

~~~
cdr
My point was that you get (comparatively) a heck of a lot of storage plus
pretty decent performance with an SSD. You'd better /really/ care about IO to
be paying that much per GB - and some people do.

------
ShabbyDoo
So, thinking about the "standard" MySQL scale-out approach of master/slave
coupled with memcached, what's the new best practice if you have a stack of
iodrives sitting around?

~~~
jconley
Well, I think you're probably solving a different problem with the
master/slave. If you're scaling out and caching because you're CPU limited
then nothing changes. If you're scaling out because your disks can't keep up
with the parallel read/write traffic then I'm thinking you can drop the cache
and master/slave in many cases (unless you want the hot server for HA).

~~~
ShabbyDoo
I worked on a pretty big site where we read from the slaves and wrote to the
master, but we were still on 4.1 with MyISAM. Given the current size of the
iodrives we would have to shard a lot more than we were, especially for a few
large tables. But, it would drastically reduce deployment complexity.
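The read-from-slaves, write-to-master routing plus sharding described above could be sketched roughly like this (a minimal illustration, not any real deployment's code; the `ShardedRouter` class, the shard layout, and the string connection placeholders are all made up for the example):

```python
import hashlib
import random

class ShardedRouter:
    """Route reads to a shard's slaves and writes to its master.

    shards is a list of dicts like {"master": conn, "slaves": [conn, ...]};
    the conn values stand in for real database connections.
    """

    def __init__(self, shards):
        self.shards = shards

    def _shard_for(self, key):
        # Stable hash so the same key always maps to the same shard.
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

    def connection_for_read(self, key):
        # Spread read traffic across the owning shard's slaves.
        return random.choice(self._shard_for(key)["slaves"])

    def connection_for_write(self, key):
        # All writes for a key go to its shard's master.
        return self._shard_for(key)["master"]
```

The sharding pressure the comment mentions shows up in `_shard_for`: with small per-device capacity, you need enough shards that each shard's data fits on one drive, which is exactly the extra operational complexity being traded away.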

------
jwilliams
Or you could simply use lots of RAM?

~~~
blogimus
I think the point of the article is they need a lot of _persistent_ storage.
You'd have to constantly write that RAM back to disk, and then you've either
bottlenecked again or you have to come up with some queue system where you
drop changes because of the disk bottleneck. This is a clean solution that
really changes how you think you can interact with persistent storage. All
those little reads and writes without the performance hit.
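The write-behind pattern described above (RAM as the working set, a background queue flushing to disk, changes dropped when the disk falls behind) might look roughly like this sketch; the `WriteBehindCache` class, the `max_pending` bound, and the `flush_to_disk` callback are hypothetical names for the example, not from the article:

```python
import queue
import threading

class WriteBehindCache:
    """Keep hot data in RAM and flush changes to disk in the background.

    When the pending queue is full (the disk can't keep up), new changes
    are not enqueued -- the lossy trade-off the comment describes.
    """

    def __init__(self, flush_to_disk, max_pending=1024):
        self.data = {}                          # in-RAM store, always fast
        self.pending = queue.Queue(maxsize=max_pending)
        self.dropped = 0
        self._flush = flush_to_disk
        threading.Thread(target=self._writer, daemon=True).start()

    def put(self, key, value):
        self.data[key] = value                  # RAM write succeeds immediately
        try:
            self.pending.put_nowait((key, value))
        except queue.Full:
            self.dropped += 1                   # disk bottleneck: change is lost

    def _writer(self):
        while True:
            key, value = self.pending.get()
            self._flush(key, value)             # the slow, persistent write
            self.pending.task_done()
```

The point of a device like the ioDrive is that `_flush` stops being the bottleneck, so the queue (and the dropped-writes compromise) can go away entirely.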

~~~
jwilliams
I guess I say that because they talk about game-based workloads... I'd assume
that only a limited number of events are persisted indefinitely (and/or need
to be atomic).

------
lsc
heh. ioDrive. Like striping 3 Intel X25-E SSDs, only more expensive.

