IoDrive, Changing The Way You Code (jdconley.com)
38 points by jconley on March 13, 2009 | 13 comments



My main point was not really this particular device, but parallel SSDs in general. They're changing the way we think about persistent storage for write-often data.


From a quick Google search, it looks like the 80GB model starts at $2400. Hard to call that competitive with SSDs, except solely on performance.

Should be very interesting for "dream machine" desktops once you can boot from one.


No, it is competitive with the 24 SSD monster. That is, it is competitive with putting 24 of the SSDs in RAID, which is estimated to be... $12k IIRC. Granted, you get less storage space, but this is for pure IO goodness.

Edit: From the Tom's Hardware article at ( http://www.tomshardware.com/picturestory/493-8-x25-e-fusion-... )

The database I/O benchmark results are quite interesting. Intel's X25-E reaches 6,500 to 10,000 I/O operations per second at small command queue depths, but drops to a little more than 4,000 I/Os at deep command queues. The ioDrive is different. While it reaches 2.5x more performance than the X25-E at executing individual commands, it drops to a bit more than the Intel's performance at longer command queues. Switching the ioDrive to one of the faster write modes, which results in reduced capacity, results in more than doubling I/O performance. In such a case, a RAID array of Intel X25-E SSDs could not catch up for the same cost of an ioDrive.


My point was that you get (comparatively) a heck of a lot of storage plus pretty decent performance with an SSD. You'd better /really/ care about IO to be paying that much per GB - and some people do.
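To put rough numbers on the per-GB point, using only the figures quoted in this thread (so treat them as ballpark, not authoritative):

    # Back-of-the-envelope math from the prices mentioned above.
    iodrive_price, iodrive_gb = 2400, 80   # 80GB ioDrive, ~$2,400
    ssd_raid_price = 12000                 # the 24-SSD RAID "monster", ~$12k

    print(iodrive_price / iodrive_gb)      # ~$30 per GB for the ioDrive
    print(ssd_raid_price / iodrive_price)  # the 24-SSD array costs ~5x as much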


The other thing to consider is the power consumption on the X25-E RAID vs. the ioDrive. I'm not sure which is better, though I would guess the ioDrive. Someone care to look it up? ;)


So, thinking about the "standard" MySQL scale-out approach of master/slave coupled with memcached, what's the new best practice if you have a stack of iodrives sitting around?


Well, I think you're probably solving a different problem with the master/slave. If you're scaling out and caching because you're CPU limited, then nothing changes. If you're scaling out because your disks can't keep up with the parallel read/write traffic, then I think you can drop the cache and the master/slave in many cases (unless you want the hot standby for HA).
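Roughly, the difference in the read path looks like this (illustrative sketch only; db and memcache stand in for whatever client handles you use, and the query helper is made up):

    def get_user(db, memcache, user_id):
        # Cache-aside read: check memcached, fall back to MySQL, repopulate.
        key = "user:%d" % user_id
        user = memcache.get(key)
        if user is None:
            user = db.query("SELECT * FROM users WHERE id = %s", (user_id,))
            memcache.set(key, user)
        return user

    def get_user_direct(db, user_id):
        # If the storage keeps up with the read traffic on its own,
        # the cache layer (and its invalidation logic) can go away.
        return db.query("SELECT * FROM users WHERE id = %s", (user_id,))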


I worked on a pretty big site where we read from the slaves and wrote to the master, but we were still on 4.1 with MyISAM. Given the current size of the ioDrives we would have to shard a lot more than we were, especially for a few large tables. But it would drastically reduce deployment complexity.
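Even a dumb modulo router shows the trade-off: smaller devices mean more shards, but the routing logic itself stays trivial (sketch; the shard names are made up, not what we actually ran):

    # Naive modulo sharding: route each user to one of N small databases.
    SHARDS = ["db01", "db02", "db03", "db04"]  # hypothetical shard hosts

    def shard_for(user_id):
        return SHARDS[user_id % len(SHARDS)]

    # shard_for(12345) -> "db02"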


Or you could simply use lots of RAM?


I think the point of the article is that they need a lot of persistent storage. You'd have to constantly write that RAM back to disk, and then you've either hit the same bottleneck again or you have to come up with some queue system that drops changes when the disk can't keep up. This is a clean solution that really does change how you think you can interact with persistent storage: all those little reads and writes without the performance hit.
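In toy form, the queue system you'd end up building looks something like this (sketch; the drop policy and the disk writer are made up):

    import queue, threading

    # Write-behind queue: mutations land in RAM instantly, a background
    # thread flushes them to disk. When the disk falls behind, the queue
    # fills and you either block (bottlenecked again) or drop changes.
    pending = queue.Queue(maxsize=10000)

    def record_change(change):
        try:
            pending.put_nowait(change)
        except queue.Full:
            pass  # disk is behind; this change is silently lost

    def flusher(write_to_disk):
        while True:
            write_to_disk(pending.get())  # rate-limited by disk IOPS

    # threading.Thread(target=flusher, args=(my_disk_writer,), daemon=True).start()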


I guess I say that because they talk about game-based workloads... I'd assume that only a limited number of events are persisted indefinitely (and/or need to be atomic).


You can keep all your data in RAM, replicate it for fault tolerance (still in memory, but on another machine, another rack, another datacenter) and just dump the entire thing on disk once in a while which would be fast.
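A toy version of that pattern (sketch; replicate_to_peer stands in for whatever transport ships writes to the other machine):

    import json, threading, time

    state = {}  # everything lives in local memory

    def write(key, value, replicate_to_peer):
        state[key] = value             # serve reads straight from RAM
        replicate_to_peer(key, value)  # copy to another box for fault tolerance

    def snapshot_loop(path, interval=300):
        while True:
            time.sleep(interval)
            with open(path, "w") as f:
                json.dump(state, f)    # one big sequential dump, fast even on slow disks

    # threading.Thread(target=snapshot_loop, args=("/tmp/state.json",), daemon=True).start()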


Heh. ioDrive. Like striping 3 Intel X25-E SSDs, only more expensive.



