
You're under 500 IOPS per HDD, period

This is only significant if one is limited to a trivial number of spinning disks. 20 years ago, with separate disk controllers, this was the case.

If you run some benchmarks, I expect you'll find that, for random I/O, N disks deliver better than N times the performance of a single one of those disks.

SCSI provided (arguably) an order-of-magnitude increase in the number of disks per system.

Now, SAS provides another. $8k will buy 100 disks (and enclosures, expanders, etc). How many IOPS is that?
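
A back-of-the-envelope sketch of the arithmetic, assuming roughly 75-100 random IOPS for a 7.2k RPM drive and 175-200 for a 15k RPM drive (assumed ballpark figures, not measurements of any particular array):

  # Assumed per-disk random IOPS (ballpark, not measured):
  # ~75-100 for 7.2k RPM SATA, ~175-200 for 15k RPM SAS.
  disks = 100
  low, high = 75, 200
  print(f"{disks * low:,} to {disks * high:,} random IOPS in aggregate")
  # -> 7,500 to 20,000 random IOPS across the array

Either way, that's two orders of magnitude more random I/O than any single spindle can deliver.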

ETA: The Fujitsu Eagle (my archetype of disk technology from 20-ish years ago) had, IIRC, an average access time of 28ms. If its sequential transfer rate was 1/60th to 1/100th that of a modern disk, what fraction of a modern disk's 4k IOPS could it do?
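
For random 4k I/O the sequential transfer rate barely matters; average access time dominates, so a crude per-disk estimate is simply 1 / (average access time). A sketch of that estimate, assuming ~5.5ms average access for a modern 15k RPM drive (an assumed figure, not a quoted spec):

  # Crude model: small-block random IOPS ~= 1 / average access time.
  eagle_access_s = 0.028    # Fujitsu Eagle, per the figure above
  modern_access_s = 0.0055  # assumed modern 15k RPM drive
  eagle_iops = 1 / eagle_access_s
  modern_iops = 1 / modern_access_s
  print(f"Eagle ~{eagle_iops:.0f} IOPS, modern ~{modern_iops:.0f} IOPS, "
        f"ratio ~{eagle_iops / modern_iops:.0%}")
  # -> Eagle ~36 IOPS, modern ~182 IOPS, ratio ~20%

Under those assumptions, the Eagle manages roughly a fifth of a modern spindle's 4k IOPS.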

Yes, I agree that the solution is to throw more spindles at the problem.

PL/SQL, though, with global data reach and advanced locking states for every single transaction, makes it really hard to move off of a single host. So it's more and more work to get more disks attached to that host, and CPU is a hard upper limit.
