A modern Xeon gives you 20 lanes of PCIe 3.0, which works out to about 273 MB/s per drive just to get data into RAM, and that assumes the bus is exclusively dedicated to disk. At full utilization that's roughly 69k 4K IOPS per drive rather than the rated 500k.
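
Rough numbers as a quick Python sketch, so the arithmetic is easy to check. The 72-drive count isn't stated here, it's backed out of the 273 MB/s figure, so treat it as an assumption:

    # PCIe 3.0 is ~985 MB/s of usable bandwidth per lane (after 128b/130b encoding).
    LANE_MBPS = 985
    LANES = 20
    DRIVES = 72                              # assumed: inferred from the 273 MB/s per-drive figure
    bus_mbps = LANES * LANE_MBPS             # ~19,700 MB/s total into RAM
    per_drive_mbps = bus_mbps / DRIVES       # ~273 MB/s per drive
    iops_4k = per_drive_mbps * 1000 / 4      # ~68k 4K IOPS per drive vs. 500k rated
    print(int(per_drive_mbps), int(iops_4k))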

Then you need to shunt that data back out over the network, using the same bus it just arrived on, most likely after a trip through the CPU to add protocol headers. With a 40 Gbit NIC the per-drive bus bandwidth drops to an absolute max of 204 MB/s, and the available network bandwidth per drive is only 86 MB/s (22k IOPS). So at full whack, in the best possible scenario and assuming no other overheads, you'd still only ever see 4-5% of rated performance. In reality, after accounting for things like packetization (and the impedance mismatch of 4K reads vs. the 1500-byte network MTU) and the work done by the CPU, it's probably safe to knock that number down by another 20-40%, depending on the average IO size.
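
Folding the NIC in, continuing the same sketch (same assumed drive count; the 86 MB/s per-drive network share is taken from the estimate above rather than re-derived):

    NIC_MBPS = 40_000 / 8                      # 40 Gbit/s ~= 5,000 MB/s
    per_drive_bus = (19_700 - NIC_MBPS) / 72   # ~204 MB/s of bus left per drive
    per_drive_net = 86                         # per-drive share of the NIC, per the estimate above
    iops_4k = per_drive_net * 1000 / 4         # ~21.5k 4K IOPS per drive
    print(int(per_drive_bus), int(iops_4k), f"{iops_4k / 500_000:.1%} of rated")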

Sure, they'd kick ass for reads from a small number of clients and the latency would be amazing, but it'd be so much simpler, cheaper, and more flexible to just split the same disks across four or more boxes. And what happens if the motherboard dies? That's a megaton of disk suddenly offline.



