But then you have to write the data N times, and writes are expensive too, especially to flash, as someone else pointed out in this thread.

To me the main argument was that the algorithms don't scale, and not even SSDs can alleviate that. That holds because a single request is bound by the worst-case response time. By spreading the load as described, you no longer are, so the problem the SSDs were supposed to solve simply isn't there anymore.

And even if you want to avoid those SSD latency spikes, said algorithm would work wonders (given that blocking due to erase occurs randomly). You can then wait for the first successful write, for example.
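A minimal sketch of that idea, assuming a hypothetical async `write_replica` call: the same write is issued to N replicas concurrently, and the caller returns as soon as any one of them succeeds, so a single replica blocked on an erase cycle can't stall the request. (All names and delays here are illustrative, not from the original discussion.)

```python
import asyncio
import random

async def write_replica(replica_id: int, data: bytes) -> int:
    """Simulated write whose latency occasionally spikes,
    e.g. an SSD blocking on an erase cycle. (Hypothetical stand-in.)"""
    delay = random.choice([0.01, 0.01, 0.01, 0.5])  # rare latency spike
    await asyncio.sleep(delay)
    return replica_id

async def replicated_write(data: bytes, n: int = 3) -> int:
    """Issue the same write to n replicas; return on the first success."""
    tasks = [asyncio.ensure_future(write_replica(i, data)) for i in range(n)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    # Let the stragglers keep running (or cancel them) depending on how
    # many durable copies you ultimately want.
    for t in pending:
        t.cancel()
    return done.pop().result()

winner = asyncio.run(replicated_write(b"payload"))
print(winner)  # id of whichever replica acknowledged first
```

Note the trade-off from earlier in the thread: if you cancel the stragglers you give up the extra copies, so a real system would typically let all N writes complete in the background and only unblock the caller early.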