The Battle Against Any Raid Five initiative (2016) (baarf.dk)
27 points by awiesenhofer 33 days ago | 18 comments



Where is the 2016 from? This looks like it’s from 2003, when RAID levels were something you had to think hard about. I agree RAID5 is unsuitable for most database uses... but I also haven’t had to think much about it in years. RAID5 was all about maximizing drive space while maintaining some level of reliability. It came at the cost of write speed, but for many use cases it was the ideal mix. For write-intensive applications, RAID10 or equivalent across several pairs of disks is a no-brainer. But these days, in the cloud, the physical disk arrangement is not something I spend much time worrying about.


I learned that RAID5 has a hidden gotcha in its implementation: it maintains just one parity drive's worth of redundancy, no matter how big the array is. This is a bit counterintuitive; you might expect it to instead maintain 1/3 of the available drive space in parity. I found this out when I ran across a customer maintaining a 23-member RAID5 array. It had tons of usable space of course, but could only tolerate a single disk failure, and worse, rebuilding that disk required fully reading all 22 remaining drives to recalculate parity across the whole set. Somehow it hadn't caught fire yet when we discovered it, but it was a disaster waiting to happen.
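
To make the mechanics concrete, here's a toy sketch of single-parity recovery, assuming plain XOR parity; the block contents and array size are made up for illustration:

    import functools

    def xor_blocks(blocks):
        # Byte-wise XOR across blocks: the single-parity function
        # used (per stripe) by RAID4/RAID5.
        return bytes(functools.reduce(lambda a, b: a ^ b, col)
                     for col in zip(*blocks))

    # One stripe of a 4-drive array: 3 data blocks + 1 parity block.
    data = [b'AAAA', b'BBBB', b'CCCC']
    parity = xor_blocks(data)

    # Drive 1 dies. Rebuilding its block means reading EVERY surviving
    # member of the stripe -- with a 23-member array, that's 22 drives.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]

However many drives you add, there is still only one parity block per stripe, so the math above can only ever reconstruct one missing member.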


> just one parity drive

That's not RAID5, that's RAID4.


Technically true, but with RAID5, you still only get one drive's worth of redundancy, no matter how many drives are in the array.


That is the trade-off. It’s not exactly a revelation given the definition of RAID 5.


> Where is the 2016 from?

From the bottom of the page: This page was last updated on 26 Mar 2016.


I'm using RAIDZ2 with 8 drives. It's even more fault-tolerant than traditional RAID1 (it survives any two drives failing at once) and still only eats 2/n of the capacity (n=8) for parity.
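
Back-of-the-envelope for the capacity claim, assuming equal-size drives:

    n, p = 8, 2            # 8 drives, RAIDZ2 keeps 2 drives' worth of parity
    usable = (n - p) / n   # 0.75 -> 75% of raw capacity is usable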


How amusing. I wonder if the original founders have the same objections to modern implementations of RAID-6 and such.

Disk storage has gotten cheap enough, and fast enough, for RAID-1 to be sufficient for my organization's storage needs. We briefly flirted with RAID-5, but it proved troublesome.


Phew! I guess RAID-6 is good.

Probably not.


I'd argue that it is. In their arguments against RAID5 ( http://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt ):

> OK here is the deal, RAID5 uses ONLY ONE parity drive per stripe

In RAID-6 and similar multi-parity schemes you have more than one drive's worth of redundancy, so the probability of complete failure can be driven much lower. However, most people go with RAID-5 because they haven't got enough disks for RAID-6 to make sense, whereas if you have loads (like Backblaze, for instance, using 3 parity out of 15 disks) then it does make sense.
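
A toy binomial model shows the effect of that extra parity. It assumes independent failures (a big simplification; real failures correlate, especially during rebuilds) and a made-up per-drive failure probability:

    from math import comb

    def p_array_loss(n, parity, p_disk):
        # Probability that more than `parity` of the n drives fail
        # within the same window, assuming independent failures.
        return sum(comb(n, k) * p_disk**k * (1 - p_disk)**(n - k)
                   for k in range(parity + 1, n + 1))

    # Illustrative numbers only: 5% chance a given drive dies in the window.
    print(p_array_loss(8, 1, 0.05))  # single parity (RAID5-ish): ~5.7%
    print(p_array_loss(8, 2, 0.05))  # double parity (RAID6-ish): ~0.6%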


It’s fine, especially with solid state drives.


> we’ll arrange various BAARF Party Conventions where we’ll proudly display our logo, which says “I’m NOT talking anymore about RAID-5.”

Hilarious.


I'm going to go out on a limb and bet that this is because too many people use RAID as a backup, instead of as something to help prevent downtime.


They are specific about the RAID levels. RAID 5 is troublesome because your data is smeared across disks. That has all kinds of negative consequences when it comes to performance (write amplification) and recovery (unlikely).


>RAID 5 is troublesome because your data is smeared across disks.

So is RAID0.

>negative consequences when it comes to performance (write amplification)

That's a concern if you're using SSDs; it's not really an issue with spinning rust.

>recovery (unlikely)

Really? I've successfully rebuilt literally thousands of RAID5 arrays over the years, with maybe two dozen failing over that time period, and in most of those cases the rebuilds were not kicked off shortly after the initial disk failure. (I used to work in the hosting industry. Many a customer opted for RAID5 in the servers they leased from us.)

I've opted for RAIDZ3 at home, and don't know that I would recommend anything under RAID6, but I think it's pretty silly to claim that recovery is unlikely.


>>negative consequences when it comes to performance (write amplification)

>That's a concern if you're using SSDs; it's not really an issue with spinning rust.

This is not true. RAID5 has the same "you must write a unit of x at once" problem that SSDs have. Say you've got a 64K chunk size on your RAID5, and the stripe has already been written to: if you're writing a whole 64K chunk, you're fine. But if you're writing a 32K chunk to a stripe that already has data (and you have the same "I wrote the data and deleted it, but maybe the underlying block doesn't know it's deleted" problem you have with SSDs), you need to read all the data in that chunk you want to keep, recalculate parity (the fast part), and then write the whole 64K chunk back out. It really is a lot like the problems you have with SSDs, RAID or no, only you have more choice over the chunk size when building the array.
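
A sketch of that I/O accounting, with made-up geometry (3 data disks plus 1 parity, 64K chunks):

    DATA_DISKS = 3           # data chunks per stripe (a 4-disk RAID5)
    CHUNK = 64 * 1024        # chunk size chosen when building the array
    STRIPE = DATA_DISKS * CHUNK

    def write_cost(nbytes):
        # I/O cost of one write landing in an already-populated stripe.
        if nbytes == STRIPE:
            # Full-stripe write: parity is computed from the data in
            # hand, so nothing needs to be read first.
            return dict(reads=0, writes=DATA_DISKS + 1)
        # Partial write: read-modify-write. Read the old data and the
        # old parity, fold the change into the parity, write both back.
        return dict(reads=2, writes=2)

    print(write_cost(192 * 1024))  # {'reads': 0, 'writes': 4}
    print(write_cost(32 * 1024))   # {'reads': 2, 'writes': 2}

The partial write turns what looks like one write into four I/Os, which is the amplification in question.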

All that said, I have also managed a lot of RAID5 on spinning disk in my day... it absolutely screams for read-only workloads, and write-back caching can usually mitigate the write amplification issues (of course, adding more data consistency issues when you lose power). And while, yes, there is a danger of losing a few sectors during a rebuild, people also leave the hardware write-back caches enabled on their disks. Given the choice, I'd trust data on a RAID5 over data on disks with write-back cache enabled in your typical 'unexpected power loss every three years' situation. Either way, technically, yes, it's not 100% safe, but most of the time it's good enough if you need the performance per dollar.


For 'smearing', read 'striping'.

I think 'recovery (unlikely)' means a problem during recovery is unlikely.

I think RAID 3/4/5 can be better than nothing, and they're not completely worthless, but they come with significant downsides compared to better schemes, which makes them undesirable these days. The I in RAID was put there back when disks were many orders of magnitude more expensive than today; now it's just a false economy not to go to RAID10 or some other better scheme.


The reliability of RAID5 rebuilds depends on regular checking (scrubbing). If you never read most of the data on the disks, which describes many setups, you stand quite a chance of discovering part of it is unreadable the first time you try to read it in years, which will be during a rebuild. And then the rebuild will fail, even though nearly all of the data may well still be readable.
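
A toy calculation of how likely that is, assuming the common spec-sheet unrecoverable-read-error rate of one bit in 10^14 and independent errors (both simplifications):

    URE_PER_BIT = 1e-14   # typical consumer-drive spec-sheet figure

    def p_rebuild_hits_ure(drives_read, tb_per_drive):
        # Chance of hitting at least one unreadable sector while
        # reading every surviving drive end to end during a rebuild.
        bits = drives_read * tb_per_drive * 1e12 * 8
        return 1 - (1 - URE_PER_BIT) ** bits

    # Rebuilding an 8-drive RAID5 of 4TB disks: read 7 survivors in full.
    print(p_rebuild_hits_ure(7, 4))   # ~0.89

Regular scrubbing finds and remaps those sectors while the redundancy to fix them still exists.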



