
Linux RAID is different from Windows for sound technical and historical reasons - dredmorbius
https://plus.google.com/+AlanCoxLinux/posts/QWnM6s2zKWm
======
jamiesonbecker
You can do really cool things with Linux RAID, like RAID stripe across 8
virtual (i.e., EBS @ EC2) volumes and then layer dmcrypt on top of that or
other things for use cases that it was never even designed for. Linux RAID
(and volume management) were really designed The UNIX Way as modular, small
tools that can be applied like Lego bricks, and nowhere is that more user-
visible than in the RAID/LVM/etc subsystems.
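
That kind of layering can be sketched with stock mdadm and cryptsetup. The device names below are hypothetical EBS attachments, not anything from the comment itself:

```shell
# Sketch only: stripe eight hypothetical EBS volumes (/dev/xvdf..xvdm)
# into one md RAID-0 device
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/xvd[f-m]

# Layer dm-crypt/LUKS on top of the stripe, then open and format it
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 striped_crypt
mkfs.ext4 /dev/mapper/striped_crypt
```

Each layer only sees a plain block device, which is what makes the stacking order (RAID under crypt, or crypt under RAID) a free choice.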

~~~
kalleboo
Or just the classic Floppy RAID: [http://mac-guild.org/raid.html](http://mac-guild.org/raid.html) (this is OS X but the same thing applies)

~~~
walrus01
Now do it with multiple 1.2MB 5.25" drives if you can find a system with a
BIOS old enough to support the hardware, yet new enough to run a modern
kernel.

------
kev009
Interesting, if rose-tinted, view; Linux didn't propagate I/O barriers until
2.6.33 on most common mdraid setups, which is absolutely insane.
[https://monolight.cc/2011/06/barriers-caches-filesystems/](https://monolight.cc/2011/06/barriers-caches-filesystems/)

I know this is Linux's Alan Cox, but the post reads like a typical Windows-
hating Linux desktop user's, and is quite amusing and even quaint if you have
just passing familiarity with ZFS internals to contrast.

As far as the layered approach I would also suggest study of FreeBSD's geom
which dips slightly below and above Linux's mdraid (i.e. you still use geom
with ZFS, zvols) but IMHO is a bit cleaner probably because it's a later
arrival.

~~~
creshal
mdraid will still, after some 10 years of the bug being known[1], happily
corrupt all your data if you mix hardware with different queue sizes.

How do you know you have hardware with different queue sizes? Why, you
start to experience data corruption for no apparent reason, of course. There's
no other warning. And the only "workaround" is setting the global queue size
to the lowest common denominator of the hardware installed – and God help you
if you later install hardware with a lower queue size.
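
The per-device limit in question lives in sysfs, so the mismatch between array members can at least be inspected, and the "workaround" above applied, like this (device names are examples):

```shell
# Compare the maximum I/O size each array member advertises
for d in sda sdb sdc; do
    echo "$d: $(cat /sys/block/$d/queue/max_sectors_kb) KiB"
done

# "Workaround": clamp every member to the smallest value found
echo 128 > /sys/block/sda/queue/max_sectors_kb
```

Note the clamp does not persist across reboots, so it typically ends up in a udev rule or boot script.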

[1]
[https://bugzilla.kernel.org/show_bug.cgi?id=9401](https://bugzilla.kernel.org/show_bug.cgi?id=9401)

~~~
skissane
Is that bug actually fixed or not? The bug status is "CLOSED CODE_FIX", but
commenters in the bug say they still have the problem with kernel versions in
which the fix is supposed to be present.

(Aside: I think bug trackers should be configured in such a way that you can't
mark a bug as having been fixed unless you specify the ID of the commit in
which it was fixed.)

~~~
creshal
It's not fixed, I'm still seeing it on kernel 4.8.

------
dredmorbius
Submitter's comment.

Alan Cox, former #2 Linux kernel developer, at G+, on the differences between
Linux and Microsoft RAID support, related to recent Lenovo news,[1] and the
technical and historical basis for this.

I did some massaging of the fourth paragraph, which seems closest to a
lede/head, to fit within HN's 80 character headline limit. The first four
'graphs of the post:

 _"Unsupported models will rely on Linux operating system vendors releasing
new kernel and drivers to support features such as RAID on SSD"_

 _Good to see that the tech press fact check comments from companies as well
as the political press fact check politicians. I'm reading this on a box with
RAID1 SSD. It's had RAID1 SSD for some years._

 _Linux has supported RAID on SSD for years, in fact it supported it from the
moment you could plug an SSD into a Linux PC._

 _Linux RAID is different from much of the Windows experience, for a mix of
sound technical reasons and historical ones._

________________________________

Notes:

1\. [http://www.gossamer-threads.com/lists/linux/kernel/2352338](http://www.gossamer-threads.com/lists/linux/kernel/2352338) (from Cox's post).

~~~
OMGWTF
After looking at the gossamer thread: Could this be _directly_ related to the
current BIOS problems? Lenovo's closed source driver runs into problems ->
Lenovo dev wants to commit workaround to Linux, fails -> force different PCI-
ID as alternate workaround by crippling the BIOS? Firmware gets copy&pasted to
non-servers?

------
dogma1138
Is this in reference to the whole "OMG Microsoft/Lenovo is locking down your
laptop!" scandal?

~~~
dredmorbius
Yes:
[https://news.ycombinator.com/item?id=12568010](https://news.ycombinator.com/item?id=12568010)

------
vt240
Hasn't Windows supported software RAID for quite a while now?

~~~
Shish2k
Ish, and it depends on version -- like I created a RAID1 array for my gaming
box when I was using Windows 7 Pro; then upgraded to Windows 10 Home thinking
"I'm not using any of the Pro features anyway"... turns out that RAID1 is a
Pro feature.

What's worse: if you try and mount the drives, it doesn't say "this looks like
a raid1 volume, I don't have drivers for that, please upgrade to access this
data", it says something more like "data corrupt, would you like to format
this drive? [Yn]" :/

~~~
stephengillie
So RAID1 will someday be DLC in the Windows Store?

------
RubyPinch
> The Linux RAID history is different because unlike Microsoft the decision
> was made to integrate software RAID properly with the OS.

Well there wasn't really a choice was there?

------
cm2187
Does Linux RAID support TRIM? I believe most hardware RAID cards still do
not.

~~~
Qantourisc
Some RAID levels support TRIM at this point (with a modern enough kernel). I
can't find the commit or changelog in a few seconds, but this bug confirms it:
[https://bugzilla.kernel.org/show_bug.cgi?id=117051](https://bugzilla.kernel.org/show_bug.cgi?id=117051)
(Very slow discard (trim) with mdadm raid0 array)
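
Whether discard actually propagates through a given md array can be checked directly, without digging through commit logs. A quick sketch (device and mount paths are examples):

```shell
# DISC-GRAN/DISC-MAX columns are zero when a layer does not pass
# discards through to the devices beneath it
lsblk --discard /dev/md0

# Ask the mounted filesystem to issue discards and report what was trimmed
fstrim -v /mnt/raid
```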

------
mike-cardwell
dmraid is pretty cool. My desktop has 4 disks configured in a "fakeraid" RAID
10 setup. My desktop dual boots Debian and Windows. I can use/share that RAID
10 volume under both Debian and Windows just as easily as I could a single
disk. I use half of the volume for my Windows Steam library and the other half
for my Linux Steam library. I didn't have to do anything special under Linux.
dmraid just made it magically work.
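
For anyone curious, the dmraid workflow is roughly this (the metadata format and resulting device names depend on the motherboard's fakeraid):

```shell
# List RAID sets found in the BIOS/fakeraid metadata on the disks
dmraid -s

# Activate all discovered sets; device-mapper nodes appear under /dev/mapper
dmraid -ay
ls /dev/mapper/
```

The same on-disk metadata is what Windows' vendor driver reads, which is why the volume is shareable across the dual boot.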

------
X86BSD
It's 2016. If you are still deploying RAID, you're doing it wrong. Seriously
guys, ZFS. Look it up.

~~~
fl0wenol
RAID isn't dead yet.

* outboard RAID provides write amplification across physical devices

* soft RAID1 (simple mirror) is nice for things like boot volumes and easy to fix when it breaks

But yeah, ZFS and similar strategies work much better than soft RAID-5/6 for
file store resiliency across visible LUNs.

~~~
kev009
You may want to do a quick search on what write amplification means. It's an
undesirable property of data structures.

I think you perhaps meant fan-out, which is partially valid but addressed in
the submitted article - i.e. bus speeds have improved greatly. In fact, to the
point where you can easily have no bus at all: point-to-point PCIe lanes to
each NVMe device.

A zmirror is simple too, and gives you boot environments so you can roll back
failed upgrades and other magic.
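
A zmirror plus boot environments is indeed short to set up. A sketch, assuming FreeBSD-style tooling (`bectl`) and hypothetical disk names:

```shell
# Mirror two whole disks into a pool (hypothetical device names)
zpool create tank mirror /dev/ada0 /dev/ada1

# Snapshot the current boot environment before an upgrade...
bectl create pre-upgrade

# ...and roll back by activating it if the upgrade goes wrong
bectl activate pre-upgrade
```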

