Exactly. And therefore it needs a license permissive enough to allow that.
So the terms of the original software might restrict creation or distribution of binaries, or anything else.
On a system like that, what would be the point of a fancy hot-swap rack? Do modern large storage arrays do constant maintenance of failed or failing drives? Or does Facebook use hotswapping for something else?
All the high-end arrays, both in terms of performance and availability (IBM DS8000, EMC Symmetrix / V-Max, etc.) have done drive swaps for as long as I've used them.
Without drive swaps, you either have to abandon traditional RAID concepts[1], which might not be a bad thing, or ship with enough standby disks to cope with 5 years of failures.
[1] http://xiostorage.com/products/hyper-ise/
Whether they eventually get around to swapping out failed drives or not, I don't know. I assume they do, since a CPU and memory cost a lot more than a drive replacement, so you want to keep them up and running.
Makes more sense than trying to fiddle around with a defective server while it's still in the rack.
Well, to make this happen, you can't use conventional RAID. (Well, you could use conventional RAID and just set, say, five spares, but once you are out of spares, the array would be binned.)
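For anyone who hasn't done it, this is roughly what designating spares looks like with Linux md. A sketch only: device names are hypothetical, and you'd adjust counts to your chassis. `--spare-devices` (a real mdadm flag) sets aside hot spares at creation time:

```shell
# Create a RAID5 from 8 active devices plus 5 hot spares.
# /dev/sd[b-n]1 (13 devices) is a hypothetical layout.
mdadm --create /dev/md0 --level=5 \
    --raid-devices=8 --spare-devices=5 \
    /dev/sd[b-n]1
# On a member failure, md promotes a spare and rebuilds automatically.
# Once the spares are exhausted, the next failure degrades the array.
```

That's the "binned once you're out of spares" scenario: nothing breaks immediately, but the array has no headroom left.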
The thing is, it's very rare to take out an array when swapping a drive. I've actually done it quite recently, but that was a combination of two mistakes. First, an extraordinarily unwise choice: "Oh, I can get by with a used chassis/backplane, even though it's a brand I don't normally use, and a design I'm probably not qualified to re-qualify, no problem!" Second, me being an idiot when it came time to force-reassemble the RAID. I still don't have good docs on this, but one of my people now understands very well the problem of what data is on what drive, and what order to list drives during the force. I just need to get them to document it. Or, really, I should sit down with them, really understand it, simulate a similar failure, and then document it myself. For now we've instituted a policy that two people need to sign off before using --force with any mdadm command.
I mean, if you are using conventional RAID, you need to keep spares. Heck, you could run a 36-bay RAID and just designate, say, 4 spares, and 90% of the time you'd want to replace the whole chassis before you ran out of spares. But now we're paying what, another 10% for disks? The disks completely dominate the cost of cheap storage arrays: a really nice 36-bay chassis with room for a motherboard costs well under $1500, and for a few bones more you can get a similar chassis with 45 bays and no slot for a motherboard. And, of course, you can go way cheaper than that if you are willing to resort to disks that can't be swapped. But my point is that the cost of the chassis, even those nice SuperMicro chassis where all disks can be swapped without de-racking the thing, is already way below the cost of the disks you are throwing in it, so buying more disks in exchange for a cheaper chassis is likely a false economy.
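Back-of-the-envelope version of that argument (the prices here are my illustrative assumptions, not quotes):

```shell
# Hypothetical prices: $1500 for a 36-bay hot-swap chassis, $150/disk.
chassis=1500
disk=150
bays=36
spares=4

total=$((chassis + bays * disk))     # chassis plus a full load of disks
spares_cost=$((spares * disk))       # the extra disks held as spares

echo "total build: \$$total"
echo "spares: \$$spares_cost ($((100 * spares_cost / total))% of total)"
```

So even with 4 spares, the spare disks are under 10% of the build, and the chassis itself is barely a fifth of it; economizing on the chassis at the cost of extra disks doesn't move the needle much.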
I mean, you do have a point when it comes to labour... it is a big deal to get someone to swap a drive (and if you don't have a spare, the swap needs to happen right quick).
and yeah, swapping drives isn't without danger.
Now, the economics of this change if you have a Ceph-like system where you can fail an arbitrary number of drives in one chassis and still have the good drives function. But those systems are all relatively new and carry quite a lot of complexity overhead. In theory, with such a system, if you have an array half full of good disks, you could migrate that data to a new array completely full of good disks, then remove and refurb the array that is half bad. But want to talk about complexity and chances to screw it up? Yeah.
Also note, most drives I buy come with a warranty, meaning a bad disk is a token good for one completely free disk. 'Enterprise' disks don't fall in price nearly as fast as consumer-grade disks (yeah, go find me a 500GB 'enterprise' 3.5" 7200rpm disk for under $70 that isn't used or refurbished. Yeah, that's what I thought.) And often, if you warranty a really obsolete drive, you get back one that is fairly new. I've warrantied a bunch of WD RE3 drives and gotten back RE4 kit. (The difference is that the RE4 has a larger cache and fewer platters; fewer platters mean fewer/lighter r/w heads, so better seeks, plus less spinning weight and thus less power consumption.)
Of course, that's not a factor if you buy your drives without warranty. I don't know any way to extract even the shipping cost out of bad drives that have no warranty. (If you do, lemme know; I'm giving 'em away right now.) Hell, half of 'em could be resold by unscrupulous folks: I discard drives once they start showing uncorrectable read errors, even if there is enough spare space to remap the bad sector, while some people don't replace a drive until it runs out of remap space entirely (at which point you start seeing consistent bad sectors across badblocks runs).
I mean, if I've got a RAID5 with one bad drive and I pull one of the good drives, the thing is going to immediately hang hard. You can boot into single-user mode and force-reassemble it, and you get the data back (well, mostly; you'd certainly see the same data loss you'd get from yanking the power).
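For anyone who hasn't had the pleasure, the recovery looks roughly like this. Device names are hypothetical, and this is exactly the class of command our two-person sign-off policy exists for:

```shell
# Stop the half-hung array first.
mdadm --stop /dev/md0
# Force-assemble from the members, including the one pulled by mistake.
# --force tells md to accept superblocks whose event counters disagree,
# which is why it's dangerous: you are overriding the safety check.
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Verify the result before trusting it with anything.
cat /proc/mdstat
```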
That said, all my arrays are lit all the time, meaning that even without the red 'bad drive' lights (which I haven't gotten working yet with md), if you just avoid pulling drives with active activity lights, you are good.
Another trick I've tried is labeling the face of the hot swap caddy with the last 4 digits of the serial number of the hard drive. When I put in the ticket to pull the drive, I mention the last 4 of the serial.
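The labeling step is trivial to script. On a live box you'd pull the serial with something like `smartctl -i`; here I use a sample serial string (made up for illustration) to show the last-four extraction:

```shell
# On real hardware, something like this reads the serial:
#   serial=$(smartctl -i /dev/sdb | awk '/Serial Number/ {print $3}')
# Sample serial for illustration:
serial="WD-WCC4N0123456"
label="${serial: -4}"   # bash: last four characters
echo "caddy label: $label"
```

Then the ticket says "pull the drive whose caddy is labeled 3456," and whoever is doing the pull can sanity-check the label against the drive before touching anything.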
(but that hasn't really gotten off the ground, mostly due to a lack of level 1 lackeys.)
Right now I do most of the hard drive swaps myself, so it isn't a huge deal, but it is something I devote quite a lot of thought to; if I could use remote hands, I'd be ahead of the game, but most datacenter remote-hands folks... well, let's just say that they seem to see 'foolproof' as a challenge.