Nobody sells a chainsaw with the safeties disabled, remarking "you can always attach the chain catcher" when people report a couple of accidents. But Lennart thinks requiring almost every user to change the default install (note that you can remount it read-only at boot simply by adding an entry for it to /etc/fstab marked "ro") is a better idea than mounting it read-only and requiring the few who need to write these variables to do some extra work by remounting it read-write.
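(For reference, the change he's describing is a one-line fstab entry; a minimal sketch, assuming efivarfs at its usual mount point:)

efivarfs  /sys/firmware/efi/efivars  efivarfs  ro,nosuid,nodev,noexec  0  0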
> Well, there are tools that actually want to write it.
It needs to be accessible in some manner.
> The ability to hose a system is certainly reason enough to make sure it's well protected and only writable by root.
Agreed: it needs to be guarded, because it's a problem if it can hose a system.
> But beyond that: root can do anything really.
This is a truism for Unix, and altering it is much more of a problem than the occasional bricked machine. This is free software, and the ability to use it to control our hardware to the fullest extent possible is part of that freedom. Root cannot and should not be restricted from doing what it needs to do. It can, however, have hurdles put in place to make it harder to accidentally do something it didn't mean to do.
So, agreement on the problem, and agreement that something needs to be done to fix it, and a caution that access can't be entirely closed off without consequences, so root needs access. What's so horrible about that?
My personal opinion is that, for the use case of a single-user machine (i.e., "Linux on the desktop"), the traditional UNIX root isn't a useful concept. What's actually sensitive is your files, your saved passwords, your browser cookies, your emails, etc.; the ability to reconfigure drivers or replace OS components is only a problem because it allows all of those things to be compromised. Instead of throwing roadblocks in the way of root (remount ro, expose it as something more complicated than a filesystem so innocent users aren't tempted to look at it, etc.), let's take all the stuff that users should be able to do and make it accessible by nonroot, and take all the stuff they shouldn't be able to do and put it in the realm of automation.
UEFI has a pretty reasonable concept here, of a distinction between "boot" and "runtime" configuration; the BSDs have a conceptually similar idea of the securelevel. The idea is that if you're around during the boot, you might be interested in reconfiguring what the machine is doing, but as soon as you're out, you permanently abjure any further ability to reconfigure that. If we take a variant on that, that the OS doesn't abjure those capabilities as a whole but instead reserves them for itself, disabling all user-controlled ability to change things, I think we get very far. You can install new software, but not change the trusted sources for software. You can apply firmware updates via an existing cronjob, but not make arbitrary changes to firmware variables. If you want to do any of that, you have to reboot and enter a special setup mode. Apart from protecting against simple human error, this also serves as a very effective protection against malware: non-OS software simply has no ability to do unwanted things.
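(The BSD securelevel mentioned above is literally a sysctl that root can raise but never lower on a running system; a minimal sketch from a FreeBSD-style box:)

sysctl kern.securelevel      # typically -1 or 0 in multi-user mode
sysctl kern.securelevel=1    # raise it; only a reboot can lower it again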
Then let's use UNIX users (which are a pretty solid isolation technique) to isolate things like your banking info from your Flash-requiring music service.
Chrome OS, incidentally, has this concept of things the OS can do that the user has no interface to, and Chrome OS is basically the most successful Linux on the desktop, so.
There's a difference between a system which, at all times, is one mistyped command from being bricked, and a system which allows you to brick it if you're explicitly doing something dangerous in the first place.
> let's take all the stuff that users should be able to do and make it accessible by nonroot, and take all the stuff they shouldn't be able to do and put it in the realm of automation
While theoretically reasonable, this is also a huge project, and while it is slowly implemented users remain vulnerable to this bug. Additionally, there are always going to be Linux users who find the idea abhorrent, and for them it remains a bug. Choosing to have more control should not generally mean that you can brick your machine as easily as accidentally deleting a few files.
Precisely how many user space apps do you think need access to it?!?
A better example is that `/` is a special case for `rm`.
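(Concretely, modern GNU rm treats `/` specially by default; the refusal looks roughly like this:)

$ rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe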
(Besides: I'm not a fan of `rm -i`, precisely because it's non-default. You'll get used to it, and then someday you'll accidentally delete some files on a system where you didn't add the alias.)
I don't remember anything about my time with SUSE and Ubuntu; that's way back.
* I normally use zsh with grmlzshrc and nearly no customization (which is why I love grmlzshrc -- usable defaults), which warns for `rm -rf foo/*` but not for `rm -rf foo`.
\rm -rf somedir
Yet by default, if you don't include the "\", you get rm -i
.bash_history says I accidentally added `-f` on that host. I should have just checked with `which` instead of trying it out. Oops.
`type rm` is better than `which rm`, but it depends on what distro you're using. Some distros (just checked on RHEL 7) will fancify `which` as well, i.e.:
$ type which
which is aliased to `alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
Yes, I know, but rm ignores -i if -f is given.
[root@example ~]# touch blah
[root@example ~]# rm -i -f blah
[root@example ~]# which rm
alias rm='rm -i'
[root@example ~]# touch blah
[root@example ~]# rm -f blah
Nowadays I spend too much of my time in Windows and mobile OSes to remember it properly.
Maybe once there's some nasty Windows virus out there that fries boards by deleting EFI variables, these stupid vendors will get their shit in order.
Perhaps systemd could shield users from their motherboard manufacturer's stupidity, but the amount of flak being received doesn't seem warranted; has the same amount of outrage been directed at the manufacturers?
If you look at how probing for hardware works in the Linux kernel, quite a few older drivers (and the occasional feature subtree) have auto-probing disabled by default because it tended to crash some types of hardware. You see this again in lm_sensors, where the auto-probe script specifically asks permission (with a default "no") before it tries certain probes that risk crashing the hardware.
We don't see this as much in recent years as hardware has gotten better - listing the device_ids/vendor_ids on a bus is much safer than poking ISA IO ports - but the problem remains. Some hardware needs special hand-holding, especially in the initial probe or initialization stages. Hence, tools that handle those stages are always going to have special cases and other annoyances.
> the amount of flak being received doesn't seem warranted
The flak against Poettering (and the rest of the systemd cabal) is because this isn't the first time his idealism has caused serious problems for others. Yes, it would be nice if we could just mount UEFI areas (or anything else). In reality, as this bug shows, this is dangerous behavior. It should have been made optional, or the entire idea of exposing those UEFI areas as a filesystem should have been reconsidered - a special-purpose tool that handled buggy hardware might have been more appropriate.
Robust systems must work in the real world, not just in an ideal one.
Some boards do have something of a "factory reset" in the form of dual bios so that if one is bricked, the backup can still be booted. But it's not a bug that not every motherboard has this feature.
The UNIX philosophy of "everything is a file" has always been based around the conceptual 'contents' of these 'files'. The kernel doesn't (directly) care if you remove a device node (block or character-based), it doesn't care if you remove a pipe, and it doesn't care if you remove a socket - even though all of these things are interfaces to the kernel in one form or another. Removing these files only removes your ability to access that interface.
Similarly with files in /proc - which, like /sys, is exported directly by the kernel. In many cases the kernel refuses to even allow you to unlink the 'files' in /proc, even when it might make sense. (For example, you can't kill a process by 'rm'ing anything in the relevant /proc directory.) Instead, you update the configuration values that the files in /proc represent by echoing into their contents, where applicable.
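For instance, a quick sketch of that pattern with a real (arbitrarily chosen) sysctl knob, run as root:

echo 10 > /proc/sys/vm/swappiness    # update the value the 'file' represents
cat /proc/sys/vm/swappiness          # reads back 10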
The case of the efivars 'files' actually causing the kernel to take action when they're removed is what's at issue here. The fact that it happens to brick some computers is particularly unfortunate, but does a good job of showing the problem. This isn't what the "everything is a file" philosophy is about.
I think systemd is in the wrong here. Operating systems should be protecting users from bricking their hardware.
Hardware manufacturers should not be shipping hardware that's nearly impossible to brick, because it results in hardware that is nearly impossible to update.
Quite a few boards have a backup EEPROM to initialize from if the primary EEPROM fails to boot, but who knows whether those that do will bother to reinitialize the UEFI variables stored in NVRAM. Honestly, since they are just supposed to be stored in NVRAM, what happens if your board battery runs out of juice? Will it still brick?
Hosing a system is something that can be fixed by root by reinstalling from scratch or restoring from backup. Bricking hardware is not something a sysadmin can fix by himself.
Poettering's attitude of refusing to acknowledge his mistakes and fix them is nothing new.
Someone who doesn't get the difference between deleting the content of a drive and rendering hardware inoperable shouldn't be in charge of a critical system component.
Maybe it's not his responsibility and distro creators need to protect it, I don't know.
I tend to think the Linux on the desktop thing is a bit of a diversion, but this issue and Poettering's response certainly lend weight to the concerns many have raised about systemd.
What happened to defence in depth? Even if your users are all perfectly competent, with this sort of arrangement you're one vulnerability away from having bricked boxes, not just one vulnerability away from reimaging and restoring from backups.
What happened to making systems safe by default? There is no need to make this kind of system so fragile.
What happened to using the right tool for each job? The obsession with making everything look like a file that lives under the same global file system hierarchy causes endless problems, and it's never been entirely clear to me what benefits it's supposed to offer in return. It's interesting that Poettering remarked on exposing /dev/sda as root as well, because it's another excellent example of how broken this model is.
What happened to minimising trust? Of course it's crazy that any hardware/firmware actually allows itself to be permanently broken by a software error like this, but the software should not assume the hardware/firmware will take responsibility for providing any necessary safeguards.
Ultimately, this is just bad engineering and bad UI. It can result, and reportedly has resulted, in serious damage. No amount of dressing it up will change that, and the root cause of the problem should be fixed.
Ultimately access is needed to even configure the bootloader, and right now that means it needs to be mounted RW. Let's go yell at the kernel developers instead of taking it out on systemd for using a virtual filesystem instead of a syscall.
There's no point in mounting efivarfs read-only; you might as well not mount it at all.
Despite the fact that they're primarily a systemd distro, Arch Linux strongly recommends that you mount efivarfs in ro mode and remount rw when doing kernel upgrades.
GRUB2 seems to use efibootmgr to write stuff to EFI, and efibootmgr appears to lack a "detect if efivarfs is mounted ro and temporarily mount it rw" switch, so you have to do that manually too.
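A minimal sketch of the manual dance (disk, partition, and loader path illustrative):

mount -o remount,rw /sys/firmware/efi/efivars
efibootmgr --create --disk /dev/sda --part 1 --label "Linux" --loader '\EFI\linux\grubx64.efi'
mount -o remount,ro /sys/firmware/efi/efivars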
Gentoo's Genkernel-next doesn't seem to do anything at all with EFI.
Dracut (one of Red Hat's initrd management tools) looks like it reads from efivarfs, but does not write to it. (But there's a lot of code in Dracut, so it's entirely possible I missed something.)
It's not entirely clear to me that you can't use BIOS boot with a GPT-formatted disk if your bootloader supports that sort of thing. It's also not clear to me whether or not chainloading from GRUB into Windows is supported with a BIOS/GPT partitioning scheme.
Regardless, it's a pity that the UEFI design committee apparently thought that requiring one to write data into the motherboard's persistent storage in order to boot an OS was a reasonable thing to do.
Why not leave it unmounted entirely, and only mount it when installing new boot entries?
Because there are -presumably- reasons to read EFI variables? If they're exposed through efivarfs, then it's a convenient way to access them.
You'd think that efibootmgr would have a "--temporarily-remount-efivarfs-readwrite" flag to handle that very thing, given hardware vendors' long history of getting UEFI dangerously wrong. :/
And of course, sticking to proprietary solutions in the long run would be a rather sick idea.
That's what distros are for. The "Linux" family of operating systems are used for a very wide variety of uses, some of which are orthogonal (and sometimes incompatible) with "desktop" features.
Something like "Year of the KDE desktop" or "Year of the Linux Mint desktop" would make much more sense than trying to shove all of Linux into one pigeon hole. In fact, even within "desktop" there is variation. While an easy to use, mostly automagic desktop is a good idea, that's not the only definition of "desktop".
Ultimately, this is the biggest problem with the idea of "THE Linux desktop": one size never fits all. Fortunately, Linux distributions exist allowing for great variety - including "everything just works" desktop distributions. Just remember that the Linux ecosystem is larger than one type of distro.
And systemd, and 15+ years ago the "desktop environments" that insisted there should be only one GUI toolkit, etc.
This assumes distros decide to keep their own identity instead of continuing on the homogeneity bandwagon that many joined when they converted to systemd. Turning into a clone of Red Hat is the path to obsolescence.
When that happens, I don't think Canonical or Red Hat will be able to compete; only Google will be able to create an OS on top of Linux, and open-sourcing won't do any good because the most important services will be behind Google's servers.
That the dominant platforms have any customers at all just illustrates how mediocre the entire software industry is.
If software were medicine, we would still be leeching people and subscribing to the miasma theory of disease.
The desktop / personal user domain is looking like it's in the early stages of transformation, with mass data collection to support voice search and user intent prediction. While there's something to be said for privacy, I don't think consumers will care enough, and I don't think desktop linux systems outside of whatever Google may make is going to be adequately competitive.
This will likely not be a server concern. Engineers probably won't be interested in a service that opaquely collects data and sends it off to remote locations just so that they can say, "Cortana, run the Ansible script to redeploy my API server." But desktop users would find value in saying, "Cortana, move all email from unknown entities to my todo list."
And if my assertion holds that intention prediction and voice search will be competitive features for users, features that rely on mass data collection and expensive top talent, then I think organizations like Canonical or Red Hat will fall behind in the desktop/mobile/user OS space.
Apple's no M$, but they are certainly competitive.
When I think of what organizations will be competing on machine-learning backed features like intention prediction or voice search, I think of Apple, Microsoft, or Google. Those OS features won't be inside the OS. They'll be backed by a wealth of proprietary data hidden behind company servers.
I think user expectations for what a modern OS ought to do will evolve, and I don't think Canonical or Red Hat will be able to catch up to Microsoft, Apple, or Google.
Eventually, what we think of as a "Linux OS" will be Android (and, if you like, OS X) and maybe some other offerings by these big giants, but the most user-demanded features won't be in the OS. Therefore, Samsung can't just fork Android and expect to compete sans Google, and Canonical or Red Hat won't be able to compete either.
Apple can't take code from Linux, as the code is GPL-licensed. OS X is largely based on NeXT and BSD. Linux and OS X are both POSIXish, though.
Offices most certainly could run linux only if they weren't chickenshit (I mostly kid, but after seeing how broken the desktop management model is now anyway, I don't see why Linux can't be used more in Corporate.)
Either way is going to break things, and I'm sure there'd be people lining up to complain about making it ro by default or adding some sort of safety layer. Might as well leave it as it is and let the various distros do what they want with it. I do think someone should do something, and this sort of thing shouldn't be possible, but I can understand the stance of leaving it up to the distros as a reasonable compromise.
Last week after a storm I watched two blokes cut up some fallen wood. One held the chainsaw, the other ran the wood over the blade.
When you start using tools at a low enough level, there are some things you cannot guard against.
Mounting efivarfs read/write by default
can lead to accidental deletion of the
EFI variables. -- laloch
Anyone can "rm -rf" anything, that doesn't mean that they should.
I know, I was one of those before my bank got newer, smarter ATMs.
Actually, it's more like this. I remember when I used to blame my users for what my software could do.
Systemd hate is en vogue these days so they are an easy and common target. Why no invective towards the kernel that actually implements EFI-as-a-filesystem?
It also supports --one-file-system, which would prevent this and a host of other problems as well. That said, I don't really see a problem with Lennart's response. It's basically, "we should take steps to make this hard to do, but root is capable of doing anything, so don't expect it to be foolproof."
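(A quick sketch of what --one-file-system buys you; needs root for the mount, and rm's message is roughly as GNU coreutils prints it:)

mkdir -p /tmp/demo/mnt
sudo mount -t tmpfs tmpfs /tmp/demo/mnt
sudo touch /tmp/demo/mnt/file
sudo rm -rf --one-file-system /tmp/demo
# rm: skipping '/tmp/demo/mnt', since it's on a different device
# the mounted tree survives, and rm then fails to remove the non-empty parent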
In any case, the only thing clear to me from his statements is that he agreed there was a problem in need of attention, which makes the response here somewhat baffling to me (although not as baffling as it would otherwise be; it's fairly easy to see how a lot of the animosity comes from feelings about systemd in general and Lennart in particular, especially since some people state as much, as if that had any bearing on his response in this instance).
Everything is a file.
A filesystem is a perfectly reasonable way to implement access to the EFI vars.
To be fair, Lennart Poettering could probably end war and famine forever in a single day, and some people would still find something wrong with that.
On the other hand a certain amount of skepticism and criticism is very much in order. (I am saddened, though, by the way online discussions so easily deteriorate into name-calling and bitter ranting.)
Just to be clear, I was highly skeptical of systemd initially, now I have two systems running Debian Jessie, and quite honestly I haven't noticed much of a difference one way or the other. So at least now I am more confident that we are not all going to die because of systemd. On the other hand, I wish there had been more initial coordination / portability work to make sure stuff like GNOME keeps working smoothly on *BSD as well.
When you have to spend two days getting to a remote server room and back because systemd suddenly decided that a device in fstab that wasn't present at boot should halt the boot with an error, while at the same time the emergency shell was broken and stuck in a loop asking for credentials, on a Debian server, in production...
Ok systemd works most of the time but after experiencing a few of these breakages I've become wary of anything systemd.
Whatever it brings to the table is not worth the wasted time and headaches it causes.
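(Incidentally, the fstab failure mode described above has a documented escape hatch: mark non-essential mounts "nofail", optionally with a device timeout, and systemd won't hold the boot hostage for a missing device. Device and mountpoint here are illustrative:)

/dev/sdb1  /data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2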
Apart from the two-item sample of personal experience I am kind of torn about systemd - some of the problems it attempts to address are real, and addressing these in general seems like a good idea. The way it does so, however, sometimes (I am being deliberately vague here) brings along some problems of its own.
And the further systemd adoption and systemd's mission creep go along, the harder it becomes to backtrack and replace it in case somebody comes up with a better solution.
Like I said, I have not experienced any of these problems myself, but I have read a couple of reports from people that had problems with systemd that were definitely not just aesthetic.
That's the problem any time a system is designed around ideological purity.
Someone invents something clever that works most of the time for most people (with varying definitions of "most"), so they try to apply it everywhere. As no idea can account for everything, sooner or later the inflexibility of the theory meets the variation of reality and some type of drama happens.
Usually it's better to avoid anything that exhibits that kind of inflexibility and hubris. Systems that build in ways of handling the unexpected, or of patching around their own weaknesses, are necessary; otherwise drama is inevitable.
Not just computer systems - this is true of most systems humans build. We see the same problems in religion, politics, and social constructs. It's such a common behavior, I suspect there may be an evolutionary basis for it (using a single common rule for many situations requires less energy).
Even the physicists are still working on that problem.
The problems systemd tried to address are real, and some of the ideas behind it are appealing. But the way it attempts to replace several important pieces of the system at once, and the way it is being forced on people in a "eat it or leave it"-style feels uncomfortable.
Part of what made Unix what it is today is the idea to build a system that might be very far from perfect but that is easy to improve in small, incremental steps so one quickly gets a feeling for what works well and what does not.
Event-based service management is interesting, too, e.g. shutting down a network service when the machine is disconnected from the network and restarting it when a connection becomes available again (think NTP or DHCP clients).
Once you have an idea of how services depend on each other, you get the ability to start services in parallel for free (whether that is so useful or even a good idea is another question).
Given the fact that systemd is hardly the first attempt to solve these issues (think SMF on Solaris, launchd on OS X, or the couple of attempts on GNU/Linux), I think a lot of people have felt the itch to improve on the classical SysV init.
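(For the curious, a hedged sketch of what that event-based idea looks like as a systemd unit; the daemon name and eth0 are illustrative, and BindsTo= ties the service's lifetime to the device, stopping it when the device goes away:)

[Unit]
Description=Example daemon tied to a network device
BindsTo=sys-subsystem-net-devices-eth0.device
After=sys-subsystem-net-devices-eth0.device

[Service]
ExecStart=/usr/local/bin/exampled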
You can't 'ls' your network connections.
And for tricky APIs that don't match the filesystem view, we have the fcntls.
You sure can in Plan 9. Unfortunately (?) Plan 9 is more Unix than Unix itself.
It is as smart as saying that with a heavy enough hammer you can make a cube fit in a triangle. It is possible, but it defeats the purpose of a consistent API.
He's suggesting that mounting it read-only by default would break a few things like grub-install, yet promptly suggested editing /etc/fstab to mount it read-only by default.
So... his own solution to the problem is to mount it as read-only.
The hard drive analogy is ridiculous because while you can lose data, you cannot brick your hard drive or computer by erasing it. There are ways to brick a hard drive by corrupting the firmware, but there are very specific tools like "hdparm" that you have to use to do it, not an everyday command like "rm".
This makes me wonder why Linux only has a single "root" user rather than multiple types of administrators. Windows, for example, has Administrator, which can do nearly everything, but some files and processes are protected under SYSTEM or TrustedInstaller. If you REALLY want to, you can still remove those files by changing ownership, or launching a command prompt with SYSTEM permissions, but it does add a significant hurdle to doing things that could cause significant damage.
The real crisis here is that modern firmware is developed in an environment so fragile and isolated that it's simply unable to recover from configuration that is "wrong".
I had a laptop that I bricked about two years ago by accidentally wiping out the partition table on the soldered-down storage.
I agree that the stupid firmware is ultimately to blame, and mounting this stuff read-write is a fairly reasonable initial decision. But given the existence of this stupid firmware, the resistance to implementing an easy mitigation by mounting it read-only is insane. I can't understand why there are any responses besides "oh shit, we'd better change this."
Having to reinstall an OS because some script kiddie decided to `rm -rf --no-preserve-root /` your server bites, but actually bricking the system requiring a board replacement is scary. (Of course, they could technically be a jerk and flash your UEFI EEPROM, but most servers have a backup they will restore from in case the primary has been corrupted).
EDIT: Though, now that I think about it, this is still only going to stop users from hosing their systems. A malicious user could still run an arbitrary binary or kernel module that hoses the UEFI variables as well. Or modify the UEFI boot config to load some other custom UEFI binary to hose it... firmware vendors really just need to get their crap in gear; software (even running in UEFI boot services mode) shouldn't be able to hose it.
Or EFI firmware not being so stupid, but we know that's not going to happen.
Allowing accidental access to a resource capable of doing permanent damage is dumb. Even embedded processors generally have an "unlock/lock" sequence when you want to write to their flash memory.
The fact that the kernel doesn't require you to write a "magic number" somewhere to gain access is foolhardy.
Turns out Linux used to store crash dumps inside EFI vars, so mine were full and new entries couldn't be written.
Good thing /sys/firmware/efi/vars/ was mounted rw, thus a `rm -rf /sys/firmware/efi/vars/dump-*` fixed it.
Some kind of protection would be nice, but the ability to see and use efi-vars as a filesystem is useful too.
If we listen to security researchers too much, our computers will be no fun.
Anyway, I don't think this is a systemd bug. The /sys interface comes from the kernel, and whether you can access EFI vars depends on your kernel configuration.
There is the older module (I believe _efivars_, if it isn't compiled as built-in) and the newer one (I believe _efivarfs_).
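A quick way to check which interface a given system exposes (paths per the mainline kernel):

ls /sys/firmware/efi/vars      # older efivars sysfs interface
ls /sys/firmware/efi/efivars   # newer efivarfs filesystem
mount | grep efivarfs          # shows the efivarfs mount, if present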
I've done my fair share of systemd bashing but here we have:
1. Broken and/or naive EFI implementations
2. Linux kernel's default choices (expose efivars as rw)
3. Distro's choices (compile kernel with said modules as built-in)
Well... if there were an API requiring, for example, a special syscall (or, maybe even uglier, an mmap of /dev/mem), I'm 100% sure that the default install of all Linux distributions would include an /sbin/efivars command exposing this API to every shell script running as root... So nothing would be gained.
Safeguards against accidental damage, on the other hand, I'm completely in favour of.
All software you install on your system through binaries is potentially suspect to an extent. Sure, I trust the Fedora build system to do the right thing, so I can with some certainty assume anything in the Fedora Collection or EPEL is "safe" - but when I need to install 3rd-party software to get my video card working, I can throw all that out the window.
Be sane when testing any binary you install. At least with RPMs or DEBs I can verify the GPG key of the vendor - assuming they bother to use one at all... Piping scripts directly from curl to bash is bad because of the potential of the stream ending early and leaving the system in an inconsistent state; that's the only thing that is notably worse versus installing a package (and it can be worked around by downloading the script first, THEN executing it).
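(The download-first workaround, spelled out; URL illustrative:)

curl -fsSL -o install.sh https://example.com/install.sh
less install.sh    # eyeball it before running
sh install.sh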
As an aside, goddamn is UEFI ugly. With BIOS it was easy -- set the partition you want to boot as bootable, and away you go. Here you have to copy the bootloader file, and then write to the firmware which file you want to boot, which feels even more fragile, and gets confusing if you decide to wipe a hard drive to put a new OS on it, unless you do everything just right.
You're goddamn right it is. Matthew Garrett used to do UEFI stuff for -IIRC- Red Hat. He has MANY blog posts about fucked-up parts of the spec, and horrifyingly fucked-up implementations of the same. Sadly, it looks like most (if not all) of his posts are now tagged "advogato, fedora", so I can't point you specifically to the UEFI gore posts.
The problem comes in with the small handful of UEFI firmware implementations that get EFI variables so horribly wrong that attempting to write to them in the standard way will cause the EFI firmware to overwrite itself or otherwise brick the machine. Linux does attempt to work around some of those systems through very careful handling of EFI variable writes.
I was installing a graphics card once and broke my EFI setup. Not because of the graphics card, but because I apparently bumped the CMOS battery enough to wipe settings.
or maybe the first 512 bytes, if a sector is bigger
echo 1 >/sys/...efi.../enable_remove
echo varname >/sys/...efi.../remove_var
(yes I know about --no-preserve-root)
Uh, what? Somebody was trolling you if they gave you that advice. In the days of NFS or AFS (God, I'm old) that could get you into serious trouble if it wandered into somewhere unprotected.
dd if=/dev/zero of=/dev/<somedisk>? Sure. Bog standard guaranteed erase of the disk. Doing that even once means that you're now in NSA territory to recover it.
rm -rf /? Never heard of that, ever.
Or, in other words: I agree completely; in fifteen years I have never used 'rm -rf /', I always used mkfs to create a fresh filesystem.
It certainly doesn't remove any file contents and to get a clean state, surely reformatting would be both easier and quicker?
> Unix has a long tradition of doing a `rm -rf /` before reinstalling a system. I don't agree with the people saying "don't do that".
So, yes, "tradition".
Hope that helps.
I'm no fan of systemd or Poettering, but this behavior is consistent with Unix: root can do anything, including breaking peripherals or hardware through poorly advised actions. Why is this a big to-do?
The Unix way isn't some holy-grail, holy-shit perfect way. It's the simplest. Simple means it's simpler to unfk. Because it will get fk'd.
Rendering someone's hardware DOA is a really bad thing. Leaving that as a vector of attack is irresponsible at best.
If Red Hat leaves this in, expect legal issues.
2) remounting a mounted FS is a non-trivial operation with lots of corner cases. What happens when another program is accessing that FS at the same time?
It's simple--the kernel needs to be fixed. Access to EFI should require special sequences to cause writes.
The systemd folks aren't going to ask for it because Linus will simply piss and moan and close it as WONTFIX/NOTABUG.
One is efivarfs making it too easy to destroy stuff. It should probably identify problematic hardware and prohibit modifications that break that hardware. This is no fun, but hardware-specific workarounds are a fact of life for real-world OSes.
One is, as you say, bad default behavior with rm. Crossing filesystems by default is pretty weird. Preventing that unless it's explicitly requested would largely fix this problem and many others too.
And one is mounting efivarfs read-write by default. It's too dangerous to be that easy to modify.
I don't think it's too useful to try to pin one of these down as "the issue." They're all worth fixing.