Systemd mounted efivarfs read-write, allowing motherboard bricking via 'rm' (github.com)
180 points by dogecoinbase on Jan 29, 2016 | 173 comments



The comments by Lennart Poettering, IMO, are a prime example of the schism between what is needed for reaching "the year of Linux on the desktop" and what some hackers think is best.

Nobody sells a chainsaw with safeties disabled, remarking "you can always attach the chain catcher" if people report a couple of accidents, but Lennart thinks requiring almost every user to change the default install ("note that you can remount it readonly at boot, simply by adding an entry for it into /etc/fstab, that is marked 'ro'") is a better idea than mounting it read-only and requiring the few who need to write these variables to do some extra work by remounting it read-write.
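The remount-read-only workaround Lennart refers to looks roughly like this (a sketch, assuming the standard efivarfs mount point documented by the kernel; the remount steps obviously need root):

```shell
# /etc/fstab entry keeping EFI variables read-only at boot
# (mount point per the kernel's efivarfs documentation):
#   efivarfs  /sys/firmware/efi/efivars  efivarfs  ro  0 0

# When a tool legitimately needs to write a variable, flip it
# temporarily and flip it back (root only):
mount -o remount,rw /sys/firmware/efi/efivars
# run the tool that writes EFI variables here
mount -o remount,ro /sys/firmware/efi/efivars
```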


That is an extremely uncharitable reading of what he wrote, and I don't think it captures the essence of his intent at all. He said:

> Well, there are tools that actually want to write it.

It needs to be accessible in some manner.

> The ability to hose a system is certainly reason enough to make sure it's well protected and only writable to root.

Agreement it needs to be guarded, because it's a problem if it can hose a system.

> But beyond that: root can do anything really.

This is a truism for Unix, and altering this is much more of a problem than the occasional bricked machine. This is free software, and the ability to use it to control our hardware to the fullest extent possible is part of that freedom. Root cannot and should not be restricted from doing what it needs. It can, however, have hurdles put in place to make it harder to accidentally do something it doesn't mean to do.

So, agreement on the problem, and agreement that something needs to be done to fix it, and a caution that access can't be entirely closed off without consequences, so root needs access. What's so horrible about that?


Nobody is saying root shouldn't be able to access such functionality but rather it just shouldn't be mounted rw by default. This is actually fairly uncomplicated.


But then whatever app that runs as root that actually needs to update it is going to remount it rw, at which point the original problem stands.

My personal opinion is that, for the use case of a single-user machine (i.e., "Linux on the desktop"), the traditional UNIX root isn't a useful concept. What's actually sensitive is your files, your saved passwords, your browser cookies, your emails, etc.; the ability to reconfigure drivers or replace OS components is only a problem because it allows all of those things to be compromised. Instead of throwing roadblocks in the way of root (remount ro, expose it as something more complicated than a filesystem so innocent users aren't tempted to look at it, etc.), let's take all the stuff that users should be able to do and make it accessible by nonroot, and take all the stuff they shouldn't be able to do and put it in the realm of automation.

UEFI has a pretty reasonable concept here, of a distinction between "boot" and "runtime" configuration; the BSDs have a conceptually similar idea of the securelevel. The idea is that if you're around during the boot, you might be interested in reconfiguring what the machine is doing, but as soon as you're out, you permanently abjure any further ability to reconfigure that. If we take a variant on that, that the OS doesn't abjure those capabilities as a whole but instead reserves them for itself, disabling all user-controlled ability to change things, I think we get very far. You can install new software, but not change the trusted sources for software. You can apply firmware updates via an existing cronjob, but not make arbitrary changes to firmware variables. If you want to do any of that, you have to reboot and enter a special setup mode. Apart from protecting against simple human error, this also serves as a very effective protection against malware: non-OS software simply has no ability to do unwanted things.

Then let's use UNIX users (which are a pretty solid isolation technique) to isolate things like your banking info from your Flash-requiring music service.

Chrome OS, incidentally, has this concept of things the OS can do that the user has no interface to, and Chrome OS is basically the most successful Linux on the desktop, so.


> But then whatever app that runs as root that actually needs to update it is going to remount it rw, at which point the original problem stands.

There's a difference between a system which, at all times, is one mistyped command from being bricked, and a system which allows you to brick it if you're explicitly doing something dangerous in the first place.

> let's take all the stuff that users should be able to do and make it accessible by nonroot, and take all the stuff they shouldn't be able to do and put it in the realm of automation

While theoretically reasonable, this is also a huge project, and while this project is slowly implemented users are still vulnerable to this bug. Additionally, there are always going to be Linux users who find the idea abhorrent, and thus it remains a bug for those users. Choosing to have more control should not generally mean that you can brick your machine as easily as accidentally deleting a few files.


Yeah, and that app isn't rm.

Precisely how many user space apps do you think need access to it?!?


Yes. And all I see from Lennart is agreement and the statement of a few facts (some programs need access), which implies that a slightly more nuanced solution may be required. I fail to see how this reflects poorly on him.


The point is that that would offer no benefit. A root user can just mount it rw again in a single call.


So in a similar vein, why bother with alias rm='rm -i' at all?


But that doesn't actually exist by default, and so it's not a good example.

A better example is that `/` is a special case for `rm`.
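For reference, the special-casing mentioned here is GNU coreutils' `--preserve-root` behavior, which has been the default since coreutils 6.4, if I recall correctly (a sketch; the exact error wording varies by version, so the dangerous lines are shown only as comments):

```shell
# Modern GNU rm refuses to operate recursively on '/' by default:
#   rm -rf /                     # refused: --preserve-root is the default
#   rm -rf --no-preserve-root /  # the explicit, deliberate override
# Confirm the failsafe exists in the local rm:
rm --help | grep -- --preserve-root
```

Note that this protection only covers `/` itself, not `/sys/firmware/efi/efivars` or any other dangerous subtree.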

(Besides: I'm not a fan of `rm -i`, precisely because it's non-default. You'll get used to it, and then someday you accidentally some files on a system where you didn't add the alias.)


Lots of GNU/Linux distributions used to ship it by default; has it changed?


None I currently use (just tested with bash* on: Debian, Centos, Fedora, OpenWRT) or have used in the past few years (Arch) has it.

I don't remember anything about my time with SUSE and Ubuntu, that's way back.

* I normally use zsh with grmlzshrc and nearly no customization (which is why I love grmlzshrc -- usable defaults), which warns for `rm -rf foo/*` but not for `rm -rf foo`.


Red Hat-based distros absolutely do enable alias rm='rm -i' out of the box, but only for the root user. As a career SysAdmin, that seems like a good compromise. If you want to take the safety net off, you can escape the alias (or any alias) with a backslash, i.e.:

\rm -rf somedir

Yet by default, if you don't include the "\", you get rm -i


You're right about RedHat, sorry.

.bash_history says I accidentally added `-f` on that host. I should have just checked with `which` instead of trying it out. Oops.


Actually even if you do rm -f, the -i will still be added if it is an alias. Better is to use the bash builtin "type":

`type rm` is better than `which rm`, but it depends on which distro you're using. Some distros (just checked on RHEL7) will fancify `which` as well, i.e.:

  $ type which
  which is aliased to `alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'


> Actually even if you do rm -f, the -i will still be added if it is an alias.

Yes, I know, but rm ignores -i if -f is given.

  [root@example ~]# touch blah
  [root@example ~]# rm -i -f blah
  [root@example ~]# 
And hence:

  [root@example ~]# which rm
  alias rm='rm -i'
  	/usr/bin/rm
  [root@example ~]# touch blah
  [root@example ~]# rm -f blah
  [root@example ~]#
@type: yes, you're right, thanks for the reminder.


Maybe I am wrong but I think Red Hat, Mandrake and SuSE used to do it.

Nowadays I spend most of my time in Windows and mobile OSes, so I don't remember it properly.


Sane distributions don't include that alias by default. Because otherwise people will always use `rm -f` and lose the feature that `rm` asks before removing files marked as read-only.


Then quite a few distributions are not sane, as I remember once upon a time it was quite common to have it like that on a fresh install.


Root needs access, but creating a pseudo-filesystem to access an area of the system whose metadata doesn't map well to a filesystem is just asking for trouble, especially given the number of very buggy UEFI implementations out there. There are properties that are set to be visible only at certain times during the boot process, there are settings that involve changing GUIDs, etc. Access is better served through tools that understand UEFI's confusing properties, not hammered at with general filesystem-level tools.


I agree with you, but then we delve into the fun of needing to implement a syscall for this, or something in sysfs that limits the damage that can be done. I suppose it's better than the current efivars filesystem, but neither is a wonderful solution when we have to limit access to hardware that could potentially do useful things (without patching the kernel, of course).


It's useful, yes, but because of the buggy implementations out there, the risk in bricking, wanton removal of important variables, etc, is just too great. If anything, it should be a device node or the like. /dev/efi makes a lot more sense given its structure and purpose than trying to abstract it into looking kinda sorta like a filesystem.


My big problem at the end of the day is that while we can help prevent users from killing their own boards, we still NEED to be holding these firmware vendors accountable. No matter what abstraction is presented to user space, once someone has root on my system they could potentially destroy my hardware.

Maybe one day, when there is some nasty Windows virus out there that fries boards by deleting EFI variables, these stupid vendors will get their shit in order.


Not only is root root, and should be permitted to access and modify the system as desired, but it seems to me that the root of this problem is that motherboard manufacturers are shipping faulty boards. I shouldn't be able to brick my motherboard from my operating system, and that I can is a bug. You're telling me there isn't so much as a factory reset on these boards?

Perhaps systemd could shield users from their motherboard manufacturer's stupidity, but the amount of flak being received doesn't seem warranted; has the same amount of outrage been directed towards the manufacturers?


It is true that this is ultimately a hardware/firmware problem, but if the solution is waiting for motherboard manufacturers to not have bugs or stupid designs, then we will be waiting forever. Badly designed hardware will always exist.

If you look at how probing for hardware works in the Linux kernel, quite a few older drivers (and the occasional feature subtree) have auto-probing disabled by default because it tended to crash some types of hardware. You see this again in lm_sensors, where the auto-probe script specifically asks permission (with a default "no") before it tries certain probes that risk crashing the hardware.

We don't see this as much in recent years as hardware has gotten better - listing the device_ids/vendor_ids on a bus is much safer than poking ISA IO ports - but the problem remains. Some hardware needs special hand-holding, especially in the initial probe or initialization stages. Hence, tools that handle those stages are always going to have special cases and other annoyances.

> the amount of flak being received doesn't seem warranted

The flak against Poettering (and the rest of the systemd cabal) is because this isn't the first time his idealism has caused serious problems for others. Yes, it would be nice if we could just mount UEFI areas (or anything else) as filesystems. In reality, as this bug shows, this is dangerous behavior. It should have been made optional, or the entire idea of exposing those UEFI areas as a filesystem should have been reconsidered - a special-purpose tool that handled bugged hardware might have been more appropriate.


Fixing every vendor's implementation is an impossible task. Ensuring that the default configuration can't brick your motherboard is an easy one. So do the easy thing to ensure user safety WHILE working to solve the hard one to the extent feasible.

Robust systems must work in the real world, not just in an ideal one.


We've been living in a world where you couldn't brick the firmware from the OS for a long time, and it's been terrible. I'm perfectly happy to take the risk of being able to brick a board from the OS if it means I never again have to try to boot a DOS disk and run a proprietary exe to update the BIOS.

Some boards do have something of a "factory reset" in the form of dual bios so that if one is bricked, the backup can still be booted. But it's not a bug that not every motherboard has this feature.


The point isn't that it can be bricked; it's that it's possible to brick it via an "rm". Or heck, doing anything hardware-related via an 'rm'.

The UNIX philosophy of "everything is a file" has always been based around the conceptual 'contents' of these 'files'. The kernel doesn't (directly) care if you remove a device node (block or character-based), it doesn't care if you remove a pipe, and it doesn't care if you remove a socket - even though all of these things are interfaces to the kernel in one form or another. Removing these files only removes your ability to access that interface.

Similarly with files in /proc - which, like /sys, is directly exported by the kernel. In many cases the kernel refuses to even allow you to unlink the 'files' in /proc, even when it might make sense. (For example, you can't kill a process by 'rm'ing anything in the relevant /proc directory.) Instead, you update the configuration values that the files in /proc represent by echoing into their contents, where applicable.
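A quick illustration of that distinction, safe to run unprivileged (the write at the end is shown only as a comment, since it needs root):

```shell
# /proc entries are kernel interfaces: you read and write their
# contents, but the kernel refuses to unlink them, even as root.
cat /proc/sys/kernel/ostype       # prints "Linux"
rm -f /proc/sys/kernel/ostype 2>/dev/null \
  || echo "unlink refused, as expected"
# Configuration happens through the contents instead (root only), e.g.:
#   echo 1 > /proc/sys/net/ipv4/ip_forward
```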

The case of the efivars 'files' actually causing the kernel to take action when they're removed is what's at issue here. The fact that it happens to brick some computers is particularly unfortunate, but it does a good job of showing the problem. This isn't what the "everything is a file" philosophy is about.


I was not defending SystemD, I was defending the hardware manufacturers against the accusation that they're "shipping faulty boards".

I think SystemD is in the wrong here. Operating Systems should be protecting users from bricking their hardware.

Hardware manufacturers should not be shipping hardware that's nearly impossible to brick, because it results in hardware that is nearly impossible to update.


> You're telling me there isn't so much as a factory reset on these boards?

Quite a few boards have a backup EEPROM to initialize from if the primary EEPROM fails to boot, but who knows if those that do will bother to reinitialize the UEFI variables stored in NVRAM. Honestly, since they are just supposed to be stored in NVRAM what happens if your board battery runs out of juice, will it still brick?


Hosing a system is way different from bricking a motherboard.

Hosing a system is something that can be fixed by root by reinstalling from scratch or restoring from backup. Bricking hardware is not something a sysadmin can fix by himself.

Poettering's attitude of refusing to acknowledge his mistakes and fix them is nothing new.

Someone who doesn't get the difference between deleting the content of a drive and rendering hardware inoperable shouldn't be in charge of a critical system component.


I agree with everything you wrote, but he did close the issue, which generally implies "I'm not going to do anything about it".

Maybe it's not his responsibility and distro creators need to protect it, I don't know.


If they changed it to "ro" by default, people would complain as well if it breaks something.


He who play in root, kill tree.

-Slackware fortune file


The comments by Lennart Poettering, IMO, are a prime example of the schism between what is needed for reaching "the year of Linux on the desktop" and what some hackers think is best.

I tend to think the Linux on the desktop thing is a bit of a diversion, but this issue and Poettering's response certainly lend weight to the concerns many have raised about systemd.

What happened to defence in depth? Even if your users are all perfectly competent, with this sort of arrangement you're one vulnerability away from having bricked boxes, not just one vulnerability away from reimaging and restoring from backups.

What happened to making systems safe by default? There is no need to make this kind of system so fragile.

What happened to using the right tool for each job? The obsession with making everything look like a file that lives under the same global file system hierarchy causes endless problems, and it's never been entirely clear to me what benefits it's supposed to offer in return. It's interesting that Poettering remarked on exposing /dev/sda as root as well, because it's another excellent example of how broken this model is.

What happened to minimising trust? Of course it's crazy that any hardware/firmware actually allows itself to be permanently broken by a software error like this, but the software should not assume the hardware/firmware will take responsibility for providing any necessary safeguards.

Ultimately, this is just bad engineering and bad UI. It can result, and reportedly has resulted, in serious damage. No amount of dressing it up will change that, and the root cause of the problem should be fixed.


Did Lennart have anything to do with the actual efivars implementation in the kernel? If so, then yeah, shame on him by all means. I agree it's a broken abstraction, but it's what the kernel developers decided on (and they sure like making system configuration available with filesystems, see sysfs and procfs).

Ultimately access is needed to even configure the bootloader, and right now that means it needs to be mounted RW. Let's go yell at the kernel developers instead of taking it out on systemd for using a virtual filesystem instead of a syscall.


By the way, here's the author of the kernel module in question, talking about this issue:

https://twitter.com/mjg59/status/693494314941288448


I'm really glad he's speaking out on the issue; Matt is a smart guy (I used to talk with him frequently on IRC), and it's sad to see many of the same people trying to act like they know better than him in that Twitter thread. There's no easy fix to this issue, and regardless of whether an rm -rf / can hose your system, there are bigger issues at stake with malicious actors that a stupid bandaid like that won't fix.


Here is the thing: one day systemd overrides a kernel default without warning because of "best practices", another day they leave it alone even if it is basically a cocked gun aimed at the admin's foot.


Well, I could see a problem occurring if Poettering decided not to represent efivars as files. Many people would claim that Systemd is trying to get rid of the Unix philosophy and force everyone to use the software if they want to keep up. After all, so many people are already complaining that Systemd uses a binary format to store logs.


My view is that treating everything in the universe as part of the same filesystem hierarchy is one UNIX philosophy we should have dispensed with long ago. I don't think it makes sense to project the same abstractions we use to represent file storage onto other aspects of a system just because they also happen to provide some information and then to treat everything as part of a huge, monolithic, homogeneous structure. The Filesystem Hierarchy Standard is very complicated already -- a discussion for another time, perhaps -- but what does something like "rm -rf /" (or any other actual file/directory manipulation) even mean at this point? I think reverence for some of these ideas just because they are "the UNIX way" is holding the industry back.


If you mount efivarfs read-only, efibootmgr can't make your system bootable. So at a minimum, you can't just remount the filesystem read-only; you need distribution integration so efibootmgr can still write to it.

There's no point in mounting efivarfs read-only; you might as well not mount it at all.


(Honest question, I lack hardware to check this on myself:) What do non-systemd systems do about this?


FreeBSD has an efi library that's used in early boot to handle this instead. To be honest, with the structure of UEFI, exposing it as a filesystem is problematic at best. A filesystem just hides too much data from you -- errors, the reason a call to setting a variable failed, etc. Not everything is a file, and shoehorning things in to try to pretend just doesn't work.


Indeed. It should be exposed the standard way you expose functionality in Linux -- with a D-Bus API.


If I had a system that I booted in UEFI mode, I could tell you, but all of my hardware (even the machines running UEFI) boots OSes in BIOS mode.

Despite the fact that they're primarily a systemd distro, Arch Linux strongly recommends that you mount efivarfs in ro mode and remount rw when doing kernel upgrades.

GRUB2 seems to use efibootmgr to write stuff to EFI, and efibootmgr appears to lack a "detect if efivarfs is mounted ro and temporarily mount it rw" switch, so you have to do that manually too.

Gentoo's Genkernel-next doesn't seem to do anything at all with EFI.

Dracut (one of RedHat's initrd management tools) looks like it reads from efivarfs, but does not write to it. (But there's a lot of code in Dracut, so it's entirely possible I missed something.)

It's not entirely clear to me that you can't use BIOS boot with a GPT-formatted disk if your bootloader supports that sort of thing. It's also not clear to me whether or not chainloading from GRUB into Windows is supported with a BIOS/GPT partitioning scheme.

Regardless, it's a pity that the UEFI design committee apparently thought that requiring one to write data into the motherboard's persistent storage in order to boot an OS was a reasonable thing to do.


> Despite the fact that they're primarily a systemd distro, Arch Linux strongly recommends that you mount efivarfs in ro mode and remount rw when doing kernel upgrades.

Why not leave it unmounted entirely, and only mount it when installing new boot entries?


> Why not leave it unmounted entirely, and only mount it when installing new boot entries?

Because there are -presumably- reasons to read EFI variables? If they're exposed through efivarfs, then it's a convenient way to access them.


A traditional BIOS or UEFI running in CSM requires an MBR formatted disk since the bootloader must be started from the first 512 bytes on disk. I have to boot ESXi on my ProLiant ML10 off a flash drive because my 3TB disk would become a 2TB disk otherwise.


Did you hear about "hybrid MBR"?


Hybrid MBR is an extremely fragile and hackish technique to make crappy old firmware boot larger disks. In general I want my whole 3TB drive allocated to VM storage instead of partitioning it up so ESXi has somewhere to live, so just running that off an easily replaceable USB key is a lot simpler to maintain.


FreeBSD generally avoids exposing system calls via the filesystem - it's cute that you can change kernel parameters with echo and cat, but it's generally not efficient and is prone to unexpected problems such as what this whole thread is about. You can just use appropriate commands, which in turn use the appropriate system calls, to accomplish the same thing.
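As a concrete contrast (a sketch; the FreeBSD commands obviously only run there, so they're shown as comments, and `kern.securelevel` is just one familiar knob):

```shell
# FreeBSD: kernel knobs go through sysctl(8), a thin wrapper over the
# sysctl(3) syscall -- no pseudo-filesystem in the middle:
#   sysctl kern.securelevel      # read
#   sysctl kern.securelevel=1    # write (root only, and one-way up)

# Linux's procfs routes the same kind of knob through file I/O instead:
cat /proc/sys/kernel/panic
```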


I've got the EFI vars in /sys/firmware/efi/vars on a Debian with sysvinit boot (but cgmanager and systemd-logind), and it's mounted rw by default.


> If you mount efivarfs read-only, efibootmgr can't make your system bootable.

You'd think that efibootmgr would have a "--temporarily-remount-efivarfs-readwrite" flag to handle that very thing, given hardware vendors' long history of getting UEFI dangerously wrong. :/
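Lacking such a flag, the manual dance looks roughly like this (a hypothetical wrapper, not something efibootmgr ships; it assumes the standard efivarfs mount point, needs root, and the disk/partition/loader values are placeholders):

```shell
#!/bin/sh
# Sketch: temporarily remount efivarfs read-write around an
# efibootmgr call, restoring read-only even if the call fails.
set -e
EFIVARS=/sys/firmware/efi/efivars

mount -o remount,rw "$EFIVARS"
trap 'mount -o remount,ro "$EFIVARS"' EXIT

# Example: register a new boot entry (placeholder values)
efibootmgr --create --disk /dev/sda --part 1 \
           --label "Linux" --loader '\EFI\linux\grubx64.efi'
```

The trap is the important part: the filesystem goes back to ro even when efibootmgr errors out, which a naive before/after pair of mount commands would get wrong.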


I think the "Year of Linux on the desktop" thing is BS and needs to stop. It's been the year of Linux on the desktop for 15 years now. Can't GNU/Linux just be a solid server platform? Not trying to be a troll, but if someone says "the year of Linux on the desktop" one more time...


It's important for Linux to thrive on desktops. I've successfully transitioned my parents to Linux. What did I get? Less maintenance, more reliability; everything just works for them.

And of course, sticking to proprietary solutions in the long run would be rather a sick idea.


> It's important for Linux to thrive on desktops.

That's what distros are for. The "Linux" family of operating systems is used for a very wide variety of purposes, some of which are orthogonal to (and sometimes incompatible with) "desktop" features.

Something like "Year of the KDE desktop" or "Year of the Linux Mint desktop" would make much more sense than trying to shove all of Linux into one pigeon hole. In fact, even within "desktop" there is variation. While an easy to use, mostly automagic desktop is a good idea, that's not the only definition of "desktop".

Ultimately, this is the biggest problem with the idea of "THE Linux desktop"[1]: one size never fits all. Fortunately, Linux distributions exist[2], allowing for great variety - including "everything just works" desktop distributions. Just remember that the Linux ecosystem is larger than one type of distro.

[1] and systemd, and 15+ years ago the "desktop environments" that insisted there should be only one GUI toolkit, etc

[2] This assumes distros decide to keep their own identity instead of continuing on the homogeneity bandwagon that many joined when they converted to systemd. Turning into a clone of Red Hat is the path to obsolescence.


I think that pretty soon features backed by machine learning and mass data collection, features like intention comprehension and prediction or voice command/search, will be commonplace and hotly desired by users.

When that happens, I don't think Canonical or Redhat will be able to compete; only Google will be able to create an OS on top of Linux, and open-sourcing won't do any good because the most important services will be behind Google servers.


Linux derivatives rule the mobile space, so in essence it won really big. The desktop is not that important. And so many of its paradigms are quite dated compared to the mobile experience.


The mobile experience is in a lot of ways absolute garbage designed by incompetents.

That the dominant platforms have any customers at all just illustrates how mediocre the entire software industry is.

If software were medicine we would still be leeching people and subscribing to the miasma theory of disease.


Android/Linux could become tomorrow Android/BSD or Android/QNX, Android/Almost POSIX Kernel and only OEMs and people using undocumented APIs would notice.


I also think that desktop vs server are two different domains.

The desktop / personal user domain is looking like it's in the early stages of transformation, with mass data collection to support voice search and user intent prediction. While there's something to be said for privacy, I don't think consumers will care enough, and I don't think desktop linux systems outside of whatever Google may make is going to be adequately competitive.

This will likely not be a server concern. Engineers probably won't be interested in a service that opaquely collects data and sends it off to remote locations just so that they can say, "Cortana, run the Ansible script to redeploy my API server." But desktop users would find value in saying, "Cortana, move all email from unknown entities to my todo list."


Linux is just the source for a kernel that the userlands of the various distributions are built on, though. It's already largely on people's phones with Android. In fact, Chrome OS is based on Linux.


That appears to validate my belief that only Google can competitively take the Linux kernel and build atop it a software stack known as a desktop/mobile/user operating system.

And if my assertion holds that intention prediction and voice search will be competitive features for users, features that rely on mass data collection and expensive top talent, then I think organizations like Canonical or Redhat will fall behind in the desktop/mobile/user OS space.


I tend to mix up Linux and Unix and that might be the issue here... But isn't the `Apple Garden` (OSX and iOS) an example of a desktop/mobile/user operating system built on Linux?

Apple's no M$, but they are certainly competitive.


It's entirely possible that Apple takes code or inspiration from the Linux software project, but I think it's better to say that OS X is Unix based. Even if Linux and Darwin hail from Unix roots, at some point, I think they become separate projects with separate interests and fates. What's good or bad for one doesn't have to be the same for the other.

When I think of what organizations will be competing on machine-learning backed features like intention prediction or voice search, I think of Apple, Microsoft, or Google. Those OS features won't be inside the OS. They'll be backed by a wealth of proprietary data hidden behind company servers.

I think user expectations for what a modern OS ought do will evolve, and I don't think Canonical or Redhat will be able to catch up to Microsoft, Apple, or Google.

Eventually, what we think of as "Linux OS" will be Android (and, if you like, OS X) and maybe some other offerings by these big giants, but the most user-demanded features won't be in the OS. Therefore, Samsung can't just fork Android and expect to compete sans Google, and Canonical or Redhat won't be able to compete either.


> It's entirely possible that Apple takes code or inspiration from the Linux software project, but I think it's better to say that OS X is Unix based.

Apple can't take code from Linux as the code is GPL licensed. OSX is largely based on NeXT and BSD. Linux and OSX are both POSIXish though.


OS X had been UNIX certified, at some point, though. Don't know if that is still true


It is. Various optional things from POSIX are missing though (https://github.com/uxcn/yafd/blob/master/src/c/pthread-dw.h).


No, OSX is based on NeXT, which was based on Mach, and BSD. As far as I know, iOS is proprietary.


I believe that broad games support and GPU driver support in the past two years has made Linux more viable for a lot of the kind of people who might not install it. I have been running a linux-only household for a year now.

Offices most certainly could run linux only if they weren't chickenshit (I mostly kid, but after seeing how broken the desktop management model is now anyway, I don't see why Linux can't be used more in Corporate.)


Linux on the desktop is now a lot easier than it was 10 years ago. I just recently switched to Linux Desktop after having it used on servers for a lot longer.


I don't think he's kicking the can down the road, at least not all the way to end users. I think he is implying that it's something distros should do, presumably after installing as part of their post-install config.

Either way is going to break things and I'm sure there'd be people lining up to complain about them making it ro by default or adding some sort of safety layer in. Might as well leave it as it is and let the various distros do what they want with it. I do think though that someone should do something and this sort of thing shouldn't be possible, but I can understand the stance of leaving it up to the distros to decide as a reasonable compromise.


"Nobody sells a chainsaw with safeties disabled, remarking 'you can always attach the chain catcher'"

Last week after a storm I watched two blokes cut up some fallen wood. One held the chainsaw, the other ran the wood over the blade.

When you start using tools at a low enough level, there are some things you cannot guard against.

  Mounting efivarfs read/write by default 
  can lead to accidental deletion of the 
  EFI variables. -- laloch 
At the system level there is no way this should happen. That is like having a chainsaw with a reversible saw motion.


"let's make everything a filesystem so it's dead easy to hang yourself from userland tools"


Let's make everything a filesystem... so that they are easy to access and modify.

Anyone can "rm -rf" anything, that doesn't mean that they should.


Except that the UEFI spec has a shitton of bits and pieces where the filesystem metaphor falls apart. There are attributes that determine during which parts of the boot process variables are visible, variables that require a guid in addition to the data, etc. It's an ugly, messy protocol that really does require specialty tools.


That's a pretty weak argument. The fault is in the firmware, but the design of the thing is important. It's why an ATM gives the card back before the money. They don't just tell the user to suck it up; they implement good, safe best practices.


The new Citibank ATMs that take the chip card hold onto it until after you've taken your money. I already accidentally left my card in a machine after taking the cash.


Wow, that's some bad UI. Wells Fargo ATMs require you to take your card before it gives you the money.


People learned the hard way what good UI was for this sort of thing -- by having bank clients complain about ATMs eating their card after they took the money and left the card in the slot.

I know, I was one of those before my bank got newer, smarter ATMs.


I was taught "least privilege" was a fundamental principle of computer security. I guess this guy was taught differently.

Actually, it's more like this. I remember when I used to blame my users for what my software could do.


Yeah, it's called empathy for your users.


These threads are never productive. Running rm -rf / is going to offer multiple interesting ways to make your life miserable, eg: a mounted FUSE filesystem, an NFS folder pointed somewhere important, Samba mounts from your network automatically connected from your desktop, etc. I wouldn't be surprised if you could nuke firmware off a device by deleting the appropriate file in /sys/.

Systemd hate is en vogue these days so they are an easy and common target. Why no invective towards the kernel that actually implements EFI-as-a-filesystem?


Sometimes people run rm -rf / just for fun before reformatting a system, just to see what happens. Given that its purpose in life is to delete files, it stands to reason that running this command on a system which contains no important data is OK. A default configuration which makes this command destroy hardware is not reasonable.


rm -rf / probably doesn't do what you think it does. The coreutils version of rm includes --preserve-root[1], which is the default[2].

It also supports --one-file-system, which would prevent this and a host of other problems as well. That said, I don't really see a problem with Lennart's response. It's basically, "we should take steps to make this hard to do, but root is capable of doing anything, so don't expect it to be foolproof."

1: https://www.gnu.org/software/coreutils/manual/html_node/rm-i...

2: https://www.gnu.org/software/coreutils/manual/html_node/Trea...
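Those guard rails are visible right in coreutils' own flags (illustrative only; the destructive commands appear solely as comments and are not meant to be run):

```shell
# GNU coreutils rm: --preserve-root is the default, so a bare "rm -rf /"
# refuses to run at all, with an error along the lines of:
#   $ rm -rf /
#   rm: it is dangerous to operate recursively on '/'
#   rm: use --no-preserve-root to override this failsafe
#
# --one-file-system keeps a recursive delete from descending into other
# mounts (efivarfs, NFS, FUSE, ...), which would have sidestepped this bug:
#   $ rm -rf --one-file-system /mnt/old-root
#
# Both flags are documented in rm's own help text:
rm --help | grep -E -- '--(preserve-root|one-file-system)'
```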


Is there some response besides the one linked here? Because all I see here is basically saying, "We're not going to change anything, this is not a problem, remount it yourself if you don't like it."


He says "The ability to hose a system is certainly reason enought to make sure it's well protected and only writable to root." That looks like agreement that something needs to be done to me. I took the followup comment to be a useful tip on how to mitigate the problem until then.


Isn't it already only writeable by root? The request is to make it not even writeable by root without taking some additional action to make it writeable first. If I understand correctly, that bit you quote just describes the current situation.


I take the "and" in his reply to imply he agrees something additional should be done. Whether his ultimate response for how to do that is adequate is unknown, since he didn't really elaborate on exactly what he thinks should be done. It could be he meant it to signify agreement with the suggested course of action in addition to agreement that there's a problem to be fixed.

In any case, the only thing clear to me from his statements is that he agreed there was a problem in need of attention, which makes the response here somewhat baffling to me (although less baffling once you notice how much of the animosity comes from feelings about systemd in general and Lennart in particular, especially since some people state as much, as if that has any bearing on his response in this instance).


There are a variety of ways root can hose a system and many are recoverable without replacing hardware. Hard bricking a motherboard is a whole different league.


It's funny you mentioned that rm's purpose in life is to delete files. I'm reminded of a common saying in the unix world.

Everything is a file.


This is a good example of why you shouldn't take that saying too far.


The issue here is actually bricking the device. It's not your standard "do we let people hang themselves or limit the user's capabilities" debate.

A filesystem is a perfectly reasonable way to implement access to the EFI vars.


> Systemd hate is en vogue these days

To be fair, Lennart Poettering could probably end war and famine forever in a single day, and some people would still find something wrong with that.

On the other hand a certain amount of skepticism and criticism is very much in order. (I am saddened, though, by the way online discussions so easily deteriorate into name-calling and bitter ranting.)

Just to be clear, I was highly skeptical of systemd initially, now I have two systems running Debian Jessie, and quite honestly I haven't noticed much of a difference one way or the other. So at least now I am more confident that we are not all going to die because of systemd. On the other hand, I wish there had been more initial coordination / portability work to make sure stuff like GNOME keeps working smoothly on *BSD as well.


On the other hand, I was happy and welcoming of systemd, then I started to experience way too much breakage due to the attitude of the devs.

When you have to spend two days getting to a remote server room and back because systemd suddenly decided that a device listed in fstab but not present at boot should halt the boot with an error, while at the same time the emergency shell was broken and stuck in a loop asking for credentials, on a Debian server, in production.

Ok systemd works most of the time but after experiencing a few of these breakages I've become wary of anything systemd. Whatever it brings to the table is not worth the wasted time and headaches it causes.


Oh my, that is bad indeed.

Apart from the two-item sample of personal experience I am kind of torn about systemd - some of the problems it attempts to address are real, and addressing these in general seems like a good idea. The way it does so, however, sometimes (I am being deliberately vague here) brings along some problems of its own.[1]

And the further systemd adoption and systemd's mission creep go along, the harder it becomes to backtrack and replace it in case somebody comes up with a better solution.

[1] Like I said, I have not experienced any of these problems myself, but I have read a couple of reports from people that had problems with systemd that were definitely not just aesthetic.


> I have not experienced any of these problems myself

That's the problem any time[1] a system is designed around ideological purity.

Someone invents something clever that works most of the time for most people (with varying definitions of "most"), so they try to apply it everywhere. As no idea can account for everything[2], sooner or later the inflexibility of the theory meets the variation of reality and some type of drama happens.

Usually it's better to avoid anything that exhibits that kind of inflexibility and hubris. Systems that build in ways of handling the unexpected or patching around their own weaknesses are necessary, or drama is inevitable.

[1] Not just computer systems - this is true most of the time humans build systems. We see the same problems in religion, politics, and social constructs. It's such a common behavior, I suspect there may be an evolutionary basis for it (using a single common rule for many situations requires less energy).

[2] Even the physicists are still working on that problem.


Agreed. Which is why I said On the other hand a certain amount of skepticism and criticism is very much in order.

The problems systemd tried to address are real, and some of the ideas behind it are appealing. But the way it attempts to replace several important pieces of the system at once, and the way it is being forced on people take-it-or-leave-it style, feels uncomfortable.

Part of what made Unix what it is today is the idea to build a system that might be very far from perfect but that is easy to improve in small, incremental steps so one quickly gets a feeling for what works well and what does not.


Frankly I feel that way too many sub-projects within systemd are, at this point, "I can rewrite this faster than I can get patches past the maintainers". Often with the "hilarious" outcome that a problem that was solved a decade ago crops up again in the new implementation within systemd.


Out of curiosity, what do you consider to be real problems that systemd is trying to address? (Honest question).


For one thing, a method of service management that does not just fire off commands but monitors services and restarts them if they fail. Also, doing this with a notion of dependencies between services.

Event-based service management is interesting, too, e.g. shutting down a network service when the machine is disconnected from the network and restarting it when a connection becomes available again (think NTP, DHCP clients).

Once you have an idea of how services depend on each other, you get the ability to start services in parallel for free (whether that is so useful or even a good idea is another question).

Given the fact that systemd is hardly the first attempt to solve these issues (think SMF on Solaris, launchd on OS X, or the couple of attempts on GNU/Linux), I think a lot of people have felt the itch to improve on the classical SysV init.


Because everything is a file in unix-derivative land. That is a commonly accepted practice, but optimizing for the common case here is the right thing to do. How often do people need to manage EFI things through the filesystem, and how much of an inconvenience is it to have the initial mount be read-only? The answer is obvious to me, because I know people run `rm -rf` often enough that having a safeguard in place is the right thing to do.


A file maybe, a filesystem no.

You can't 'ls' your network connections.

And for tricky APIs that do not match the filesystem view, we have fcntls.


> You can't 'ls' your network connections.

You sure can in Plan 9. Unfortunately (?) Plan 9 is more Unix than Unix itself.


With fuse you certainly can.


Yeah. Never coded a driver, did you?

It is as smart as saying that with a heavy enough hammer you can make a cube fit in a triangle. It is possible, but it defeats the purpose of a consistent API.


Yeah but you can normally recover from that without a soldering iron. Badly implemented EFI is out there.


This is not what actually happened when this was reported on arch linux forums: https://bbs.archlinux.org/viewtopic.php?id=207549


> (note that you can remount it readonly at boot, simply by adding an entry for it into /etc/fstab, that is marked "ro")

He's suggesting that mounting it read-only by default would break a few things like grub-install, but then promptly suggests editing /etc/fstab to mount it read-only by default.

So... his own solution to the problem is to mount it as read-only.
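For reference, the fstab workaround he describes would look something like this (a sketch only; the path assumes the usual efivarfs mount point):

```
# /etc/fstab: mount efivarfs read-only at boot
efivarfs  /sys/firmware/efi/efivars  efivarfs  ro  0  0
```

A tool that genuinely needs write access (efibootmgr, grub-install) can then be bracketed with `mount -o remount,rw /sys/firmware/efi/efivars` and a matching `remount,ro` afterwards.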

The hard drive analogy is ridiculous because while you can lose data you cannot brick your hard drive or computer by erasing it. There are ways to brick a hard drive by corrupting the firmware but there are very specific tools like "hdparm" that you have to use to do it, not an everyday command like "rm".

This makes me wonder why Linux only has a single "root" user rather than multiple types of administrators. Windows, for example, has Administrator, which can do nearly everything, but some files and processes are protected under SYSTEM or TrustedInstaller. If you REALLY want to, you can still remove those files by changing ownership, or launching a command prompt with SYSTEM permissions, but it does add a significant hurdle to doing things that could cause significant damage.


In linux, there is the immutable file attribute which won't let root rm the file without first changing it with chattr.


File attributes need to be implemented by the filesystem. efivars is a pseudo-filesystem which does not implement them. It could, but that is one of the more needlessly complicated solutions...


To be fair, you could make a distribution that works this way (or you could modify a current install). Simply make critical files belong to root, make files that are fairly important (config files) belong to a special user, and make user files belong to the user. This way, you can do day-to-day system administration with a more powerful but still limited user, while performing critical operations, such as installing a bootloader, as root.

NixOS sort of does something similar, although not necessarily to prevent breakage. Any regular user can install packages for his/her own environment, which the daemon installs in the package directory that belongs to "nixos". You still need root for system configuration, unless "configuration.nix" could also be chowned to the "nixos" user.


To be fair, userspace access to these variables is required in order to manage EFI boot. Having them there to be deleted (by root!) accidentally is probably bad, but this isn't a meaningful security fix.

The real crisis here is that modern firmware is developed in an environment so fragile and isolated that it's simply unable to recover from configuration that is "wrong".

I had a laptop that I bricked about two years ago by accidentally wiping out the partition table on the soldered-down storage.


I don't think this is meant to be a security fix, it's just meant to make it so you can't accidentally break your hardware with what seems like an unrelated filesystem command.

I agree that the stupid firmware is ultimately to blame, and mounting this stuff read-write is a fairly reasonable initial decision. But given the existence of this stupid firmware, the resistance to implementing an easy mitigation by mounting it read-only is insane. I can't understand why there are any responses besides "oh shit, we'd better change this."


Technically, denial of service is a security issue - I have modern servers that boot via UEFI (since GPT is required to support large disks), hopefully with less buggy implementations but if someone manages to get root on a server they could totally kill the hardware.

Having to reinstall an OS because some script kiddie decided to `rm -rf --no-preserve-root /` your server bites, but actually bricking the system requiring a board replacement is scary. (Of course, they could technically be a jerk and flash your UEFI EEPROM, but most servers have a backup they will restore from in case the primary has been corrupted).

EDIT: Though, now that I think about it, this is still only going to stop users from hosing their systems. A malicious user could still run an arbitrary binary or kernel module that hoses the UEFI variables, or modify the UEFI boot config to load some other custom UEFI binary to hose it... firmware vendors really just need to get their crap in gear; software (even running in UEFI boot services mode) shouldn't be able to hose it.


Regarding your edit, exactly. The issue could be considered a security issue, but this fix is not a security fix. A security fix would be something like only allowing touching EFI during startup and then somehow securely and permanently disabling that ability until the next reboot, or a physical write-protect switch on the computer.

Or EFI firmware not being so stupid, but we know that's not going to happen.


More precisely, every EFI firmware not being so stupid. BTW, you will see that a "high quality UEFI BIOS" is part of my wishlist for an Intel/AMD branded laptop: https://www.reddit.com/r/hardware/comments/42cnbq/would_inte...


Maybe if there was some awful Windows virus that nuked boards with bad implementations they would fix their crap.


Bingo. You're only a call to SetFirmwareEnvironmentVariable away from having your Windows system hosed.


Userspace access. Not access via yet another pseudo-filesystem. Especially not access via yet another pseudo-filesystem automatically mounted read/write at boot time. GRUB works just fine on other systems that don't have a pseudo-filesystem for EFI access.


Yes but almost all use cases allow you to set them immutable or remount by default. The management of them can be done with a couple commands.


This isn't a systemd problem. It's a kernel problem.

Allowing accidental access to a resource capable of doing permanent damage is dumb. Even embedded processors generally have an "unlock/lock" sequence when you want to write to their flash memory.

The fact that the kernel doesn't require you to write a "magic number" somewhere to gain access is foolhardy.


One day I broke my laptop's boot list, so I went to add a new efi boot entry. I couldn't and thought it was due to my inability to find a proper usb bootable media with efi support since you can't access efi-vars if you boot from bios.

Turns out linux used to store crash-dumps inside efi-vars, so mine were full and new entries couldn't be written. Good thing /sys/firmware/efi/vars/ was mounted rw, thus a `rm -rf /sys/firmware/efi/vars/dump-*` fixed it.

Some kind of protection would be nice, but the ability to see and use efi-vars as a filesystem is useful too. If we listen to security researchers too much, our computers will be no fun.


Breaking your laptop's boot list will happen more often with it mounted RW anyway. So it's good it fixed it for you, but I doubt the number of cases it's fixed something/been useful outnumbers the cases where someone bricks their system using what is probably the most repeated Linux command on the internet: rm -rf /.


The reason the EFI boot list was broken was that I wanted to replace a HDD. The laptop has 2 HDDs; I removed them and booted with the new, empty disk for an experiment before adding back my system disk. UEFI thought it was appropriate to remove the non-existing disks' entries from its list, and thus I couldn't boot anymore.

Anyway, I don't think this is a systemd bug. The /sys interface comes from the kernel, and whether you can access EFI vars depends on your kernel configuration. There is the older module (I believe _efivars_, if it isn't compiled as built-in) and the newer (I believe _efivarfs_).

I've done my fair share of systemd bashing but here we have:

  1. Broken and/or naive EFI implementations
  2. Linux kernel's default choices (expose efivars as rw)
  3. Distro's choices (compile kernel with said modules as built-in)
And we give the fault to systemd. Every other init system out there does the same.


And that's the sort of thing that could easily sneak into the `curl http://some/url | sudo bash`-style of installs that are popular these days.


Are you arguing about malicious attempts to mess up, or brick, your computer, facilitated by efivars presenting an interface accessible by simple shell scripts?

Well... if there were an API requiring, for example, a special syscall (or, maybe even uglier, an mmap of /dev/mem), I'm 100% sure that the default install of all Linux distributions would include an /sbin/efivars command exposing this API to every shell script running as root... so nothing would be gained.

For safeguards against accidental damage, on the other hand, I'm completely in favour.


Good.


I could do it in an RPM post-install script too if it will get people to stop whining about running arbitrary scripts.

All software you install on your system through binaries is potentially suspect to an extent. Sure, I trust the Fedora build system to do the right thing, so I can with some certainty assume anything in the Fedora Collection or EPEL is "safe" - but when I need to install 3rd-party software to get my video card working I can throw all that out the window.

Be sane when testing any binary you install. At least with RPMs or DEBs I can verify the GPG key of the vendor - assuming they bother to use one at all... Piping scripts directly from curl to bash is bad because of the potential of the stream ending early and leaving things in an inconsistent state; that's the only thing that is notably worse versus installing a package (and it can be worked around by downloading the script first, THEN executing it).


Given you need to be root to install software on an Ubuntu system, a deb could do that also.


Unless you're doing something super screwy, GRUB, the excuse for this backwards default, doesn't even need access to UEFI variables. UEFI booting looks for a FAT-formatted partition of a hard disk with a specific ID -- C12A7328-F81F-11D2-BA4B-00A0C93EC93B for GPT disks and 0xEF for MBR disks -- and all EFI does is look for a file BOOTX64.EFI on that partition. There's no need to muck around with the UEFI settings for this, it Just Fucking Works.


Only removable disks use bootx64.efi, though some (but not all) BIOSes will look for that on fixed disks too. efibootmgr needs access to efivarfs to configure an EFI boot entry for Linux on a fixed disk. Relying exclusively on bootx64.efi on the EFI system partition will break on some devices, and will overwrite the Windows bootloader (and conversely, Windows doing so will render Linux unbootable).


The standard is for EFI to look for that partition on all disks, even hard disks. If there are a few broken systems that do otherwise, they could use other methods, but keeping things mostly standardized seems like the best way to go. It's how Windows does it, it's how FreeBSD does it, it's how OpenBSD does it, etc. Mucking around with it doing otherwise is just asking for problems.


The EFI standard specifically limits that to removable media. Windows installs itself via the EFI boot variable mechanism; Windows also installs itself to bootx64.efi, but even if you overwrite that, you can still boot Windows via its EFI boot entry. Having Linux systems install themselves only to bootx64.efi would produce a fragile installation (particularly in dual-boot configurations).


Okay, I read the standard and you're right; they are supposed to install to a vendor-specific location, so my apologies. BOOT is an additional place you can put it, but not the regular place.

As an aside, goddamn is UEFI ugly. With BIOS it was easy -- set the partition you want to boot as bootable, and away you go. Here you have to copy the bootloader file and then write to the firmware which file you want to boot, which feels even more fragile and confusing if you decide to wipe a hard drive to put a new OS on it, unless you do everything just right.


> As an aside, goddamn is UEFI ugly.

You're goddamn right it is. Matthew Garrett [0] used to do UEFI stuff for -IIRC- RedHat. He has MANY blog posts about fucked-up parts of the spec, and horrifyingly fucked-up implementations of the same. Sadly, it looks like most (if not all) of his posts are now tagged "advogato, fedora", so I can't point you specifically to the UEFI gore posts.

[0] http://mjg59.dreamwidth.org/


My personal favorite is how http://lkml.org/lkml/2013/11/11/653 was posted from an @hp.com address, and HP then locked access to UEFI updates to paid customers not long after it was posted.


At least in theory, UEFI intended to avoid the current problem of OS bootloaders taking over (by writing to the MBR) and then needing to know how to boot other OSes. (Or, in practice, Linux bootloaders knowing how to boot everything and all other bootloaders only booting their own OS.) If you install your bootloader via the standard UEFI mechanism, you won't step on any other OS on the same system. And at boot time, you get a menu of OSes, with labels provided by those OSes.

The problem comes in with the small handful of UEFI firmware implementations that get EFI variables so horribly wrong that attempting to write to them in the standard way will cause the EFI firmware to overwrite itself or otherwise brick the machine. Linux does attempt to work around some of those systems through very careful handling of EFI variable writes.


> As an aside, goddamn is UEFI ugly.

I was installing a graphics card once and broke my EFI setup. Not because of the graphics card, but because I apparently bumped the CMOS battery enough to wipe settings.


Uh... don't do that?


Sure, but historically something like that wouldn't render a system unable to boot an already installed operating system.


IIRC, the BIOS just loads the first sector[1] of the disk into a specific place and jumps into it. Interpreting the partition table or not is up to that code.

[1] or maybe the first 512 bytes, if a sector is bigger


Why not just turn off recognition of "rm" in the efivarfs? That is, the response to "unlink" of a file that represents a system abstraction of firmware should be something drastic only to the unlinker. SIGSEGV or something. "Go away, don't do that."


That sounds like a sign that the filesystem may be the wrong abstraction for this.


It's far easier to just close the ticket and tell everyone to piss off.


Well there are use cases where you'd want to remove an EFI variable.


Then just give the efivars different semantics; have a look at how unbind works for drivers. If you really want to be super cautious, add a safeguard to enable this:

    echo 1       >/sys/...efi.../enable_remove
    echo varname >/sys/...efi.../remove_var


Unix has a long tradition of doing a `rm -rf /` before reinstalling a system. I don't agree with the people saying "don't do that". It worked safely before, it should still be safe to do.

(yes I know about --no-preserve-root)


> Unix has a long tradition of doing a `rm -rf /` before reinstalling a system.

Uh, what? Somebody was trolling you if they gave you that advice. In the days of NFS or AFS (God, I'm old) that could get you into serious trouble if it wandered into somewhere unprotected.

dd if=/dev/zero of=/dev/<somedisk>? Sure. Bog standard guaranteed erase of the disk. Doing that even once means that you're now in NSA territory to recover it.

rm -rf /? Never heard of that, ever.


`rm -rf /` was never safe, even if reinstalling the system. If it worked for you then you were just lucky. In the context doing a reinstall you need to specify --one-file-system, but really you should just reformat the partition, it'll likely be much faster anyways.


On one episode of BSD Now[1], Bryan Cantrill tells how they made 'rm -rf /' not work on Solaris. It seems like a good idea in that they managed to stay compliant with POSIX while disarming that particular foot-seeking missile.

Or, in other words: I agree completely. In fifteen years I have never used 'rm -rf /'; I always used mkfs to create a fresh filesystem.

[1] http://www.bsdnow.tv/episodes/2015_08_19-ubuntu_slaughters_k...
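The mkfs route can even be tried without touching real hardware, using a file-backed image (the path and size below are made up for illustration):

```shell
# A fresh filesystem in one step -- no recursive delete needed.
# Demonstrated on a scratch file so no real disk is involved
# (on real hardware the target would be a block device like /dev/sdXN).
truncate -s 16M /tmp/scratch.img          # stand-in for a disk
mkfs.ext4 -q -F /tmp/scratch.img          # -F: allow operating on a regular file
dumpe2fs -h /tmp/scratch.img 2>/dev/null  # confirm a fresh ext4 superblock exists
```

On a real disk it's the same idea, minus -F and plus a very careful look at the device name.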


What's that supposed to be good for?

It certainly doesn't remove any file contents and to get a clean state, surely reformatting would be both easier and quicker?


Experimentation, learning, having fun, curiosity, and all the thing that make for a proficient sysadmin.


So cargo-cult? Doing things out of "tradition" without understanding what they are for?


Uh no. Not tradition: experimentation, learning, having fun, curiosity, and all the things that make for a proficient sysadmin.


To quote the original post:

> Unix has a long tradition of doing a `rm -rf /` before reinstalling a system. I don't agree with the people saying "don't do that".

So, yes, "tradition".


Actually, no. You might want to experiment on a test system to see what "tradition" actually does. Then you, you know, learn the consequences because you are, you know, curious to understand the result of the operation. And many people find this, well, fun. Normally people who do this sort of thing exhibit the traits that make them a good sysadmin because they are verifying Unix-lore to ensure that they aren't just following some sort of cargo cult methodology.

Hope that helps.


That's a dreadful response from Poettering.


So, the short read here is, one (arguably poorly thought out) complex system interacts with another (again, arguably poorly thought out) complex system in ways nobody thought about... Why is anyone surprised?

I'm no fan of systemd or Poettering, but this behavior is consistent for unix; root can do anything, including breaking peripherals or hardware through ill-advised actions - why is this a big to-do?


Because this is a violation of principle of least astonishment[1], as a sysadmin when I hose my system using rm I expect to be able to repair or recover without having to replace the motherboard.

see also: https://bbs.archlinux.org/viewtopic.php?id=207549 https://bbs.archlinux.org/viewtopic.php?id=208102

[1]: https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...


How long before someone realises that grub could also be made part of systemd... cringes


> How long before someone realises that grub could also be made part of systemd...

https://wiki.archlinux.org/index.php/Systemd-boot


This is why I didn't even want to google it... ignorance is bliss. Thank you, i guess...


I can't understand why people argue about this. Just put some safety measures on it already. Or at least change the rm option to "--brick-my-motherboard"


I believe you hit the nail on the head. Many folk with experience have a different perspective than the folk writing this stuff. Ignoring their perspective dooms you to repeating their mistakes.

The Unix way isn't some holy grail, holy shit, perfect way. It's the simplest. Simple means it's simpler to unfk. Because it will get fk'd.

Rendering someone's hardware DOA is a really bad thing. Leaving that as a vector of attack is irresponsible at best.

If Redhat leaves this in, expect legal issues.


1) systemd is the wrong place for the safety measures. rm is also the wrong place for the safety measures.

2) remounting a mounted FS is a non-trivial operation with lots of corner cases. What happens when another program is accessing that FS at the same time?

It's simple--the kernel needs to be fixed. Access to EFI should require special sequences to cause writes.

systemd folks aren't going to ask for it because Linus will simply piss and moan and close as WONTFIX/NOTABUG.


Honestly, I feel the issue is more rm -rf's default behaviour being to cross filesystems. This is almost always not what you want (and tends to result in deleting network drives, to give an example which has nothing to do with systemd), especially considering attempting to delete a mount point fails.


I think there are at least three issues here.

One is efivarfs making it too easy to destroy stuff. It should probably identify problematic hardware and prohibit modifications that break that hardware. This is no fun, but hardware-specific workarounds are a fact of life for real-world OSes.

One is, as you say, bad default behavior with rm. Crossing filesystems by default is pretty weird. Preventing that unless it's explicitly requested would largely fix this problem and many others too.

And one is mounting efivarfs read-write by default. It's too dangerous to be that easy to modify.

I don't think it's too useful to try to pin one of these down as "the issue." They're all worth fixing.


Could be a job for SELinux. Most of the files in efivars don't ever need modifying. So SELinux could make all files unmodifiable by default, and allow modification of the few that are needed by efibootmgr.


Unless you set SELinux to run in strict mode root still has access to delete everything.



sensible defaults



