The problem is not systemd, it's the disastrous proprietary UEFI implementations that are shipping the most insecure and awful code in the world.
The problem is we cannot fix this for 9233. MSI will absolutely refuse to disclose the firmware for his laptop so that he can make sure his replacement does not also brick itself. People have been treating coreboot / libreboot like a joke for a decade, but this is exactly why those projects matter, and why the continued erosion and trend towards firmware blobs and proprietary bootloaders cripples individuals' control of the hardware they supposedly own.
It's the John Deere tractor problem, but until enough people care - I mean, enthusiasts and techies already don't care, and we would need a popular general consumer movement to inspire real change - it will only get worse.
All the 802.11ac wireless NICs in the Linux kernel use firmware blobs. As of Skylake, there is not a single GPU supported on x86 systems in Linux that does not use firmware blobs. Almost every Chromebook is shipping Coreboot with cancerous unauditable firmware blobs. Samsung SSDs have bricked themselves because of their proprietary firmware blobs. It's a constant, endemic problem, yet nobody cares to put their money where their mouth is.
Hardware has bugs. A lot of hardware has had bugs for a long time. Linux has had tables of "quirks" for hardware pci ids / usb ids / etc. for a long time, for thousands of buggy hardware devices it needs work-arounds for. Some of those bugs are really in hardware, some are in the firmware loaded on the hardware, it doesn't really matter. This is a pervasive reality, and it can't just be demanded that the user get hardware which is not "shitty" by this metric ... it's all a trade-off.
And finally, I've used Linux on BIOS systems and EFI systems, and I've never needed efivars mounted; I've always set up the bootloader some other way (which was simpler for me to control and manage as I prefer). My personal biggest complaint about systemd is how it automatically mounts and starts and uses all kinds of stuff that I don't need. I prefer to set up what I need and want, and not have anything else cluttering up my system, just waiting to cause serious reliability or security problems, and getting in my way when I'm debugging something else.
So I'll be up-voting all stories about "systemd did something automatically and on some systems it was unfortunate" because yeah, UNFORTUNATE STUFF HAPPENS WHEN STUFF HAPPENS AUTOMATICALLY. This is why I left windows and OS X in the first place! So I had easy and convenient control over my computer! And now it takes extra effort to override and disable all the crap that systemd is doing automatically, and I resent it. (I actually already have a script on my systems that unmounts efivars and pstore and some other unneeded filesystems after boot.)
I think the blame is put on the wrong party here. Systemd mounts the EFI filesystem so that only root can read and write it; root runs rm on that filesystem; the hardware is affected. The places where there is a bug are the hardware, the firmware, and the Unix idea that everything must be a file. Systemd follows the specs and the behavior that is expected of it. If the EFI variables should not be exposed as a filesystem, complain to Linus. If the EFI firmware should not brick itself after an rm /, complain to the EFI developers.
Systemd already took the reasonable security precautions. Root can rm anything it wants on Unix systems. Removing files from the EFI filesystem is dangerous, so only root can do it. If root does it, then all bets are off. Root needs to be able to write to the filesystem, per the API and the needs of other tools.
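For what it's worth, that precaution is visible on any EFI system; roughly (exact modes vary by kernel version):

    # The variable store is a root-owned directory of plain files:
    $ ls -ld /sys/firmware/efi/efivars
    drwxr-xr-x 2 root root 0 ...
    # Deleting a variable means writing to that root-owned directory,
    # so a non-root "rm -rf /" never gets this far.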
People are quick to blame Poettering, but it is Linus who leads the project whose design decisions are causing the problems.
In the end it sounds like three projects need to change their code to avoid an issue with a user-mistake/bad-firmware combination that only comes up once in a blue moon.
All in all, the usual storm in a teacup against systemd. In this case it is funny, because the issue is that systemd is too Unixy :) everything is a file (system).
Another reasonable option would be for a distribution to include SELinux policies that allow only the blessed tools (grub-install, systemctl, etc) to write to that filesystem. It would be a big change, though, because most distributions leave root and normal user logins unconstrained.
In hindsight you can say "we should have mounted efi ro and made our efi tools much more complex so they can remount it rw, write to it, and then remount it ro again", but that is asking the systemd and efitool developers to have been prophets who foresaw that motherboard firmware makers would be so disastrously inept and stupid that they would put essential parts of their boot process in a variable table shared with the OS.
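In fairness, the "much more complex" tooling would amount to very little; a hypothetical wrapper along these lines (the efivars path is standard, the function itself is made up):

    # Sketch: open a write window only while one command runs, then close it.
    efi_write() {
        mount -o remount,rw /sys/firmware/efi/efivars || return 1
        "$@"                 # e.g. efibootmgr -o 0001,0000
        rc=$?
        mount -o remount,ro /sys/firmware/efi/efivars
        return $rc
    }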
It's like saying you cannot ever introduce a memory clear into a GPU shader because you might wipe the firmware and the GPU could explode when its fan shuts off. You have to have the reasonable expectation that the API you are working with is conforming, and when it is not, you need to raise hell about it, and in the short term introduce the workaround until the upstream problem is resolved.
The issue here though is that the upstream problem is proprietary farmed out awful firmware that restricts your freedom and bricks your hardware because nobody can audit or fix its terrible design.
OS X isn't too bad of an approximation though, if you don't sign into iCloud, turn off spotlight integration with internet services (see the recent story about people's macs crashing due to a malformed response from the server!) and don't link your iPhone to a mac (or don't have one). Finally, don't update to the next release until just before it's replaced by another, e.g. don't upgrade from 10.10 to 10.11 until just before 10.12 is released (but do install security updates of course).
But then again ... you'd have to be a bit of an expert to know whether you could trust my advice, and if not, who to trust, especially since you'll definitely get contradicting advice from similarly knowledgeable-sounding people ... so you're ultimately on your own.
I personally do not want to be forced to know all the hazardous settings that exist in the kernel and base system. I want sane defaults, defaults that are set automatically when I install my Debian packages. If I disagree with an automatic default, my options are to either utilize the control (freedoms) that FOSS software provides, or file a bug report and try to get the community to agree with my views.
If "rm -rf /" is a valid use case/concern, send up a bug report. Its better than trying to remove automatic defaults from distributions.
That's kind of an ironic argument to make, given that you are using it to argue in favour of a design where a single mistake can actually brick your machine to the point that fixing it needs a soldering iron.
There are more operations that root can do that can brick a system or destroy hardware. Why should systemd try even harder to make root not do that?
Since the OS doesn't provide permission levels to express this difference, it makes sense to create that isolation otherwise.
I've run rm -rf as root in the wrong directory before, and nuked stuff that required a backup to fix. I'd prefer if everything worse than that required some extra mental confirmation that, yes, I'm sure I want to do that.
A lot of distros also alias "rm" to "rm -i", something that many users explicitly disable. It's a complex problem of security vs. usability, where most of the discussion has been rehashed several times.
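For reference, that distro default is usually nothing more than a few lines in root's shell profile, which is exactly what users then delete or bypass with \rm:

    # e.g. in /root/.bashrc on Red Hat derivatives
    alias rm='rm -i'
    alias cp='cp -i'
    alias mv='mv -i'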
Using rm on its own will happily perform the command without further verification.
On the other hand, rm -i will request a yes/no on every last file involved.
Personally I have taken to using mc for any "complex" file system manipulations.
I think the two competing ideas are:
Either you make libraries that strictly conform to the spec and are "technically perfect", leaving any bugs that stem from non-compliance to the violating parties (i.e. the UEFI implementors)
Or, you make libraries that conform to the spec and have some "dirty" handling to work around holes left by crappy UEFI implementations.
The first option feels great to write as a developer, but the second option is what most users really need - it's the "it just works" factor that users care about. I know it hurts to code around shitty implementations, but there's no alternative if reliability and idiot-proofing matter.
The best I can suggest is to make the workarounds "pluggable" so that developers don't have to deal with the harsh realities unless they specifically go looking for the plugged-in workarounds.
The computer industry really needs to overhaul how system firmware is developed, tested, and deployed. Linux devs and users aren't the only ones feeling the pain; they're just the ones who are both skilled enough to trace the problem back to firmware, and inclined to rant about it publicly.
We have seen this happen elsewhere, and seen how it turned out. Browser rendering engines used to be NIH proprietary behemoths, and IE6 was the pinnacle of the failure of that model. Graphics APIs and hardware are another great example - game developers are writing broken shaders that driver developers need to work around because the API is broken and the hardware is undocumented proprietary bullshit. Then you end up with insanity like Nvidia having the de facto standard OpenGL implementation in the industry, such that a ton of OpenGL programs break on AMD drivers because they don't have the nuances and breakages of the Nvidia one.
But gradually people realize that it's not a profit center to be a closed-source broken douchebag. Either the industry goes through the pain of correcting the damage (gecko / webkit / blink / etc) or just abandons the insanity (Metal / Vulkan / DX12 / Mantle).
The thing is we are not putting any financial pressure on motherboard makers to not be complete proprietary douchebags. Libreboot and Coreboot are at best infant hobbyist projects with extremely low penetration, and they take half a decade to support products or just ship firmware blobs that were half the problem in the first place anyway (cough, Chromebooks). Network routers used to be that way, until the grassroots OpenWRT projects were so successful that manufacturers started making sure their routers were supported (TP-Link / Linksys) or, in the best scenario, selling hardware running open source firmware (Buffalo). This desperately needs to happen with x86 chipsets and motherboards, because UEFI is making the ACPI mess worse when the answer has been obvious this whole time - open documentation and specifications, and working together rather than working in secret.
God forbid you should try and play a new AAA game without the latest patches for your driver!
This is wrong on so many levels. It makes it really hard to compete with nVidia/AMD when you need to hire hundreds of driver writers to patch every game under the sun if you want to have reasonable performance.
For example, they included an exception in their memory-protection code for SimCity. This was because the game had a use-after-free memory bug that had gone undetected. And rather than have the game crash on later MS product releases, MS put in that piece of code.
It's a long-standing tradition to have workarounds in kernel modules for broken hardware.
Hopefully there is a sufficiently well-funded testing lab out there to get a complete list of UEFI implementations that are broken in this way, so that the next rev of the efivars module can properly refuse to do whatever it is that's bricking these dangerously broken motherboards.
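A userspace tool could approximate that blacklist today by checking the board's DMI strings before doing anything destructive; a minimal sketch, where the sysfs paths are real but the matched vendor string and the policy are purely illustrative:

    # Refuse destructive EFI operations on boards known to keep critical
    # firmware state in the shared variable store (illustrative match only).
    vendor=$(cat /sys/class/dmi/id/board_vendor 2>/dev/null)
    board=$(cat /sys/class/dmi/id/board_name 2>/dev/null)
    case "$vendor:$board" in
        "Micro-Star International"*)   # hypothetical blacklist entry
            echo "refusing: firmware on this board is known to brick" >&2
            exit 1 ;;
    esac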
But that is my whole point - there is. We need market pressure making it unacceptable to sell computer hardware whose controlling software's source is undisclosed.
There is no solution for this user, but settling for broken implementations and broken libraries is the stuff of the Windows world where everyone behaves like a baby who refuses to share their toys.
This isn't Facebook or any consumer proprietary product where the software is the product. This is hardware-controlling firmware that is never pretty or sexy, and whose implementation is not a trade secret or the difference between selling units.
The IBM PC, XT, and AT came with a physical copy of the complete source code for the BIOS, several hundred pages in a 3-ring binder. (You can get a digital copy of it here: https://archive.org/download/bitsavers_ibmpcat150ferenceMar8... ) IBM did this, despite the fact that they strongly recommended applications use the BIOS interfaces and not access the hardware directly. They could've saved so much paper if they just documented the API, but they didn't --- they released the whole thing.
Yet 30 years later, in an era where it's easier than ever to distribute large amounts of information, companies regard such details as confidential and proprietary, hiding them even from their customers --- the real owners of their products. It costs next to nothing to distribute a DVD with each machine containing all the source code and documentation, or just put it up for download. Unfortunately the only recourse seems to be the occasional leak, and what's more disappointing is the overwhelming response that this is a "bad" and "insecure" thing, when it's really liberating, something users should've received with their hardware in the first place.
Create a virus that goes around messing with UEFI vars to brick the machine. Don't damage any user data, just the BIOS.
It'll get fixed.
Why a filesystem and not a library? Write access doesn't require a filesystem mount.
How often do you need to run efitools though? Should it always be mounted read/write for the one or two times in the lifetime of the system that you need to adjust the boot variables? Wouldn't it be more reasonable for efitools to suggest you might need to remount it read/write while you're changing the variables?
Systemd has brought lots of good things to Linux, but tons of bloat and insanity too. I'm quite scared because the next thing they seem to be tackling is one of the core things that makes Linux so nice: package management.
They intend to containerize everything. While that might be good in principle (NixOS got it right), their solution is likely to end up being a mess: no control over container contents. Imagine a critical security bug in, e.g., OpenSSL. Good luck patching all your dozens of unmanaged containers running different OpenSSL library instances.
I think what gets quite a few riled up over systemd, besides the continued shoggoth-like scope creep, is the flip-flopping on how things are handled.
For example, if you run mount -a, mount will print an error per failed mount, but will continue trying to mount the rest of the entries in fstab. Systemd, on the other hand, will fail the whole mount unit on just a single such error, and this in turn will fail the boot because various other units depend on the mount unit completing.
This has bitten admins of remote systems that have gotten "upgraded" to systemd as part of an update of stable distro releases. All because they had a straggling entry in fstab that may have been sitting there quietly for a decade.
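For anyone bitten by this, the usual escape hatch is to mark such entries so systemd treats them roughly the way mount -a did; a rough before/after sketch (the UUID is a placeholder; nofail and x-systemd.device-timeout= are documented systemd.mount options):

    # Old entry: if the device has gone away, systemd fails the mount unit
    # and everything that depends on it.
    UUID=placeholder  /mnt/olddisk  ext4  defaults  0  2

    # Forgiving entry: note the failure, wait briefly, and keep booting.
    UUID=placeholder  /mnt/olddisk  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2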
Then you have them overriding a kernel default on mount namespacing, because the containers people want it that way, while others have come to rely on the kernel default being, well, default.
I don't think they have yet "solved" handling NFS mounts, instead giving the world their own take on ctrl-alt-del, while disabling the sysrq sequences.
Or how systemd would (will?) blank out an environment variable via systemd-pam when su -l was used, resulting in people's personal settings getting mauled by root. Apparently su is "broken", according to Poettering.
And now Poettering goes ahead and closes this report with what amounts to yet another "wontfix".
Theodore Ts'o seemed to hit the nail on the head nearly 2 years ago.
And iirc, containers are crap sandboxes.
If Nginx, for example, has a security flaw, it's properly Nginx's job to make a new container. That flaw might be because of Nginx's dependency on OpenSSL, but it might also be because of a vulnerability in Nginx itself. Nginx (and every other "app" developer) needs the process in place to get new releases out in the latter case; if they have that process, there's no reason to not apply it also to the former case.
Distro maintainers have been picking up the slack of creating "security releases" for app software for decades now, but they only do it because app developers were unwilling to take on the responsibility of maintaining packages in several different package ecosystems. App developers are not nearly as timid when it comes to container-image releases, since it's just the one QA process rather than five. So, properly, the responsibility—and the SLA—is reverting back to the app developer.
You can't expect, now, that paying Redhat or Canonical for a support contract will mean that Redhat/Canonical will be required to rush you patches for your app software. It never worked this way anywhere else; Microsoft and Apple aren't responsible for patching the vulnerabilities in the apps in their ecosystems. (Even the ones you buy through their stores!) The Linux world is just coming back to parity with that.
Now, on the other hand, system components—things the OS itself depends on—still need to be patched by the OS manufacturer. Microsoft still delivers IE patches; Apple still delivers Webkit patches. Because those are fundamentally system components, used by the OS to do things like drawing notification dialogs.
Those components happen to come with apps, but the system component and the app are decoupled; there's no reason the version of Webkit the OS uses, and the version of Webkit Safari uses, need to be the same. And they're not: you can download a fresh Webkit from Webkit.org and Safari will pick it up and use it. Apple, thus, only takes SLA-responsibility for the system-component Webkit—not the "app Webkit." The same is (soon to be) true for Linux distro-makers.
The near-term effect of all this, of course, isn't that app developers universally pick up the maintenance-contract stone and start carrying it. Most have spent too long in the idyllic world of "downstream support" to realize, right away, what this shift spells for them. Instead, in the near term, it'll be your responsibility, as a system administrator, to be a release-manager for these container-images. This was always true to some degree, because every stack tends to depend on some non-OS-provided components like Nginx-current or the non-system Ruby. But now it'll be pretty much everything.
An interesting incentive this creates is to move your release-management more toward monolithic synchronized releases. If you're managing the releases of all the components in your stack, it's much easier to think of yourself as releasing a whole "system" of co-verified components, rather than independently versioning each one. When you upgrade one component, you bump the version of the system as a whole. This sort of release-management is likely familiar to anyone who has managed an Erlang system. :)
I can have containers, but these containers are still manageable, not black boxes.
Furthermore, since packages come as declarative recipes, one can try to reproduce binaries (see the guix challenge command). Otherwise you are at the complete mercy of the packagers.
For very large deployments, Docker-like containers are fine. But for desktop applications I think it's not the way to go.
If you're writing software from scratch, though, by all means, target Nix as your package-dep ecosystem, and make your own software into a Nix package. Nix is a great source for creating "community verified-build" container images, for any "legacy" container ecosystem. Where a Nix package exists, it's exceedingly simple to write a container-image formula sourcing it. Thus, having a Nix package makes the software much more accessible to many platforms.
It would also, as an aside, make perfect sense to enhance systemd with the ability to run Nix "units" that are just references to Nix packages that get resolved and downloaded+built at launch-time. (systemd would still be better off turning those Nix units into container-images, though—systemd relies on containers to provide runtime sandboxing, ephemeral container filesystems, and cross-image deduplication of base-files.)
So, for small teams, I see a likely move more toward the model we see with "community-made verified-build Docker container images": a GitHub repo containing the container-image formula for the release, that many small teams depend on and submit PRs to when vulnerabilities occur.
While not ideal, this is far better than the Ubuntu PPA style of "some arbitrary person's build of the release." It doesn't give you anyone to blame or bill for downtime, but it does give you frequent releases and many-sets-of-eyes that hopefully make your downtime quite small.
It's a bit like the atomized "bazaar" equivalent to the "cathedral" of a distro package-management team, now that I think about it. Each verified-build formula repo is its own little distro with exactly one distributed package, where anyone with a stake in having good releases for the software can "join" that distro as one of its maintainers. :)
It doesn't even matter if you're pro or anti-systemd. That sort of response just shows a huge lack of understanding about the severity of the problem.
Note that he says writable only by root and well protected. As far as I can tell, that's agreement that something needs to be done to make it safer. All his other statements seem to be noting that it's not as simple as making it always read-only, as there is legitimate need for write access in some instances, along with tips on how to mitigate the current issue.
The personal attacks on Poettering should be directed to Red Hat instead, who actually have the power to do something about it.
I don't think you quite understand the relationship Red Hat has with its employees ("associates" in company lingo but I abhor such doublespeak). Allowing those employees to act and speak independently is kind of a core value. That freedom is only curtailed when it directly impacts paying customers to the extent that it would be considered a breach of that relationship. Upstream, in the open-source community, almost anything goes. Yes, that means systemd developers can be a bit priggish. It also means other developers, also employed by Red Hat, can call them out for it. It's the only way to ensure the diversity and meritocracy that are essential to innovation. Otherwise, you end up not being able to trust a word that employees of a company say because you know they'd never dare say anything even slightly inconsistent with the party line. I used to see that when I worked at EMC, just for example, and it's really quite stifling.
Personal attacks on Poettering should not be redirected anywhere. For one thing, personal attacks don't get anyone anywhere. Legitimate criticism of his views should be directed at him, just as legitimate criticism of my views should be directed at me and legitimate criticism of your views should be directed at you. There's no reason to bring any third party into it. No matter how much you hate them or why, that's simply irrelevant.
Meanwhile, on the GitHub issue:
>Locking this one. Not sure which peanut gallery site linked this...
This is clearly a guy who thinks his own opinion is above reproach and the unwashed masses have no right to question him.
The problem you seem to miss is that his arrogant antics _do_ reflect on Redhat, and reflect badly on Redhat. I for one will _never_ use anything from Redhat, nor pay Redhat any money for anything, as long as Poettering and the rest of his crew (Sievers et al.) are employed by Redhat.
I am sure I am not alone in this viewpoint. As more realize that the problem is a problem at Redhat, eventually Redhat management will be forced to intervene.
PS as an employee, you should be pointing your upper management to these types of discussions, for the very reason that the actions of a loose-cannon group are reflecting very badly on Redhat as a whole.
I think you're missing the point. It's not about who you tell. It's about who you're criticizing. Red Hat is not the one making these comments. If you condone a company summarily removing a project leader you don't like, then you also condone a company summarily removing a project leader you do like. That doesn't end well. In fact, I could name projects on which I feel Red Hat has forced their will on upstream entirely too much, to the detriment of both. It's not the way a meritocratic community is supposed to work. I think in general it's better to let technical communities deal with their own issues, and in general Red Hat is wise enough to recognize that.
Believe me, I know where you're coming from. I almost didn't join Red Hat myself because of people like Ulrich Drepper (who was still there) and Al Viro (who would even have been in my group). I understand the sentiment. Criticize Poettering if you want, make sure Red Hat knows the effect that his behavior has on your purchasing decisions, but don't blame them for trying to do the right thing by adhering to a policy with a solid philosophical and practical foundation behind it. Do you want corporate overlords to be meddling in every project's internals?
Poettering, being a high profile employee of Red Hat, is one of the faces of the company. Because of this, I would argue that Red Hat has a duty to step in from time to time to tell him that what he is doing is harmful in the long term. If they don't do it soon, I can almost guarantee there will be a lot fewer support contracts for Red Hat in the months and years to come; they just don't create dependable developers.
If all Poettering did was community outreach, you would have a much stronger case that he's bad in all aspects of his job and should be fired. Trying to pressure Red Hat by saying it 'reflects poorly on you' is basically saying 'you should get rid of this guy even if you think the good outweighs the bad, because I don't like him.'
If this is true, then this is exactly _why_ the criticism should be directed towards Redhat. Because in his arrogant world view, he can do no wrong, so he will not report to his bosses that his community leadership/stewardship has been ineffective. The only way his bosses will know of his ineffective leadership/stewardship is if the criticism is directed towards Redhat, and therefore his bosses.
I.e., the criticism has to go around the roadblock, Poettering being that roadblock.
Three employees among thousands, representing two projects out of hundreds. Why generalize from that sample? Why ignore all those contributions to the Linux kernel or Fedora, OpenStack or Kubernetes, gcc or coreutils? Some pretty good leaders in there. And if "creating leaders" is supposed to be how we judge companies, what should we make of much larger companies where few employees engage with the community at all? I don't even mean unpopular companies like Microsoft or Oracle. What about Google, for example, or Apple? When it comes to community leadership, they're net negatives; existing leaders go in, and are never heard from again. When every single developer at a billion-dollar-a-year software company is engaged with some open-source community or other (often several), there will be a few losers. That's a poor reason to insult thousands of others.
The whole thing reminds me of how priests and monks would debate the number of angels that could dance on the head of a pin.
So that's basically what uint32 proposes: to make an impact.
I can't seem to locate it now, but I seem to recall a video released within the last year or so that showed various people from within RH discussing the history of Linux. And at one point one of them energetically declared "we won" in regards to some "unix war".
All in all the video gave the impression of a company culture that had the mentality that they could do no wrong.
Start informing Redhat that these responses on Poettering's part reflect badly on them as a whole, and especially _stop_ paying Redhat any money until the issue is resolved, and Redhat will resolve the issue very quickly.
Poettering and Sievers are just freedesktop.org adding another layer of feces on the pile. The fact that people switch distributions to avoid his applications (NetworkManager, PulseAudio, Avahi, systemd), and the fact that someone hacked XBill to include him because he's a worse villain than Bill Gates, should put him on a level that even Ulrich Drepper couldn't touch, and everyone hated that guy.
Redhat has been a useless organization since they chose to promote Havoc Pennington over Carsten Haitzler (longer than any of you have known they exist). Blaming them does nothing to stop their slow lurch over the linux landscape. Boycotting them might, but there are too many RHCE's out there. Best to focus on FDO and Poettering/Sievers.
Then again, there's also the question of why removing EFI configuration variables would make the machine unbootable; you would think that in the absence of any explicit configuration, the firmware should just choose a sane default. That would be like making an rm -rf / also reset your BIOS settings, which is surprising but easily recoverable behaviour. This seems as crazy as mounting the whole flash ROM as a filesystem, so that deleting it erases the firmware. The symptoms described do sound like what happens if you try to boot a motherboard with a completely blank BIOS chip (from personal experience...) --- the system will power on and just stay on, but nothing else will happen, not even a POST beep.
Edit: given that the majority of users have probably never touched their BIOS/UEFI settings, so that they remain at defaults, resetting them would not be noticed by them either. It's likely the advanced users, the overclockers and so forth, who will be running non-defaults.
All these hardware companies - the motherboard makers, the wifi radio makers, the hard drive makers, the peripheral and chipset makers - all write the most awful, insecure, disastrous code in the industry. But because nobody cares enough to put their wallet where their mouth is and refuse to buy hardware without access to the firmware to audit, improve, fix, or replace it, this is what you are left with, and you get what you pay for.
And there are plenty of firmware settings you might want to change even without overclocking. Change the default boot hard drive? Firmware. Turn off unused ports on your motherboard? Firmware. Change fan speed settings? Firmware. Any implementation of network / USB booting? Firmware. Full-disk encryption? Firmware.
I just know the next laptop I buy will be whatever the highest end liberated Chromebook at the time is, preferably without cancerous firmware blobs that control everything, but that seems unlikely considering how anti-freedom Intel is.
I just hope AMD saves x86 computing in the Zen generation. They are the underdog, they have reasons to not throw users under the bus for complete control of the platform like Intel does. But their hijacking of radeonHD and injection of firmware blobs there doesn't make me hopeful for first-gen freedom respecting hardware from them any time soon either.
RISC-V, save us!
Yep, I don't think you'll find x86 Chromebooks without blobs. Nowadays the only way Intel provides to boot their hardware is "here, take this binary and put it at the beginning of your BIOS".
Same thing with AMD, btw.
You presented evidence that hardware vendors write awful firmware code - why would these people write decent firmware as soon as RISC-V comes along? I.e., what evidence do you have that we won't end up with the "accepted" behaviour of having RISC-V and bad firmware?
A lot of the problems with x86 motherboard vendors are probably that they legally cannot disclose a lot of the internal documentation and code handed to them by Intel, because Intel uses some of it as a trade secret against ARM / AMD.
Poke around Intel's firmware developer center http://firmware.intel.com/develop . There's pretty much everything there you need to make a firmware that isn't terrible, but companies will find a way to provide one anyway.
FYI the pre-Pentium subset of x86 has been public-domain and free of patents for a long time, and I believe several more Pentium-level patents are going to expire soon, so in that sense a lot of the basic x86 instruction set is more open than ARM. No doubt if RISC-V becomes popular there will be plenty of proprietary extensions to it too.
Systems would be safer if this stuff was stored on disk and OSs never had any reason nor even possibility to tinker with motherboard's configuration memory.
The "low level hardware settings" were always stored on the motherboard, ever since the PC/AT. The big difference is that a simple CMOS reset would reset those to the defaults, and the machine would be bootable with the defaults. In the old days some errant program could corrupt CMOS (writes to port 70h/71h), but that was relatively easily fixed.
With this UEFI stuff, it appears the configuration data is stored in nonvolatile RAM, there's no easy way to "reset to defaults", and the defaults are either missing/unusable.
The good-old-days weren't all Wine and Roses. For example, there was this: http://webpages.charter.net/danrollins/techhelp/0054.HTM
If you wiped out CMOS, a "simple CMOS reset" was not sufficient to allow booting, because knowledge of the type of disk you had installed was lost.
Sure, you could iterate thru those preloaded disk types and stumble upon the correct one. BUT, having a fixed selection of types proved to be too limiting. So there was a scheme to add drives types. That info was also stored in CMOS, so if you lost that, it was quite difficult to restore the configuration:
Newer BIOSes provide a means to define a custom
setting. The setting will be stored in an
undocumented location in CMOS memory (and is lost
if the battery ever fails -- so write it down!)
But the problem of drive geometry detection essentially disappeared with IDE autodetection, which quickly became the norm sometime in the early 90s.
But isn't boosting the security of PC systems a selling point of the UEFI/Secure boot implementation? Or was that all a lie?
SecureBoot is a farce. 99.99% of users will never be the target of the attack it supposedly prevents and the other less than 0.01% of users know who they are.
On top of that it doesn't even work. The premise is for it to be used in combination with full disk encryption (since otherwise the attacker could just remove the drive), to protect the integrity of the boot shim that prompts you for the decryption password so the attacker can't replace it with one that gives the attacker the password. But there is necessarily an unencrypted analog connection between the human and the computer and the attacker can still capture the password that way.
If you are bothered by people booting unauthorized disks on your hardware, enforce signature checking on OS images.
# rm -rf /
Among other things, it will delete the current directory. The standard does not say what to delete first; in their implementation it tries to remove the current directory first -> undefined behaviour -> it fails.
The logic behind it: when is it really your goal to delete your entire machine? Mostly never. You don't type it out by accident, but shell scripts with unset variables might do it.
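(For what it's worth, GNU coreutils' rm already ships this exact kind of guard: the bare command is refused unless you opt out with --no-preserve-root; the output below is paraphrased from memory.)

    $ rm -rf /
    rm: it is dangerous to operate recursively on '/'
    rm: use --no-preserve-root to override this failsafe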
And regarding Poettering's response (not trying to start a fight): It's Poettering, what do you expect? You can hate or love systemd, but part of why people hate it is his intellectual arrogance in everything he does.
 He tells the story somewhere in here https://www.youtube.com/watch?v=l6XQUciI-Sc (quite entertaining)
I don't think it's just his arrogance, it's that it's not backed up by substance. Linus is an arrogant prick, yeah? But his kernel works pretty well, so he gets some slack. All PulseAudio ever did for me was waste my time and break my ability to output sound. Systemd wastes my time, makes my computer work differently for no reason that's apparent to me, and now makes it easy for me to brick my machine if I'm not careful and I have a terrible BIOS.
 I've yet to meet a bios that isn't terrible, although hopefully few are terrible in this specific way.
I find it even more worrisome that some compare this mistake to accidental clobbering of /dev/sd?.
That's an insane design decision and if I replicated that in my professional capacity designing heavy machinery I'd be rightly fired and sued to oblivion because the equivalent result is dead people. This is a basic case of the principle of safety-in-design.
Linksys had this problem a few years back with their new line of "open source" routers - it took them months to clean up their awful internal coding styles to get patches accepted into DD-WRT, and even then they were accepted on a compromise where DD-WRT developers had to fix a lot of it to make it less of a security, portability, and readability nightmare.
These hardware vendors at all levels - storage controllers, chipsets, radios, and more - have absolutely no QA on their code, and by being so extremely proprietary nobody can do anything about it, and not enough people care to speak with their wallet to change these terrible habits.
Hardware vendors do have QA, but it's mainly about ensuring that things work, not about trying to break them in every possible way. Safety and security seems to be notoriously hard for people who have been taught how to make things work, but not how to make them fail.
I know I'm repeating myself, but I still think the interactions between OpenWRT and the Chinese firmware vendor that was pushing Linksys firmware upstream are a valuable example of why open source is valuable in this context, even if you are not intimately involved in the development, testing, or inspection of such code. Public code by its nature attracts more scrutiny, and it's harder to get people to accept something broken or poorly written when they can see just how bad it is.
If you want to develop awful coding habits, only work with people who never develop free software. If you want to have really good habits, work in a very popular free software community, because when your work is in the open like that and everyone is a volunteer nobody is going to put up with crap.
> then hundreds of eyes would be upon it and awful code that does this insanity would never enter official repos.
There is no comparison between the bugginess of BIOSes and OpenSSH.
There are degrees of return on code visibility, though. Even a dozen competent developers could miss arcane buffer overflows or bad page execution issues in a large patch because the language is awful and lets you do crazy shit. That is one aspect of development quality that doesn't go away when you move from closed to open source.
But the best practices - consistent code style, documentation, reasonable variable names, reasonable line lengths, and the need to defend your contributions are all products of open collaborative development processes.
I'd argue in many ways that the open nature of OpenSSH is why we have only had three (four?) major security vulnerabilities out of it in the last five years. It's a sixteen-year-old ANSI C codebase, of course it's a security nightmare, but it is a lot less dangerous than it could have been - imagine having Heartbleed in a proprietary TLS implementation where developers could not immediately fix it or easily deploy the fix.
You shouldn't be able to hose it completely except through special equipment, for example by connecting to system programming terminals on the motherboard with external hardware. The fact that a higher-level system can damage a lower-level system is just bad design.
As of OSX10.11, the live, everyday-use OS doesn't have write access to EFI variables. Instead, to fiddle with EFI vars (which happen to include the OS's kernel-module code-signing grant store, which is how people most often run into rootless) you have to reboot into the recovery partition.
In other words, instead of creating a custom BIOS setup as a special UEFI app with privileges that the OS never has, Apple has instead given OSX the equivalent of SysV runlevels—and then made EFI only writable in the single-user maintenance runlevel. Instead of transitioning between these runlevel-equivalents "online", you reboot between them; and instead of being modes of the same OS image, they're two distinct copies of the same OS. But the usage semantics are the same.
(The key to security here, if you're wondering, is that the recovery OS is a single solid image that's been code-signed as a whole, with the signer's pubkey kept in another EFI var. The live OS can't just be made to overwrite the recovery OS into something malicious, even though the live OS has full control of the disk it sits on and is responsible for replacing the recovery OS when it receives updates.)
Personally, I think something similar might be the best solution for Linux as well. People are suggesting something like a wrapper program, but a wrapper can still be used maliciously. It's far easier to secure a "maintenance mode" of the OS that must be rebooted into, and doesn't bring up the network; such a mode necessitates (remote virtual) console access to actually do what you want, rather than allowing you to simply trigger off a destructive EFI op over SSH.
This can still be automated; your automation just needs to be able to speak to the remote console. And tools like grub-install can still work; they just need one program on the live-image side and one on the recovery-mode side, where the live OS's grub-install just records your desired changes, sets the recovery-mode flag, and reboots; and where the recovery-mode grub-install agent reads the file, actually performs the op, unsets the flag, and reboots back.
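A shell sketch of that hand-off, where every path, flag name, and helper is invented purely to show the shape of it:

    # Phase 1 -- live OS: grub-install just records the request and reboots.
    echo "install-bootloader /dev/sda" >> /var/lib/efi-maint/pending
    touch /var/lib/efi-maint/enter-maintenance
    systemctl reboot

    # Phase 2 -- maintenance image, run at boot: apply, disarm, reboot back.
    if [ -e /var/lib/efi-maint/enter-maintenance ]; then
        mount -o remount,rw /sys/firmware/efi/efivars
        run_pending_requests /var/lib/efi-maint/pending   # hypothetical helper
        rm -f /var/lib/efi-maint/pending /var/lib/efi-maint/enter-maintenance
        systemctl reboot
    fi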
Difficulty in getting to the battery aside, he just did a regular CMOS reset, the standard technique for getting otherwise unusable systems back to a good state.
Now that you posted this, I think I recall one friend telling me that hibernation killed his laptop. But this was over 10 years ago and I only know about one such incident.
OTOH, what UEFI gave us is basically a portable and convenient API to brick any machine from any OS.
efidestructivecmd opts... --really-brick-myself-and-catch-fire # fire optional
Why is it incredible? It's no news that you can flash various things if you've got root.
Ended up being the case that the EFI pstore was filled (half full?) with Linux crash dumps from before I ironed out some OC stability issues. Had to manually mount it and then delete files named "dump-type0-" from the BIOS NVRAM to resolve the issue, which was pretty fun.
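For anyone hitting the same thing, the cleanup is roughly the following; the pstore mount point and the dump-type0-* naming come from the kernel's EFI pstore backend, though exact file names vary:

    mount -t pstore pstore /sys/fs/pstore    # usually already mounted
    ls /sys/fs/pstore                        # old dump-type0-* entries eat NVRAM
    rm /sys/fs/pstore/dump-type0-*           # removing them frees the variable store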
Something along these lines: https://bugzilla.redhat.com/show_bug.cgi?id=947142
forgot account name
Here's OpenRC also mounting this filesystem read-write, since 2013:
Here's a Debian bug from a year and a half later, asking for systemd to do the same on Debian Linux:
And here's Finnbarr P. Murphy in 2012 explaining the whitelisting that the old efivars system imposed upon variable access, stating that this system "should be retired", and questioning why these checks are not performed in applications-mode code rather than in kernel-mode code. I suspect that a lot of people can now answer that question, with hindsight. (-:
With systemd it is done in the C code of the init binary itself, so you have to work around it with a remount in fstab.
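Concretely, the workaround that gets passed around is an fstab override along these lines (the mount point is the standard one; whether the ro actually sticks on a given distro is worth verifying):

    # /etc/fstab: keep the EFI variable store read-only after boot
    efivarfs  /sys/firmware/efi/efivars  efivarfs  ro,nosuid,nodev,noexec  0  0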
Many regular (and somewhat tech-minded) individuals may not fully understand the issue, but this problem only surfaced by pure coincidence, because somebody happened to rm -rf something; otherwise the bug would have lingered the way the OpenSSL bugs did.
I also really wish the pro-systemd crowd would stop attacking anybody that does not agree with their views. Linux was and always will be about community, and if you alienate the rest of the users, freedom means they will (and probably should) move on, even if 90+% of Linux-variants now use systemd.
Linux distributions other than Arch are likely to be affected as well, because systemd is hardcoded to mount the EFI variables pseudo-filesystem with RW access on every boot.
Is this correct?
Basically the UEFI spec is crap to begin with, and there are implementations that make egregious mistakes that don't matter if the guest OS is Windows but go horribly sideways if something atypical comes up.
For instance, a while back there was a case of Linux bricking a bunch of laptops because, while efivars had space available, trying to use some of this "free" space would leave you with a paperweight.
Systemd decides of its own accord (that is, the distro cannot tell systemd not to do this) to mount a dangerous filesystem read-write by default.
"Solutions" provided by systemd developers:
"Well, there are tools that actually want to write it. We also expose
/dev/sda accessible for root, even though it can be used to hose your system."
"I don't see that particular behaviour as much of a problem. The problem
is that buggy systems can be bricked; it could just as easily happen
because of, say, a bug in gummiboot or refind."
"So all fixes mentioned here can only protect from accidental deletion -
not malicious intent."
"As long as distribution that are aimed at consumers remount it ro and
on updating kernels wrap grub with remount this is a complete non-issue."
"If anyone needs protection from idiocy, mount it as ro in /etc/fstab."
"To make this very clear: we actually write to the EFI fs in systemd.
Specifically, when you issue "systemctl reboot --firmware" we'll set the
appropriate EFI variable, to ask for booting into the EFI firmware setup.
And because we need it writable we'll mount it writable for that."
This sounds fun and dangerous.
There are good things that come from treating everything as a file.
Because not everything is a file? And therefore forcing people to treat everything as a "file", which they have certain natural expectations for, leads to problems such as the one here when the objects cannot fulfill the users' expectations?
> You can use it to do many more cool things with shell scripts (such as [...] dodgy cowboy stuff)
I feel this is fairly obvious... if your argument for the everything-is-a-file model is a love for dodgy cowboy hackery, then is it really that surprising that you're sacrificing something (in this case, usability/sensibility/etc.) in the process? I mean, yeah, those who feel like cowboys might find your system intuitive, but do you not see how it might not be very usable by (or useful to) other people?
> Because not everything is a file? And therefore forcing people to treat everything as a "file", which they have certain natural expectations for, leads to problems such as the one here when the objects cannot fulfill the users' expectations?
"Everything is file" doesn't mean that everything is associated with a block on disk. Filesystems are a very good (and intuitive) way of describing hierarchies, and benefit from requiring literally only one syscall interface that works will all of your tools and programming languages without needing to update the stdlib. How would you propose to represent hierarchies using syscalls? Would you have a "set_uefi_variable" syscall? How would that not become unweidly? ioctls wouldn't work (not just because they're ioctls but also because you'd need to open a file, and devices aren't files either -- because "everything is a file" is bad, right?). You could try doing it all with kdbus (or something), and that might even be somewhat plausible. Until you realize there's no way of doing anything that doesn't require breaking out a C compiler. Shell scripts couldn't do simple things like change the dimness of your backlight.
> > You can use it to do many more cool things with shell scripts (such as [...] dodgy cowboy stuff)
> I feel this is fairly obvious... if your argument for the everything-is-a-file model is a love for dodgy cowboy hackery, then is it really that surprising that you're sacrificing something (in this case, usability/sensibility/etc.) in the process?
It also doesn't require 500 pages of API docs each time you want to change your backlight with a shell script. The whole point of Unix is to solve problems quickly, not to have to break out your C compiler each time you want to do something less trivial than renaming a file.
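The canonical example being the sysfs backlight interface, where the whole "API" is a couple of plain files (intel_backlight is just one common driver name; yours may differ):

    cat /sys/class/backlight/intel_backlight/max_brightness    # read the driver's ceiling
    echo 400 > /sys/class/backlight/intel_backlight/brightness # set it (needs root)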
> I mean, yeah, those who feel like cowboys might find your system intuitive, but do you not see how it might not be very usable by (or useful to) other people?
Filesystems are intuitive to almost all users. When teaching Unix to my friends, I start by saying "everything is a file" and move on from there. Why? Because it's actually a useful abstraction. Filesystems are already incredibly intuitive. Not to mention that nobody was actually complaining about how intuitive it is to have efivarfs, it's a non-issue.
I guess Germans aren't human beings?  :/
mwfunk is making the incredibly exclusionary statement that human beings should not (and that he does not) take pleasure in the pain and/or misfortune of others.
We can infer from this things like:
* mwfunk does not chuckle when a series of unfortunate and highly improbable events results in someone suffering very minor injury.
* mwfunk enjoys almost no comedy, because the vast majority of humor revolves around the retelling of stories where at least one of the participants has been harmed in some way, however minor.
* mwfunk does not feel any form of pleasure when a Bad Guy has gotten his comeuppance and is now being punished for the wrongs he inflicted on others.
It seems rather unlikely that these three points (and the hundreds more like them that could be inferred) all apply to mwfunk. The more likely explanation is that mwfunk is speaking from a rather high horse, and hopes that we groundlings can't hear when he chuckles at a rather good joke in a stand-up skit.