EFI Sucks (2012) (plus.google.com)
101 points by turrini on Feb 25, 2018 | hide | past | favorite | 72 comments



UEFI is a really unfortunate second system, with all the markings of Intel's internal cultural issues imbued into it; they are hard not to see if you've worked with Intel for any length of time. In the common case of an end user running an operating system, it's really hard to botch things without being kicked out of the market in ways that can't be worked around, so this won't matter to most computer users. But for the people who could actually reap the benefits of an open and extensible bootloader (device manufacturers and operating system developers in embedded, appliances, and hyperscale), UEFI is only a minor improvement over BIOS compared to other industry standards.


I'll go against the grain here: I go out of my way to use UEFI over BIOS, because it Just Works for me.

When I got my first UEFI-using notebook in 2012 (same year as the submission), things were a bit rough: the Arch wiki did not quite know what it was talking about, and the Arch ISO was not UEFI-bootable, so you had to use some trickery to install a UEFI bootloader. But even then, it only took me two hours to figure out partitioning and install the bootloader.

Since then, stuff has gotten much better. Just recently, I switched my VirtualBox VMs to use UEFI emulation instead of BIOS emulation because it allows me to use the well-designed systemd-boot (previously known as gummiboot) instead of that absurd abomination called Grub 2.


> it only took me two hours to figure out partitioning and install the bootloader

This doesn't read like a success story to me.


Immediately before, it mentions that this was 5-6 years ago and that there was no documentation available at the time, and immediately after it says that the situation is much better now.


Now try to install arch to a bootable USB stick from your UEFI booted system, and see all hell break loose.

I boot UEFI, but it's just so bloody cumbersome and fragile. The entire concept of having boot configuration in NVRAM is terrible.


Not to turn this into a support thread but this weekend I tried to install Arch on a new, UEFI-enabled laptop and I've been stymied so far because I can't get the laptop to recognize my Live USB. I've built it with Rufus, dd for Windows, and Easy2Boot all with the same result (no bootable device) even though it works on another machine. Could it be because I created the USB on a PC using UEFI?


Huh? I just `dd` the official image onto a USB drive and it works 100% of the time, with various UEFI-using machines.

(Except for that one Gigabyte mainboard which I returned upon finding out that its v1 firmware will only boot Windows 8.1. Not even Windows 8.0. There was a firmware update that fixed the issue, but without a bootable OS, applying it is a chicken-and-egg problem.)


Well then, I guess my problem likely lies elsewhere. Thanks for the reply. :)

And that has to be the strangest "feature" I've ever heard of from a hardware firm. Was it maybe purpose-built for a bulk order of machines that were going to be pre-loaded with Win8?


Even more fun when OEMs put what should be read only ROM variables in NVRAM, and they get nuked...


The one thing I really love about EFI on non-macs is that you don't need a boot loader. You just compile your kernel with an EFI stub and whatever you want in the initrd and then select it with the firmware GUI.

That's nice


I love efi for a similar reason: I still use bootloaders, but I don't have to chainload them or otherwise slow down my boot process with a menu asking which OS I should start.

Default boot into OS #1. If I want any other OS, I manipulate efi bootnext (efibootmgr -n #### && reboot).

So much simpler and more efficient.
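For reference, the BootNext trick described above looks roughly like this. The entry number 0003 is hypothetical; list your real entries by running `efibootmgr` with no arguments first. The sketch only prints the command rather than writing to NVRAM:

```shell
# One-shot boot into another OS via the BootNext NVRAM variable.
# BootNext is consumed by the firmware on the next boot only, so the
# permanent BootOrder stays untouched afterwards.
ENTRY=0003                         # hypothetical entry number
CMD="efibootmgr -n $ENTRY && reboot"
echo "$CMD"                        # prints: efibootmgr -n 0003 && reboot
```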


The EFI stub is the bootloader, and is analogous to the bootloader needed on a BIOS system.

You don't need a separate bootloader, or a multi-boot loader, on any system. They only exist as a convenience. People still use GRUB on EFI for exactly that reason.


You and swiley are conflating boot loaders with boot managers.

* http://jdebp.eu./FGA/bootstrap-programs.html

* http://jdebp.eu./FGA/efi-boot-process.html


I didn't understand a lot of what you said. Any sources?


The Arch wiki in the sibling comment is a good resource for learning a lot. I'll try to (over)simplify the short version here:

With the old BIOS-based boot process, the computer (the hardware/motherboard; not the operating system you ran on it) would, when it started up, look around for installed operating systems to boot. It wouldn't look too hard: on each hard disk, it would look in one (or, if you were lucky, four) places for a specially-prepared segment outside of the data installed by the operating system. This segment isn't a file or something that you can browse to normally on the drive (though some things shimmed it so it appeared thus); it's a non-file special segment outside of the OS's normal data/files.

That segment (well, segments, but still) had to be set up just so, otherwise the computer (BIOS) wouldn't find your OS to boot it. Doing this was often tricky, even more so when you needed multiple operating systems to be bootable. Doing this was also pretty inelegant, since if you wanted to stick an operating system on a sub-partition of a disk, as part of the installation process you had to mess with data that affected everything on that disk, not just what was in the sub-partition. It worked, and was simple, but it was the equivalent of being forced to modify a single central global variable in a software project every time one sub-part (file, function, whatever) of that project wanted to use a new library.

For an excellent writeup of the gritty details of the BIOS/MBR startup process, see this document: https://neosmart.net/wiki/mbr-boot-process/
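The "specially-prepared segment" is the Master Boot Record: the first 512-byte sector of the disk, whose last two bytes must be 0x55 0xAA or the BIOS won't treat the disk as bootable. A small sketch against a synthetic sector (the file path is made up for illustration):

```shell
# Build a synthetic 512-byte "MBR" and check for the 0x55 0xAA boot
# signature a BIOS looks for at offset 510 before running the sector.
dd if=/dev/zero of=/tmp/fake-mbr.bin bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/fake-mbr.bin bs=1 seek=510 conv=notrunc 2>/dev/null
sig=$(od -A n -t x1 -j 510 -N 2 /tmp/fake-mbr.bin | tr -d ' ')
echo "boot signature: $sig"        # prints: boot signature: 55aa
```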

UEFI, put most simply, looks around for installed operating systems a lot harder than the BIOS method did. It does a lot more than that, but the benefit GP was describing accrues from UEFI being much more capable in terms of discovery: instead of checking a single global prepared location for bootable OSes, UEFI can look in the files on the drive and do the intuitive thing: "looks like these files are part of an operating system; I'll give the user an option to boot from it".
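Concretely, part of that "intuitive thing" is standardized: on removable media, firmware falls back to a well-known file path on the FAT-formatted EFI System Partition. A sketch of that layout (the /tmp directory here stands in for a mounted ESP):

```shell
# The removable-media fallback path UEFI firmware probes when no NVRAM
# boot entry matches (the x86-64 case; other architectures use names
# like BOOTIA32.EFI or BOOTAA64.EFI).
mkdir -p /tmp/esp/EFI/BOOT
: > /tmp/esp/EFI/BOOT/BOOTX64.EFI
find /tmp/esp -type f              # prints: /tmp/esp/EFI/BOOT/BOOTX64.EFI
```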

This doesn't always work perfectly, and there's a lot of complexity (and criticism) I'm hand-waving away here, but that's the gist of the difference.

Sorry, turned out to be a not-so-short version.


If it just did discovery that would be pretty annoying. Most firmware actually lets you select a file to boot manually, that's what I like.


The Arch Linux wiki has a good explanation: https://wiki.archlinux.org/index.php/EFISTUB


Many of the issues are just that BIOS is old and stable while EFI and UEFI are new and immature. There should be a testing suite for UEFI, like the Acid tests for CSS, so there is a baseline of working functionality. But instead the test is 'does it run Windows', and that really is only a small part of it all.


A lot of the pain points are gone, understandably, but some still exist.

The biggest suck of EFI is that it uses the PE file format. I would have liked it more if it had just loaded a raw PIC binary or an ELF, and not used the Windows calling convention.
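To the poster's point, UEFI executables are PE/COFF images: they start with the DOS "MZ" magic rather than ELF's 0x7F "ELF" magic. A quick sketch against two synthetic headers (file paths made up for illustration):

```shell
# PE images (and thus UEFI binaries) begin with "MZ"; ELF images begin
# with byte 0x7F followed by "ELF". Fake both headers and inspect them.
printf 'MZ\220\0' > /tmp/fake.efi
printf '\177ELF'  > /tmp/fake.elf
head -c 2 /tmp/fake.efi; echo               # prints: MZ
head -c 4 /tmp/fake.elf | tail -c 3; echo   # prints: ELF
```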


That is not important. The real bad part is that you still cannot easily load modules of your own to work around a broken setup by the mainboard manufacturer, Intel, or the boot ROM in the device, or even disable some broken modules. This essentially causes lock-in.

(Security has nothing to do with it, the module can be crypto signed by user installed key.)

Plus the critical hardware setup documentation is not being released at all.

Another is the use of SMI to communicate with the thing, which just does not play well with a preemptible OS.

Memory map handling is additional badness on top.


That's more of an implementation problem of common EFI vendors, not EFI itself, sans SMI and mmaps.



Linus spent a lot of energy and Management by Perkele to make sure stuff worked and stayed stable under ACPI. People would submit patches that fixed some things and broke others, and Linus put a stop to this.

I'm not so sure about the ACPI standard; it certainly has some warts, but the main issue seems to be manufacturers who ship the thing once their hardware manages to boot Windows. Once. And note I said boot, not run.


"There are two kinds of interfaces: the simple ones, and the broken ones."

If only more people kept that in mind. ;-)


There's a lot about it that's bad, but the worst is definitely that it allows the firmware to keep running after the OS boots, interfering with its normal operation. Thanks to SMI, the system is non-deterministic: No latency promises can be made.


The same happens with BIOS; that is how many of its settings work.


Yes it does, but that's only of historical value by now.

Sigh, they had one job, and they managed to screw it up, so we do still have to put up with this.


Is EFI the system that my newer motherboards use when booting up?

Is he talking about the system that has replaced grub on most of my machines with a more reliable one that's built into the hardware?

In 2012, EFI/UEFI was pretty unreliable because the implementations were bad and the software support was terrible. Nowadays, I find it preferable.


Unfortunately his statement is still valid even after six years: it is still a complicated disaster. And it uses FAT32 and PE, an abomination in my view.


FAT32 is a good choice IMO. It's the simplest of the "mainstream" filesystems.


Is it a good choice though? Isn't there a bunch of possible issues with FAT patents, in case someone wants to ship a product that boots, that could have been avoided by using something else?

(I don't know, I just have some vague memory of some Tomtom/Microsoft conflict due to FAT)

edit: user 'xxs' managed to type up the same issue before me (while I was typing mine).


Yes, you are absolutely right about the TomTom fiasco, and they did end up paying MS.


I think only exFAT has patent issues? https://en.wikipedia.org/wiki/ExFAT#Portability_and_adoption


Both have.


Correct me if I am wrong, but it's still patented by Microsoft. Also, time precision is limited to even seconds (i.e. it cannot store a timestamp of 'modified at 10:00:01').
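The even-seconds limitation comes straight from the on-disk format: FAT packs a timestamp's time-of-day into 16 bits, with only 5 bits for seconds, stored divided by two. A sketch of that packing (field layout per the FAT directory-entry format):

```shell
# FAT time field: bits 15-11 hours, bits 10-5 minutes, bits 4-0 seconds/2.
h=10; m=0; s=1                        # try to store 10:00:01
dos_time=$(( (h << 11) | (m << 5) | (s / 2) ))
decoded=$(( (dos_time & 31) * 2 ))    # the odd second is gone on decode
echo "stored seconds: $decoded"       # prints: stored seconds: 0
```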


FAT is only patented if you want long filenames. If 8.3 filenames are long enough (and for EFI boot they are), then you can use FAT32 with short names.

Additionally, the specific EFI FAT32 specification grants various patent rights, which means you can implement EFI FAT32 (at least, that was the situation last time I checked).

The other restrictions of the FS barely matter, since 99.9% of the time it's used as /boot, which is mostly read from and sees barely any write activity. Data integrity can be ensured by setting up Secure Boot (MS or User/Setup mode).

[0]: https://en.wikipedia.org/wiki/File_Allocation_Table#Patents
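A rough sketch of the 8.3 constraint described above (the `is_8dot3` helper is hypothetical; real short names are stored uppercase and exclude a few more punctuation characters than this pattern admits):

```shell
# Check whether a name fits the classic 8.3 short-name form:
# up to 8 characters, then an optional dot plus up to 3 more.
is_8dot3() {
    printf '%s' "$1" | grep -Eq '^[A-Z0-9_~-]{1,8}(\.[A-Z0-9_~-]{1,3})?$'
}
is_8dot3 'BOOTX64.EFI'   && echo 'BOOTX64.EFI: fits 8.3'
is_8dot3 'vmlinuz-linux' || echo 'vmlinuz-linux: needs LFN'
```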


> Also time precision is up to even seconds (i.e. it can not store file stamp 'modified at 10:00:01')

In the pure form of the file system, yes, but with VFAT (which also adds long file names, creation time stamps, and access time stamps), the precision is exactly 1 second increments.


My main memory of VFAT comes from Windows 95 (or was it 98): I remember that when you saw this word anywhere, it meant your operating system was now borked beyond any repair.


If the patents haven't already expired, they're probably expiring very soon. Windows 95, which was the first to have the LFN feature, came out more than 20 years ago.


With the sibling comment's references: 2001-09-04, US6286013B1, so 2021. However, that affects only long filenames, which should be OK for EFI.


FAT32 is not a bad choice. It's a relatively simple filesystem that reduces the amount of stuff the UEFI has to deal with, while also being understood by basically any modern operating system (modern = since 2000).

The only other option I'm aware of that is cross-OS would be UDF, which back then wasn't popular outside DVDs and even today people continue to ignore. Though it would have been patent free.


> It's just sad how people always try to "improve" on old standard interfaces by then over-designing the improvements to the point where the new interface is just overwhelmed by complexity.

Did Linus also criticize the complexity and untested nature of systemd?


I vaguely remember Linus finding some strong words when the systemd developers wanted to integrate dbus into the kernel.

(IIRC, they claimed it offered better performance, Linus ran some benchmarks and claimed the kernel-dbus implementation performed worse than the regular dbus daemon. My memory on the subject is fairly vague, though, so don't take my word on it.)


While he has not voiced any direct criticism of the project that I recall, he has voiced concerns about its principal developers.

Keep in mind that at one point he was so fed up with Kay Sievers that he announced he would not accept any kernel patches that originated from that person.

I think it related to Sievers unilaterally deciding that the kernel, and not udev, should be responsible for loading firmware, leading to the kernel devs scrambling to implement the required functionality. In the aftermath, some kernel devs even pondered forking udev and bundling it with the kernel code as part of its userspace toolkit.

And this was from before udev was merged with systemd (iirc).


How is systemd an 'interface'?


It's how you interact with daemons.


Ah, right.

I suppose it could also be argued that the kernel is also unnecessarily complex as well?


Possibly, but a pretty obvious retort is that it's necessarily complex; there is nothing else to do all the things that it does. Whereas for BIOS and EFI the argument would be that they should do as little as possible to hand over to the real operating system.


Or at least it should be possible to reduce it to a minimal start-up set. That is possible with EFI in principle, but thanks to mainboard manufacturers, Intel, and AMD being secretive, plus crypto-signing everything... not at all in practice.


> but a pretty obvious retort is that it's necessarily complex

I would like to offer, as a counter example, every micro kernel ever.


You mean all those micro kernels that are in widespread use and see huge market shares?

The closest relative is NT, and that is a hybrid, not purely micro. Modular and hybrid kernels (Linux and NT respectively) offer better external-vs-internal complexity tradeoffs: Linux and NT are internally complex since they do all the hard stuff, while microkernels are externally complex since someone else needs to do the hard work.

Microkernels shove this exact complexity elsewhere; they don't make it go away.


The most prolific (non-embedded) operating system on the planet is MINIX, a microkernel-based OS. On the embedded, medical, automotive, and aviation side there are plenty of microkernel OSes that give Linux a run for its money. Just FYI.


But is that because it's a microkernel or for other reasons? Crash stability can be achieved in monolithic and modular kernels too, it's not unique to microkernels.


QNX is reasonably widely used and a quite elegant system. Among other things, it runs the user-facing electronics of several car brands.


It's not obvious that a micro kernel plus all the modules necessary to do what Linux does is less complex.


Eh, it's a little different. Your analogy would be more correct if we had kept adding to the BIOS interface till it got large, rather than starting from scratch.


Actually, that is what happened in the years before EFI. The API for old style PC/AT and PC98 firmware is a patchwork mess of oftentimes-conflicting subsystems that grew over the years.


I remember reading this a while ago. EFI just tries to do too much and thus is too complicated.

Let alone the concerns that Secure Boot raises for other OSes.


Arch Linux on VirtualBox using EFISTUB

Super simple steps to install an Arch Linux guest on VirtualBox with pure EFISTUB. No bootloader! Fill in the blanks with https://wiki.archlinux.org/index.php/installation_guide

timedatectl set-ntp true

gdisk /dev/sda

# 256 MB partition type EF00

mkfs.fat -F32 /dev/sda1

mkfs.btrfs /dev/sda2

mount -o discard,compress-force=lzo /dev/sda2 /mnt

mkdir /mnt/boot

mount /dev/sda1 /mnt/boot

pacstrap /mnt base base-devel btrfs-progs vim openssh

genfstab -U /mnt >> /mnt/etc/fstab

arch-chroot /mnt

ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

timedatectl set-ntp true

vim /etc/locale.gen

locale-gen

vim /etc/hostname

vim /etc/hosts

systemctl enable dhcpcd

systemctl enable sshd

passwd root

useradd -m user

passwd user

# /boot/startup.nsh

fs0:\vmlinuz-linux rw root=/dev/sda2 initrd=\initramfs-linux.img

exit

umount -R /mnt

poweroff


Had the same experience with my plastic MacBook from 2007, and I could install Fedora 27 and Mint using a program related to ISO Master. Good to know about ISO Master.

Wrote about the process: “The 2007 plastic MacBook lives again with Mint-y Linux awesomeness” @harijay https://medium.com/@harijay/the-2007-plastic-macbook-lives-a...


For this area, my ideal world is one where both the BIOS and UEFI are dead, replaced by something that is inspired both by OpenFirmware and coreboot.


[flagged]


EDIT: Parent post was blaming Linus personally for sundry perceived issues with the greater Linux/Unix ecosystem. I've left my original reply below.

----

Linus had no part in any of those. Most of them existed long before Linux was a twinkle in his eye.

Heck, some (two-letter commands) even existed before Linus was a twinkle in his parents' eyes [1] [2] [3].

(The exception being git, which is objectively one of the best DVCSes in existence. There is a reason it is so widely used.)

[1] http://swenson.org/multics_wiki/index.php?title=Linux-to-Mul...

[2] https://en.wikipedia.org/wiki/Multics#Project_history

[3] https://en.wikipedia.org/wiki/Linus_Torvalds#Early_years


i didn’t say he did create any of those. i only said he created git, which is true. i also didn’t blame linus like you say. i removed my response because, well, it was negative. i would just appreciate linus taking a less combative approach because his world hasn’t endowed the world of computing with as much nicety as he seems to think it has.


Linus has endowed the world with what is possibly objectively one of the greatest, and most historically significant, pieces of software ever to exist. He's entitled to his opinion. To claim he isn't, because the greater open-source ecosystem is fragmented and not user-friendly, is kind of like saying John Lennon has no right to complain about disco, because hippies are smelly.


i never claimed he wasn't entitled. i have simply stated i am tired of his complaints and rants. and my point is that the world he has helped nurture is not without its travesties, so maybe he should concentrate on that rather than constantly bitching about what other people have built.

it would be nice if you stopped twisting my comments to say what you think they say. and i don’t even understand your retort anyway.


>... constantly bitching about what other people have built."

Exaggerating to the extreme implies you do not have a strong basis for your argument.

If you actually count the number of times he's bitched about something, it's probably like 20 or 30 (can anyone provide a real number?). And even then only 2 are debatable in their value. (also a made up number to demonstrate the lack of solid data on this matter.)

Therefore, I suspect you knew this, and just want to vent or make Linus look bad. I hope you have a better day tomorrow and make a decent follow up comment.


in my own personal experience, my primary source of linus news comes from articles being posted about him ranting or cursing about something. i didn’t post those. someone else did, and i read them. normally, i will follow these up with reading about various things about him and linux where he isn’t ranting, but those articles have been my portal to his world. why should i care to seek out more about one who i find is well known to throw e-mail tantrums, flip off cameras as a message to companies, etc.? please send your advice and condescending call for positivity to linus as well. :)


> my primary source of linus news comes from articles

It's easy to fall into survivorship bias[1] when "news" is your only exposure to a topic. You are only seeing the extreme events that were "newsworthy".

> why should i care to seek out more about one who i find is well known to throw e-mail tantrums

Do you want to have an accurate understanding of reality or the distorted view limited to whatever was sensational/scandalous enough to be "newsworthy"? When knowledge is derived only from the extreme cases that make headlines, people start to believe inaccurate generalization like "black people are criminals", "Muslims are terrorists", or "gamers are antisocial nerds that might shoot up a school" even though all of these examples refer only to the most extreme members of the group[2].

> but those articles have been my portal to his world

If you're going to make accusations about someone, you should probably check if those articles (aka indirect knowledge) are both accurate and representative.

[1] https://en.wikipedia.org/wiki/Survivorship_bias

[2] https://www.youtube.com/watch?v=_Qd3erAPI9w


The written form of communication lacks nuance. I genuinely hope you have a better day tomorrow.


"i would just like to get error messages that actually said something more than something went wrong. yea, no shit something went wrong, i already know that, but what and why and how do i fix it are questions i want answered. i know the code knows what went wrong, so just tell me."

..and then a whole bunch of other comments with you bitching and being entitled about stuff.

Hi pot!


I'm sorry, but you won't get that. That isn't how computers work.

There is no magical "understanding" available through a computer to tell you WHY an error occurred.

It is a computing device. It can tell you THAT a problem occurred, and WHAT it was doing at the time. With a debugger, you are free to ask the WHAT question at many points within the program to figure out WHERE the problem is coming from.

But the answer to WHY can only be divined by you. Mom had a saying she was fond of: "You must be at least 10% smarter than the piece of equipment."

Computers are not smart. They are dumb things that are exceedingly trainable at doing very dumb, repetitive things very quickly. Layer on a bunch of levels of dumb stuff, and finding the spot where someone derped becomes very difficult, very quickly.


wut?



