CosmicStrand: The discovery of a sophisticated UEFI firmware rootkit (securelist.com)
251 points by Harvesterify on July 26, 2022 | 117 comments



> The most striking aspect of this report is that this UEFI implant seems to have been used in the wild since the end of 2016 – long before UEFI attacks started being publicly described. This discovery begs a final question: if this is what the attackers were using back then, what are they using today?

I always marvel at the ingenuity and technical complexity of these kinds of attacks, but this is also something that makes me lose sleep at night.

I can’t help but wonder just how utterly compromised we all are, and won’t know it until many years down the line.


Most modern exploits on this level are extremely difficult to get onto users' machines. Without any conspiracy at play, you would essentially have to get users to run untrusted code, and for the general use case there are a whole bunch of blockades against this. For private entities seeking financial gain, it's completely pointless to burn a zero-day like this for the return you would get.


Really? On some of my computers the UEFI partition is a FAT32 partition writable by anyone by default.


Sure, from your computer's OS. It's not like JavaScript loaded from the web can write to your UEFI unless you use an insecure browser.

Most people are not going to be downloading random executables and running them, since software is managed through app stores nowadays.


I think the point of the original comment was that it's entirely feasible for attackers this sophisticated to have access to a browser 0-day which would allow filesystem access, "insecure" browser or not.


Zero day exploit =/= people automagically get infected.

One would have to first craft the shellcode insertion into the exploit, which is not exactly trivial, then hope that enough users visit a particular website to get infected (which is a negative feedback loop: the more users visit a website, the more likely it is to get scanned and reported as containing malware), then drop a crafted executable to presumably steal something that is worth money, which is a whole separate problem.

Possible? Definitely. So is you getting held up, your car stolen, and chopped up for parts, with no available recourse. Both are rather unlikely.


I’ve personally found multiple 0days in Chrome, and I doubt anyone (including myself) would consider it an insecure browser.


Here it starts from modified motherboard firmware, stored on a chip on the board, NOT a normal partition. Harder to infect at first, but way more persistent, surviving even disk changes.


What systems are those? Windows doesn’t allow you to do that by default unless you’re an admin.


Any systemd-using Linux distro will also automount /efi as writable only by root (assuming the mount was generated by GPT auto generator, not by a specific fstab entry), so it's not that either.
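How an ESP is mounted is easy to check programmatically. A minimal sketch, assuming a /proc/mounts-style listing; the device path and mount options below are illustrative, not taken from any particular system:

```python
# Sketch: inspect how the EFI system partition is mounted by parsing
# /proc/mounts-style output. Field layout per line: device, mountpoint,
# fstype, options, dump, pass.

def esp_mount_options(mounts_text, mountpoints=("/efi", "/boot/efi")):
    """Return the set of mount options for the ESP, or None if not mounted."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in mountpoints:
            return set(fields[3].split(","))
    return None

if __name__ == "__main__":
    sample = "/dev/sda1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077 0 0"
    opts = esp_mount_options(sample)
    # "rw" alone does not mean world-writable: fmask/dmask 0077 restrict
    # the FAT filesystem's synthesized permissions to root only.
    print(sorted(opts))
```

The point being that "rw" in the mount line is not the whole story; on FAT the fmask/dmask options decide who can actually write.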


Normal home users are administrators; they have to go through the pop-up to run things with escalated privileges, but that, according to Microsoft, is not a security boundary.


If we're considering Administrator / root access as trivially available, then any exploit becomes trivial itself. Even on a BIOS machine root can overwrite the MBR / kernel / initramfs to contain an exploit.


Sorry, the argument is:

> Windows doesn’t allow you to do that by default unless you’re an admin.

The refutation is that by default users are an admin. So no, they’re not protected against persistent threats like UEFI malware.


Yes, and thus I made the comment that I made.


Unless the mobo has that Intel remote-KVM thing (AMT/vPro) built into the hardware and they access that. Very low-level remote access this way.


Shameless self-plug here.

I wrote about the potential for this problem in 2014 for my graduate thesis.

https://search.proquest.com/openview/cd06aab6e06951ba6cdc064...

Edit: to the parent, I shared many of the same concerns back then, too. I tried to speak to those anxieties in my final product.


Yeah, I assume that either everything is infected and backdoored and there's no way to detect it until it's too late, or almost nothing is infected because doing so would be the cross platform compatibility nightmare of all nightmares.

I don't know which one it is.


Definitely the first. Beacons have better cross-platform support than most Microsoft products.


Microsoft sets an extremely low bar, as they only want one platform: theirs.


> I can’t help but wonder just how utterly compromised we all are, and won’t know it until many years down the line.

It's not hard to imagine.

USB-C 3.0+ cables all need chips inside them for negotiating USB-PD, among other things.

Imagine what could be done with an infected USB-C cable. Yes, of course, keyloggers are possible (that's been done plenty in the past with regular old USB-A 2.0), but think about one of USB-C's common applications: docking stations.

If you hooked up your laptop to a docking station with a malicious USB-C cable and you had ethernet, an external monitor, and a keyboard plugged into the dock, you would basically be giving an attacker a VNC session. It could scoop up everything you type, everything on your screen, and communicate via a connection that is entirely transparent to the OS.

At that point, your only hope is a firewall flagging the connection, otherwise you'll be completely oblivious to the ongoing surveillance. And it could compromise a network connection to insert a malicious payload into a file you're downloading, just to make the surveillance persistent when you're not plugged in.


If you care about security, consider using Qubes OS. It will be extremely hard to infect your UEFI from a VM.


You know what'll help? Pluton. The future of computing is a signed code path from power on to end-user application code with multiple layers of sandboxing in between. With so many hostile actors, from script kiddies to government agencies out there, "general purpose computing" (which, from a security standpoint, is just arbitrary code execution) just isn't viable anymore. We need provable attestation that no layer of the software stack has been tampered with.


So you will have backdoored software, but you will not be able to replace it because it is "secure".


As long as we control each layer this sounds great. What are you thinking, some sort of physical switches on the computer that turns on and off access to the layers from software so you can control them individually? That's the tricky part, how to switch those layers on and off so you can work with them in a non-software controlled way.


yes that is a problem, and even bigger is that pluton is intended to make this as close to impossible as possible.

pluton is about burying the TPM and keys in the processor package rather than as a separate host on the bus. this could be defeated by using microsurgical techniques to reveal the die and alter the connections, an extreme effort requiring an extreme motivation.


“With so many hostile actors”

Like Microsoft? And like government actors compelling Microsoft via things like NSL’s?


This has been echoed in physical security for as long as it’s been around.

Look at “bump keys” for example. Those who have the knowledge walk right through security barriers like they aren’t there and meanwhile security companies make optimistic claims of safety just to sell more locks.


I live in fear of being told my factory delivered Dell rackable servers have been EFI infected since inception on my network.

It's silly to pretend a BSD OS is going to be immune to the consequences of an EFI which is compromised at birth. Sooner or later there will be a value chain in compromising my OS through the EFI.

I wish we had better out of band EFI validity checks, based on what the manufacturer thinks should be there, as a reproducible bitstream.
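A crude version of such a check is possible today if you already have an out-of-band dump (e.g. read with an external SPI programmer while the board is powered off). A minimal sketch, assuming the manufacturer published a reference SHA-256 digest; the file path and digest handling here are hypothetical:

```python
# Sketch: compare an out-of-band firmware dump against a vendor-published
# reference digest. Assumes the vendor image is byte-reproducible, which
# is rarely true today (the point of the comment above).
import hashlib

def firmware_matches(dump_path, expected_sha256):
    """Hash the dump incrementally and compare to the expected digest."""
    h = hashlib.sha256()
    with open(dump_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

In practice, NVRAM and padding regions inside real SPI dumps vary per machine and would have to be masked out before the hashes could ever match, which is exactly why a reproducible vendor bitstream would help.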


It would also help if there was a standard header on the mainboard that you can use to verify all of the flash chips when the computer is powered off to minimize the amount of the computer you have to trust.

While some may argue that this header would be the perfect place to install an implant, doing so is vastly harder than popping some manufacturer's computer. Also, since the header will be specifically checked by some users, it becomes a very risky place to install an implant.


I think it would be easier to do it safely if you made it so that the number of chips to be flashed was small and they were easy to pop on and off the motherboard. I grant that this is more work to use than a single master connector, but it removes that point of vulnerability both for undermining the ability to flash things and the massive backdoor that is a single port with the ability to reimage every chip in the machine.


Ideally the write enable line of the flash chips would be hooked up to their respective application processors, so when you are reading them via this header they will be read-only as the processor would still be powered down. For an adversary that is able to remove soldered chips there isn't much you can do without going completely custom for everything.

Having sockets would increase the costs ($1-20/flash chip) and doesn't raise the sophistication level of the attacker from unskilled labor (literally anyone in the chain of custody) to skilled labor (eg: someone that can do SMT or BGA rework).


I've recently reflashed BIOS/UEFI chips by soldering plastic-ended jumper wires (easier to work with than regular breadboard wire) directly to the BIOS chips, and plugging the other end to a Raspberry Pi's SPI host pins and running flashrom. It's definitely involved to learn and tricky to pull off (like any form of complex soldering), but much lower in equipment costs than desoldering surface-mount flash chips (which I hear requires hot air to do without damaging the chips or board).


https://github.com/chipsec/chipsec

It's not out of band and therefore vulnerable to massively clever malware, but it's still useful.


You can use the Dell Trusted Agent to do just that:

https://www.dell.com/support/kbdoc/en-us/000126098/what-is-d...


While there seems to be verification off the device, how do we know the EFI attack isn't aware of this agent and presenting it with legitimate data to process while still remaining hidden?


Windows only. OP mentions BSD.


Whilst it’s not perfect and not exactly what you want, you can use chipsec to check for a bunch of known UEFI security vulnerabilities.


I wonder why more computers don't use the simple boot model that devices like the Raspberry Pi use. From what I've heard, the RPi is effectively immune from persistent malware. Firmware can't be modified [1], and while the second stage bootloader can be flashed in the RPi 4, the first stage bootloader can't be modified [2]. What this basically means is that no matter what infects your pi, you can always just replace the SD card and restore it to a clean state. In contrast, I've heard so much news about how USB firmware can get reprogrammed [3], how PC malware can survive BIOS reflashing [4], how malware can live in external drive firmware, etc. Of course, if there's a bug in the raspi firmware, it also can't be fixed, but the attack surface is so small I'm willing to make the trade-off (and buy a new pi if it comes to light).

[1]: https://raspberrypi.stackexchange.com/questions/8963/are-the...

[2]: https://www.raspberrypi.com/documentation/computers/raspberr...

[3]: https://security.stackexchange.com/questions/97246/badusb-wh...

[4]: https://security.stackexchange.com/questions/44750/malware-t...


I would actually be on board with that, if the boot/firmware (micro)SD were separate from the main OS drive, because the annoying thing about the Pi is that it can't take generic images - you have to flash a Pi-specific image to your card because it has to include the firmware. There's a part of me that says by the time you've put the boot firmware on a dedicated card and made that card robust enough to survive the lifetime of the machine you've just reinvented built-in flash chips, but I agree that the ability to trivially remove it and have all the (changeable) firmware in one card is an improvement over the status quo.


Raspberry pi has firmware on the USB hub AFAIK :)


I remember being called a reactionary naysayer like, 8 years ago, because I said that this would happen.


A lot of people don't like negativity so strongly that they'd rather be screwed over than have to consider the possibility that something bad is happening.


We'll see a lot more "conspiracy theories" turn out to be standard practice in the near future, and the funniest thing is that none of those who label critically thinking people as tinfoil hats will admit their fallacy; on the contrary, they'll ardently assert that they "definitely saw it coming" too!

Cognitive dissonance is a scary thing; it makes people doublethink by repressing the conflict between expectation and observation into the subconscious.


A lot of people don't like idealism so strongly that they'd rather stick with old hardware over the newest hyped-to-death shit.


Just in case it isn't clear, I wouldn't have called you reactionary or paranoid back then, because I would have agreed with you. It's just sad that others choose not to open their eyes.


This rootkit is old by computing standards (2016), and apparently it was found somewhat by chance, among free (probably consumer) users of their product.

Could this indicate a higher likelihood of it being a consumer board supply chain attack? It might explain the lack of detection in business oriented computers, though it also would seem to indicate that it was not precisely targeted.


My hopes of large volume fully open source systems died when I learned that beefy RISC V boards will ship with UEFI.


UEFI can work open source no problem. You'll still need binary blobs for memory initialisation and such, because no open systems exist for that, but the boot process isn't really closed.

Aside from open source UEFI setups like Tiano, you can also use CoreBoot or LinuxBoot where UEFI doesn't work for you.


UEFI is not proprietary. EDK2 is a complete UEFI implementation licensed under the BSD license.


You can implement something else; RISC-V is a young ISA.


yup, RISC-V happily boots with just U-Boot


Yep! You can run Linux even on an entirely open source toolchain, from hardware to software: https://github.com/litex-hub/linux-on-litex-vexriscv Though the FPGA IC itself of course is not open, the bitstream generation is, and there are many fully open source hardware board designs, for example the OrangeCrab. With a Lattice 85K-gate FPGA, you can get 4x 32-bit RISC-V cores at 50 MHz or one 64-bit RISC-V Rocket core at 20 MHz.


Regarding the allegation that this seems to be a Chinese actor: isn't Kaspersky gone from the Western world since Russia invaded Ukraine?

And so... could this have gone undetected just because Kaspersky isn't being used anymore?


Such sophisticated attacks always amaze me, and I've always wondered how people go about developing them in the first place.


someone who worked on the UEFI implementation writes it


That was one theory I had in mind. My guess is the organization developing these exploits form teams of people focusing on a single exploit, with each person on the team having deep domain knowledge in a single subject (UEFI, Windows Kernel, etc), and they use that knowledge to develop their portion of the exploit chain.

People who developed such specifications in the first place, like UEFI, would be great candidates for an organization looking for someone with deep knowledge of the subject.


if you were handed the UEFI implementation codebase like a new-hire, how long would it take to figure out this potential codepath? a couple days?

now if you were handed only the binaries, and left to objdump them etc., how long? evidently there are symbol names, since the article uses them. so hopefully no more than an extra order of magnitude: a couple weeks, maybe a full month if my manager's asking for a deadline and i want to be conservative?

also, think about where/how they hooked: it sounds like they hooked at the equivalent of an interface boundary, where it’s easiest to inject a new implementation — but then they have to check the return address to know where in the larger scope of the process they’re currently at: if you had access to the codebase and build tools why wouldn’t you patch your exploit into the code more directly and just rebuild it? why abuse the return address like that?

i don’t mean to say it’s not impressive, but it’s not magic. there are lots of competent engineers out there capable of reverse engineering a UEFI implementation.


hooking into win32 kernel is not something uefi developers usually do


You say that like the kernel is, at that moment in time, running, when in fact it is just a file sitting on disk, being manipulated by the UEFI code no differently than Notepad manipulates a text file.

Also, the article clearly says that the UEFI rootkit searches for and replaces functions within the kernel and then puts them back once the next phase is complete, in order to avoid security checks.

Hooking in Windows is a technique allowed by the OS, while this "hooking" is nothing more than a simple search-and-replace file operation. That's taught in the first month at any coding school.
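The byte-level search-and-replace being described can be illustrated in a few lines. This is a generic toy sketch, not CosmicStrand's actual code; the byte patterns are made up:

```python
# Toy illustration of "hooking" as a search-and-replace over a file image:
# find a known byte pattern, overwrite it in place, and keep the original
# bytes so they can be restored later (as the rootkit reportedly does).

def patch_bytes(image: bytearray, pattern: bytes, replacement: bytes):
    """Replace the first occurrence of pattern; return (offset, saved bytes)."""
    if len(pattern) != len(replacement):
        raise ValueError("in-place patch must preserve length")
    off = image.find(pattern)
    if off < 0:
        return None  # pattern not present in this image
    saved = bytes(image[off:off + len(pattern)])
    image[off:off + len(replacement)] = replacement
    return off, saved

if __name__ == "__main__":
    img = bytearray(b"\x90\x90\x48\x8b\x05\x00\x00\x90")
    off, saved = patch_bytes(img, b"\x48\x8b\x05", b"\xe9\x00\x00")
    # ...later, restore the original bytes to dodge integrity checks
    img[off:off + len(saved)] = saved
```

Real malware of course has to cope with relocations, checksums, and signature checks, but the core operation is exactly this mundane.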


The Ars Technica article said it was Windows-focused, but the same techniques should work on other OSes. If you had network monitoring, how hard would it be to see this firmware-kit trying to talk to the internet? Is it sophisticated enough to hide in normal traffic somehow?


> One of our industry partners, Qihoo360,

Ooh, I recognise that name. They were involved in certificate shenanigans with Startcom. I'm immediately suspicious.

(I've barely started reading the article, but I'm predisposed to distrust anything involved with Qihoo)


Chipsec (https://github.com/chipsec/chipsec) is a project to check for bugs in your firmware.


This exploit would only work when CSM is enabled? Nowadays with SecureBoot I think it would have to be much more complex? (patching all functions in UEFI, bootloader and OS to bypass the verification).


as a civilian, I am repeatedly amazed at the relentless, intrusive and manipulative tactics that the "heroes" use on the "sheep" .. I am quite capable of managing my own affairs and have invented and solved using computers for decades. I have a sense of personal sovereignty that is offended and threatened by one-way-mirror, controlling, destructive Spy-vs-Spy comic books being played out by eternally funded jerks. I am not running to DELL to save me from "scary" hacks -- indeed, I am being victimized and trodden on by DELL and "state actors" .. DELL is a "state actor" ..

ugh


This! ^

Tech companies are all subject to the governments in which they operate.

They have become spies. The real terror is when you can't buy chips that don't spy on you.


> The real terror is when you can't buy chips that don't spy on you.

So about 5 years ago?


No joke


This is something that Pluton /TPMs can help prevent via attestation. Pretty funny to read comments here saying that they wish there was a way to plug something into a motherboard to verify all of the software/firmware components.


If you have good info, or are known to have "good" info, just assume you are being watched.


>We were able to identify victims of CosmicStrand in China, Vietnam, Iran and Russia.

I wonder if those computers could be used for false flag operations?


That's why things like the Pluton processor and TPMs are useful.

(A rain of downvotes falls on me)

Seriously, even good old BIOS is susceptible to rootkits; there have been tons of them. So no crying over UEFI please.

We need a fully signed and auditable chain of trust for booting OSes.

Of course all this crap needs to be open source but it needs to be locked down to prevent not trusted binaries as much as possible.

And for the 1% of people who are going to bang on about their right to own the hardware and run Linux and whatnot (I'm definitely one of those), we need to be able to do it, but in an obvious way (the computer should boot but display a clear message that it's been tinkered with).

I really like software freedom, but the fact that I can disable secure boot on pretty much any computer I have physical access to and that the user will never know about it is not okay.


I would argue that the complexity of TPMs[1] and UEFI[2] leads to a larger attack surface with more bugs present, making it easier to launch attacks such as the one described. The opaqueness of these technologies, and the difficulty security researchers face in investigating and debugging implementations of them, does not help either. There is no chance of a typical system owner having the time and skill to understand TPM and UEFI technologies in enough detail to know whether their systems are vulnerable or compromised.

[1] Example: 176 pages at https://trustedcomputinggroup.org/wp-content/uploads/PC-Clie...

[2] Example: 2540 pages at https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9...


In theory yes, more functionality means more surface, but then some aspects are fundamentally designed to be less exposed or dangerous. We will keep on finding a bunch of bugs in those implementations, but I suspect the end result is better than what we could've ever had with BIOS.

I don't think there was any chance for a typical system owner to have the time or skill or even the foundation required to understand if their machine's BIOS is vulnerable or compromised. UEFI provides some basis to actually start with that process.

I also think there is a real opportunity to write open-source versions of these components in safe/verifiable languages. That way we could have our cake and eat it too.


> I suspect the end result is better than what we could've ever had with BIOS.

I would take that bet, if there were any way to objectively assess. The surface area of BIOS is just so much smaller.

> I also think there is a real opportunity to write open-source versions of these components in safe/verifiable languages.

In theory yes. In practice they're always going to be giant blobs of C written by hardware makers, massaged just enough to get Windows to boot.


> There is no chance of a typical system owner having the time and skill...

Hence the suggestion to make it tamper obvious.

As for the attack surface, it is true, but it's a separate problem. You don't have to bloat your firmware to make it trusted.


The problem with pluton is not the tech. It's that:

- it's proprietary

- it's controlled by entities that have a terrible track record

- it's going to be, as usual, forced upon everybody without consent


I have argued the same as you, it needs to be open source to fix the first two points.

For the third one, nobody is forcing you to buy a specific product, but yes, it will be hard to avoid.

But like for vaccines, individual consent is at odds with the greater good. Society needs computing that it can trust.

Maybe the solution is a healthier hobbyist market where you can buy "use at your own risk" unlocked computers? Maybe we need laws to force manufacturers to make such models? I don't know. But most people, from my mom to my CEO need a computer that will run what it is supposed to run.

It's not 1990 any more, PCs do way too important things.


>Society needs computing that it can trust.

The playbook that is unfolding right now with pluton is the exact opposite.


> Society needs computing that it can trust.

Then it shouldn't mess with trust. Closed source tricks is not trust. And MS lost its credibility long ago. Every closed system on a computer inspires distrust.


This would be comparable to vaccines if Bill Gates had actually hidden tracker/kill-switch chips in each dose...

There's no reason to think a large, opaque computing base that's been forced into the market by a monopoly power is trustworthy. I'd argue that "boot sector tampering with physical access" is pretty low on the list of computer-related real-world attacks against society; it's certainly lower than zero-days caused by implementation errors in baroque computer software / hardware.

Also, suggesting you can avoid having a computer (or phone) at this point is ludicrous. It's like suggesting you can avoid using credit cards and cash. Some would argue it is actually easier to give up living under a roof than to give up owning a cell phone (and make exactly that tradeoff).


You can choose not to use them. Most of the ways people got things done in the past haven't disappeared, although some are more difficult to find now.

My mother does not have an Internet connection. Or a computer. Or a cell phone.

She definitely prefers to live under a roof. She's just not interested in any of the above.


> For the third one, nobody is forcing you to buy a specific product, but yes, it will be hard to avoid.

Yes it is being forced. We have single digit years before participation in society becomes impossible without a device that attests that it is not under the control of the owner. First it will be banks, then government services, then access to the social graph, and so on.

> But like for vaccines, individual consent is at odds with the greater good. Society needs computing that it can trust.

So not under the control of centralised corporations that get paid when they successfully manipulate you into doing something you wouldn't otherwise, and have a proven track record of unaccountable censorship.


"I'm sorry but your computer is not running Genuine Windows 11:tm:. You may not be secure." will be the new "An application is attempting to make changes to your computer..."

Alert fatigue is real.


Alert fatigue is real but silent rootkits are way worse.

Also, it's not just about booting windows or the OS, it's about the UEFI, which even fewer people are going to want to tinker with.


I actually disagree. Silent rootkits in the bios are relatively rare, but alert fatigue is horrifically common.

It's not just the severity of harm but also its frequency.


Alert fatigue is much worse than silent rootkit.


> Seriously, even good old BIOS is susceptible to rootkits, there has been tons of them.

Were there? I couldn't find anything, but then again Google is garbage nowadays if you want to find older stuff.

To my understanding, the limitations of the old BIOS world would've made it much harder to hack on it other than maybe enabling hidden menus.

The UEFI world is so much larger, more powerful and already offers plenty of abstractions and services. You can probably write a relatively portable rootkit that works across a plethora of different mainboards and chipsets. And then you come in and try to fix it with secure boot, and when secure boot isn't good enough anymore you add pluton, and then what?

And what's your threat model anyways? "Professional Hackers" nowadays are in it for the money. Why would they want to custom tailor a rootkit for your BIOS? Why would they want to target you with a generic UEFI rootkit? Apparently, crypto ransomware is perfectly capable of infecting a user on windows with secure boot enabled. No need for a rootkit.

Who is going to be interested in rootkitting you besides a government backed hacking op? And in that case, I fully trust them to be able to get around any secure boot measures either with yet another exploit, or because they have access to the signing keys one way or the other.


A buddy of mine installed some crapware he downloaded off of a warez site back in the mid-2000's which reflashed his BIOS to a very hackers-esque bootsplash that prevented boot. Fortunately, he had a Gigabyte board which ran a dual-BIOS config, so he was one jumper change away from getting back to his system and cleaning stuff up. I'm not sure it was ever intended to do anything more than punish someone, but the capability of doing plenty even on that tiny ROM was there.


> but the capability of doing plenty even on that tiny ROM was there.

That sounds more like he just got the BIOS erased and replaced with something else entirely, rather than something intended to parasitically coexist.

Also, for a while, they had write-protect jumpers.


Compare that with having boot settings on a UEFI partition. Partition gone, boot not possible.


I mean we already have this with AMD PSB. Basically a vendor key is physically burned into the CPU that validates the vendor firmware (BIOS/UEFI) and if the signature doesn't check out, the CPU just refuses to run.

I think physical access overriding digital security should largely be fine though, because we have a lot of tamper-evident devices for identifying physical access but you largely can't do that for digital.

Like, I can imagine a more consumer friendly PSB using a physical "key" plugged into the motherboard that's essentially a ROM board but using human visible solder pads for the bits of the key. And you could order your own key pair that comes with signing keys from the CPU vendor if you want to sign your own authenticated firmware.


I'm not certain this is where Pluton or a TPM could've helped much (individually), it's more the task of Secure Boot, Secure Launch(/DRTM) and Trusted Boot.

I'd love to see a similar effort at ensuring boot integrity for Linux, but way too many distros can't even handle Secure Boot with Nvidia/DKMS. Though even more practical features, one being (f)TPM-backed FDE, are very cumbersome and underutilised.


Without more info on the initial infection vector, it's difficult to assess the impact of those mitigations (Secure Boot, Secure Launch and Trusted Boot). Quoting from the report:

"Looking at the various firmware images we were able to obtain, we assess that the modifications may have been performed with an automated patcher. If so, it would follow that the attackers had prior access to the victim’s computer in order to extract, modify and overwrite the motherboard’s firmware. This could be achieved through a precursor malware implant already deployed on the computer or physical access"

While Secure Boot + BitLocker with TPM and PIN would have prevented an Evil Maid attack (at least would have triggered a PCR change and a Windows Recovery prompt), a preliminary infection (I understand by it a supply chain attack before it reaches the user for the first time, but maybe I'm extrapolating a bit what the report is saying) would have stayed undetected in most scenarios (depending on how Intel Boot Guard is configured).

Regarding the Linux part, ANSSI did a pretty great job with their CLIP OS implementation: https://docs.clip-os.org/clipos/boot_integrity.html, but it's not really "for the masses" :(


I was being a bit provocative with Pluton :-) I know it presses people's buttons...


Does anyone else find it really too coincidental that the anti-Pluton article goes under, and not long after, this one appears with such comments?

All I can hypothesise is that some entity with huge vested interests is now trying to do damage control.

> to prevent not trusted binaries

"trusted" by who? The faceless bureaucracy that wants to control every bloody aspect of your life?


TPMs would not have helped here. The problem is that some boot code must come first. That bit of boot code, if it can be rooted, can lie about the code it's loading, and so the TPM can't help.

The only way a TPM could help is if it was the boot loader. But that can't be, not unless the TPM were a firmware TPM running in tight cooperation with the ME.


As long as it is implemented so the end user has complete control of the system in one way or another, then these things are great ideas. Hard to think of how they'd do that given they seem designed more for central control than security, but if they did, it'd be interesting to see what developers could do with it.


  > Of course all this crap needs to be open source
Slightly pedantic maybe, but I would just add that without being able to replace said software/hardware (while maintaining a root of trust, of course), just being open source (you can look but don't touch) isn't enough.


>even good old BIOS is susceptible to rootkits

Those common rootkits were not in the BIOS firmware, they were just malicious code on the hard drive. But the code was written on drive space not used by the file system so it withstood malware scanning of the volume and often reformatting/restoration too.

The Master Boot Record boot code (first 440 bytes of sector 0) would often need to be renewed from trusted media, and the malicious code zeroed using a raw disk editor that can address sectors outside the file system.

Or the shotgun approach could be taken and the whole HDD zeroed.
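For the curious, that cleanup can be done with plain dd; a sketch against a file-backed image (substitute the real device node, e.g. /dev/sda, only with extreme care — these commands are destructive):

```shell
# Work on a file-backed image so nothing real is harmed.
dd if=/dev/urandom of=disk.img bs=512 count=2048 2>/dev/null  # stand-in "infected" disk

# Inspect the boot code area (first 440 bytes of sector 0):
dd if=disk.img bs=440 count=1 2>/dev/null | od -A x -t x1z | head -n 3

# Zero only the boot code, keeping the partition table (bytes 440-511) intact:
dd if=/dev/zero of=disk.img bs=440 count=1 conv=notrunc 2>/dev/null

# The shotgun approach instead: zero the whole disk.
#   dd if=/dev/zero of=disk.img bs=1M

# Verify the first 440 bytes are now zero:
cmp -n 440 disk.img /dev/zero && echo "boot code zeroed"
```

`conv=notrunc` is the important flag: without it, dd would truncate the image to 440 bytes instead of overwriting in place.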


Hah, this reminds me of a security researcher a few years ago who was reporting malware he couldn't research without infecting his other machines. I'm fuzzy on the details, but everyone wrote him off as paranoid and delusional and the incident was quickly swept under the rug. Makes me wonder if he found some sophisticated state-sponsored stuff and got smeared to hush it up.

I mean realistically, we'd be naive to not expect that state-sponsored hackers have rooted machines somewhere in the supply chain (hardware, firmware and of course software). Is everyone being monitored all the time? No, but I'd stay away from electronics if I expected an intelligence agency was interested in me.


Maybe you're thinking of https://en.wikipedia.org/wiki/BadBIOS.


Shutting down everything because of paranoia sounds a bit extreme


Eventually it electrifies its power cord so that if you try to power it off you get zapped.


I had that kind of megalomaniac fantasy in the past, but then I started to think that xkcd.com/2347 ("random person in Nebraska") should apply to NSA malware too, and there can't be as many people working on it as in my imagination.

Though I'd happily cooperate if the team watching me did exist and came out of the shadows to clarify their doubts and pass along the taxpayer money saved :)


Furious searches for BIOS only era hardware are taking place on ebay as we speak.

To use with a modified Linux kernel that emulates a bog-standard ThinkPad UEFI environment, of course.

EDIT: I forgot to phrase this as a question - besides missing a QubesOS or KickSecure on top, is this a decent plan for airgapped stuff?


I'm not sure what you mean by "a modified Linux kernel that emulates a bog standard Thinkpad uefi environment". The UEFI environment is provided by the firmware and starts EFI applications, which could be a UKI containing your kernel+initramfs, or grub that then starts your kernel+initramfs from /boot, or anything else. ie the UEFI sits below the kernel.

UEFI can be emulated on top of BIOS using something like Clover. But for your BIOS-only mobo, just keep using it with a BIOS-only bootloader, ie GPT disk with grub or whatever written to the MBR + BIOS Boot partition. There's no reason to involve any UEFI, emulated or otherwise.
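That BIOS-only GPT setup looks roughly like this (a sketch, assuming parted and grub-install are available; the image file is a stand-in for the real disk, and the device node is hypothetical):

```shell
# Sketch of a BIOS-mode GRUB install on a GPT disk -- no UEFI anywhere.
# File-backed image for illustration; on real hardware use the device node.
truncate -s 64M gptdisk.img

# Partitioning and install steps, run against the real device (/dev/sdX):
#   parted -s /dev/sdX mklabel gpt
#   parted -s /dev/sdX mkpart grub 1MiB 2MiB    # tiny "BIOS boot" partition
#   parted -s /dev/sdX set 1 bios_grub on       # holds grub's core.img
#   parted -s /dev/sdX mkpart root 2MiB 100%
#   grub-install --target=i386-pc /dev/sdX      # writes the 440-byte MBR stub

stat -c %s gptdisk.img  # 67108864
```

The BIOS boot partition is needed because GPT has no post-MBR gap to embed core.img into, the way MBR-partitioned disks do.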

You will obviously not have as good protection from evil maid attacks as you would've gotten from Secure Boot. But presumably you're okay with that, and emulated UEFI will not help in that regard anyway.


UEFI Secure Boot doesn't completely protect against physical attacks. If a person can turn the computer off and on, they can replace the currently active UEFI bootloader with a shim app, enroll their own key. They can then run any UEFI binary; that binary can then do whatever it wants, and at the end remove the shim NVRAM variable it used, finally loading the original OS bootloader and removing all traces.


>UEFI Secure Boot doesn't completely protect against physical attacks.

I didn't say it did. In fact I formulated what I wrote precisely to convey the opposite message.

>they can replace the currently active UEFI bootloader with a shim app, enroll their own key.

UEFI can be protected by a password if the implementation supports it. How secure that is is of course up to the implementation.


Presumably the point is to run something that assumes/relies on UEFI (an OS or application) without having to run and trust the giant blob of low-quality code that is a typical hardware UEFI implementation.


Sure, but we know that the OS they're asking about is Linux, and none of the major Linux distros require UEFI to boot.


I understood them to be talking about using Linux as the hypervisor that would emulate a UEFI environment, the guest OS might be something different.


Yup.


Nope.


Just run your OS in a VM.


what should you run the vm on?


A turtle.


On trusty prehistoric hardware obtained in Brown Sector.



