As with the original Thunderstrike vulnerability presented at CCC last year, firmware passwords and FileVault encryption don't prevent infection, reinstalling OS X won't remove it, and it replaces the RSA keys in the boot ROM so that Apple's firmware update routines can't remove it either. The only way to remove it is with a hardware in-system programming device attached to the SPI flash chip.
This is a transcript of our hour-long presentation at DefCon 23 / Black Hat 2015 last week, which is why it is too long to read... Here is a shorter overview and a demo video.
Is a Thunderbolt Display considered an "option ROM"? That is, could a rogue monitor spread a firmware infection?
Firmware updates should not be "transparent", "seamless", "one click", or whatever other terms are used today to describe silent or little-noticed changes. They are modifying a very important part of the system, and the user has to be aware of that.
Nevertheless, as long as such a button or switch is easily accessible and can't be covered by a tamper-evident seal, the system stays vulnerable to evil maids. And such a thing definitely wouldn't be the Apple Way™.
How do you see this working out in the longer term? Is there a Secure Boot alternative that allows the freedom to boot Linux, yet protects against vulnerabilities like these?
Any user can simply create their own key, sign their own firmware, Linux, and what have you with it, and then boot away.
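On platforms that allow custom key enrollment, that flow looks roughly like the sketch below. The key and file names here are made up, and the signing/enrollment steps at the end are left as comments since they need sbsigntools, shim's MOK machinery, and root:

```shell
# Sketch only: create your own Secure Boot signing key with openssl.
# File names and paths are placeholders; adjust for your distro.

# 1. Self-signed X.509 key pair for signing (10-year validity).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my Secure Boot key/" \
    -keyout MOK.key -out MOK.crt

# 2. Firmware setup UIs and mokutil want the certificate in DER form.
openssl x509 -in MOK.crt -outform DER -out MOK.der

# 3. Then (not run here) sign your kernel or bootloader and enroll the
#    certificate, e.g. with sbsigntools plus shim's MokManager:
#      sbsign --key MOK.key --cert MOK.crt --output vmlinuz.signed /boot/vmlinuz
#      mokutil --import MOK.der   # confirmed at next boot in MokManager
```

Some firmware also lets you enroll the DER certificate directly into the db from its setup screens, in which case you don't need shim at all.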
Unfortunately, Microsoft mandates Secure Boot but doesn't require that firmware let users add their own keys... so the reality is a bit more grim.
The reality is that most distros have managed to get a bootloader signed by Microsoft (and those that haven't can use a GRUB shim signed by such a key), whose certificate is included by default in Microsoft-certified Secure Boot firmware. This has been working, but it's not ideal.
Not really. The barrier to obtaining a signed bootloader isn't that large, and if you're unwilling or unable to do that you can use http://mjg59.dreamwidth.org/20303.html and just oblige your users to jump through an additional (easily documented) hoop. We had legitimate concerns over the impact of Secure Boot on free operating systems, and for the most part we were able to reach some reasonable solutions.
Funny. And here I thought I was secure booting Ubuntu already.
I think this issue was resolved 2 or 3 years ago.
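For what it's worth, on a UEFI Linux system you can check whether Secure Boot is actually enforced by reading the SecureBoot variable (under the standard EFI global variable GUID) through efivarfs. A minimal sketch; the parsing (skip the 4 attribute bytes, read the 1-byte payload) is my own simplification:

```shell
# Report Secure Boot state from an efivars directory ($1).
sb_state() {
    var="$1/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
    # No variable: legacy BIOS boot, or Secure Boot not supported.
    [ -e "$var" ] || { echo unsupported; return; }
    # efivarfs prepends 4 bytes of attributes; the payload is one
    # byte, 1 meaning Secure Boot is enabled.
    [ "$(od -An -tu1 -j4 -N1 "$var" | tr -d ' ')" = 1 ] \
        && echo enabled || echo disabled
}

sb_state /sys/firmware/efi/efivars
```

`mokutil --sb-state` does the same check if you have it installed.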
> Thunderstrike 2 doesn't make use of any new zero-day
> vulnerabilities -- these are all older vulnerabilities
> that had been previously disclosed and fixed on other
System integrators shouldn't be trusted to write software. If you've ever installed software from your various BIOS manufacturers, you surely know that.
Back in the 1970s, when 9-track mag tape was popular, there was a saying: "no ring, no write". There was a physical plastic ring that needed to be inserted into the tape reel hub before the tape could be written to.
When 8" and 5.25" floppy disks were popular, write protection was a physical notch in the media: on 8" disks you placed a piece of tape over the cutout to enable writing, and without it you couldn't write to the floppy (5.25" disks flipped the convention, where covering the notch protected the disk).
Then 3.5" floppies became more popular and you had a mechanical tab you needed to move to enable/disable writing.
Same with early PCs. In order to update the BIOS you needed to physically move a jumper on a motherboard.
All the above was enforced by hardware. There was no magic SMM that could do whatever it wanted.
But all that was too expensive, or just not friendly enough for end users.
The bean counters and hipsters making the decisions nowadays have learned nothing from computer history. We've regressed.
Now get off my lawn. And be quiet, it's time for my nap.
As someone who's also used and remembered write-rings, etc., this is nonsense. The concern over threats back then was VASTLY lower than today, because the threat was in fact far smaller. Computers were vastly less interconnected, and knowledge about exploit tactics was still nascent and a fair bit harder to come by. Stuff was massively exploitable because in the "good ol' days" there were a lot fewer computer-literate people to think about things like "attack surface", and fewer still who had motive to use such knowledge maliciously. I knew grad students back in the 80's who had written their own illicit versions of "su" to make their lives easier, i.e. local privilege escalation tools. An undergrad was banned (and transformed into an overnight pariah amongst his peers) for hacking CS dept servers. All kinds of devices were hacked by the curious via some hardware port intended for maintenance or just left behind on the PCB. Those things happened, but the collective impact of much of that is less than one major exploit pattern today (Flash 0-day, legacy consumer routers, take your pick).
No, the root problem vs "yesterday" is just that our computers are cheaper and far more interconnected than ever before, which is what makes things like the option-ROM-as-vector economically practical. Our collective ability to ship secure systems is probably far better than it ever has been, but that's almost nothing in the face of an exponential explosion in the pervasiveness of computing. We still ship an incredible amount of insecure software and hardware, just because we make so damn much of it. Why design a complex fixed mechanism or circuit for something when you can solve the problem 1000x better with a CPU or DSP? That's great, but "oops, we forgot the security again." Or we didn't, but an exploit was still found and updating is infeasible.
It seems that the ability to ship secure tech needs to be nearly pervasive, enough for a sort of technological herd immunity (or herd defense-in-depth), if you will. At this point, I suppose we're looking forward to the day when all our little computers are muttering to themselves like trees in a forest, chemically signaling about attacking pests.