The next realization is that there is no way to turn them off or remove them. It’s possible even moving won’t help.
And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?
HN may be uniquely positioned to show us the answer. Take a community of people with generally above-average interest in and/or knowledge of this stuff, and the comments are still filled with questions asking what the hell ME even is.
Apparently, ME is the perfect combination of opaque, obtuse, and obscure. It’s not rocket science, but it’s complicated enough that it’s hard to explain well quickly. It’s easy to be a highly technical person and yet never need to cross paths with the subject. There has been some press, some activity, but all of that is simultaneously dampened for the same reasons.
I know we're used to "Internet speed" and the tweet happened an entire 24 hours ago, but give it a bit of time before declaring it dead. Wired and Vice need a second to write it up, and see if it hits the mainstream before declaring the issue ignored.
Not saying it will get picked up, though I sure hope it does, but as you point out, it's a bit obscure and takes some explaining.
Still, there are numerous reports that the Facebook app on your phone listens to every word you say to show you ads for things you were talking about, but on HN and other web forums there is always an element of disbelief. We may believe that it must be something else - correlation with web searches from the same IP, say - but there's never been proof one way or the other. (A Facebook exec's claim doesn't count as proof.)
So there's a section of people who believe we already carry high-resolution recording devices into bed with us. What's one more?
(Typing this, very gratefully, from an ARM Chromebook.)
Hot news scare pieces show up all the time after boring security leaks, bringing awareness to issues that technical people have long known about. For example, I'm hearing about friends' older parents putting tape over their laptop cameras.
We love to trash bugs that have marketing brands, but these exploits - and more importantly, obvious attack surfaces like ME or cellphone modems - often just need the right amount of human-interest story, a practical real-life example, and a news-friendly explanation for regular people to care.
It tends to happen a lot more randomly and without a rational order of priority, but it's still happening more and more often. At the end of the day it's always going to come down to the time and effort of a security researcher caring and the tech community putting the effort in to make the journalists care.
Tape over laptop cameras isn't just a "parents-of-friends" thing, it's a good idea. Buy a set of stickers and support the EFF: https://supporters.eff.org/shop/laptop-camera-cover-set
Anyone know somebody at Wired?
Support the EFF! But I hate the stickers.
Everyone puts a sticker on their webcam and completely ignores the hot mic. But you get that false sense of security…
I did not break her phone in that way. Her phone is more dangerous.
Please note that a speaker may also act as a microphone if configured to do so (at the software level). This is especially true for speakers/headphones connected via the jack.
I (personally) think that intelligence/privacy-wise the mic on many devices is a lot more troubling than the camera. A camera is very directional and (again, personally for me) would at most result in embarrassment? A microphone, meanwhile, can record all conversations in a room/apartment and transmit everything with fairly negligible bandwidth, and taping over it won't do much good. You actually need to open the device and disconnect/cut a wire.
Of course a switch would be nice, similar to the older Thinkpads which had a hardware switch for the network devices on the front, originally for use on airplanes.
Any chance you documented the procedure or have links to relevant documentation?
I wouldn't be surprised if the better Dell models have similar documentation available.
*: it's what we did on the One Laptop Per Child laptop, and I'm sure others have too.
Any idea where to get a CLI-from-boot notebook for teaching kids programming and encouraging a hacker ethic?
I think it would be about as easy to make a cardboard one.
The likelihood is that if an attacker has the ability to record audio, they also have the ability to bypass this device.
Not sure why you took that statement as deriding the practice because some older people are doing it? I noted it merely as an indication of how far the behavioural change has spread as a result of news stories... Of course it's good security hygiene. Not an ideal solution though, as others have already mentioned, compared to a hardware switch or built-in cover.
Across the mobile IT we manage, the group who covered or unplugged their webcams, internal or external, most consistently was middle-aged to older women, with women in their 30s or younger the least consistent. Men covered randomly across all ages, but still less often than women. And while they'll cover their laptop camera, most who didn't have a folding case did not also cover their tablet or phone cameras.
I don't know if it's comfort or awareness, but the ones who covered acted like it was common sense, and the ones who didn't hadn't considered it or felt like they weren't likely to be a target of interest. A few women said they chose their laptops because of the physical shutter over the camera.
I don't know if trust in electronics is more generational or age related, but it's definitely a gradient.
I think the threat profile faced by "the rest of us" is probably just a little less intense than someone as well known, wealthy, famous and influential as the CEO of Facebook, but perhaps that's just me.
You're right about this new development of the functional JTAG being very recent and needing time to get around. However, security folks and others have been decrying the potential of the IME backdoor for years.
That they've been largely ignored for years lends some credibility to the thought that "we don't seem to care much." It's hard for most laypeople to understand, and it's difficult to get people excited about something they don't understand (unless they are convinced that they do understand, of course).
How about an official statement from Facebook itself over a year ago? If it were true, people could find out by decompiling the app and make Facebook look absolutely horrible.
That also doesn't count as proof; it carries barely more weight than the Facebook exec's statement.
However, you're quite right about the decompiling argument. And chances are that security researchers have done just that.
The main thing I haven't gotten working yet is the touchpad. For wifi, I'm using an atheros dongle with open firmware.
Doesn't work for every Chromebook, but it does for many. Working fine on my Chrome Box.
> "(Typing this, very gratefully, from an ARM Chromebook.)"
Have you installed a non-stock OS on that Chromebook? Google aren't exactly known for being strong advocates of user privacy.
Imagine Intel were a Russian company. Tomorrow, there would be a simple and clear [screaming] headline similar to "Russians hacked the election" (the general public doesn't need to know or understand how networks, computers, or elections actually work). The day after tomorrow it would be illegal to buy anything Intel.
The DREs don't seem to have been a target, though the physical and OPSEC vectors are quite well known at this point.
I'm not commenting on the information operations or the loose and ambiguous language that has been used to describe these events.
And the military are also dependent on foreign technology. If not for CPUs inside tanks and planes, then for CPUs inside command center computers. (and not just CPUs)
I do care, a lot. I have decided to avoid Intel (and AMD) hardware like the plague. I will not buy any Core iSpyOnYou or AMD equivalent anymore. I'm an advocate of economic and judicial sanctions at the political level against Intel (and AMD). I tell people around me about the problems and explain how this is an issue of privacy, security, national sovereignty, and market power abuse.
I'm looking for a desktop computer for my day-to-day use that comes without compromised hardware. The Raspberry Pi 3 works quite well, but doesn't work without non-free software. I successfully installed Debian onto my A20-OLinuXino-MICRO, but the GUI doesn't start. I'm looking for hardware without Intel inside, without non-free software, with mainline Linux support, and preferably Debian-compatible. The OpenPower system by Raptor Engineering is simply too expensive. I cannot afford it. Any ideas?
Libreboot replaces the firmware on some decade-old server machines that lack an ME (although noisy, they work quite well as desktops).
The amount of money that Intel gets off these chips can't be significant because they are currently over two years old anyways (Q3'15).
That one can maybe fix a hardware backdoor in Intel chips does not make buying them any better. Intel does not get my money, but should instead get a clearly worded sanctions letter from the authorities. I think they should be banned from trading and selling their backdoored stuff. I will not buy even old products of theirs. They messed up big time and don't even explain themselves.
So you are obviously right that proof of concept of fully FLOSS ARM phone doesn't directly translate to x86.
But it does show that they know how, and want, to build fully FLOSS systems. While they don't quite succeed 100% with the x86 system (as the management engine is still in my computer, albeit isolated and disabled), they have been making great strides toward 100%, and I am willing to reward them for that.
Regarding the morality of buying Intel chips, I do share the reluctance to support Intel. I hadn't bought a new Intel computer or CPU since the Core2Duo in 2007 for that reason. But sometimes moral decisions can't be made in a binary, all-or-nothing manner. In this case I am aware of the damage done by purchasing an x86 system, but it is made up for by having a productive laptop with which I can more effectively work on FLOSS applications.
Depends on the kind of Mac you buy. I can only comment on their mobile offerings:
- Best supported are the current MacBook Airs. Everything except the webcam should work out-of-the-box. The webcam needs the out-of-tree bcwc_pcie driver (https://github.com/patjak/bcwc_pcie).
- The Retina MacBooks need some manual work before being usable (e.g. need to compile out-of-tree keyboard & touchpad driver (https://github.com/cb22/macbook12-spi-driver)), but should work fine as well.
- The MacBook Pros before October 2016 are also quite well supported; support for newer ones is still quite incomplete (check out https://github.com/Dunedan/mbp-2016-linux for details), although it's possible to use them as a daily driver if you're aware of the limitations.
All of the Macs I have require binary firmware blobs for WiFi, for the open source driver to work. And even in that case none of them can do even 802.11n. I have to use the proprietary WiFi driver to get 802.11n or 802.11ac. And yes each model varies in this regard, which makes it something of a Choose Your Own Adventure book.
Not innocuous, but probably less dangerous.
https://news.ycombinator.com/item?id=12584880 and others might have details on WiFi and such
I really wonder why the GPU situation is such a huge mess. Very few are supported by free drivers, and the closed drivers are, besides being closed, often of very low quality. Is patent law holding this situation stable? Isn't patent law supposed to promote progress? I feel it isn't working.
I think it's even more sinister: I would argue that a higher percentage of users on HN might be sworn to secrecy about any knowledge they might have anyway.
So you end up with very smart people who're either sworn to secrecy or who aren't; those who aren't are asking questions (and very few have answers, and those answers are partial or incorrect). Those who are can't answer them honestly or fully.
That is the argument I am asserting is dumb. And it is obviously dumb; in adversarial contests, you don't leave weaknesses exposed just because you might have other weaknesses. It also ignores the presence of differential threats; my use case may not require a network, but may need anti-evil-maid defenses, so I may not care about hypothetical compromised NICs.
Bottom line: in the context of discussing whether or not the ME is dangerous in the general case, other potential hardware threats are irrelevant, and I believe the argument is one used to intentionally muddy the waters.
 Putting aside deeper strategies; I'm not going to argue about game theory here.
And your (valid) refutation of that part in no way implies spending less time reverse engineering and disabling ME, nor being less excited about this tweet. Correct?
I'm sure that I'm far from the only one.
- create tools to help fight what you deem contrary to things that are important to you
- dispatch leaks in subtle ways so that humanity is not left in the dark
- lead an anonymous and discreet community of people sharing your beliefs, your skills, and your passion to improve things
Be careful. And best of luck.
But I haven't seen anything that worried me. Besides, those NDAs are legally binding promises; I take those seriously.
"As shown in the presentation by security researchers Maxim Goryachy and Mark Ermolov, one way of accessing the JTAG debugging interface is to use a hardware implant running Godsurge, which can exploit the JTAG debugging interface. Originally used by the National Security Agency -- and exposed by Edward Snowden -- Godsurge is malware engineered to hook into a PC’s boot loader to monitor activity. It was originally meant to live on the motherboard and remain completely undetectable outside a forensic investigation."
But this all was in January:
"The claim was made during a presentation which showed how hackers could use a cheap device to gain access to a debugging interface embedded in hardware."
What's the news now compared to then?
It is not just a matter of apathy. They get violently ill at being told to think about protecting themselves. If you tell them, for example, not to put their debit card into an ATM without giving the card reader a tug to see if it's real, they'll do it wrong anyway just to spite you. It isn't that giving the tug is hard or that it isn't wise; they simply do not want you to be right, or to have to think about it.
Admitting that you are right about that one little thing means they have to deal with all the other issues you brought up as well. There is probably an interesting field of psychology to be researched in just that phenomenon alone. I don't think this is limited to the people I know personally, because I've encountered a lot of the public with this mindset as well. It is how we got ourselves into these problems in the first place, with no recourse.
I stopped sending the emails about a year ago because they asked me to. The stated reason was that no one cares about security, and a number of them had already auto-forwarded me to the junk folder. Or they told me they saw my name and would skip my emails and not open them. I don't even feel like bringing it up again now that it is a real concern. They will continue to not care.
I find it frustrating.
EDIT: I've disabled Intel ME on my machines. I would offer to do it for others but they'd have to acknowledge it's something that concerns them to want that help, and they won't.
Not really how I think of it. Seems more similar to waking up one day and realizing Tesla controls your Tesla car remotely. Or that Microsoft can push bad updates to Windows. Or that Google can push bad updates to Chrome.
> And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?
On my end it's because I have not seen a single shred of evidence that it has been used for spying, and because I figure the moment anyone becomes aware of that happening, people would probably find a way to block the network traffic at the router or somewhere else.
Now I can't wait for rogue monero miners to use ME to propagate :)
- If it were to periodically "check in" with an external server to see if it needs to do any kind of spying -- admins would notice the network traffic.
- If it needed to be contacted externally to "initiate" any kind of spying at all, that would mean anyone behind a NAT would be safe, and furthermore, the moment anybody noticed such a thing, it would make the headlines and get blocked on networks, so this capability would need to be kept secret and turned off except for ultra-high-value targets... which most people do not view themselves as.
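For what it's worth, Intel does document the TCP ports AMT listens on, so the first bullet's "admins would notice" check can at least be automated for the obvious case. A minimal sketch of the port-flagging logic (the flow tuples are invented; real ones would come from a capture taken on a router or tap you trust, since ME traffic may bypass the host OS entirely):

```python
# Ports Intel documents for AMT remote management (HTTP/HTTPS management,
# redirection, RMCP, and KVM). Watch for these at the network edge, not on
# the host itself, since the host OS may never see ME-originated traffic.
AMT_PORTS = {623, 664, 5900, 16992, 16993, 16994, 16995}

def suspicious_flows(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples from a capture."""
    return [f for f in flows if f[2] in AMT_PORTS]

capture = [
    ("10.0.0.5", "93.184.216.34", 443),  # ordinary HTTPS, ignored
    ("10.0.0.5", "10.0.0.9", 16992),     # AMT's HTTP management port
]
print(suspicious_flows(capture))  # -> [('10.0.0.5', '10.0.0.9', 16992)]
```

This only catches documented, unencapsulated management traffic, which is exactly the limitation the rest of the thread is about.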
The NSA routinely intercepted Google internal traffic. Did Google, who are presumably running the most advanced network on the planet and staffed by people who don't suck, notice the intrusion? They did not; they got informed via PowerPoint.
While the sophistication of the attackers decreases as you move from NSA to random hackers, so does the sophistication of the network as you move from Google to mid-sized businesses.
How are these even similar? Did the NSA ever send traffic of their own on the Google network infrastructure? Citation needed if so, because I recall they merely listened in on existing traffic using external network equipment.
Not at all.
When you have something that good, you use it for specific targeting. You get a guy with a work laptop at home, you infect him, then you use the machine to get one closer to your objective. Slowly. With time between the events. Without being a beacon in the network.
Or you just use it to spy on a guy you suspect.
Or to get access to secrets of somebody you wanna black mail.
Did you even read what I wrote? Specifically the last sentence?
Admins would be using dedicated network hardware with chipsets driven by closed-source firmware.
In a crazy but technically doable scenario, that closed-source firmware could contain instructions to "send packets containing magic word X to address a.b.c.d" without reporting or counting that traffic, even to applications opening the device in promiscuous mode, with all routers in between obeying the same instructions. Not a single byte would be reported, counted, or sniffed unless someone stuck a digital analyzer on the network cable.
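The only defense that scenario leaves open is out-of-band accounting: compare what the host claims it sent against what an independently trusted measurement point saw. A toy sketch of the arithmetic (numbers invented; real counters would come from, say, ethtool on the host and a trusted passive tap, under the assumption that the tap itself isn't part of the conspiracy):

```python
# Toy sketch of out-of-band traffic accounting: unexplained bytes are the
# gap between a trusted tap's counters and the host's self-reported ones.
def hidden_bytes(host_tx_bytes, tap_rx_bytes, tolerance=1024):
    """Return the unexplained byte count, or 0 if within normal slop."""
    delta = tap_rx_bytes - host_tx_bytes
    return delta if delta > tolerance else 0

print(hidden_bytes(1_000_000, 1_000_500))  # -> 0 (within tolerance)
print(hidden_bytes(1_000_000, 1_050_000))  # -> 50000 (unaccounted traffic)
```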
No, I fully expect there are enough admins out there running dedicated hardware whose design & source code they have access to. That is sufficient.
Similarly, inbound control signals could be delivered by modifying inbound traffic that the ME observes and decodes.
Depending on your throughput needs, the signal could be delivered subtly, for example by modifying the timing between packets in a way that would be very hard to identify as a signal.
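To make the timing idea concrete, here's a deliberately crude sketch: one bit per inter-packet gap, with two delay values straddling a threshold. A real covert channel would hide in jitter statistics rather than using fixed delays; every value here is invented for illustration.

```python
# Crude timing covert channel: encode one bit per inter-packet gap.
SHORT, LONG = 0.010, 0.020  # seconds; gap length encodes a 0 or a 1
THRESHOLD = 0.015

def delays_for(bits):
    """Sender side: choose a gap length per bit."""
    return [LONG if b else SHORT for b in bits]

def bits_from(delays):
    """Receiver side: recover bits from observed gaps."""
    return [1 if d > THRESHOLD else 0 for d in delays]

message = [1, 0, 1, 1, 0]
assert bits_from(delays_for(message)) == message
```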
I’m hoping the ME firmware now gets dumped and studied closely. I’m betting there are some surprises in there.
Until there is evidence, this is technically just a government conspiracy theory.
The way I see it, the Intel ME is a processor-level application that has full control of all computer activity, cannot be blocked or disabled, and can be accessed and controlled remotely.
Would like to hear other people's opinions of what it is.
So, the three O's make a good conceptual tool for understanding the combination of traits that exploits the realities of human psychology, and for explaining why no one seems to care: technical opacity stifles apparent relevance.
It's not only a means of explaining sneaky cloak and dagger stuff. You could describe lots of niche tech staples with this mnemonic.
When something is difficult to explain, that's often enough to derail any casual conversation with simple mental fatigue.
E.g. this tool: https://github.com/corna/me_cleaner
It's that even those of us who care vehemently have no recourse. It is impossible to fight a secret police state.
Part of winning is to even begin to believe you can fight...
RISC-V is an open ISA that looks promising. Unfortunately, the privileged instruction set is still a draft, which is important for getting a full modern OS up and running.
lowRISC is a non-profit open hardware company that has been working on a fully open RISC-V-based SoC that can run Linux. On their about page they claim it'll be ready this year.
There's also the stuff from SiFive. They have an Arduino-comparable microcontroller, and have ongoing work on a 64-bit quad-core. I don't think their hardware is open, though.
Commercially there's no alternative. Enterprises even use Microsoft over Linux/BSD. Why would they get rid of Intel?
For anti-US spying, a firewall should be enough.
I think I am a bit out of touch with the current state of events. Is ME short for Management Engine? If so, what is the problem with it?
Most people do not have any concrete notion about what management engines are. People weren't that skeeved out by Alexa, and that was a pretty easy-to-understand system. There's no way that people will have any kind of personal connection to something that they barely are aware of and don't understand.
Sure, if by "one day" you mean for the past 10 years? And if by "spy cams" you mean a product with official documentation from the vendor, similar to other products from other vendors. And these products can be purchased by anyone.
You have some valid points, but they are clouded by your sweeping generalizations and needlessly polarizing language.
We've now entered a realm where an attacker could simply plug a device into a USB port of your computer for a few seconds to have it access your CPU's ME through USB JTAG and take it over, allowing them full access to and control over what you do/read/open/type over the network, without you ever knowing, since you can't see it.
And the only way to get rid of it for sure would be to pretty much throw that cpu away and buy a new one.
Or am I being overly paranoid, and is there something I haven't considered that makes this scenario impossible?
EDIT: given the answers I think my main concern wasn't well expressed above. I'm not saying this as in "ME is making it easier to be compromised". That may or may not be true, but that's not my point.
My point is, we all know that once compromised, you can't clean it and need to burn it all and start from scratch: recover from backup (not files on the compromised machine), format everything, reinstall.
Due to the nature of the ME, this is not a solution here. The cleanup needs to be done at the hardware level. Unless I misunderstood something, once it happens, your cpu is done for, period.
And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution, for the same reason that cleaning up your compromised Linux box is not one: you need to start from scratch.
The concern with the Intel ME is that it has a native network adapter. You can bet efforts are currently underway to discover how to exploit the ME remotely. THAT'S when things get scary.
Your paranoia is not unjustified. Personally, I am nervous that some of my systems have the ME. When attention turned to it about a year ago, I knew it would only be a matter of time before someone broke into it.
Yep, this is the big deal. After I "discovered" the ME, my first stop on my home network was the switch, to block all that crap. (And I found my storage server, equipped with a Supermicro all-in-one motherboard, helpfully grabbed an IP for the ME to listen on with an 'admin/admin' password.)
I just wish the empire builders at the NSA would care about something other than their own little power center. They knew this would happen - it always does. The NSA is probably the biggest security threat to the U.S. people at this point, because they keep building concentrated, high-value targets and then lose control of them.
 Not to be confused with 'U.S. government interests'.
This is a useful document for understanding what exactly you're dealing with and what to do about it:
BMC doesn't always use a dedicated physical port, and it's commonly bridged in sideband to the other NICs on a server.
IOMMU effectively solves the "DMA is completely broken" problem, as far as I'm aware.
Evil Maid attacks are mostly worrisome because even UEFI cannot protect you against some bootloader attacks (what if you disable UEFI or reflash the firmware and then have a bootloader that just looks like a UEFI boot). There are some usages of TPMs that seem quite promising (they revolve around doing a reverse-TOTP-style verification of your laptop to ensure that the TPM has certified the entire boot chain).
It's quite a hard problem, made significantly harder by the fact that every fucking hardware vendor seems to want to make our machines even less secure.
Through this attack, they could compromise the ME long-term, which means the long-accepted "nuke it from orbit" response to a security breach (unplug everything, format everything, start from scratch) still wouldn't be enough; that entire chip is done for. And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution, for the same reason that cleaning up your compromised Linux box is not one: you need to start from scratch.
I remember, a couple of years back, being absolutely horrified at the remote management available on my second-hand Lenovo T420s - including management over WLAN.
Sure, the features are gated by price/CPU "brand" - but I think it's safe to assume a) this is complex software and will have bugs with security implications, and b) once it's well enough understood, it seems likely it can be "upgraded" (similar to how you can today, e.g., replace the BIOS with coreboot).
The conclusion is that we need new platforms - perhaps power5 will help.
If the attack can be pulled off with only one-time access, it’s worse than an evil maid attack.
I can't help but to think of all those exploits that target anti-virus software.
When you think about it, "it seemed like a good idea at the time" can explain most tragedies in human history.
So if mass disruption of the very systems that support your life can have supporters among a community composed of smart and actively debating people, "it seemed like a good idea at the time" probably happens every week at gov agencies.
What leads to failures, and I suspect happened at Intel, was that they mistook a very localized consensus for a broader one. There's a word for this, it's called "groupthink". A group of people can talk themselves into doing something very stupid (or evil) while still thinking they're doing the right thing, given enough time and motivation.
There was no widespread consensus, outside of Intel, that the IME was a good idea. If they had solicited opinions from outside their organization, they would doubtless have gotten horrified reactions. But they didn't, or if they did they must have dismissed those concerns, because they went through with the bad idea anyway.
The apparent secrecy with which they developed the IME is also a cause for alarm; groups of people who operate in isolation are particularly prone to groupthink, and so even if their motivations are good ones, the fact that they are working without continuous feedback from anyone on the outside raises the chances of a perverse outcome.
At least now that everyone can see the problem people can make informed decisions.
- felt guilty about it and decided to make amends
My money is on exploits having been on the black market for a while; we just have an official public demo now.
Rule of thumb: when something that catastrophic is made public, the worst already happened and you are late to the party.
In white hats we trust :)
In tin-foil hats, we trust, more like!
In Faraday cages we trust.
But yes, I hope that with this it'll be possible to completely remove the remaining few hundred kB of Intel ME code remaining.
Of course it is, because the USB DCI attack is one level below the Intel ME. Even if it is deactivated via HAP, which basically puts the ME code into an infinite loop or a CPU halt state - both can be reversed by JTAG.
If they have full debugger access to what's running in Intel ME then removing the code from the firmware probably doesn't make a difference (assuming they can run un-trusted code in that context). If they cannot write their own code and so an attack requires ROP gadgets then removing the code might make it harder (or impossible) to do, but I doubt it.
Huge corporations backed up by gov agencies with a lot of time, money and skilled people VS a few people working for free because they believe they should. Not a fair fight.
I thought this behavior disappeared ~10 years ago.
However there are bugs in USB stacks, especially now you can do so many alternate protocols over USB-C.
There's nothing fundamental in the USB spec that lets devices execute code on the host.
I know some of you might argue that even generic USB sticks can do damage and, whilst I agree, this attack is still a degree worse than most of those.
Thus far, the most damage an unknown USB stick could do was to type commands as a fake keyboard (visible to the user) or exploit a driver vulnerability to silently cause mischief.
Correct me if I'm wrong, but those cases could still be trivially stopped by a security policy set by administrators disallowing unknown USB devices. (Of course, how many places would use this is another matter, but it's still very important for places where this does matter.)
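The policy logic behind "disallow unknown USB devices" is simple enough to sketch. Real enforcement would live in the OS (e.g. USBGuard rules on Linux, or a Windows GPO), and since a malicious device can forge its identifiers, this raises the bar rather than closing the hole. All IDs below are made up for illustration:

```python
# Allowlist sketch: only pre-approved (vendor_id, product_id) pairs are
# permitted. A malicious device can forge these IDs, so treat this as one
# layer of defense, not a guarantee.
ALLOWED = {
    (0x04B3, 0x3025),  # hypothetical: the office-issue keyboard
    (0x0951, 0x1666),  # hypothetical: the approved flash drive model
}

def permit(vendor_id: int, product_id: int) -> bool:
    return (vendor_id, product_id) in ALLOWED

assert permit(0x0951, 0x1666)       # approved drive
assert not permit(0x1337, 0xBEEF)   # unknown device, blocked
```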
This attack, on the other hand, would seem to completely bypass your operating system's restrictions on USB ports.
Why? Security knows there are always bugs in software, and assumes they exist. Thanks to @h0t_max, the rest of us know this particular bug exists, but this bug has been around for a while - who's to say evil hax0rs didn't find this bug years ago and have been exploiting it since?
Or put another way - you say an unknown USB stick could exploit a driver vulnerability to silently cause mischief, and then claim that this is easily stopped if an administrator sets a security policy to disallow unknown USB devices. (I assume you mean a Windows GPO enforced policy or similar, and not a written social policy.) Who's to say the code that enforces the policy doesn't have bugs that's exploitable? What if the driver for a known USB stick has exploits?
Physical access is physical access, and while there are mitigations for the evil maid attack (like an encrypted drive and shutting down -not just suspending, when the machine is out of sight), there simply is no way around the fact that physical access is game over.
That mitigation is useless against Evil Maid. There are much more sophisticated mitigations (using a TPM to measure the boot, and then do something akin to TOTP in order to allow the user to actually verify the state of the machine) which actually could protect against Evil Maid almost completely (assuming you don't have something like Intel ME that cannot be verified by the TPM).
"Once you have physical access it's game over" is a very common response to these discussions, and I find it incredibly defeatist. Of course physical access means that the "clock is ticking" until your data is compromised, but sufficient protections can dampen the impact or increase the difficulty.
For example: IOMMU protects against DMA-based attacks, something that was impossible to protect against several years ago. This doesn't mean that someone cannot launch other attacks, but it does mean that the trivial "just plug anything into a USB port and you have DMA" attack is no longer possible.
Encrypted drive + shutdown is a defense against one specific Evil Maid attack: the cold boot attack. It is not a very expensive attack to run; for the cost of a can of compressed air and a USB drive, anybody sophisticated can pull it off. https://en.wikipedia.org/wiki/Cold_boot_attack
Sorry to sound defeatist, but if you had been relying on IOMMU to save you, the trivial "plug anything into a USB port and you have CPU JTAG access" attack has always been possible. (Never mind that IOMMU implementations aren't guaranteed to be bug free.)
In the face of that, what do you do?
With this knowledge, all I really can do is stay up to date and patch-patch-patch. Have a travel Chromebook for leaving in hotel rooms, but ultimately I just have to know that it's not enough, especially against a CIA-grade Evil Maid, or an Evil Maid that's able to factor 4096-bit RSA keys. (That last one's not theoretical, either. It was revealed a few weeks ago that TPMs in Chromebooks and other hardware were generating weak keys, leading to cloud-factorable 4096-bit RSA keys.)
A less-sophisticated Evil Maid can still physically steal my laptop for pawning, and even if they can't get my data, I've still had my laptop stolen. Not-being-defeatist, I backup my data, although that has a totally different set of security concerns over the Internet.
Access to a USB port is not just physical access. USB is a network interface, one commonly used to connect host computers to small portable embedded systems - often tiny NAS units belonging to other people, also known as USB flash drives.
This interpretation of "you're being overly paranoid" is new to me ;)
Can you not see the difference between an attacker opening the case and stealing your HD, vs. inserting a USB key for a few seconds, then walking away and exploiting at their leisure later?
Software do-over is a very well-accepted solution (don't bother cleaning the rootkit, just format and reinstall), but hardware do-over (replace the CPU) is going to be a hard pill to swallow.