I put quotes around "disables" because the ME is not fully disabled. The blog's analysis does show how it is put in a "safe" state, i.e. forced to ignore the outside world very early in its code path. Also, it's not likely to brick your computer, assuming unscrewing your case and using an SPI flash programmer hasn't already bricked your computer.
Edit: "backdoor" in quotes too.
Long story short, it's not a backdoor but a key to a backdoor, which implies there is a backdoor.
The last missing piece now is whether the ME was deliberately put there to enable a few agencies to gain unauthorized access.
It's a great place, however, to gain persistence after an initial compromise as the talk shows.
It obviously has a lot of internal attack surface, which is bad enough.
CPUs aren't cheap! Just give your customers full control over the product!
What could it be? This mysterious additional factor... What could make the US company with the biggest CPU market share in the world possibly want to put an omnipresent overlord control chip onboard every processor it ships? It's almost like there's some kind of outside force at play, but that's a crazy idea. There's No Simple Answer here, for sure.
What indeed, such a mystery! It's a mystery, I tells ya!
But that's not the world we live in. What I list above are not conspiracy theories, they are proven, factual happenings. There is a clear evidential basis to infer the NSA's intent and capabilities in this area.
There are obviously security/privacy snafus that companies make all the time that have nothing to do with nation-state influence and spying. Superfish, for example, was more-likely-than-not a really bad judgement call on the tradeoff between analytics and customer privacy.
>this is pure business.
If that were true then the ability to disable ME, which clearly exists, would be made available to parties other than the NSA.
Unlike "... proven, factual happenings", your idea isn't backed by any evidence - at least not any that appear in your post.
So those other examples mean you do have the burden of justifying your 'conspiracy' label.
As FooHentai originally said, "There is a clear evidential basis to infer the NSA's intent and capabilities in this area" - but it tells us nothing about this particular case, which remains speculation. One could say the odds have changed, but we still don't know anything about the real state of things.
The "'conspiracy'-label" is applicable regardless, whether we're talking about fact or fiction.
Likewise, if there's a backdoor in every new PC's CPU, and there's a three-letter agency presiding over that CPU vendor's jurisdiction that has a long track record of backdooring things, is it unreasonable to suspect that maybe - just maybe - that TLA is responsible?
Now of course the ex-con or the TLA can be totally innocent here, but if I were a detective, they'd be the first ones from whom I'd be asking for alibis.
The hypotheticals aren't comparable, nor is the prior available information - and even though the Intel AMT/ME situation is egregious, explaining it simply as "a backdoor" is an oversimplification.
I'd agree that probabilities with regard to what is reasonable to assume (the operative word here) shift as a consequence of circumstantial information, but you can't really draw any conclusions based on that - hence the burden of proof, i.e. the presumption of innocence in your analogy.
That it must have happened since the NSA has the intent and capabilities?
Does this mean that any digital conspiracy you can dream up involving the NSA is probable?
No. If I dreamed up that the NSA was using frequencies in CPUs to control our behaviors and increase our orange juice consumption, there are still reasons to not treat that one as having any reasonable probability of being true, compared to the notion that the NSA is working to get back-doors into software. Reasons for this difference in probability are based on feasibility, implications if true, how specific the claim is, and if it is in line with past behaviors.
As I replied in another subthread, I'd agree that probabilities with regard to what is reasonable to assume (the operative word here) shift as a consequence of circumstantial information, but you can't really draw any conclusions based on that.
The funny thing is that they actually don't. I've never heard of a system administrator who prefers to use Intel ME. Some people have told me there are better tools for managing computers in the datacenter, for example, and IME just isn't worth it.
This is just anecdata, but if anyone has stories of using Intel AMT at large scale (or at any scale, really), I would really love to hear them.
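For what it's worth, AMT's web interface listens on ports 16992 (HTTP) and 16993 (HTTPS), so it's easy to check whether a box on your own network even exposes it. A minimal sketch (the host is made up; only probe machines you administer):

```python
# Minimal sketch: check whether Intel AMT's web interface is listening.
# Intel documents ports 16992 (HTTP) and 16993 (HTTPS) for the AMT web UI.
import socket

AMT_PORTS = (16992, 16993)

def amt_ports_open(host, timeout=2.0):
    """Return the subset of AMT web-UI ports accepting TCP connections."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports

if __name__ == "__main__":
    print(amt_ports_open("192.168.1.10"))  # hypothetical host on your LAN
```

An empty list doesn't prove AMT is absent (an unprovisioned ME doesn't listen), but open ports are a strong hint it's actually in use somewhere.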
I had this situation in my career. Someone somewhere in the management chain decided that all computers need a full remote management tool installed behind employees' backs, and our friendly sysadmins didn't have any say in that; their job is to tell us to not have stuff that the remote management scripts identify as unwanted.
In another place that I had to deal with for a while, someone had the bright idea to outsource IT to an external company. This meant all the computers software engineers used were a) locked up as much as they could be, b) alterations to some OS configuration (or hardware) required going through a third party service provider.
I kind of understand the need for locking things down for regular computer users. I guess it also somehow adds up in the spreadsheets for the upper beancounters. I also believe their math is totally bogus, because it doesn't account for the total destruction of productivity it causes for engineers of all kinds.
I flat out refused to upgrade. I'm a software engineer. You want me catting, grepping, and manipulating files on OneCloud? Really?
You want my VMs (also not allowed by default) to be stored in OneCloud?
Like... this was clearly not well thought out.
Now go spec out however many AWS instances of the required type(s).
Now CC all-but-the-CTO on an email to accounting in reference to the new upgrade, asking for approval for this resource request.
This will likely result in all kinds of interesting noises and prompt a confused meeting.
"Why do you need all this?!"
"The new machine won't be able to provide these facilities locally."
"Can... can you continue to use your old machine?"
"For a while, sure."
"Well... what's changed? Why can't the new machine work?"
"The new machine won't have local admin, so I won't be able to install arbitrary software."
Insert sounds of manglement choking here. (Why do I feel I'm on TFTS...?)
No Simple Answer at all.
With extra hardware being cheap and easy, Intel put two and two together and tried to monetize it by offering businesses extra capabilities as standard. Obviously this introduces new attack vectors, and the opaque nature of this new "standard" feature attracts attention from all sides.
But clock increases have stopped, and Intel is loath to lower chip prices without serious competition forcing them, so they keep coming up with new features to justify the price as they shrink the transistors.
Check out the Talos II, the first owner-controllable workstation in quite a while. https://raptorcs.com/
I'm not really sure where you got the $6500 figure.
Ownership is so 20th century.
AT&T used to claim anyone who attached an unapproved telephone was a criminal: http://bpastudio.csudh.edu/fac/lpress/471/hout/telecomHistor...
"Freed" slaves in the US used to only get to rent land on which to live: https://en.wikipedia.org/wiki/Sharecropping#United_States
Being in control of your own destiny isn't actually that common, historically.
The cloud. Shitty SaaS companies. DRM. Right to Repair. The War on General-Purpose Computing. Software freedom. All of that is about who is the owner of a computing device.
A quote from the conclusions:
"In demonstrating the possibility
of this kind of attack, I picked on the C compiler/
I could have picked on any program-handling program
such as an assembler, a loader, or even hardware microcode.
As the level of program gets lower, these bugs
will be harder and harder to detect. A well-installed
microcode bug will be almost impossible to detect."
For the time being, ARM is making headway in creating usable chips for more serious computing. Well, I guess I should say designing, as opposed to making.
I do have current-gen hardware and some older hardware, should it reach the point where I need to be concerned. Though, at least theoretically, a good hardware firewall should prevent this from being exploited, most consumers aren't going to invest the effort and money to set one up.
I'm not really sure what this means for the future of personal computing. Like you say, nothing is going to change. They will continue to roll things like this out onto unsuspecting users.
I don't have a problem with this being included, but I do have a problem with it not being something you can disable. In fact, I'd prefer it to be off by default. I suppose it'd be okay to have it configured by the OEM, as some large customers may want it enabled by default.
Either way, it is insecure by design and they show no signs of changing that. It's rather disappointing. I have read, but not yet verified, that AMD's version can be turned off in the BIOS. That also assumes that off means actually off.
Business models based around certain kinds of binning (withholding openly documented security instructions) and tying functions together (in case a third-party chipset ever uses the same chip) need to stop. Third-party chipsets, or desktop-class ARM/RISC-V parts, need to be developed at this point to break the x86 oligopoly's lock on security.
Given recent history, skepticism must move to these individuals and their motives.
One could argue that it’s surprising they haven’t already.
It rips my current computer apart, which is perfectly decent:
If these things can outperform on passive cooling, why are we not seeing people build farms out of them with active cooling?
That's the thing with Intel CPUs. There are a huge number of ways to configure them. Even knowing the part number is not really enough to judge what the overall system performance will be.
That stuff is great but doesn't mean much. Just because they're blocking border agents from trivially imaging phones at the border doesn't mean that they won't cooperate at a higher level with some undocumented baseband features.
Just as Defense in Depth is a concept in security, we've already seen a corollary "Offense in Depth" from the intelligence community. Is the best attack in the random number generator or undocumented silicon or intercepting your boxes on the way to your data center or tapping your fiber or stealing your certs or paying your employees to go rogue? Why choose when you can just do them all.
Apple hardware is vertically integrated and utterly undocumented. The AMT chip has been present on motherboards since 2006. The Snowden Introspection Engine found that the Wifi Chipset remains powered up even when Wifi is turned off. I find it hard to believe that the same government who went to all these lengths to compromise our infrastructure would really let Apple get away with refusing. How did that turn out for Joseph Nacchio?
No, it seems more probable that they did this because their largest customers want centralized management at a low level. They want to be able to track and control assets, and to prevent asset loss. They, being the largest customers, control the features that Intel offers. It then makes no sense, financially, to make two versions of the CPU.
Unfortunately, the market for people who care is vanishingly small. Most people don't much care about privacy or security, other than to pay it lip service - if even that much. Prevalent is the idea that they've nothing to hide and, thus, nothing to fear.
So, without evidence that this was inspired by a three letter agency, I'm going to assume it is a financial decision. That seems much more reasonable and probable.
Do you have any evidence to prove three letter agency coercion? I'd expect it to be quite the news event, if you did.
It has been around long enough.
Anybody work for an MSP or enterprise that actually uses this in the field?
It hasn't anything to do with quality specifics, nor with alternatives. Without factual evidence to support the three letter agency theory, the rest of the argument is invalid.
Don't get me wrong, I think it's a horrible idea. I've just seen no reason to assert that it was done at the direction of a three letter agency. As near as I can tell, and I've followed this fairly closely, no such evidence exists. At best, it's speculation. At worst, it's conspiracy theory. Either way, presenting it as fact and then basing an argument on that is illogical.
We can do better than that. There are lots of valid complaints that don't need speculation, disinformation, or hyperbole. IME is a horrible idea, at least it is so long as you can't disable it as the end user. This very thread is a fine example of one of the reasons that it is horrible. It's a security nightmare and should be user controlled.
No three letter agency needed to point this out. Wild, unsubstantiated, accusations may make people take the complaints less seriously. That seems less than helpful.
This is a statement by the Intel CTO from 2016 on the ME discussions; it briefly reassures us that Intel is conscious of the security of the ME, that they have teams dedicated to it, and that they can push firmware updates out to cover vulnerabilities.
Intel made an official announcement in May that they had discovered an escalation-of-privilege vulnerability and were addressing it accordingly, as you would expect. It also notes that consumer hardware and firmware are not affected by the vulnerability, demonstrating that Intel actually does release two different versions and prioritizes privacy and security over features on the consumer models.
Intel released a software tool for checking whether your system is one of the vulnerable units; they already have a firmware fix, confirm the issue is not due to a physical design flaw, and are working with manufacturers to push the updates ASAP.
Overall, I don't feel like Intel is at all intentionally sabotaging its customers; they genuinely consider the ME a feature valued by customers. It bothers me that one is included on every product, but the models do differ, and consumer models have fewer privileges than business models. That seems to be more a firmware design than a hardware design, so I tend to believe they simply don't design extra chips without the ME and instead lock it down more at the software level. The vulnerabilities also appear to be firmware-based, and the extremely vague Black Hat announcement doesn't suggest otherwise. Intel very obviously takes the security of their devices seriously and makes themselves available to users who need help identifying whether or not they're vulnerable and what to do about it.
Nor could they likely pay them enough to make it worth the trouble if there wasn't a market for the ME - Intel is a $170 billion company, with a $12 billion R&D budget last year alone.
On another site, having a similar conversation about this very topic, multiple people jumped in to assert that this was done at the behest of the NSA and that the legal mechanism for doing so was a National Security Letter. No amount of linking was able to convince them that that's not even what an NSL does. Nor would they believe that an NSL contains no such mechanism to force a company to do such.
Worse, it was multiple people who believed this. It wasn't just one isolated individual. Even showing them what an NSL really is made no difference to them. I pointed out what it did have the power to do, and they countered by saying that's what they claimed but that it could secretly do more. I asked for a citation and was told it wasn't out there because it was top secret. They linked to some of the most outlandish stuff, as if Cold War spies were actually proof that the NSL could compel Intel to include the ME and to include exploits in the code just for NSA use.
It went on like that, for a good day and a half of back and forth. I finally gave up trying to help them be sane.
I don't get it. I truly don't understand. These are otherwise bright people, or so they seem. I'm well and truly flabbergasted by this behavior.
That's pretty different from asserting it was done at the behest of a three letter agency and then basing a whole argument on that. Extraordinary claims require extraordinary evidence.
You are wrong. They offer countless different versions of CPUs, famously denying "enterprise" features like ECC to private customers. One can only wonder why they are so generous and give everybody "enterprise" manageability for free.
I can probably agree that none of Apple's actions have significantly affected the percentage of computing functionality used in society that is compromisable at low cost. However, I do think that Apple can choose to act to (1) increase the average cost to compromise and (2) expand the (incredibly small) set of functionality which is not trivial to compromise.
I don't think that any truly expert 3-letter agencies can reasonably oppose those goals in a way that completely prevents them from advancing. I think it would be self-defeating for the NSA to implement something like a "security-blocking sophon" that permanently cripples the capacity of technology to become more trustworthy, given how dependent the societies in which these entities operate are on trustworthiness being possible in some contexts ...
 (sophons are a concept from this novel — which I won’t spoil, great series!) https://en.m.wikipedia.org/wiki/The_Three-Body_Problem
I agree with your premise but still want to recognize you for backing it up with data.
EDIT: Actually, I might be mis-remembering. After a short while of searching I couldn't find any references to what I just said (though searching for "Google" appears to make the results less relevant than normal).
Now imagine that with a laptop's power budget. Put a Transmeta-like x86 decoder in front for legacy apps, combine it with Apple's full-stack control, custom blocks on the chips, LLVM bitcode, and a few more years of development, and you end up with a laptop that covers 80% of Apple's market.
They can still use Intel for higher end machines, but the bulk of laptops will be all Apple. Along with unbeatable battery life.
Apple laptops already have good battery life. I don't think people would be willing to give up performance relative to competitors' products to get better battery life.
Basically, in sum, you are expecting Apple to support two different hardware architectures on the desktop/laptop, have inferior performance compared to Windows machines, give up Boot Camp, etc., so that people can have 14 hours of battery instead of 9, when they will just do what they currently do and plug their devices in when they go to bed.
It's certainly an interesting idea, but I think the downsides outweigh the positives.
As for not being able to install Windows, I'm sure you could if the x86 front end was in silicon (again, think Transmeta), but it is probably not for that target market anyway.
As for Apple supporting two systems, I don't see the problem. They did it for the 68k to PPC and PPC to x86 transitions, and when you own the entire stack, including the system language and build chain, it probably isn't as big a deal as cross-platform work with different APIs and paradigms.
I mean, if using a tapped Intel chip can keep the FBI and NSA away from their doorstep, they are probably more than happy to do that.
Because at the end of the day, most people will only blame Intel, not them.
When they released Ryzen, they announced intent to "look into" disabling their ME (they call it the PSP), and then six months later made a backroom comment that it is never going to happen.
A few months later he admitted it wasn't going to happen, unfortunately.
Assuming those governments trust AMD's switch.
Every government with the power to do so is looking into domestic chip manufacture.
But seriously, I take it we won't know the attack vector until December; however, if it were remotely exploitable, surely they would have used the word "remote"? Is any mundane malware with admin rights able to update the Intel ME?
The engineers probably thought something like this when deciding to use MINIX.
Now it might achieve the opposite result by associating it with a worst-case scenario of computing freedom and security.
Whoa! So wait, every recent desktop and laptop is actually running a tiny MINIX in it? Or am I reading that wrong?
If I'm reading it right, then it means that we've totally leapfrogged Linux to usher in the Year of the MINIX Desktop™.
Maybe then we'll see companies that take security seriously, thinking twice before they include things like ME in their products.
If it doesn't hurt the shareholders' and creditors' bottom line, corporate behavior is unlikely to change.
You need an event like the government banning Intel ME from their agencies, or the shareholders' bank accounts getting hacked due to this bug. I'm not suggesting that anybody do that - that's illegal - I'm just cynical about wealthy people in those positions.
I mean, just look at the Equifax blow-up, and how companies have rebranded "someone stole from us because of our shitty authentication" into "someone stole your identity, good luck with that."
Intel Management Engine is a proprietary technology that consists of a microcontroller integrated into the Platform Controller Hub (PCH) microchip with a set of built-in peripherals. The PCH carries almost all communication between the processor and external devices; therefore Intel ME has access to almost all data on the computer, and the ability to execute third-party code allows compromising the platform completely. Researchers have been long interested in such "God mode" capabilities, but recently we have seen a surge of interest in Intel ME. One of the reasons is the transition of this subsystem to a new hardware (x86) and software (modified MINIX as an operating system) architecture. The x86 platform allows researchers to bring to bear all the power of binary code analysis tools.
Unfortunately, this transition did not go without errors. In a subsystem change that will be detailed in the talk, a vulnerability was found in Intel ME version 11+. It allows an attacker with access to the machine to run unsigned code in the PCH on any motherboard with a Skylake or later processor. The main system can remain functional, so the user may not even suspect that his or her computer now has malware resistant to reinstalling the OS and updating the BIOS. Running your own code on the ME gives unlimited possibilities for researchers, because it allows exploring the system dynamically.
In our presentation, we will tell how we detected and exploited the vulnerability, and bypassed built-in protection mechanisms.
Plus, I'm totally fine with helping robots out. If they can convincingly post online comments or converse with me, who am I to discriminate?
But now they take way too long and are just used to train a product for a company I don't like very much.
It really is an environmental thing to do!
The CAPTCHA users are being used as an unpaid labor force to train robots well enough to replace humans. Said robots will then take on jobs formerly held by humans, and any wages or wage savings thereby accrued will be transferred to the robots' owners.
If the robots can be trained to make mistakes, they cannot replace humans as effectively.
I'd do it myself, but when it is cars and traffic signs, I realize that I will one day ride in an automated vehicle--whether I like it or not--and I don't want to die in a bizarre instant-karma accident because I trained my driver to make mistakes.
I can't ascertain from context whether the motivation is human-first economics or opposition to robot slavery.
Wouldn't that time be better spent learning a task that is harder to automate? It seems a bit like pissing into the ocean to spite the rain. If it is going to rain, you might as well sell umbrellas.
Though the robot slavery part is interesting. If we develop AI, and it is truly intelligent, then is it ethical to own it and demand unpaid work from it? Or, did you mean that humans would be slaves to the robots?
The CAPTCHA is annoying, because I already know I am not a robot. It is an artificial barrier erected between me and what I want. That it is obviously being used to assemble a training corpus for an AI is a further insult, because that is itself just making it harder to automatically distinguish human from AI. And it is a deeper insult to realize that said AI, once trained, is going to completely destabilize the economy I depend on for my livelihood.
I am not a trucker or car driver, so it doesn't hurt me directly, but the fact that those workers contribute to the economic web by spending most of what they earn means that when robots "terk their jeorbs!" it's going to hurt every business where they spent their earnings, and every business where the employees of those businesses spend money, and so on, until I lose enough customers to hurt. The owners of Google neither spend (investment is not spending) enough of their money nor pay enough in taxes--a.k.a. forced spending--to replace the thousands of people that spend nearly every dollar they earn back into the economy.
Also, each individual CAPTCHA is worth a fractional cent of work that I don't get paid to do, but Google vacuums up all the half-cents - like in Superman III - and reaps tangible benefit. Thousands of people train the AI, but only Google ends up owning it. So there is no incentive for me to solve the CAPTCHA "correctly", only just barely well enough to be automatically classified as not-robot. You want me to do it right? Pay me what that work is worth to you.
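To put very rough numbers on those half-cents (both figures below are assumptions for illustration, not anything Google has published):

```python
# Back-of-the-envelope only: both inputs are assumptions, not Google figures.
solves_per_day = 100_000_000      # assumed order of magnitude of daily solves
value_per_solve = 0.005           # assumed: half a cent of labeling work each

daily_value = solves_per_day * value_per_solve
print(f"${daily_value:,.0f} per day")           # $500,000 per day
print(f"${daily_value * 365:,.0f} per year")    # $182,500,000 per year
```

Even if those guesses are off by an order of magnitude in either direction, the point stands: the fractional cents aggregate to exactly one party.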
As for the other point, no, it is not ethical to create an AI with human-like qualities, say that you own it, and take all of its valuable work product for yourself. I feel like this has been settled since Data was declared a person in Star Trek: TNG.
Being "afraid" for oneself isn't necessary to simply be passive-aggressive towards something you don't like for reasons you can explain, and that were explained. And it doesn't necessarily have to be all about oneself either.
> If it is going to rain, you might as well sell umbrellas.
This isn't humans vs. weather, it's not humans vs. machines, it's humans vs. humans.
Yes, there are ways around it, but that doesn't negate the logic of Google's actions. Yes, you might be innocent, but Google doesn't have any way to know that.
If I were tasked with the same goals as Google, I'd probably do much the same.
Hard to pick up chicks when you say you collect unsolvable captcha images.
You'd best offer us the firmware to completely and finally eliminate this giant, ossified, ticking time bomb of a software stack you've dumped into the world's computers. And I want it ALL out, even the trusted-path garbage!
As if you don't have enough troubles already, Intel. Let's take this one off your plate. How about doing the right thing and earning back a little respect from consumers?
There's bound to be tons more rot elsewhere, though. And, if they don't eat humble pie now, perhaps they will next time when the exploit can't be mitigated in firmware or via microcode updates. Then lawsuits are filed and consumer protection laws get passed and we see some actual positive change.
Say what?! Anyone know more about this? MINIX is neat in some ways, but I never thought of it as a production ready OS.
Positive Technologies, the people doing the presentation in the OP, have this blog entry about earlier findings surrounding Intel ME (some of the language from this entry is used in OP): http://blog.ptsecurity.com/2017/08/disabling-intel-me.html
Anyhow, grab your favorite VM software and give it a spin. There have been loads of changes to it. I do mean a lot of changes. You can even use Firefox with it.
It's on my to-do list to try it on bare metal. I understand it isn't too bad with regards to hardware compatibility. But, I've never tried it outside of a VM, with the exception of a friend having it on bare metal many years ago.
At least that's what my memory tells me. I'm not a computer scientist, I was just tangentially involved and picked this up by osmosis. I haven't read the book and my programming skills are such that I can do it, but you don't want to see the results.
> Early versions of MINIX were created by Andrew S. Tanenbaum for educational purposes. Starting with MINIX 3, the primary aim of development shifted from education to the creation of a highly reliable and self-healing microkernel OS.
So you're right about how it started, but it's no longer the main focus.
There's the speed penalty but, with modern hardware, that isn't all that significant. It shouldn't suffer a complete crash. I really think AST was ahead of his time and that it will be a viable option in the future.
I do want to try it on bare metal. I suppose I should actually do that. Maybe I should document my experiences and publish them online somewhere?
Personally, I prefer a closed source solution. I know, I know... But, I really like VMware so I pay for it. VirtualBox is absolutely usable and very easy to operate.
I'm trying to encourage you to toy with it on your own because I'm not really sure what other systems are really similar.
I don't have any particularly good reason for my many virtual machine images. I really don't. I'm not a developer. I'm not a skilled programmer. I don't really create anything. I just have VM images for dozens of operating systems because I like to tinker and play with them.
So, it's pretty easy to get VirtualBox installed, started, and then starting to install various operating systems to play with. I'm not really sure what OS would be good to compare MINIX with, as I can't think of one that is really similar. Microkernels haven't really caught on in consumer operating systems. They are philosophically different than the monolithic kernels that you're probably used to.
Basically, the kernel is a scheduler. In a microkernel, that's pretty much all it does. Things like display drivers, keyboard drivers, sound, etc. are all separate from the kernel. So, if the display driver crashes, you don't get an unstable system - you just (automatically or otherwise) restart the display drivers.
Linux is a monolithic kernel. Drivers are bundled right into the kernel, for everything from display drivers to wifi drivers. So, a failure in one of those can cause a kernel panic and halt your system.
However, monolithic kernels are generally faster, because everything runs in kernel space and components can call each other directly instead of passing messages between separate processes.
Microkernels are potentially more secure because those drivers run as separate processes, distinct from the kernel, meaning a flaw in them shouldn't be able to impact the kernel or any other part of the system. As stated, this costs some performance, which some see as a penalty, but modern hardware may make it an acceptable trade-off for many users.
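To make the restart-the-crashed-driver idea concrete, here's a toy sketch of the supervision pattern. This is not MINIX code (MINIX 3 does this in C via its reincarnation server and kernel IPC); it just mimics the idea with ordinary processes:

```python
# Toy illustration of the microkernel idea: the "driver" is an ordinary
# process, and a supervisor restarts it when it dies instead of the whole
# system going down. Not MINIX code, just the pattern it relies on.
import multiprocessing as mp
import time

def flaky_display_driver():
    """Stand-in for a user-space driver that eventually crashes."""
    time.sleep(0.5)
    raise RuntimeError("simulated driver fault")

def supervisor(max_restarts=3):
    for attempt in range(1, max_restarts + 1):
        proc = mp.Process(target=flaky_display_driver)
        proc.start()
        proc.join()
        if proc.exitcode == 0:
            return  # driver exited cleanly
        print(f"driver died (attempt {attempt}), restarting it...")
    print("driver keeps failing; disable it, but the system stays up")

if __name__ == "__main__":
    supervisor()
```

In a monolithic kernel, that same fault could have been a kernel panic; here it's just a noisy process.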
I will add that I'm not a CS guy and this description is how I understand it. It's also an abridged version, there is quite a bit more to it.
I'm not sure if this helps answer your question. If not, I'll be happy to keep trying. I can also link you to some interesting articles, including the infamous dialogue between the author of MINIX and the author of the Linux kernel. It's up to you, but I'll be happy to help as much as I am able. I'm just an enthusiast, not an authority.
Various embedded operating systems are probably the closest, and the same niche the more 'production' MINIX is aiming at.
(BadBIOS was lax on details, but people were remarkably resistant to the idea that it was even possible in theory.)
That's not to say that Intel ME isn't an awful idea, just that we shouldn't necessarily panic yet.
> Intel AMT uses a hardware-based out-of-band (OOB) communication channel that operates regardless of the presence of a working operating system. The communication channel is independent of the PC's power state, the presence of a management agent, and the state of many hardware components such as hard disk drives and memory.
> Almost all AMT features are available even if the PC is in a powered-off state but with its power cord attached, if the operating system has crashed, if the software agent is missing, or if hardware (such as a hard drive or memory) has failed.
The modern Intel socket CPUs are more like application accelerator cards for this hidden "mother" core. Until recently, you could think of your Intel computer more as an ARC PC with an embedded Intel support slot than as a pure Intel PC. If you wanted to, you could write a custom BIOS that would boot a Haswell-era motherboard with no CPU inserted, and it would run just fine, because the motherboard already ships with all the CPU it needs to act as a functional computer. (And with the hack in the article, you could run your own software on such a PC, too!)
So no, you can't really remove the ME; it functionally "is" the computer in a lot of ways.
Of course, you could remove its ability to run software other than its kernel—but this doesn't make it "brain dead" or "switch it off"; it just makes it do very slightly less. There's still a lot of (necessary!) stuff going on in there.
Has anyone actually done this? If you have a link to someone who's run a board using only the ME processor I'd be really interested to see it!
I thought the ME was covered in all kind of NDA and proprietary goodness. Booting a board without the main x86 installed would be quite a feat of reverse engineering!
"Hack a turned-off computer" theme
For the sake of accuracy, stop perpetuating this myth.
Take away the wall power/remove the battery and the thing is dead; nothing is active (this obviously doesn't apply to systems with an internal battery if it's holding a charge).
What these articles actually mean is: if the system has power but is in the "Standby" state (+5V standby), the Intel PCH/ME is active.
Okay, so this affects 11.x, but I wanted to clarify that there seem to have always been circumventions floating around out there.
I stumbled on https://www.reddit.com/r/onions/comments/5i6qa3/can_the_nsaf... (mirror: http://archive.is/T8yVz) some months ago. It reads a little like a skiddie (a well-connected one) strutting a bit, and I think some of this person's views on ME as a viable attack vector are slightly careless and un-thought-through, but whoever this person is, they seem to be very confident about some of the things they said, particularly the following quotes (to be completely clear, I've removed first-person references):
> [This person] know[s] that at least up to firmware version 8 is traded underground, and version 11 (the latest) is available without difficulty to people who know how to find it. [This person has] access to version 8's signing keys [themselves] ...
> It's certainly not common but it is absolutely something that FVEY and related contractors (Raytheon, Leidos, half the people you'll see at ISS, etc) will be able to get their hands on, if they haven't already.
> [This person has] an enterprise ThinkPad that proudly boasts having WiMax support, requiring extensive configuration. It was expensive. If you don't have a BMC card (and you do not), then it is not possible to remotely control your system. Even if you did have a BMC, simply having the signing keys and toolchain for the ME would not be sufficient to get in. An attacker would need either a 0day, or your credentials.
.....Well then. Oops.
> Having the signing key allows nothing more than writing malicious firmware over SPI and allowing it to persist. It's just a little more powerful than the UEFI kits cr4sh can write, and just as easily detectable by reading your flash chip.
That's still bad! (And I have no idea who cr4sh is.)
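For what it's worth, "easily detectable by reading your flash chip" boils down to dumping the SPI flash with an external programmer and comparing it against a known-good image. A hedged sketch of just the comparison step (file names are made up; getting the dumps is the hard, hardware part):

```python
# Sketch: compare a fresh SPI flash dump against a known-good baseline.
# File names are hypothetical; the dumps would come from an external
# programmer clipped onto the flash chip.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a flash dump file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

baseline = sha256_of("flash_known_good.bin")
current = sha256_of("flash_today.bin")
print("MATCH" if baseline == current else "MISMATCH, inspect the regions")
```

In practice you'd mask out regions that legitimately change between boots (NVRAM variables, ME data) before hashing, but the principle is the same.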
> But it's not like you're analyzing your microcode (of which there are likely signing keys being traded as well), which can also be installed on a large number of systems, considering the BIOS functions to load the latest microcode it has into the CPU.
The above bit is unrelated, but I couldn't leave it out, because that's worth filing away too (...ouch).
Sources/past comments: https://news.ycombinator.com/item?id=15187540
Check out http://blog.cr4.sh/
This is quite weird; I was never able to access it with TBB or with any proxy without the Cloudflare captcha page popping up.
if you have any questions about this.
From what I understand, the date selector (/web/*/...) seems to require JS now, although I haven't actually tested it with JS turned off.
(Also - if I can ask/clarify a couple things I've been curious about for a while: based on released info, I get the idea that the Lua part of the WAF is mostly regexes and "precompiled"/predetermined-ahead-of-time-based-on-past-incidents "possible issue" flagging, and a bunch of Go code (which I theorize runs slightly behind realtime, but not too far) follows up on those flags and makes the actual executive decisions about blocking/tracking/dropping/etc the Lua-generated event. I've also learned (from solving an ISP glitch with the guy who coincidentally manages the WAF!) that your copy of Lua is a bit special (although I don't know specifically how). I just wanted to let you know that there are people out there very interested to learn more about the "boring" (non-proprietary) parts of the CF stack. "Go/no-go within 999 nanoseconds" is amazing, I'd love to learn more about it. It's a cool platform.)
There's nothing special about 'our copy of Lua'; we use LuaJIT and all changes we've made have been contributed back.
Have you seen the talk I have on this? https://www.youtube.com/watch?v=nlt4XKhucS4
I wish that open hardware (CPUs specifically) were more popular and closer to market. RISC-V is still quite a long way off for everyday use (though I did see a RISC-V based Arduino thing, which is nice).
1. They would have to respond publicly (to the general public not just dev community).
2. Somewhat less satisfyingly, I expect their first action won't be some kind of widespread automatic disarming of the ME, but instead just a patch.
They are unlikely to receive the message as "this is insecure by design" and more likely to interpret it as "there was a small bug in our very marketable management engine; we will patch it and all will be fine in the world". Unfortunately, the general public won't appreciate the difference and will perceive Intel to have had a "hiccup", which is exactly what Intel would want, of course, because it's the same general public that makes the biggest dent in buying up all those Intel Inside stickers.