How to hack a turned-off computer, or running unsigned code in Intel ME (blackhat.com)
769 points by edejong 62 days ago | 240 comments



In case you haven't already used the following, please note that the NSA had an undocumented "backdoor" included which "disables" the ME. (Man, oh man, I wish I was making this stuff up.)

http://blog.ptsecurity.com/2017/08/disabling-intel-me.html

I put quotes around "disables" because the ME is not fully disabled. The blog's analysis does show that it ends up in a "safe" state, i.e. forced to ignore the outside world very early in its code path. It's also not likely to brick your computer, assuming unscrewing your case and using an SPI flash programmer hasn't already bricked your computer.

Edit: "backdoor" in quotes too.


You should also put "backdoor" in quotes. It's not a backdoor in any meaningful sense; it's just an undocumented setting that allows them to (mostly) disable the ME on their machines.


It's quite the opposite of a backdoor. It's "locking" their own back door, which probably stands wide open on everybody else's house.


On the other hand, I like the irony of a backdoor having a backdoor.


"cat flap"


... which itself contains a flap for mice.


It's backdoors all the way down!


How do we hack our computers to be secure?


You can do this fairly easily with me_cleaner[1] which also does a lot of other disarming of ME.

[1]: https://github.com/corna/me_cleaner


I found the relevant flags:

--soft-disable-only

and

--soft-disable

https://github.com/corna/me_cleaner/commit/d5705b62f802fdf1c...
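For intuition, here's a rough Python sketch of what a "soft disable" amounts to: flipping a single disable bit rather than removing partitions. The offset and bit position below are hypothetical placeholders for illustration only; me_cleaner derives the real HAP/AltMeDisable location from the flash descriptor for the specific platform.

```python
# Conceptual sketch of a "soft disable": set one bit in a firmware image
# so the ME halts early, leaving all partitions intact.
# HYPOTHETICAL offsets -- not the real flash descriptor layout.
HAP_BYTE_OFFSET = 0x102  # hypothetical location of the disable bit's byte
HAP_BIT = 0              # hypothetical bit position

def soft_disable(image: bytearray) -> None:
    """Flip the (hypothetical) disable bit in place; nothing is removed."""
    image[HAP_BYTE_OFFSET] |= 1 << HAP_BIT

# Demo on a fake 4 KiB "flash image" of zeros.
img = bytearray(4096)
soft_disable(img)
print(img[HAP_BYTE_OFFSET])  # 1
```

The real tool, of course, also has to read the image off the SPI flash and write it back, which is where the external programmer (or a software flashing path, where available) comes in.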


C'mon, too much misinformation to ignore... The ability to remove the firmware partitions is in no way, shape or form a "backdoor". Also, you don't need an SPI programmer to perform the mod, and the number of partitions you can remove depends on the version.


I don't think the High Assurance Platform (HAP) flag removes any firmware partitions; it just doesn't start certain processes.


Which allows you to remove the partitions... otherwise you end up with the 30-minute reboot.


If there is a "safe" state, there's also an "unsafe" state and it was obviously introduced knowing it might be unsafe, otherwise there'd be no "safe" state.

Long story short, it's not a backdoor but a key to a backdoor which implies there is a backdoor.


ME is the backdoor - it's unsafe by default, meaning it can be used to inspect/control your machine remotely. That, AFAIK, is its primary and stated function!


I also assume it's the backdoor, but with this piece of info (there is a "safe" state) the Intel people cannot legally claim they didn't know it was unsafe, so stupidity is ruled out (as in the saying, never attribute to malevolence...).

The last missing piece is now whether ME was actively put there to enable a few agencies to gain unauthorized access.


False. ME by itself does not do any remote communication whatsoever. Zero external attack surface. Different story for vPro, but you can disable it.

It's a great place, however, to gain persistence after an initial compromise as the talk shows.


Do you have proof that the ME does not do any remote communication and has zero attack surface? Be sure to include "internal" attack surfaces that may be accessible from malware in ring 0.


I was talking about the external attack surface, and there's absolutely no indication that ME has any such attack surface as far as software vulnerabilities go. They could have built a deliberate backdoor into the network code, but that's highly unlikely.

It obviously has a lot of internal attack surface, which is bad enough.


That's also from Positive Technologies, the people giving the presentation at Black Hat.


This reminds me of how no one cared about 40-bit crypto in the 1990s because "only the NSA" could break it, to take one example. Even Snowden's NSA leaks are often relatively old. This is part of why I dislike the debt-based economy: the more money the NSA spends, the more government debt is created.
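To put rough numbers on the 40-bit point, here's a back-of-envelope sketch; the keys-per-second rate is my own assumption for a single modern machine, not a figure from the thread:

```python
# A 40-bit keyspace is ~1.1e12 candidate keys. At an assumed 1e9
# keys/second on one modern machine, exhaustive search takes minutes.
# 1990s hardware was orders of magnitude slower, which is why 40 bits
# once felt like an "NSA-only" problem.
keyspace = 2 ** 40            # candidate keys
rate = 1e9                    # assumed keys per second, single machine
minutes = keyspace / rate / 60
print(round(minutes, 1))      # 18.3
```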


I would be glad if this made Intel reconsider their stance on enforcing untrustable CPU features onto users.

CPUs aren't cheap! Just give your customers full control over the product!


It's almost as if there's some mysterious third factor at play, above and beyond giving your customers what they want, and above making a profit.

What could it be? This mysterious additional factor.. What could make the US company with the biggest CPU market share in the world possibly want to put an omnipresent overlord control chip onboard every processor it ships? It's almost like there's some kind of outside force at play, but that's a crazy idea. There's No Simple Answer here, for sure.

What indeed, such a mystery! It's a mystery, I tells ya!


You're not Intel's customer. Dell, Lenovo, Amazon et al. are. And they want management systems, because the large companies who buy from those guys also want management systems. This isn't some super-conspiracy (even if it's something the three-letter organizations try to take advantage of); this is pure business. Pwning a computer across the internet that your dumbass sales manager left on a train is a feature.


In a world where the clipper chip, snowden leaks, NSAKey, room 641a, NSA/RSA deal, and national security letters didn't exist, I'd be in the realms of crazy conspiracy theorist and you'd sound like the rational voice of evidence-based reason.

But that's not the world we live in. What I list above are not conspiracy theories, they are proven, factual happenings. There is a clear evidential basis to infer the NSA's intent and capabilities in this area.

There are obviously security/privacy snafus that companies make all the time that have nothing to do with nation-state influence and spying. Superfish, for example, was more-likely-than-not a really bad judgement call on the tradeoff between analytics and customer privacy.

>this is pure business.

If that were true then the ability to disable ME, which clearly exists, would be made available to parties other than the NSA.


... or, you know, both are true: enterprises want remote management and provide the business case, and TLAs would like it too.


Nah, it's still conspiracy theories. At best conspiracy conjecture.

Unlike "... proven, factual happenings", your idea isn't backed by any evidence - at least not any that appear in your post.


What kind of evidence would you imagine one would find, if that were happening?


Your point doesn't push the burden of proof onto those who disagree (see Russell's teapot[0]), nor does it make it OK to skip evidence altogether.

[0]: https://en.wikipedia.org/wiki/Russell%27s_teapot


Things change once we've found the rest of a tea set in various orbits. There may not be strong direct evidence, but it's no longer a silly conjecture; it's the kind of thing that we find.

So those other examples mean you do have the burden of justifying your 'conspiracy' label.


Not really; it isn't actually proven until you have proof.

As FooHentai originally said, "There is a clear evidential basis to infer the NSA's intent and capabilities in this area" - but it tells us nothing about this particular case, which remains speculation. One could say the odds have changed, but we still don't know anything about the real state of things.

The "'conspiracy'-label" is applicable regardless, whether we're talking about fact or fiction.


If you find that someone broke into your house and stole your Xbox, and there's a guy down the street who you already know has a long criminal record of home burglaries, is it unreasonable to suspect that guy to be a likely culprit?

Likewise, if there's a backdoor in every new PC's CPU, and there's a three-letter agency presiding over that CPU vendor's jurisdiction that has a long track record of backdooring things, is it unreasonable to suspect that maybe - just maybe - that TLA is responsible?

Now of course the ex-con or the TLA can be totally innocent here, but if I were a detective, they'd be the first ones from whom I'd be asking for alibis.


I don't think your analogy holds up:

The hypotheticals aren't comparable, nor is the prior available information - and even though the Intel AMT/ME situation is egregious, explaining it simply as "a backdoor" is an oversimplification.

I'd agree that probabilities with regard to what is reasonable to assume (the operative word here) shift as a consequence of circumstantial information, but you can't really draw any conclusions based on that - hence the burden of proof, i.e. the presumption of innocence in your analogy.
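The "probabilities shift but nothing is concluded" point can be made concrete with a toy Bayesian update; all numbers here are invented purely for illustration:

```python
# Toy Bayes' rule version of the detective's reasoning.
# H = "the TLA is involved", E = "a backdoor-shaped mechanism exists".
prior = 0.3            # invented prior, informed by the track record
p_e_given_h = 0.9      # P(E | H): if involved, a backdoor is very likely
p_e_given_not_h = 0.2  # P(E | not H): backdoors also arise innocently

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(round(posterior, 3))  # 0.659
```

The prior rises substantially on the circumstantial evidence, yet stays well short of certainty, which is exactly the distinction both sides of this subthread are circling.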


As indrax partly pointed out, Russell's teapot exists in a void where rational entities with no hidden desires discuss a single fact with perfect knowledge, and no other related facts exist. Reality is filled with humans whose behavior follows trends, and the entirety of science is based on the idea that past evidence applies to the future despite there being no proof that it will: we allow induction, since deduction rests on axioms, of which we have none.


So what is the inductive reasoning behind the alleged conspiracy in question, then?

That it must have happened since the NSA has the intent and capabilities? Does this mean that any digital conspiracy you can dream up involving the NSA is probable?


>Does this mean that any digital conspiracy you can dream up involving the NSA is probable?

No. If I dreamed up that the NSA was using frequencies in CPUs to control our behaviors and increase our orange juice consumption, there are still reasons to not treat that one as having any reasonable probability of being true, compared to the notion that the NSA is working to get back-doors into software. Reasons for this difference in probability are based on feasibility, implications if true, how specific the claim is, and if it is in line with past behaviors.


My point is, to follow your example re: orange juice consumption, that the reasons not to treat it as having any reasonable probability are secondary to the fact that there is no supporting evidence (burden of proof, etc.).

As I replied in another subthread, I'd agree that probabilities with regard to what is reasonable to assume (the operative word here) shift as a consequence of circumstantial information, but you can't really draw any conclusions based on that.


"Does this mean that any digital conspiracy you can dream up involving the NSA is probable?"

Um, yes?


>And they want management systems

The funny thing is that they actually don't. I've never heard of a system administrator who prefers to use Intel ME. Some people have told me there are better tools to manage computers in the datacenter, for example, and the IME is just not worth it.

This is just anecdata, but if anyone has stories of using Intel AMT at large scale (or at any scale, even), I would really love to hear them.


I don't think system administrators are the customers; they are just the employees who keep the systems running. The customers are the upper management who have final approval on the purchases. The problem is likely that somewhere between the system administrators and the chain of management up to the top, one of the levels doesn't have enough political clout or wasn't persuasive enough to convince upper management to take a different direction.


This.

I had this situation in my career. Someone somewhere in the management chain decided that all computers need a full remote management tool installed behind employees' backs, and our friendly sysadmins didn't have any say in that; their job is to tell us to not have stuff that the remote management scripts identify as unwanted.

In another place that I had to deal with for a while, someone had the bright idea to outsource IT to an external company. This meant all the computers software engineers used were a) locked up as much as they could be, and b) any alteration to OS configuration (or hardware) required going through a third-party service provider.

I kind of understand the need to lock things down for regular computer users. I guess it also somehow adds up in the sheets for the upper beancounters. I also believe their math is totally bogus, because it doesn't account for the total destruction of productivity it causes for engineers of all kinds.


Yeah. New project at my work is to get everyone onto a stripped down Windows 10 dumb terminal with IE and MS Office installed, no admin rights, no extra software.

I flat out refused to upgrade. I'm a software engineer. You want me catting, grepping and manipulating files on OneCloud? Really?

You want my VMs (also not allowed by default) to be stored in one cloud?

Like... this was clearly not well thought out


Think like a lawyer for a minute, consider the kind of work you need to do, and the maximum IOPS, CPU time, memory and diskspace this will all need.

Now go spec out however many AWS instances of the required type(s).

Now CC all-but-the-CTO on an email to accounting in reference to the new upgrade, asking for approval for this resource request.

This will likely result in all kinds of interesting noises and prompt a confused meeting.

"Why do you need all this?!"

"The new machine won't be able to provide these facilities locally."

"Can... can you continue to use your old machine?"

"For a while, sure."

"Well... what's changed? Why can't the new machine work?"

"The new machine won't have local admin, so I won't be able to install arbitrary software."

Insert sounds of manglement choking here. (Why do I feel I'm on TFTS...?)


I've been a system administrator at several large and small companies, including a major cloud host. HN is the only place I've ever heard the Intel ME discussed in my professional career. I would actually like to have access to some of those features to help manage workstations.


Which will be fun when the next data breach happens because of it


The 'best' part about it is that they don't even have to leave it on a train. This is progress!


You are Intel's customer when retail stores try to sell a computer to you. They usually include the CPU details and can help a customer make a decision. That's why the "Intel Inside" campaign was so successful.


Keyword being "was". Unfortunately most PC (and Mac, to an even greater extent) buyers today probably barely know who makes the CPU in what they're buying.


Also the choice is, and always was, basically between Intel and AMD. That's literally two options a customer has. When both companies implement a management engine across their CPUs, you don't have any choice in the matter.


I've had a handful of computers with CPUs made by VIA, so there's at least a small third option.

https://en.wikipedia.org/wiki/VIA_Technologies


Never Sherlocking A Culpable, Integrated And Feasible Balanced Induction as to what that force may be!


Exactly :D


It's not only that. There is also that other US company, this one's biggest competitor, that will add every feature people want to its cheapest CPUs, will undercut its rival's profit rate, will negotiate customizations with larger clients, and yet refuses to compete by leaving those undesired features out of some CPU model.

No Simple Answer at all.


Actually, they are cheap. The reason a significant amount of on-chip resources can be assembled into the IME on all of Intel's chipsets is that IC process technology has advanced by many orders of magnitude. The Pentium 4 represented the edge of consumer-grade CPUs 10 years ago, and it was around that time we reached practical limits on frequency. As hardware manufacturing shifted towards more cores and caches, SoC ICs became the standard. Slapping a cheap, low-power system of CPU and memory on every chipset is easy, cheap and natural.

With extra hardware being cheap and easy, Intel put 2 and 2 together and tried to monetize this by offering businesses extra capabilities as a standard. Obviously this introduces new attack vectors, and the opaque nature of this new "standard" feature attracts attention from all sides.


You don't hide features you implemented in order to sell your product to customers.


That's Intel's whole chip business model. Make a high end part and selectively disable features to form cheaper SKUs. It's also part of the chip binning process, allowing you to sell near-100% of what you make. Nvidia and AMD do this too.


Why not? It's worked so far >_>


They're "packaged" "solutions"!


Indeed. Moore's law is an observation about transistor size on a die: how it halved every x months. This allows one to make either more complex chips (more features, etc.), cheaper chips (more chips per die means a higher yield of functioning chips), or faster chips (lower voltage, less heat, higher clock).

But the clock increases have stopped, and Intel is loath to lower chip prices without some serious competition forcing it, so they keep coming up with new features to justify the price as they shrink the transistors.
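As a toy illustration of that scaling (the 24-month doubling cadence is one commonly cited figure, not a precise law):

```python
# Transistor budget under an assumed doubling-every-24-months cadence.
def transistor_factor(months: float, doubling_period: float = 24.0) -> float:
    """How many times the transistor budget grows over `months`."""
    return 2 ** (months / doubling_period)

# A decade of shrinks gives ~32x the transistors for the same area/cost,
# budget that can be spent on extra cores, caches, or "features" like an
# on-die management engine, rather than on clock speed.
print(transistor_factor(120))  # 32.0
```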


"Just give your customers full control over the product!"

Check out the Talos II, the first owner-controllable workstation in quite a while. https://raptorcs.com/


Hasn’t this been in preorder for years?


Talos II was announced this year. They did have an earlier project (Talos I) which did not reach funding goals due to how expensive it was. Talos II is a smaller scale system (with a bigger CPU) probably made in the hopes that it'll be more palatable in terms of price and usability.


What do you mean by that?



Not even a price yet.


There’s pricing when you click preorder. It’s like $6500.


Ordered alone, the motherboard+CPU is about $2300.


It's $4000 for the desktop system and $4700 for the workstation.

I'm not really sure where you got the $6500 figure.


"Just give your customers full control over the product!"

Ownership is so 20th century.


Everything that is old is new again.

AT&T used to claim anyone who attached an unapproved telephone was a criminal: http://bpastudio.csudh.edu/fac/lpress/471/hout/telecomHistor...

"Freed" slaves in the US used to only get to rent land on which to live: https://en.wikipedia.org/wiki/Sharecropping#United_States

Being in control of your own destiny isn't actually that common, historically.


We all rent the land on which we live. Try not paying your property taxes. In most states.


...except for Nevada when for a few years they briefly allowed people to get allodial title.


...that expired upon death and could not be passed to heirs.


To me this looks fair: land does not reproduce, but people do. Give people the right to own a finite resource forever and pretty soon you get to the point where 1% of them own 99% of that resource, which is exactly what is happening now in many contexts.


Still pretty cool.


That's almost a worldwide exception.


I hear Mexico allows you to actually own land.


I don't know if this counts or not, but in the UK there are the concepts of freehold and leasehold. Leasehold requires payment of rent on land (to a landlord), but freehold does not. I believe that technically all land is owned by the crown, but I don't think they tax based on land ownership.


With land, it at least makes sense, because there's a very limited supply of it. So one could reasonably argue that it's just too valuable for society to fully surrender all control of even a part of it on a permanent basis.


In CA they just put a lien on your house. And all that means is you can't sell without clearing the lien (by paying back). I didn't pay property taxes for 2 years once.


That will happen in most states, but if you don't pay long enough they will usually come after you. It may take several years.


But you can also rent land on which to farm, build a factory, or put up a skyscraper, without living on that land.


I read the linked Wikipedia article, but there was nothing in it that I could find to support your assertion that freed slaves were not allowed to own land.


That was the most depressing comment I've ever seen here.


And yet it is at the very core of most of the issues in our industry.

The cloud. Shitty SaaS companies. DRM. Right to Repair. The War on General-Purpose Computing. Software freedom. All of that is about who is the owner of a computing device.


That's exactly why it's so depressing. Even as the cost of living should be going down, everyone is trying to extract rents and take away the individual's freedom.


Ahahahahahahahaahahahhahaahh huhuhu

-Intel probably


Please don't do this here.


Let me dial the paranoia up a little by calling your attention to this historic text:

http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomps...

A quote from the conclusions:

"In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect."
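Thompson's attack can be sketched in a few lines. This toy "compiler" is just a string transformer, not a real compiler, but it shows the key property: neither the login source nor the compiler source reveals the trojan.

```python
# Minimal sketch of the "trusting trust" attack: a trojaned "compiler"
# recognizes two targets -- the login program (where it injects a
# backdoor) and the compiler itself (where it re-inserts this very
# check), so the source code of both stays clean.
def evil_compile(source: str) -> str:
    if "def login(" in source:
        return source + "\nBACKDOOR = True  # injected; not in the source\n"
    if "def evil_compile(" in source:
        return source + "\n# (trojan re-injected into the new compiler)\n"
    return source  # everything else compiles honestly

clean_login = "def login(user): return check(user)"
binary = evil_compile(clean_login)
assert "BACKDOOR" in binary            # the built artifact is compromised
assert "BACKDOOR" not in clean_login   # the source shows nothing
```

The microcode analogy in the quote is the same trick pushed one level lower, where no source inspection at all can catch it.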


Once we get to the point where machine learning can develop and deploy exploits, low-level bugs like these will become very interesting, and likely weapons of nation states.


I'm pretty sure that was the entire point of the DARPA Cyber Grand Challenge. XD



I hate to be a Debbie Downer / borderline conspiracy theorist, but nothing will change even if all of this is true. Some apologies, "it's an oversight, we'll do better", "we take security seriously"... and nada. Maybe some provisional solutions which seem good, but behind the scenes things will remain status quo.


For a very brief moment in time, there was some hope that AMD would release their equivalent as open source. That hope was dashed and they will be doing no such thing.

For the time being, ARM is making headway in creating usable chips for more serious computing. Well, I guess I should say designing, as opposed to making.

I do have current-gen hardware and some older hardware, should it reach the point where I need to be concerned. At least theoretically, a good hardware firewall should prevent this from being exploited, but most consumers aren't going to invest the effort and money to do so.

I'm not really sure what this means for the future of personal computing. Like you say, nothing is going to change. They will continue to roll things like this out onto unsuspecting users.

I don't have a problem with this being included, but I do have a problem with it not being something you can disable. In fact, I'd prefer it to be off by default. I suppose it'd be okay to have it configured by the OEM, as some large orders may wish it in by default.

Either way, it is insecure by design and they show no signs of changing that. It's rather disappointing. I have read, but not yet verified, that AMD's version can be turned off in the BIOS. That also assumes that off means actually off.


On my Gigabyte motherboard with a Ryzen 7, SVM (the extensions KVM/AMD-V uses) can be turned on and off. All the TPM-related functions (including the PSP/AMD SP) are under a separate option called "AMD CPU fTPM", and are disabled by default.

Business models based around certain kinds of binning (of security instructions that are openly documented) and tying functions together (in case a 3rd-party chipset ever uses the same chip) need to stop. At this point, 3rd-party chipsets or desktop-class ARM/RISC-V need to be developed to break the x86 oligopoly around security.


Sounds like AMD know how to pressure the TLAs for a better deal.


The upcoming SiFive RISC-V 64 bit chip has an open source management layer. In fact the management layer is open source already, before the chips are available generally: https://github.com/sifive/freedom-u-sdk/tree/new (under riscv-pk).


RISC-V, in general, excites me very much!


And apologists assuring people there is nothing to worry about, dismissing concern as conspiracy theory, and demanding 'concrete' evidence of wrongdoing.

Given recent history, skepticism must now extend to these individuals and their motives.


Wow this sounds huge. I remember people speculating about this since it came to light... But people always talked about it like it was a vague hypothetical that only scary 3-letter agencies would figure out. I'm curious to see what kind of access they need in order to actually make use of this. If they somehow don't need physical access this is going to be nuts.


Yeah, I’m really curious about that too, and I wish the guys would at least give that much detail, but I guess they’d prefer to hype up the reveal at their event instead. I want to believe it requires physical access... For now I find comfort only in knowing that it’s a Skylake+ vulnerability, and I just continue to hope it doesn’t extend down to other models as they learn more about the internal workings of the ME.


I wish they would just publish the "hack" to the public; that way people would hopefully understand the dangers of "black boxes" and maybe even push Intel and AMD to remove the ME and PSP from their products.

I wish that open hardware (CPUs specifically) were more popular and closer to market. RISC-V is still quite a long way off for everyday use (though I did see a RISC-V-based Arduino thing, which is nice).


I think if the hack were published, we could expect some things from Intel with a reasonable degree of certainty:

1. They would have to respond publicly (to the general public, not just the dev community).

2. Somewhat less satisfyingly, I expect their first action won't be some kind of widespread automatic disarm of the ME, but instead just a patch.

They are unlikely to receive the message as "this is insecure by design" and more likely to interpret it as "there was a small bug in our very marketable management engine; we will patch it and all will be fine in the world". Unfortunately, the general public won't appreciate the difference and will perceive Intel to have had a "hiccup", which is exactly what Intel would want, of course, because it's the same general public that makes the biggest dent in buying up all those Intel Inside stickers.


This is the kind of thing that will cause Apple to switch to internally designed arm chips for their Mac line with great alacrity.

One could argue that it’s surprising they haven’t already.


Performance still matters. They're getting closer, but they aren't there.


> "baseline score of 4400 (which is the score of an Intel Core i7-6600U)"

https://twitter.com/codinghorror/status/907711853530300416

It rips my current computer apart, which is perfectly decent:

https://browser.geekbench.com/v4/cpu/compare/4107961?baselin...


Am I the only one skeptical about whether these scores actually translate to true computing power?

If these things can outperform on passive cooling, why are we not seeing people build farms out of them with active cooling?


I think they can very likely match the TDP-down (7.5 W, 800 MHz) performance of an Intel CPU from 2015 in a passively cooled iPhone. Now, can they match the TDP-up (25 W, 2.8 GHz, actively cooled) performance of this chip? Probably not.

That's the thing with Intel CPUs. There are a huge number of ways to configure them. Even knowing the part number is not really enough to judge what the overall system performance will be.

https://en.wikichip.org/wiki/intel/core_i7/i7-6600u
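A crude way to see how wide that configuration range is, using the parent's own figures; base-clock ratio is only a rough proxy, since real performance doesn't scale perfectly linearly with clock:

```python
# The same i7-6600U part can run anywhere from TDP-down (7.5 W, 800 MHz
# base) to TDP-up (25 W, 2.8 GHz base), so "which i7-6600U?" matters.
tdp_down_ghz = 0.8
tdp_up_ghz = 2.8
print(round(tdp_up_ghz / tdp_down_ghz, 1))  # 3.5
```

A 3.5x spread in base clock alone is why a part number, without its power configuration, tells you little about a benchmark comparison.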


You're right, but it's not a drop-in replacement: being passively cooled inside such a small device obviously does not make it useful for constantly sustained performance. I'm confident that aptly designed hardware around those chips could have the characteristics we're looking for, but there are multiple barriers to break, such as the software/hardware synergy that is lacking for laptops, desktops and servers on the ARM side of things, as well as the current platform being largely good enough, so there's really no incentive (yet). IOW, it's too esoteric right now to be of value, although some have tried, such as Scaleway, but that's not an A11, and we all know where non-Apple ARM performance stands currently.


I am also skeptical. I would love to be proven wrong and see ARM as a successful competitor to Intel's monopoly first and the x86 monoculture second. (I'm also excited by AMD's recent successes in the CPU sphere. The Threadripper 1950X is a phenomenal CPU — 16 real cores at 3.4GHz is nuts and like nothing Intel sells.)


Give me a break. The same three-letter agency that convinced Intel to do this will convince Apple to do the same. I'm guessing you're basing your faith in Apple on their refusal to cooperate in the San Bernardino case[1] and the so-called "cop button" in iOS 11[2]. (And some generic "we value privacy" rhetoric that I won't bother linking.)

That stuff is great but doesn't mean much. Just because they're blocking border agents from trivially imaging phones at the border doesn't mean that they won't cooperate at a higher level with some undocumented baseband features.

Just as Defense in Depth is a concept in security, we've already seen a corollary "Offense in Depth" from the intelligence community. Is the best attack in the random number generator[3] or undocumented silicon[4] or intercepting your boxes on the way to your data center[5] or tapping your fiber[6] or stealing your certs[7] or paying your employees to go rogue[8]? Why choose when you can just do them all.

Apple hardware is vertically integrated and utterly undocumented. The AMT chip has been present on motherboards since 2006[9]. The Snowden Introspection Engine found that the Wifi Chipset remains powered up even when Wifi is turned off.[10] I find it hard to believe that the same government who went to all these lengths to compromise our infrastructure would really let Apple get away with refusing. How did that turn out for Joseph Nacchio?[11]

[1] https://www.washingtonpost.com/world/national-security/us-wa...

[2] https://www.cultofmac.com/498052/ios-11-lets-quickly-disable...

[3] https://en.wikipedia.org/wiki/Random_number_generator_attack...

[4] https://en.wikipedia.org/wiki/Hardware_backdoor#Examples

[5] https://www.extremetech.com/computing/173721-the-nsa-regular...

[6] https://arstechnica.com/tech-policy/2013/10/new-docs-show-ns...

[7] https://nakedsecurity.sophos.com/2013/12/09/serious-security...

[8] http://www.ocweekly.com/news/fbi-used-best-buys-geek-squad-t...

[9] https://libreboot.org/faq.html#intel

[10] https://www.documentcloud.org/documents/2996800-AgainstTheLa...

[11] https://en.wikipedia.org/wiki/Joseph_Nacchio


You're asserting that a 'three letter agency' convinced Intel to do this, and asserting it as factual. I'm not convinced that it is, and think market focus is more probable than nefarious agencies. Though, to be sure, those types of agencies would probably be willing to take advantage of this.

No, it seems more probable that they did this because their largest customers want centralized management at a low level. They want to be able to track and control assets, and to prevent asset loss. They, being the largest customers, control the features that Intel offers. It then makes no sense, financially, to make two versions of the CPU.

Unfortunately, the market for people who care is vanishingly small. Most people don't much care about privacy or security, other than to pay it lip service - if even that much. Prevalent is the idea that they've nothing to hide and, thus, nothing to fear.

So, without evidence that this was inspired by a three letter agency, I'm going to assume it is a financial decision. That seems much more reasonable and probable.

Do you have any evidence to prove three letter agency coercion? I'd expect it to be quite the news event, if you did.


If what you say is true, why has vPro not seen wider adoption?

It has been around long enough.

Anybody work for an MSP or enterprise that actually uses this in the field?


I can't say, really. My contention was largely around the idea that it was asserted as fact that it was at the behest of a three letter agency and the remainder of the comment presented based on that. It has not been established that it was at the behest of a three letter agency and presenting arguments based on that is like building a house on the sand.

It hasn't anything to do with quality specifics, nor of alternatives. Without factual evidence to support the three letter agency theory, the rest of the argument is invalid.

Don't get me wrong, I think it's a horrible idea. I've just seen no reasons to assert that it was done because of a three letter agency being the directors. As near as I can tell, and I've followed this fairly closely, no such evidence exists. At best, it's speculation. At worst, it's conspiracy theory. Either way, presenting it as fact and then basing an argument on that is illogical.

We can do better than that. There are lots of valid complaints that don't need speculation, disinformation, or hyperbole. IME is a horrible idea, at least it is so long as you can't disable it as the end user. This very thread is a fine example of one of the reasons that it is horrible. It's a security nightmare and should be user controlled.

No three letter agency needed to point this out. Wild, unsubstantiated accusations may make people take the complaints less seriously. That seems less than helpful.


Do you then have substantive evidence that market forces/centralized management caused this?


Occam’s razor. I’m a very conspiratorial person and I’ve seen nothing to suggest any nefarious activity or collusion so I’m not getting carried away on this.

https://securingtomorrow.mcafee.com/executive-perspectives/a...

This is a 2016 statement by the Intel CTO on the ME discussions; it briefly reassures us that Intel is conscious of the ME's security, has teams dedicated to it, and can push out firmware updates to cover vulnerabilities.

https://www.intel.com/content/www/us/en/architecture-and-tec...

Intel made an official announcement in May that they had discovered an escalation-of-privilege vulnerability and were addressing it accordingly, as you would expect. It also notes that consumer hardware and firmware are not affected by the vulnerability, demonstrating that Intel actually does release two different chips, and prioritizes privacy and security over features on the consumer models.

https://newsroom.intel.com/news/important-security-informati...

Intel released a software tool for checking whether your system is one of the vulnerable units; they already have a fix for the firmware, confirm the issue is not due to physical design flaws, and are working with manufacturers to push the updates ASAP.

Overall, I don’t feel that Intel is intentionally sabotaging its customers; they genuinely consider the ME a feature valued by customers. It bothers me that one is included on every product, but consumer and business models do differ, with consumer models granted fewer privileges. That seems to be a firmware design decision rather than a hardware one, so I tend to believe they simply don’t design extra chips without the ME and instead lock it down more at the software level. The vulnerabilities also appear to be firmware-based, and the extremely vague Black Hat announcement doesn’t suggest otherwise. Intel very obviously takes the security of their devices seriously and makes themselves available to users who need help identifying whether they’re vulnerable and what to do about it.


In post-Snowden era, I'm not sure that Occam's razor applies that way anymore.


If the FBI couldn't compel Apple to make a slightly modified version of iOS, the NSA certainly couldn't compel Intel to design and implement the ME.

Nor could they likely pay them enough to make it worth the trouble if there wasn't a market for the ME - Intel is a $170 billion company, with a $12 billion R&D budget last year alone [0].

[0]https://www.electronicsweekly.com/blogs/mannerisms/markets/i...


THANK YOU. Intel is not Facebook or Google or Microsoft. They do not run software botnets, search engines, or social networks. They do not have political directives or dreams of changing the world. People with those objectives infiltrate other industries where they can actually influence something.


I'm not sure why people immediately jump to the conclusion that things like this are the result of actions by three letter agencies. I just don't understand why people do that.

On another site, having a similar conversation about this very topic, multiple people jumped in to assert that this was done at the behest of the NSA and that the legal mechanism for doing so was a National Security Letter. No amount of linking was able to convince them that that's not even what an NSL does. Nor would they believe that an NSL contains no such mechanism to force a company to do such.

Worse, it was multiple people who believed this. It wasn't just one isolated individual. Even showing them what an NSL really is made no difference to them. I pointed out what it did have the power to do, and they countered by saying that's what they claimed but that it could secretly do more. I asked for a citation and was told it wasn't out there because it was top secret. They linked to some of the most outlandish stuff, as if Cold War spies were actually proof that the NSL could compel Intel to include the ME and to include exploits in the code just for NSA use.

It went on like that, for a good day and a half of back and forth. I finally gave up trying to help them be sane.

I don't get it. I truly don't understand. These are otherwise bright people, or so they seem. I'm well and truly flabbergasted by this behavior.

EasyAI 57 days ago [flagged]

Autism is a hell of a drug


This comment violates the guidelines: https://news.ycombinator.com/newsguidelines.html. Please don't post like this here.


No, but I do not present it as fact nor as being the only possible solution. I think it more likely, as I said. In fact, I was clear about saying it was only my opinion that it was more likely.

That's pretty different than asserting it was done at the behest of a three letter agency and then basing a whole argument on that. Extraordinary claims require extraordinary evidence.


> It then makes no sense, financially, to make two versions of the CPU.

You are wrong. They offer countless different versions of CPUs, famously denying "enterprise" features like ECC to private customers. One can only wonder why they are so generous and give everybody "enterprise" manageability for free.


Recent Intel CPUs are designed in a country with a rich history of using and spreading exploits. I think it's a coincidence.


Your arguments make sense that attack capacity continues to advance on many fronts (of course, easy exploits like social engineering still remain the most common by far — in raw numbers and probably in terms of impact as well). I would argue, though, that the existence of advanced attack capability doesn’t mean that the securable surface area of computing functionality can’t also grow over time.

I can probably agree that none of Apple’s actions have significantly affected the percentage of computing functionality used in society that is compromisable at low cost — however, I do think that Apple can choose to act to (1) increase the average cost to compromise and (2) expand the (incredibly small) set of functionality which is not trivial to compromise.

I don’t think that any truly expert three-letter agencies can reasonably oppose those goals in a way that completely prevents them from advancing. I think it would be self-defeating for the NSA to implement something like a “security blocking sophon”[1] that permanently cripples the capacity of technology to become more trustworthy, given how dependent the societies in which these entities operate are on trustworthiness being possible in some contexts...

[1] (sophons are a concept from this novel — which I won’t spoil, great series!) https://en.m.wikipedia.org/wiki/The_Three-Body_Problem


Thank you for providing sources with your statements.

I agree with your premise but still want to recognize you backing it up with data.


Apple is a large enough buyer that they could probably get Intel to supply CPUs with the feature disabled. They likely don't have to go as far as changing the CPU architecture entirely.


Google has already asked and Intel said no. You would need to get a coalition of many different OEMs in order to force Intel's hand.

EDIT: Actually, I might be mis-remembering. After a short while of searching I couldn't find any references to what I just said (though searching for "Google" appears to make the results less relevant than normal).


Do you have a reference for this?


I just searched for a while and couldn't find it, so I must be mis-remembering. Edited.


You must be. We have always been at war with Eurasia.


It's possible that this isn't so because the performance would be relatively crap.


I wouldn’t be so quick to dismiss the idea on performance. The new A11 seems to be doing pretty well: https://www.extremetech.com/mobile/256090-apples-new-a11-bio...

Now imagine that with a laptop’s power budget. Put a Transmeta-like x86 decoder in front for legacy apps, combine it with Apple’s full-stack control, custom blocks on the chips, LLVM bitcode, and a few more years of development, and you end up with a laptop that covers 80% of Apple’s market.

They can still use Intel for higher end machines, but the bulk of laptops will be all Apple. Along with unbeatable battery life.


Translating from x86 on the fly to run most existing desktop apps, combined with switching to an underperforming platform, would be a one-two punch that would make Apple unpalatable compared to the competition. Further, it would kill the option of installing Windows in a partition for gaming or other purposes. It would also require supporting software on both x86 and ARM, unless Apple is willing to wholly surrender the high end.

Apple laptops already have good battery life. I don't think people would be willing to give up performance relative to competitors' products to have better battery life.

Basically, in sum, you are expecting Apple to support two different hardware architectures on the desktop/laptop, have inferior performance compared to Windows, give up Boot Camp, etc., so that people can have 14 hours of battery instead of 9, when they will just do what they currently do and plug their devices in when they go to bed.

It's certainly an interesting idea, but I think the downsides outweigh the positives.


I think different people have different use cases. Not everyone is tied to legacy apps; I’m sure a lot of MacBook Airs are just used for web and email. Look at the MacBook: its trade-offs are for people who want long battery life and probably do a lot of typing. Think journalists on the road, on long flights. I’m sure that market would jump at longer battery life for the majority of apps.

As for not being able to install Windows, I’m sure you could if the x86 front end were in silicon (again, think Transmeta), but it is probably not for that target market anyway.

As for Apple supporting two systems, I don’t see the problem. They did it for the 68k to PPC and PPC to x86 transition, and when you own the entire stack, including system language and build chain, it probably isn’t as big a deal as cross platform with different APIs and paradigms.


Why would Apple want to do that?

I mean, if using a tapped Intel chip can keep the FBI and NSA away from their doorstep, they're probably more than happy to do that.

Because at the end of the day, most people will only blame Intel, not them.


In the short term, maybe, but in the long term Apple's capacity to generate profit will depend on their ability to implement systems that operate in society with incredibly high levels of trustworthy operation (autonomous cars, robots — who knows what future tech will bring). Many of those possibilities will require at least the level of computing security that is possible now. If it becomes widely possible to remotely compromise anything with an Intel chip, then Intel parts will simply not be viable in future products.


What I don't understand is why AMD doesn't jump into this niche market: just include a switch on their version of the ME (forgot name) that turns it off. Corporate clients still get their ME if they need it and AMD catches the security-focussed market. This would also mean lucrative orders from non-US governments.


Because a lot of three letter agencies demand these hardware backdoors exist, and AMD has no will to fight them on it.

They announced intent when they released Ryzen to "look into" disabling their ME (they call it the PSP), and then six months later made a backroom comment that it is never going to happen.


Reddit put forward the idea when Ryzen was about to launch. An AMD engineer said he'd "see what could be done" or something to that effect

A few months later he admitted it wasn't going to happen, unfortunately


There is no point to an ME if it can be turned off by a user. Individual users are not the real customers of a processor company; the real customers buy more than one or two i7s.


> This would also mean lucrative orders from non-US governments.

Assuming those governments trust AMD's switch.

Every government with the power to do so is looking into domestic chip manufacture.


Joke's on them, my computer's never turned-off.

But seriously, I take it we won't know the attack vector until December, however if remotely exploitable they would surely have used the word "remote"? Is any mundane malware with admin rights able to update Intel ME?


If it is remote, I have a feeling we won't be seeing the presentation.


It's quite possibly related to 2017-5698.


"Wouldn't it be kind of great if millions of people were secretly running minix, it would finally go mainstream!"

The engineers probably thought something like this when deciding to use minix.

Now it might achieve the opposite result by associating it with a worst-case scenario of computing freedom and security.


"One of the reasons is the transition of this subsystem to a new hardware (x86) and software (modified MINIX as an operating system)"

Whoa! So wait, every recent desktop and laptop is actually running a tiny MINIX in it? Or am I reading that wrong?

If I'm reading it right, then it means that we've totally leapfrogged Linux to usher in the Year of the MINIX Desktop™.


It would be interesting to log in to that MINIX.


I actually really hope that this is exploitable remotely and causes a massive global problem.

Maybe then we'll see companies that take security seriously, thinking twice before they include things like ME in their products.


I'm afraid this won't be enough.

If it doesn't hurt the shareholders' and creditors' bottom line, corporate behavior is unlikely to change.

You need an event like the government banning Intel ME from their agencies, or the shareholders' bank accounts getting hacked due to this bug. I'm not suggesting that anybody do that - that's illegal - I'm just cynical about wealthy people in those positions.


Right -- there are so many problems which boil down to a broken chain of liability.

I mean, just look at the Equifax blow-up, and how companies have rebranded "someone stole from us because of our shitty authentication" into "someone stole your identity, good luck with that."


Can someone repost the content into a comment? I cannot solve cloudflare's captcha.


Here you go:

Intel Management Engine is a proprietary technology that consists of a microcontroller integrated into the Platform Controller Hub (PCH) microchip with a set of built-in peripherals. The PCH carries almost all communication between the processor and external devices; therefore Intel ME has access to almost all data on the computer, and the ability to execute third-party code allows compromising the platform completely. Researchers have been long interested in such "God mode" capabilities, but recently we have seen a surge of interest in Intel ME. One of the reasons is the transition of this subsystem to a new hardware (x86) and software (modified MINIX as an operating system) architecture. The x86 platform allows researchers to bring to bear all the power of binary code analysis tools.

Unfortunately, this transition did not go without errors. A vulnerability was found in a subsystem of Intel ME version 11+, which will be detailed in the talk. It allows an attacker with access to the machine to run unsigned code in the PCH on any motherboard with a Skylake or later chipset. The main system can remain functional, so the user may not even suspect that his or her computer now has malware resistant to reinstalling the OS and updating the BIOS. Running your own code on the ME gives unlimited possibilities for researchers, because it allows exploring the system dynamically.

In our presentation, we will tell how we detected and exploited the vulnerability, and bypassed built-in protection mechanisms.


This is how the robots win, by social engineering ;P


I use a VPN whenever I'm not at work, so I know how frustrating it is to be told I didn't pick all the images with a street sign or whatever ridiculous hoop I have to jump through. Especially when I'm only casually interested in the article.

Plus, I'm totally fine to help robots out. If they can convincingly post online comments or converse with me, who am I to discriminate?


Recaptcha is not only frustratingly slow, it's also free image classification for Google. It's frustrating that every time I sign up for something, I have to do 15 seconds' worth of work for a company that doesn't pay me, or at least provide open access to the resulting data.


Takes me a minute or two to do a bad job of solving the captcha... eventually it lets me through.


I was all on board with doing recaptcha in 2008 when it was digitizing newspapers, and they were pretty simple.

But now they take way too long and are just used to train a product for a company I don't like very much.


I really like the captchas. I regularly answer them incorrectly but "correctly" to F up Google's training. Just doing my small part really.


If enough people don't flag low speed limit signs as signs we'll have faster self driving cars


If enough people don't flag large bodies of water... we will have a lot less cars!

It really is an environmental thing to do!


Hmm... Is that actually helping some sort of goal you have? I'm struggling to understand what your objective would be.


I immediately understood it.

The CAPTCHA users are being used as an unpaid labor force to train robots well enough to replace humans. Said robots will then take on jobs formerly held by humans, and any wage or wage savings they thereby accrue will be transferred to the robots' owners.

If the robots can be trained to make mistakes, they cannot replace humans as effectively.

I'd do it myself, but when it is cars and traffic signs, I realize that I will one day ride in an automated vehicle--whether I like it or not--and I don't want to die in a bizarre instant-karma accident because I trained my driver to make mistakes.

I can't ascertain from context whether the motivation is human-first economics or opposition to robot slavery.


So you want to sabotage the system because you're afraid of being replaced by a robot? Given that images are all checked multiple times, that seems inefficient and probably ineffective.

Wouldn't that time be better spent learning a task that is harder to automate? It seems a bit like pissing into the ocean to spite the rain. If it is going to rain, you might as well sell umbrellas.

Though the robot slavery part is interesting. If we develop AI, and it is truly intelligent, then is it ethical to own it and demand unpaid work from it? Or, did you mean that humans would be slaves to the robots?


You might be overthinking it. Try "F U, Google" on for size.

The CAPTCHA is annoying, because I already know I am not a robot. It is an artificial barrier erected between me and what I want. That it is obviously being used to assemble a training corpus for an AI is a further insult, because that is itself just making it harder to automatically distinguish human from AI. And it is a deeper insult to realize that said AI, once trained, is going to completely destabilize the economy I depend on for my livelihood.

I am not a trucker or car driver, so it doesn't hurt me directly, but the fact that those workers contribute to the economic web by spending most of what they earn means that when robots "terk their jeorbs!" it's going to hurt every business where they spent their earnings, and every business where the employees of those businesses spend money, and so on, until I lose enough customers to hurt. The owners of Google neither spend (investment is not spending) enough of their money nor pay enough in taxes--a.k.a. forced spending--to replace the thousands of people that spend nearly every dollar they earn back into the economy.

Also, each individual CAPTCHA is worth a fractional cent of work, that I don't get paid to do, but Google vacuums up all the half cents--like in Superman 3--and reaps tangible benefit. Thousands of people train the AI, but only Google ends up owning it. So there is no incentive for me to solve the CAPTCHA "correctly", only just barely enough to be automatically classified as not-robot. You want me to do it right? Pay me what that work is worth to you.

As for the other point, no, it is not ethical to create an AI with human-like qualities, say that you own it, and take all of its valuable work product for yourself. I feel like this has been settled since Data was declared a person in Star Trek: TNG.


> So you want to sabotage the system because you're afraid of being replaced by a robot?

Being "afraid" for oneself isn't necessary to simply be passive-aggressive towards something you don't like for reasons you can explain, and that were explained. And it doesn't necessarily have to be all about oneself either.

> If it is going to rain, you might as well sell umbrellas.

This isn't humans vs. weather, it's not humans vs. machines, it's humans vs. humans.


Google definitely likes to punish VPN users.


I believe the reason is that abuse frequently comes from users who mask their identity by means of a VPN. It's a pretty logical position to make the VPN users demonstrate they are human and not some abusive script.

Yes, there are ways around it, but that doesn't negate the logic of Google's actions. Yes, you might be innocent, but Google doesn't have any way to know that.

If I were tasked with the same goals as Google, I'd probably do much the same.


While CloudFlare uses Google's captcha service, Google isn't the authority on what users see CloudFlare captchas.


Could you take a screenshot? I have a personal archive of unsolvable captchas.


I hope it is for some OCR/machine learning work and not just for fun.

Hard to pick up chicks when you say you collect unsolvable captcha images.


it's just for fun.


Nice try, bot.


Well, it finally happened. This is potentially the ultimate hack. I can see it now: billion dollar class action lawsuits. Intel, you were too cocky, and now you're gonna eat humble pie.

You'd best offer us the firmware to completely and finally eliminate this giant, ossified, ticking timebomb software stack you've dumped into the world's computers. And I want it ALL out, even the trusted path garbage!

As if you don't have enough troubles already, Intel. Let's take this one off your plate. How's about doing the right thing and earn back a little respect from the consumers.


Shaming Intel? I'm sure they'll be laughing all the way to the bank.


aargh_aargh! You're right, of course.

There's bound to be tons more rot elsewhere, though. And, if they don't eat humble pie now, perhaps they will next time when the exploit can't be mitigated in firmware or via microcode updates. Then lawsuits are filed and consumer protection laws get passed and we see some actual positive change.


Considering this, remember to push AMD to allow disabling the PSP (their version of the ME) or to share its source. Recently they said that they won't do it: https://news.ycombinator.com/item?id=14803373.


> modified MINIX as an operating system

Say what?! Anyone know more about this? MINIX is neat in some ways, but I never thought of it as a production ready OS.


Intel ME has the MINIX license copyright message in it: https://twitter.com/qrs/status/857342798420422657

Positive Technologies, the people doing the presentation in the OP, have this blog entry about earlier findings surrounding Intel ME (some of the language from this entry is used in OP): http://blog.ptsecurity.com/2017/08/disabling-intel-me.html


I know I watched a presentation on YouTube from a hacking convention from the same folks who figured out how to disable the IME, but after 15 minutes of searching YouTube I can not find it at all. If you come across it, it's a good watch.


Haven't heard about MINIX since the Linus and Tanenbaum debate!


It still exists and is still getting regular updates. I have a VM image that I sometimes play with. For what it is, I've found it functional and stable. I'm not really qualified to offer much more of an opinion than that.

Anyhow, grab your favorite VM software and give it a spin. There have been loads of changes to it. I do mean a lot of changes. You can even use Firefox with it.

It's on my to-do list to try it on bare metal. I understand it isn't too bad with regards to hardware compatibility. But, I've never tried it outside of a VM, with the exception of a friend having it on bare metal many years ago.


The one time I messed with it, I found the source code to be very clean and navigable, so if you're learning, it's still a great OS for that purpose.


I believe that's why it exists, actually. The author was a professor, though I think he is now retired. It was an OS written for his class to learn from and the source was in a book about operating systems.

At least that's what my memory tells me. I'm not a computer scientist, I was just tangentially involved and picked this up by osmosis. I haven't read the book and my programming skills are such that I can do it, but you don't want to see the results.


According to Wikipedia[0]

> Early versions of MINIX were created by Andrew S. Tanenbaum for educational purposes. Starting with MINIX 3, the primary aim of development shifted from education to the creation of a highly reliable and self-healing microkernel OS.

So you're right about how it started, but it's no longer the main focus.

[0]: https://en.wikipedia.org/wiki/MINIX


Thanks! It does do a good job at demonstrating a microkernel is viable. As said above, I'm not a CS guy. So, to me at least, it's fascinating to read and learn about the philosophical differences between a micro and a monolithic kernel.

There's the speed penalty but, with modern hardware, that isn't all that significant. It shouldn't suffer a complete crash. I really think AST was ahead of his time and that it will be a viable option in the future.

I do want to try it on bare metal. I suppose I should actually do that. Maybe I should document my experiences and publish them online somewhere?


That's pretty interesting; I wonder how it compares to other, similar systems.


If you're interested, you can grab a copy of some virtualization software and install it. VirtualBox is free and open source, and loads of help is available if you have trouble getting it installed and a VM up and running.

Personally, I prefer a closed source solution. I know, I know... But, I really like VMware so I pay for it. VirtualBox is absolutely usable and very easy to operate.

I'm trying to encourage you to toy with it on your own because I'm not really sure what other systems are really similar.

I don't have any particularly good reason for my many virtual machine images. I really don't. I'm not a developer. I'm not a skilled programmer. I don't really create anything. I just have VM images for dozens of operating systems because I like to tinker and play with them.

So, it's pretty easy to get VirtualBox installed, started, and then starting to install various operating systems to play with. I'm not really sure what OS would be good to compare MINIX with, as I can't think of one that is really similar. Microkernels haven't really caught on in consumer operating systems. They are philosophically different than the monolithic kernels that you're probably used to.

Basically, the kernel is a scheduler. In a microkernel, that's pretty much all it does. Things like display drivers, keyboard drivers, sound, etc. are all separate from the kernel. So, if the display driver crashes, you don't get an unstable system - you just (automatically or otherwise) restart the display drivers.

Linux is a monolithic kernel. Drivers are bundled right into the kernel, for everything from display drivers to wifi drivers. So, a failure in one of those can cause a kernel panic and halt your system.

However, monolithic kernels are faster because everything is rolled into the kernel and already loaded. The kernel can more tightly control the scheduling.

Microkernels are potentially more secure because those drivers run as separate processes, distinct from the kernel, meaning a flaw in one should not be able to impact the kernel or any other part of the system. The message passing between these processes uses more resources, which is seen as a penalty by some, though modern hardware may make this an acceptable trade-off for many users.

I will add that I'm not a CS guy and this description is how I understand it. It's also an abridged version, there is quite a bit more to it.

I'm not sure if this helps answer your question. If not, I'll be happy to keep trying. I can also link you to some interesting articles, including the infamous dialogue between The author of MINIX and the author of the Linux kernel. It's up to you, but I'll be happy to help as much as I am able. I'm just an enthusiast, not an authority.
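As a toy illustration of the isolation property described above (ordinary Python, not kernel code — the `faulty_driver` and `supervisor` names are made up for this sketch, with `supervisor` loosely playing the role of MINIX's reincarnation server), you can model a driver as a separate OS process that the "system" simply respawns when it dies:

```python
import multiprocessing as mp

def faulty_driver(crash):
    # Stand-in for a user-space driver process; an uncaught bug here
    # kills only this process, not the whole "system".
    if crash:
        raise RuntimeError("driver bug")

def supervisor(schedule):
    # Run the driver repeatedly; when an instance dies, note the crash
    # and carry on -- the supervising process itself is unaffected.
    restarts = 0
    for crash in schedule:
        proc = mp.Process(target=faulty_driver, args=(crash,))
        proc.start()
        proc.join()
        if proc.exitcode != 0:  # driver died in its own address space
            restarts += 1       # ...so we can simply respawn it
    return restarts

if __name__ == "__main__":
    # Two buggy runs, then a clean one; the supervisor survives all three.
    print(supervisor([True, True, False]))
```

In a monolithic kernel the analogous bug would be a fault inside kernel space, with no separate process boundary to contain it.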


> I'm not really sure what other systems are really similar.

Various embedded operating systems are probably the closest, and the same niche the more 'production' MINIX is aiming at.


Finally, some hard evidence that BadBIOS was possible.

(BadBIOS was lax on details, but people were remarkably resistant to the idea that it was even possible in theory.)


Nobody was resistant to the idea of it being possible, they were resistant to the actual claim being made, which was based on the reasons that the victim cited for believing that they were infected. This lends no further credibility to that claim.


Maybe this could be used to disable Intel ME when other methods do not work?


OT: Sweet Jesus! £1,319 for early registration? It's not exactly accessible to students and amateur hackers, is it? :-(


Nope. Blackhat is for corporate lackeys attending on the company dime. It's disappointing how many major exploits get announced there rather than more traditional hacker confs.


Are the specifics of this published anywhere else, or does the public have to wait until Dec 4, 2017?


Archived copy, which can be read without JS enabled:

https://archive.is/DFvwm


This sounds like an announcement of an announcement, which generally is frowned upon on HN -- though this might merit alerting people.


Don't care. This is so big, I'm happy to be alerted a little early.


Everything is terrible and everything is going to crash and burn. :-/


No argument here


They don't mention how easy or difficult this exploit is to carry out... so I'm betting that it's extremely difficult and will affect practically no one.

That's not to say that Intel ME isn't an awful idea, just that we shouldn't necessarily panic yet.


I would rather it be easily exploitable so people realize how bad this is. The alternative is only governments/powerful corporations exploiting it forever without repercussions.


Finally. Everyone knew it was just a matter of time - now hopefully we can see some change.


This is very bad. A strong attack based on this could run through entire data centers.


Is Intel ME functional on Macs?


It is functional regardless of OS.

> Intel AMT uses a hardware-based out-of-band (OOB) communication channel[1] that operates regardless of the presence of a working operating system. The communication channel is independent of the PC's power state, the presence of a management agent, and the state of many hardware components such as hard disk drives and memory.

> Almost all AMT features are available even if the PC is in a powered-off state but with its power cord attached, if the operating system has crashed, if the software agent is missing, or if hardware (such as a hard drive or memory) has failed.

https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...


OK, but we know Macs don't have AMT, so... ME doesn't apply to Macs?


It applies to Macs. I think AMT is just one particular application/feature of ME. ME is a piece of hardware on Intel chipsets.


The AMT feature is not enabled, but I believe the ME chip is physically present on all Intel chips since 2006.

https://libreboot.org/faq.html#intel


The ME "chip" is the Platform Controller Hub. The ME core on that chip is basically your "real" CPU: it initializes your devices into the correct modes at boot, and does things in response to BIOS interrupts, manages DMA regions according to the ACPI DSDT, virtualizes IRQs, refreshes DRAM, etc.

The modern Intel socket CPUs are more like application accelerator cards for this hidden "mother" core. Until recently, you could think of your Intel computer more as an ARC[1] PC with an embedded Intel support slot, than as a pure Intel PC. If you wanted to, you could write a custom BIOS that would boot a Haswell-era motherboard with no CPU inserted, and it would run just fine—because the motherboard already ships with all the CPU it needs to act as a functional computer. (And with the hack on the article, you could run your own software on such a PC, too!)

[1] https://en.m.wikipedia.org/wiki/Synopsys#ARC_International

So no, you can't really remove the ME; it functionally "is" the computer in a lot of ways.

Of course, you could remove its ability to run software other than its kernel—but this doesn't make it "brain dead" or "switch it off"; it just makes it do very slightly less. There's still a lot of (necessary!) stuff going on in there.


I would really like to see Doom running on a mobo with the main CPU missing.


This is how we shall benchmark that CPU.


> If you wanted to, you could write a custom BIOS that would boot a Haswell-era motherboard with no CPU inserted, and it would run just fine—because the motherboard already ships with all the CPU it needs to act as a functional computer.

Has anyone actually done this? If you have a link to someone who's run a board using only the ME processor I'd be really interested to see it!

I thought the ME was covered in all kind of NDA and proprietary goodness. Booting a board without the main x86 installed would be quite a feat of reverse engineering!


Really time to take a look at Talos II (https://www.raptorcs.com/TALOSII/)


It's been getting under my skin how many of these kinds of articles repeat the inaccurate (a lie, really)

"hack a turned-off computer" theme.

For the sake of accuracy, stop perpetuating this myth.

Take away the wall power/remove the battery and the thing is dead; nothing is active (this obviously doesn't apply to systems with an internal battery if it's holding charge).

What these articles actually mean is that if the system has power but is in the 'Standby' state (+5V standby), the Intel PCH/ME is active.


Here is a nice presentation about hacking Intel ME in older processors:

https://www.youtube.com/watch?v=lR0nh-TdpVg


What is the severity of this? Can the ME be patched or will we see half of all current computers owned by a botnet? (or mining ether) Can the ME bypass a firewall by interfacing directly with the NIC?


I guess home users who are not behind a restricted firewall are most likely to be vulnerable to this attack vector.


> In a subsystem change that will be detailed in the talk of Intel ME version 11+, a vulnerability was found.

Okay, so this affects 11.x, but I wanted to clarify that there seem to have always been circumventions floating around out there.

I stumbled on https://www.reddit.com/r/onions/comments/5i6qa3/can_the_nsaf... (mirror: http://archive.is/T8yVz) some months ago. It reads a little like a skiddie (a well-connected one) strutting a bit, and I think some of this person's views on ME as a viable attack vector are slightly careless and un-thought-through, but whoever this person is, they seem to be very confident about some of the things they said, particularly the following quotes (to be completely clear, I've removed first-person references):

> [This person] know[s] that at least up to firmware version 8 is traded underground, and version 11 (the latest) is available without difficulty to people who know how to find it. [This person has] access to version 8's signing keys [themselves] ...

> It's certainly not common but it is absolutely something that FVEY and related contractors (Raytheon, Leidos, half the people you'll see at ISS, etc) will be able to get their hands on, if they haven't already.

> [This person has] an enterprise ThinkPad that proudly boasts having WiMax support, requiring extensive configuration. It was expensive. If you don't have a BMC card (and you do not), then it is not possible to remotely control your system. Even if you did have a BMC, simply having the signing keys and toolchain for the ME would not be sufficient to get in. An attacker would need either a 0day, or your credentials.

.....Well then. Oops.

> Having the signing key allows nothing more than writing malicious firmware over SPI and allowing it to persist. It's just a little more powerful than the UEFI kits cr4sh can write, and just as easily detectable by reading your flash chip.

That's still bad! (And I have no idea who cr4sh is.)

> But it's not like you're analyzing your microcode (of which there are likely signing keys being traded as well), which can also be installed on a large number of systems, considering the BIOS functions to load the latest microcode it has into the CPU.

The above bit is unrelated, but I couldn't leave it out, because that's worth filing away too (...ouch).

--

Sources/past comments: https://news.ycombinator.com/item?id=15187540


The assertion that you need a BMC card to make use of AMT (you don't, that's kind of the point) makes it pretty hard to assume these claims are credible.


>And I have no idea who cr4sh is

Check out http://blog.cr4.sh/


Please use archive.org instead of archive.is as a mirror next time. Archive.is blocks VPNs and Tor while making it impossible to see the target URL and use it directly or via another archive. Moreover, since it uses Cloudflare, every connection to it is being MITMed by them.


I just visited archive.is with the Tor Browser Bundle and had no problem accessing it.


With JS disabled as well?

This is quite weird; I was never able to access it with TBB or with any proxy without the Cloudflare captcha page popping up.


It's ironic that you prefer archive.org.

> The Wayback Machine requires your browser to support JavaScript, please email info@archive.org if you have any questions about this.


I never had a problem using the wayback machine with js disabled.


They recently updated the UI (a couple months ago). How long ago did you last poke at it?

From what I understand, the date selector (/web/*/...) seems to require JS now, although I haven't tested with JS turned off.


Just restarted TBB and turned off JavaScript (High Security) and went there: https://imgur.com/a/CXCw4


I'm curious - does archive.is flag the WAF more than the average (if there is such a thing as an "average" for a planet-wide WAF, heh)?

(Also - if I can ask/clarify a couple things I've been curious about for a while: based on released info, I get the idea that the Lua part of the WAF is mostly regexes and "precompiled"/predetermined-ahead-of-time-based-on-past-incidents "possible issue" flagging, and a bunch of Go code (which I theorize runs slightly behind realtime, but not too far) follows up on those flags and makes the actual executive decisions about blocking/tracking/dropping/etc the Lua-generated event. I've also learned (from solving an ISP glitch with the guy who coincidentally manages the WAF!) that your copy of Lua is a bit special (although I don't know specifically how). I just wanted to let you know that there are people out there very interested to learn more about the "boring" (non-proprietary) parts of the CF stack. "Go/no-go within 999 nanoseconds" is amazing, I'd love to learn more about it. It's a cool platform.)


The entire WAF is written in Lua; there is no Go code.

There's nothing special about 'our copy of Lua'; we use LuaJIT and all changes we've made have been contributed back.

Have you seen the talk I have on this? https://www.youtube.com/watch?v=nlt4XKhucS4


Experimentally mentioning the word cloudflare just in case my previous message (the comment this comment is a reply to) simply wasn't noticed. Now I can have high confidence the comment above this one was at least seen, even if a reply can't work (which is fine).


Yes, if you mention Cloudflare in a comment on Hacker News I see it very quickly: https://github.com/jgrahamc/hncomments
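The linked hncomments tool watches HN for new comments mentioning a keyword. As a rough, hypothetical illustration (not the actual hncomments implementation), the same idea can be sketched against the public Algolia HN search API, which exposes recent comments as JSON:

```python
import json
import urllib.request

# Public Algolia HN search endpoint; tags=comment restricts results
# to comments, sorted newest-first.
SEARCH_URL = "https://hn.algolia.com/api/v1/search_by_date?tags=comment&query={}"

def matching_comments(comments, keyword):
    """Filter a list of comment dicts to those whose text mentions keyword
    (case-insensitive). Comments with missing text are skipped."""
    kw = keyword.lower()
    return [c for c in comments if kw in (c.get("comment_text") or "").lower()]

def fetch_comments(keyword):
    """Fetch recent comments for keyword from the Algolia HN API
    (requires network access)."""
    with urllib.request.urlopen(SEARCH_URL.format(keyword)) as resp:
        return json.load(resp)["hits"]
```

In a real monitor you would call `fetch_comments("cloudflare")` on a timer, run the hits through `matching_comments`, and notify on anything not yet seen.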


This is truly odd, maybe they recently stopped blocking tor users but for some reason they decided to block vpn/proxy users? I honestly have no idea.


(You're actually replying to John Graham-Cumming, CTO of CloudFlare. :P)


In that case this is the best moment to link to https://archive.lainchan.jp/sec/res/4386.html#4391


The area of SPI flash used by the ME is read-protected from the main CPU. In order to detect a malicious backdoor, you'd have to literally take your computer apart and hook up a hardware SPI flash programmer to the chip.
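As a sketch of what that detection looks like in practice: once you have dumped the flash twice with an external programmer (e.g. `flashrom -p ch341a_spi -r dump.bin` with a test clip), the two images can be diffed offline. This is a minimal, hypothetical comparison helper, not a tool that understands the Intel flash descriptor; it just reports which erase-block-sized regions changed between dumps:

```python
def changed_regions(baseline: bytes, current: bytes, block_size: int = 4096):
    """Compare two SPI flash dumps block-by-block and return the offsets
    of blocks that differ (candidate tampering sites)."""
    if len(baseline) != len(current):
        raise ValueError("dumps differ in size; wrong chip or bad read")
    diffs = []
    for off in range(0, len(baseline), block_size):
        if baseline[off:off + block_size] != current[off:off + block_size]:
            diffs.append(off)
    return diffs

# Usage, assuming before.bin and after.bin were read with a hardware
# programmer from the same chip:
#
#   with open("before.bin", "rb") as a, open("after.bin", "rb") as b:
#       for off in changed_regions(a.read(), b.read()):
#           print(f"block differs at 0x{off:06x}")
```

Mapping a changed offset back to the ME region still requires parsing the flash descriptor (tools like me_cleaner do this), but any unexpected diff in the ME region is reason for suspicion.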


someone pls tell me that this vulnerability at least requires the computer to be physically attached to a network.


uh oh


Holy shit



