“We have obtained fully functional JTAG for Intel CSME via USB DCI” (twitter.com)
760 points by cryogenic_soul 8 months ago | 392 comments



One way to think of ME is, we all woke up one day and discovered we have had high resolution night vision spy cams installed in our bedrooms.

The next realization is there is no way to turn them off or remove them. It's possible even moving won't help.

And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?

HN may be uniquely positioned to show us the answer. Take a community of people with generally above average interest and/or knowledge in this stuff, and the comments are filled with questions asking what the hell ME even is.

Apparently, ME is the perfect combination of opaque, obtuse, and obscure. It’s not rocket science, but complicated enough it’s hard to explain well quickly. It’s easy to be a highly technical person yet never have the need to cross paths with the subject. There has been some press, some activity, but all of that is simultaneously dampened for the same reasons.


> And yet we really don’t seem to care much.

I know we're used to "Internet speed" and the tweet happened an entire 24 hours ago, but give it a bit of time before declaring it dead. Wired and Vice need a second to write it up, and see if it hits the mainstream before declaring the issue ignored.

Not saying it will get picked up, though I sure hope it does, but as you point out, it's a bit obscure and takes some explaining.

Still, there are numerous reports that the Facebook app on your phone listens to every word you say to show you ads for things you were talking about, but on HN and other web forums there is always an element of disbelief. We may believe that it must be something else - correlation to web searches from the same IP - but there's never been proof one way or the other. (A Facebook exec's claim doesn't count as proof.)

So there's a section of people that believe they already have high-resolution recording devices that they take to bed. What's one more?

(Typing this, very gratefully, from an ARM Chromebook.)


Plenty of people were still saying the Snowden revelations were old hat when they came out - we all knew it was happening, we just didn't have proof, etc. The novelty and seriousness of a story tend to be out of whack with the amount of news coverage, which is why coverage (or the lack of it) is a poor way to measure importance. What matters is that it gets out and incentivizes developers, manufacturers, company policies, and behavioural change.

There are plenty of hot-news scare pieces after boring security leaks all the time, bringing awareness to issues that technical people have long known about. For example, I'm hearing about friends' older parents putting tape over their laptop cameras.

We love to trash bugs that have marketing brands, but these exploits - and more importantly obvious attack surfaces like the ME or cellphone modems - often just need the right amount of human-interest story, a practical real-life example, and a news-friendly explanation wrapped around them for regular people to care.

It tends to happen a lot more randomly and without a rational order of priority, but it's still happening more and more often. At the end of the day it's always going to come down to the time and effort of a security researcher caring and the tech community putting the effort in to make the journalists care.


Prior to the NSA contractor Edward Snowden's revelations in 2013, Room 641A had already been exposed by AT&T employee-turned-whistleblower Mark Klein. The EFF sued the government over it in 2006.

Tape over laptop cameras isn't just a "parents-of-friends" thing, it's a good idea. Buy a set of stickers and support the EFF: https://supporters.eff.org/shop/laptop-camera-cover-set

Anyone know somebody at Wired?


> Tape over laptop cameras isn't just a "parents-of-friends" thing, it's a good idea. Buy a set of stickers and support the EFF: https://supporters.eff.org/shop/laptop-camera-cover-set

Support the EFF! But I hate the stickers.

Everyone puts a sticker on their webcam and completely ignores the hot mic. But you get that false sense of security…


I unplugged the microphone on my daughter's iMac years ago. She did not use it, and was ok with that.

I did not break her phone in that way. Her phone is more dangerous.


> I unplugged the microphone on my daughter's iMac years ago.

Please note that a speaker may also act as a microphone if configured to do so (at the software level). This is especially true for speakers/headphones connected via the jack.


It's likely an internal microphone and speaker; the sound card may not support input through that interface. What you're saying can be true for many other modern computers, though.


There was an article (a while ago) showing that at least one major brand of onboard sound cards has undocumented features to switch the audio-out jack port to input. Cheap headphones/earbuds function as a microphone too, so that's a thing. Only if you leave the earbuds plugged in, of course; it can't listen from an empty jack port.
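
For the curious, this is roughly what jack retasking looks like on Linux when the kernel is built with the HD-Audio reconfiguration interface (CONFIG_SND_HDA_RECONFIG); a sketch only - the codec path, pin NID and config value below are hypothetical and differ per codec:

  grep 'Pin Default' /proc/asound/card0/codec#0                                 # find the jack's pin NID and current config
  echo '0x1b 0x01a19030' | sudo tee /sys/class/sound/hwC0D0/user_pin_configs    # retask pin 0x1b as mic-in (example values)
  echo 1 | sudo tee /sys/class/sound/hwC0D0/reconfig                            # re-parse the codec with the new pin config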

I (personally) think that intelligence/privacy-wise the mic on many devices is a lot more troubling than a camera. A camera is very directional and (again, personally for me) would at most result in embarrassment? While a microphone can record all conversations in a room/apartment, transmit everything with fairly negligible bandwidth, and taping over it won't do much good. You actually need to open the device and disconnect/cut a wire.


OT: @lightedman you're dead. Apparently due to some heated discussion on the term regression. But from a glance I couldn't find anything that stood out enough to warrant a ban. Might want to send an email and have your account unbanned.


Exactly. I've given up on tape on the camera. Frankly my mug just isn't that important compared to the keystrokes I make.


Audio-only nudies are usually slightly less exciting.


Most people’s hacked nudes will come from pictures they take themselves, not webcam snaps.


We should start demanding physical shutters for laptop webcams. Does anyone make those yet?


Even better is a hardware kill switch, especially for the mic.


The Purism Librem laptop has hardware kill switches.


I managed to unplug the mic in my Thinkpad and my Dell XPS laptop with about 5min of work for each. It's still possible to do at the moment if you don't mind relying on plugging in headphones w/ a mic to use one.

Of course a switch would be nice, similar to the older Thinkpads which had a hardware switch for the network devices on the front, originally for use on airplanes.


> I managed to unplug the mic in my Thinkpad

Any chance you documented the procedure or have links to relevant documentation?


For Thinkpads the documentation you're looking for is the "Hardware Maintenance Manual" (HMM). E.g. here is one for the X260, it clearly describes where the camera/microphone assembly is, how to get to it and where its wire connections go: https://ok2.de/ThinkPad/HMM/x260_hmm_en_sp40j72016.pdf I haven't taken apart one of the more Ultrabook-style ones, so not sure how accessible those are, but the traditional ones are very serviceable.

I wouldn't be surprised if the better Dell models have similar documentation available.


This is easy in almost all laptops: the microphone is attached to the mobo via a tiny 2-wire plug. Disassemble your laptop and look at the other side of the keyboard panel, a bit away from the speakers. Some laptops might also carry mics in the screen bezel, or multiple mics, so check all cables that lead away from the mobo.


But now you have to trust the switch to really deactivate the mic. I'm a recursive paranoiac!


The usual* way to handle this is to have a small LED next to the mic/camera that shows when it's on -- and have it wired up to the device in series, such that it's software-impossible for the device to be powered without the light being on.

*: it's what we did on the One Laptop Per Child laptop, and I'm sure others have too.


Thank you for inventing the netbook and showing the ODMs how to make low cost PC notebooks.

Any idea where to get a CLI-from-boot notebook for teaching kids programming and encouraging a hacker ethic?


Thanks for the kind words! I haven't used it personally, but a programmer in my Twitter feed recommended this one that they've been using with their kids recently. (I don't think there's a heavy emphasis on CLI.)

https://tksstgiftguide.tumblr.com/post/167274832422/our-kids...


At least you only need to verify a switch once, it won't be compromised by software changes.


Assuming the switch really does cut the mic signal and doesn't instead act on an I/O pin which calls a function to disable the mic input. The second approach would be hackable.


That's a very good point. We have it for wifi already, so...


I 3d-printed a crude shutter and super-glued it to the bezel: https://youtube.com/watch?v=oOkPP_5bjhs

I think it would be about as easy to make a cardboard one.


Indeed they do. The security team at my work ordered hundreds of them and hand them out to anyone that asks. They even have the company logo on them.


There's been a Kickstarter called Nope for cams for a few years now, they exceeded funding by a large multiple each time. I got the second version a year ago, they are now on version 3 and added a headphone jack blocker.

https://www.kickstarter.com/projects/bungajungle/nope-sound-...


Not really sure about the headphone jack blocker; I'm not aware of an OS where the sound source can't be trivially switched in software, even when headphones are plugged in.

The likelihood is that if an attacker has the ability to record audio, they also have the ability to bypass this device.
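
To illustrate how low that bar is: on a typical Linux desktop with PulseAudio, one command (or the equivalent API call from any process) switches the capture source. A sketch - the source name here is a hypothetical example:

  pactl list short sources                                              # enumerate capture sources
  pactl set-default-source alsa_input.pci-0000_00_1f.3.analog-stereo    # switch to the internal mic (example name)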


My Asus EEEPC has one.


C-slide dot com, the iPad one (round with a pivot) is my favourite


Indeed, bags of stick-on lens covers (that you can open when you need the camera) are a product idea I thought of ages ago, but was too lazy to actually do.


It exists already; I got one as swag at a conference from Nvidia (???). Strange gift, but I like the sentiment.


Yes, got mine as swag too (not Nvidia), works perfectly. Can be found by searching for "webcam cover slider" on Amazon or, I assume, any web store that deals with such things.


> Tape over laptop cameras isn't just a "parents-of-friends" thing

Not sure why you took that statement as deriding the practice because some older people are doing it? I noted it merely as an indication of how far the behavioural change has spread as a result of news stories... Of course it's good security hygiene. Not an ideal solution though, as others have already mentioned, compared to a hardware switch or built-in cover.


Ah, apologies; The way I read it was that your friends don't but their parents do.


That's funny, because I noticed something during a recent ~10k device audit.

Across the mobile IT we manage, the groups who covered or unplugged their webcams, internal or external, most consistently were middle-aged to older women, with the least common being women in their 30s or younger. Men covered randomly across all ages, but still less than women. And while they'll cover their laptop camera, most who didn't have a folding case did not also cover their tablet or phone cameras.

I don't know if it's comfort or awareness, but the ones who covered acted like it was common sense, and the ones who didn't hadn't considered it or felt like they weren't likely to be a target of interest. A few women said they chose their laptops because of the physical shutter over the camera.

I don't know if trust in electronics is more generational or age related, but it's definitely a gradient.


I definitely agree about taping the camera. Zuckerberg does it too:

https://www.theverge.com/2016/6/21/11995032/mark-zuckerberg-...


While I don't care one way or another what people choose to do to their own devices and am happy to accept that it's probably a good practice, it does always cause me to chuckle when the "Zuckerberg does it" argument is thrown out.

I think the threat profile faced by "the rest of us" is probably just a little less intense than someone as well known, wealthy, famous and influential as the CEO of Facebook, but perhaps that's just me.


The money for that tape came from selling our privacy.


> Wired and Vice need a second to write it up, and see if it hits the mainstream before declaring the issue ignored.

You're right about this new development of the functional JTAG being very recent and needing time to get around. However, security folks and others have been decrying the potential of the IME backdoor for years.

That they've been largely ignored for years lends some credibility to the thought that "we don't seem to care much." It's hard for most laypeople to understand, and it's difficult to get people excited about something they don't understand (unless they are convinced that they do understand, of course).


> A Facebook exec's claim doesn't count as proof.

How about an official statement from Facebook itself over a year ago[1]? If it was true, people could find out by decompiling the app and make Facebook look absolutely horrible.

[1] https://newsroom.fb.com/news/h/facebook-does-not-use-your-ph...
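
Anyone who wants to check the claim can at least look at what the app requests - a rough sketch assuming apktool is installed and you have the APK on hand (the filename is a placeholder). It only shows what's requested and where capture APIs appear, not what they're used for:

  apktool d facebook.apk -o fb_decoded                # decode resources and smali
  grep RECORD_AUDIO fb_decoded/AndroidManifest.xml    # is the mic permission requested at all?
  grep -rl AudioRecord fb_decoded/smali* | head       # where the Android audio capture API shows up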


> How about an official statement from Facebook itself over a year ago

That also doesn't count as proof; it carries barely more weight than the Facebook exec's statement.

However, you're quite right about the decompiling argument. And chances are that security researchers have done just that.


It's one thing to lie, it's another thing to lie about something that could easily be found out and possibly get them sued. The risk/reward ratio seems way too high for Facebook to lie (unless they were forced to.) That's obviously not proof, but it seems like pretty strong evidence.


Can you use an ARM Chromebook without it constantly leaking data to Google? I tried to use the C201 without Chrome OS, but with libreboot, and Debian with mainline Linux. I didn't succeed.


A little late to the party, but I also have a libreboot C201, and have successfully installed mainline Debian on a USB drive connected to it. I'm going to put up a comprehensive guide + shell scripts in December when I'm on break from school, but until then, check out these links:

https://github.com/atopuzov/c201/blob/master/debian-install....

https://archlinuxarm.org/platforms/armv7/rockchip/asus-chrom...

The main thing I haven't gotten working yet is the touchpad. For wifi, I'm using an atheros dongle with open firmware.


The GalliumOS guys have everything working perfectly on my Asus Chromebook.


Yes, but GalliumOS doesn't support ARM [1]

[1] https://wiki.galliumos.org/Support/ARM


See https://johnlewis.ie/custom-chromebook-firmware/rom-download...

Doesn't work for every Chromebook, but it does for many. Working fine on my Chrome Box.


I already had flashed libreboot onto the device. That wasn't hard. The problem was and is mainline Linux support.



Thanks for the link. The C201 is listed as work-in-progress.


> "Still, there are numerous reports that the Facebook app on your phone listens to every word you say".

> "(Typing this, very gratefully, from an ARM Chromebook.)"

Have you installed a non-stock OS on that Chromebook? Google aren't exactly known for being strong advocates of user privacy.


The most basic analysis will show that Facebook isn't listening to everything you say, even just at the level of realising that either doing speech recognition of everything you say or uploading an audio stream would be both impractical and easily detected by anyone with the inclination.


"opaque, obtuse, and obscure" is a red herring.

Imagine Intel were a Russian company. Tomorrow there would be a simple and clear [screaming] headline similar to "Russians hacked the election" (the general public doesn't need to know or understand how networks, computers or elections actually work). The day after tomorrow it would be illegal to buy anything Intel.


Elections have almost certainly been hacked - the security on electronic voting machines is abysmal - and no one seems to care, so I think your 'Russians hacked the elections' example doesn't say what you meant.


A successful SQL injection probe that does not intentionally or otherwise result in data modification would be unusual, beyond the simple 1 == 1 methods.

The DREs don't seem to have been a target, though the physical and OPSEC vectors are quite well known at this point.

I'm not commenting on the information operations or the loose and ambiguous language that has been used to describe these events.


There are economic sanctions against Russia. There are no sanctions against Intel. I meant what I said.


The Russian trade sanctions are about their activities in Ukraine, not about the last US presidential elections. Besides, there's plenty of evidence of domestic tampering with US election results and comparatively little done as a result. If election fraud was a big enough issue to effectively stop Moore's Law dead in its tracks then you'd expect much more activity around election fraud without that level of economic impact.


What do you think the headline about Intel would be in Russian media?


Intel ME is not new. You don't need to imagine it, you could google it.


Most likely they will be quiet because Russia is 100% dependent on foreign technology.


Not the military.


They must be decades behind, then. I don't think in-house Russian engineering capabilities have been competitive, let alone ahead of the consumer electronics curve, in terms of fabrication going back at least a decade. Maybe when it was 1997 and everything was DIP.


I'll negate this by taking the opposite position, and similarly having no facts or references.


MCST makes stuff used by the Russian military; see e.g. the Elbrus chips: https://en.wikipedia.org/wiki/Elbrus-2S%2B


The idea I commented on was to ban Intel chips. Russia cannot do that, military or not. The military accounts for a fraction of a fraction of tech that Russia depends on.

And the military are also dependent on foreign technology. If not for CPUs inside tanks and planes, then for CPUs inside command center computers. (and not just CPUs)


> And yet we really don’t seem to care much.

I do care, a lot. I have decided to avoid Intel (and AMD) hardware like the plague. I will not buy any Core iSpyOnYou or AMD equivalent anymore. I'm an advocate of economic and judicial sanctions at the political level against Intel (and AMD). I tell people around me about the problems and explain how it is an issue of privacy, security, national sovereignty, and market power abuse.

I'm looking for a desktop computer for my day-to-day use that comes without compromised hardware. The Raspberry Pi 3 works quite well, but doesn't work without non-free software. I successfully installed Debian onto my A20-OLinuXino-MICRO, but the GUI doesn't start. I'm looking for hardware without Intel chips, without non-free software, with mainline Linux support, and preferably Debian-compatible. The OpenPOWER system by Raptor Engineering is simply too expensive; I cannot afford it. Any ideas?


The Librem laptop is reasonably powerful, and they have effectively neutered the ME.

Libreboot replaces the firmware on some decade-old server machines lacking an ME (although noisy, they work quite well as a desktop).


The Librem laptops are not an option because they use Intel chips. Buying Intel products supports and finances their current course. They did not even address the grave accusations. There is no way I can give them money.


When Librem was first announced, I had grave concerns too, and I stuck with my Libreboot computers. However, it was just this past month, when they showed a proof of concept of their Purism phone running KDE Plasma on an ARM board without any firmware/driver blob concerns and also showed how they isolated and disabled the Intel ME, that I took them seriously, at which point I ordered a laptop and have been very happy.

The amount of money that Intel gets off these chips can't be significant because they are currently over two years old anyways (Q3'15).


I don't get how their work on the Purism phone makes their laptop any better. Care to explain the relation?

That one can maybe fix a hardware backdoor in Intel chips does not make buying them any better. Intel does not get my money; they should instead get a clearly worded, sanction-backed letter from the authorities. I think they should be banned from trading and selling their backdoored stuff. I will not buy even their old products. They messed up big time and don't even explain themselves.


"how their work on the Purism phone makes their laptop any better"

So you are obviously right that proof of concept of fully FLOSS ARM phone doesn't directly translate to x86.

But it does show that they know how to, and want to, build fully FLOSS systems. Even if they don't quite succeed 100% with the x86 system (as the Management Engine is still in my computer, albeit isolated and disabled), they have been making great strides in getting close to 100%, and I am willing to reward them for that.

Regarding the morality of buying Intel chips, I do share the reluctance to support Intel. I hadn't bought a new Intel computer or CPU since 2007 (a Core 2 Duo) for that reason. But sometimes moral decisions can't be made in a binary, all-or-nothing manner. In this case I am aware of the damage done by purchasing an x86 system, but it makes up for it by giving me a productive laptop with which I can more effectively work on making FLOSS applications.



Your best bet will be with NXP then. They still haven't released the i.MX8 CPUs, but i.MX7 boards are available (up to dual-core 1.2GHz), and, as far as I know, they are the only decent option (OMAP3 & 4 don't have enough power these days) with full open-source support for the entire SoC, even the GPU.


Thank you for pointing these out. The i.MX7 and i.MX8 SoCs seem very interesting, but I have a hard time finding information about available SBCs, mainline Linux, and official Debian support. The only SBC I found is the Nitrogen7[0], and it doesn't seem to be geared towards desktop use as there is no DP, HDMI, or VGA, though the PCIe expansion is gold. I wonder why other SBCs don't offer PCIe. They might not have the highest performance, but I don't need that.

[0] https://boundarydevices.com/product/nitrogen7/


Could you not just buy a Macintosh? Macs lack the AMT chip so the ME in the CPU can't do anything.


It's still Intel hardware. But besides that, does the Macintosh work without blobs and with mainline Linux? Can I install Debian on it and have stuff working? I'm just tired of uncooperative manufacturers.


> Can I install Debian on it and have stuff working?

Depends on the kind of Mac you buy. I can only comment on their mobile offerings:

- Best supported are the current MacBook Airs. Everything except the webcam should work out of the box. The webcam needs the out-of-tree bcwc_pcie driver (https://github.com/patjak/bcwc_pcie).

- The Retina MacBooks need some manual work before being usable (e.g. need to compile out-of-tree keyboard & touchpad driver (https://github.com/cb22/macbook12-spi-driver)), but should work fine as well.

- The MacBook Pros before October 2016 are also quite well supported; the support for newer ones is still quite incomplete (check out https://github.com/Dunedan/mbp-2016-linux for details), although it's possible to use them as a daily driver if you're aware of the limitations.


I've never tried but Debian has a lengthy (if out of date?) wiki page about running it on Macbooks:

https://wiki.debian.org/MacBook

https://wiki.debian.org/MacBookPro


The latest macbook pros essentially don't work well with linux. I think wifi isn't a solved issue still, for example.


None of them work well with Linux, despite the Linux folks sometimes bending over backwards to make it work. https://lwn.net/Articles/707616/

All of the Macs I have require binary firmware blobs for WiFi, for the open source driver to work. And even in that case none of them can do even 802.11n. I have to use the proprietary WiFi driver to get 802.11n or 802.11ac. And yes each model varies in this regard, which makes it something of a Choose Your Own Adventure book.


Depends on the model. On the non-TouchBar models WiFi works fine, on the TouchBar models it doesn't. For more details check: https://github.com/Dunedan/mbp-2016-linux


What’s AMT? Is ME on Macs innocuous?


Active Management Technology is the backdoor that's sold as a feature.

> https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

Not innocuous, but probably less dangerous.


I heard https://beagleboard.org/black can boot and run Linux w/o any blobs either in bootloader or kernel (provided you're OK with a sub-par screen resolution and not using the onboard GPU)

https://news.ycombinator.com/item?id=12584880 and others might have details on WiFi and such


The BeagleBoard X15 seems very interesting. The onboard GPU is definitely a problem though. Why didn't they go for a usable GPU? And no, if it does not have open Mesa drivers, it is not usable.

I really wonder why the GPU situation is such a huge mess. Very few are supported by free drivers and the closed drivers are, besides being closed, often of very low quality. Is patent law holding this situation stable? Isn't patent law there to promote advance? I feel it isn't working.


Your best bet is probably a tablet or smartphone with a fast ARM processor. Those don't have the management engine and can run surprisingly fast.


While it's true that they don't have MEs, the basebands in smartphones almost always have direct memory access, and their own proprietary firmware, so the same problems apply :-(


As a desktop computer for day to day use? Can I run it without non-free software? Can I natively install Debian with mainline Linux to actually get work done? Can I connect a monitor and other accessory?


Should not trust a phone or tablet. Are the radios off? Are you sure?


> I think HN is uniquely positioned to show us the answer. Take a community of people with generally above average interest and/or knowledge in this stuff, and the comments are filled with

I think it's even more sinister: I would argue that a higher percentage of users on HN might be sworn to secrecy about any knowledge they might have anyway.

So you end up with very smart people who're either sworn to secrecy or who aren't; those who aren't are asking questions (and very few have answers, and those answers are partial or incorrect). Those who are can't answer them honestly or fully.


In the past discussions of the ME here and elsewhere, there have always been people making self-assured pooh-poohing noises about what a trivial non-issue it is, making deceptive claims about exposure, and then dumb claims about how you can't trust any hardware. They never reply to particular questions that might point out how deceptive the arguments are.


I'm one of the people that claims you can't trust hardware. Care to elaborate why that's not the case? How does one trust a chip with 14nm transistors? Are you claiming that one can 'simply' decap the chip and examine it with a microscope on a Saturday night? How do I then trust that the chip I have in hand is of the same architecture as the one you decapped and examined?


You are of course correct that in the general case for threat models above a risk threshold, one cannot trust hardware. But that is not the argument under discussion. The argument being made is that it doesn't matter if the ME is a big fat target, because (f'instance) your NIC could also be a big fat target we just don't know about.

That is the argument I am asserting is dumb. And it is obviously dumb; in adversarial contests, you don't leave weaknesses exposed just because you might have other weaknesses[1]. It also ignores the presence of differential threats; I may not care about hypothetical compromised NICs because my use case may not require a network, but I may need anti-evil-maid defenses.

Bottom line: in the context of discussing whether or not the ME is dangerous in the general case, other potential hardware threats are irrelevant, and I believe the argument is one used to intentionally muddy the waters.

[1] Putting aside deeper strategies; I'm not going to argue about game theory here.


This is how I feel -> I've treated every device I've had for the last decade as if it were compromised, because who can prove to me otherwise? I certainly don't have the expertise to verify for myself.


You're only responding to the part after "dumb claims," right?

And your (valid) refutation of that part in no way implies spending less time reverse engineering and disabling ME, nor being less excited about this tweet. Correct?


Call them out on it.


This is what I think, too. My professional knowledge doesn't encompass anything like ME, but occasionally areas I am expert in do come up. Unfortunately I can't honestly do much more than just watch -- I'm afraid that I might give away things I shouldn't without realizing it.

I'm sure that I'm far from the only one.


If you are in this position, you may be able to take action secretly:

- create tools to help fight what you deem contrary to things that are important to you

- dispatch leaks in subtle ways so that humanity is not left in the dark

- lead an anonymous and discreet community of people who share your beliefs, your skills and your passion to improve things

Be careful. And best of luck.


Late to the party here and I doubt you'll ever see this.

But I haven't seen anything that worried me. Besides, those NDAs are legally binding promises; I take those seriously.


Previously, explaining what the author of the tweet did months ago:

https://www.digitaltrends.com/computing/intel-kaby-lake-skyl...

"As shown in the presentation by security researchers Maxim Goryachy and Mark Ermolov, one way of accessing the JTAG debugging interface" "is to use a" "hardware implant" "running Godsurge" "which can exploit the JTAG debugging interface. Originally used by the National Security Agency -- and exposed by Edward Snowden -- Godsurge is malware engineered to hook into a PC’s boot loader to monitor activity. It was originally meant to live on the motherboard and remain completely undetectable outside a forensic investigation."

Emphasis mine.

But this all was in January:

"The claim was made during a presentation" "which showed how hackers could use a cheap device to gain access to a debugging interface embedded in hardware."

What's the news now compared to then?


That we have one more person capable of exploiting this in the wild.


Who?


The guy who made the tweet...


No, he's the same guy who made the news in January, Maxim Goryachy.


I've been warning friends and family about it for years, back before it was well known; my first email about it was in 2015. Their reaction has been one of three things: "you're too paranoid", "who cares, so what", and "stop talking to me". I've always made a variety of suggestions and updates on general security issues, even beyond the Intel ME problem, to friends/family via email.

It is not just a matter of apathy. They get violently ill at being told to think about protecting themselves. If you tell them not to put their debit card into an ATM without first giving the card reader a tug to see if it's real, for example, they'll do it wrong anyway just to spite you. It isn't that giving the tug is hard or that it isn't wise, but they simply do not want you to be right or to have to think about it.

Admitting that you are right about that one little thing means they have to deal with all the other issues that you brought up as well. There is probably an interesting field of psychology to be researched around just that phenomenon alone. I don't think this is just the people I know personally, because I've encountered a lot of the public who have this mindset as well. It is how we got ourselves into these problems in the first place with no recourse.

I stopped sending the emails about a year ago because they asked me to. The reason stated is that no one cares about security and a number of them had already auto-forwarded me to the junk folder. Or, told me they saw my name and would skip my emails and not open them. I don't even feel like bringing it up again now that it is a going concern. They will continue to not care.

I find it frustrating.

EDIT: I've disabled Intel ME on my machines. I would offer to do it for others but they'd have to acknowledge it's something that concerns them to want that help, and they won't.


> One way to think of ME is, we all woke up one day and discovered we have had high resolution night vision spy cams installed in our bedrooms.

Not really how I think of it. Seems more similar to waking up one day and realizing Tesla controls your Tesla car remotely. Or Microsoft can push bad updates to Windows. Or Google can push bad updates to Chrome.

> And yet we really don’t seem to care much. Lesser issues generate national outrage and high volumes of press coverage. Why?

On my end it's because I have not seen a single shred of evidence that it has been used for spying, and because I figure the moment anyone becomes aware of that happening, people would probably find a way to block the network traffic at the router or somewhere else.


The point of ME is that it's invisible. And until then, few people had access to it.

Now I can't wait for rogue monero miners to use ME to propagate :)


It doesn't matter if it's visible or invisible. The point is, it cannot go undetected while being used:

- If it were to periodically "check in" with an external server to see if it needs to do any kind of spying -- admins would notice the network traffic.

- If it needed to be contacted externally to "initiate" any kind of spying at all, that would mean anyone behind a NAT would be safe, and furthermore, the moment anybody notices such a thing, it would make the headlines and get blocked on networks, so this capability would need to be kept secret and turned off except for ultra-high-value targets... which most people do not view themselves as.


You are not giving anywhere near enough credit to those who would be your adversary.

The NSA routinely intercepted Google internal traffic. Did Google, who are presumably running the most advanced network on the planet and staffed by people who don't suck, notice the intrusion? They did not; they got informed via PowerPoint.

While the sophistication of the attackers decreases as you move from NSA to random hackers, so does the sophistication of the network as you move from Google to mid-sized businesses.


> The NSA routinely intercepted Google internal traffic

How are these even similar? Did the NSA ever send traffic of their own on the Google network infrastructure? Citation needed if so, because I recall they merely listened in on existing traffic using external network equipment.


You assume it would be used for mass spying.

Not at all.

When you have something that good, you use it for specific targeting. You get a guy with a work laptop at home, you infect him, then you use the machine to get one step closer to your objective. Slowly. With time between the events. Without being a beacon in the network.

Or you just use it to spy on a guy you suspect.

Or to get access to the secrets of somebody you wanna blackmail.


> You assume it would be used for mass spying. Not at all. When you have something that good, you use it for specific targeting.

Did you even read what I wrote? Specifically the last sentence?


Woops


"If it were to periodically "check in" with an external server to see if it needs to do any kind of spying -- admins would notice the network traffic."

Admins would use dedicated network hardware with network chipsets driven by closed-source firmware. In a crazy but technically doable scenario, that closed-source firmware could contain instructions to "send packets containing magic word X to address a.b.c.d" and not report or count that traffic, even to applications opening the device in promiscuous mode, with all routers in between obeying the same instructions. Not a single byte would be reported, counted or sniffed unless someone sticks a digital analyzer on the network cable.
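
In that scenario the only trustworthy vantage point is outside the host: a tap or switch mirror/SPAN port feeding a separate capture box. A minimal sketch - the interface name and address are hypothetical examples:

  # Capture everything the suspect host emits, from a *different* machine attached
  # to the tap, then compare against what the host's own tools report.
  tcpdump -ni eth1 -w suspect-host.pcap host 192.168.1.50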


> Admins would use dedicated network hardware with network chipsets driven by closed-source firmware.

No, I fully expect there are enough admins out there running dedicated hardware whose design & source code they have access to. That is sufficient.


Given that the ME has full access to the NIC, outbound traffic could be concealed within traffic that is already outbound. If the adversary has also compromised network routers, the traffic could be observed and decoded without explicitly being sent anywhere.

Similarly, inbound control signals could be delivered by modifying inbound traffic that the ME observed and decoded.

Depending on your throughput needs the signal could be delivered subtly by for example modifying the timing between packets in a way that would be very hard to identify as a signal.

I'm hoping the ME firmware now gets dumped and studied closely. I'm betting there are some surprises in there.


It's still possible to monitor that traffic, especially at the corporate firewall level, or use a Raspberry Pi, or use an old, pre-ME computer.

Until there is evidence, this is technically just a government conspiracy theory.


It isn't a conspiracy when the feared idea has been confirmed. There is a separate OS running on the CPU that can monitor and control each and every new Intel machine.


Right, but there's no confirmation of any remote access or spying going on.


Are you so sure about point 1?


Yes? I'm not claiming every single admin would notice it, I'm just saying some competent admins somewhere would notice it. I hope I'm not proven wrong, but I don't expect to wake up one morning and read "Breaking news: no admin has noticed this strange IP traffic to Intel/NSA/whatever for the past decade".


I agree. No one has ever observed ME sending unexpected traffic. (Feel free, anyone, to point to a counter example.)


It's never been secret. It's been advertised by Intel as a feature so enterprises can control the computers they own.


Secret no. But if somebody uses it to own you, you can't see it.


> "Apparently, ME is the perfect combination of opaque, obtuse, and obscure. It’s not rocket science, but complicated enough it’s hard to explain well quickly."

The way I see it, Intel ME is a processor-level application that has full control of all computer activity, cannot be blocked or disabled, and can be accessed and controlled remotely.

Would like to hear other people's opinions of what it is.


The mnemonic "opaque, obtuse, and obscure" is a valid description of targeted qualities, not unlike how the qualities targeted for drone missions are summed up with the mnemonic "dull, dirty or dangerous."

So, the three O's are a good conceptual tool for understanding the combination of traits that can exploit the realities of human psychology, and for explaining why no one seems to care when technical paralysis stifles apparent relevance.

It's not only a means of explaining sneaky cloak and dagger stuff. You could describe lots of niche tech staples with this mnemonic.

When something is difficult to explain, that's often enough to derail any casual conversation with simple mental fatigue.

  Opaque:
Any number of qualities can render subject matter opaque. A poorly indexed, large volume of data is opaque, but a recursive function may also be opaque in its behaviors.

  Obtuse:
Intel ME's position in the stack is placed at a key choke point, requiring highly abstract approaches to utility. In other words, it is obtuse.

  Obscure:
It's readily available, but sequestered as a feature behind interfaces non-technical users rarely visit, without advertising to promote, train or recommend its use. Is it supposed to be an easter egg? For whom?


It's a pretty big thing and parts are necessary, parts are useful, parts are scary, and too much of it is shrouded in secrecy.


Let's be pragmatic. Does anyone know if ME blockers work? Can you please post one if it does? Can we start a list? Are the destination IPs it can be controlled from hard-coded? Can it be blocked via simple firewall rules?

E.g. tool: https://github.com/corna/me_cleaner

List? https://github.com/ransom1538/intel_me_cleaners/


You can use me_cleaner, and it's better than nothing, but the ME is still required for booting the motherboard.
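
As for the firewall-rules part of the question: the ME's remote-management services listen on documented ports, so a perimeter box that isn't itself an Intel ME machine can at least drop those. A sketch with nftables (the table/chain names are arbitrary, and this only catches the known ports, not traffic hidden inside ordinary flows):

  nft add table inet meblock
  nft add chain inet meblock fwd '{ type filter hook forward priority 0; }'
  nft add rule inet meblock fwd 'tcp dport { 623, 664, 16992-16995 } drop'    # documented AMT/ME service ports
  nft add rule inet meblock fwd 'tcp sport { 623, 664, 16992-16995 } drop'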


It's not that "we really don't seem to care much"

It's that even those of us that care vehemently, have no recourse. It is impossible to fight a secret police state.


Well with that attitude it is...

Part of winning is to even begin to believe you can fight...


There's ongoing work to fight against this, it just takes a very long time because the problem domain is incredibly complex.

RISC-V [0] is an open ISA that looks promising. Unfortunately, the privileged instruction set is still a draft, which is important for getting a full modern OS up and running.

lowRISC [1] is a non-profit open hardware company that has been working on a fully open RISC-V-based SoC that can run Linux. On their about page they claim it'll be ready this year.

There's also the stuff from SiFive. They have an Arduino-compatible microcontroller, and ongoing work on a 64-bit quad-core. I don't think their hardware is open, though.

[0] https://en.wikipedia.org/wiki/RISC-V

[1] http://www.lowrisc.org


To be fair, many of us noticed the giant camera in the corner a long time ago. ME has been a holy grail in the security researcher community for a long time, frequently the subject of presentations at conventions. And we have been pestering Intel about this since its inception. But the fact is that it doesn't matter how much outrage you or I may have. It will take enterprise-level shifts away from Intel products to get them to offer chips without ME.


I would rather hope that it would take a brave judge or the EU to forbid this spy-device.

Commercially there's no alternative. Enterprises even use Microsoft over Linux/BSD. Why would they get rid of Intel? For Anti-US spying a firewall should be enough.


What about AMD? Which chips does the security community recommend for not getting pwned?


AMD and ARM have 'TrustZone', which is not the same, but if you don't trust it maybe you're in trouble.


> One way to think of ME

I think I am a bit out of the current state of events. Is ME short for management engine? If that is the case, what is the problem with it?


> And yet we really don’t seem to care much.

Most people do not have any concrete notion about what management engines are. People weren't that skeeved out by Alexa, and that was a pretty easy-to-understand system. There's no way that people will have any kind of personal connection to something that they barely are aware of and don't understand.


>One way to think of ME is, we all woke up one day and discovered we have had high resolution night vision spy cams installed in our bedrooms.

Sure, if by "one day" you mean for the past 10 years? And if by "spy cams" you mean a product with official documentation from the vendor, similar to other products from other vendors. And these products can be purchased by anyone.

You have some valid points, but they are clouded by your sweeping generalizations and needlessly polarizing language.


At first it looks nice "oh now we can get rid of it" but it also opens up a very scary near future security-wise.

We've now entered a realm where an attacker could simply plug a device into a USB port of your computer for a few seconds to access your CPU's ME through USB JTAG and take it over, allowing them full access and control over what you do/read/open/type over the network, without you ever knowing it since you can't see it. And the only way to get rid of it for sure would be to pretty much throw that CPU away and buy a new one.

Or am I being overly paranoid, and there is something I haven't considered that makes this scenario impossible?

EDIT: given the answers I think my main concern wasn't well expressed above. I'm not saying this as in "ME is making it easier to be compromised". That may or may not be true, but that's not my point.

My point is, we all know that once compromised, you can't clean it; you need to burn it all and start from scratch: recover from backup (not files on the compromised machine), format everything, reinstall. Due to the nature of the ME, this is not a solution here. The cleanup needs to be done at the hardware level. Unless I misunderstood something, once it happens, your CPU is done for, period. And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution for the same reason that cleaning up your compromised Linux box is not one and you need to start from scratch.


The 'evil maid' attack is well known, and states that once someone has physical access to your computer, all bets are off. Anything that has DMA enabled (e.g. Firewire or Thunderbolt) offers an external device direct access to the system RAM that is very difficult to defend against, or they could attach a keylogger or modify your bootloader, basically unleashing all manner of havoc. USB JTAG is really no different from a security POV.

The concern with the Intel ME is that it has a native network adapter. You can bet efforts are currently underway to discover how to exploit the ME remotely. THAT'S when things get scary.

Your paranoia is not unjustified. Personally, I am nervous that some of my systems have the ME. When attention turned to it about a year ago, I knew it would only be a matter of time before someone broke into it.


> The concern with the Intel ME is that it has a native network adapter.

Yep, this is the big deal. After I "discovered" the ME, my first stop on my home network was the switch, to block all that crap. (And I found my storage server, equipped with a Supermicro all-in-one motherboard, helpfully grabbed an IP for the ME to listen on with an 'admin/admin' password.)

I just wish the empire builders at the NSA would care about something other than their own little power center. They knew this would happen - it always does. The NSA is probably the biggest security threat to the U.S. people[1] at this point, because they keep building concentrated, high-value targets and then lose control of them.

[1] Not to be confused with 'U.S. government interests'.


Am curious how and what exactly you blocked?! What precautions can be taken to make systems more secure?!


As usual, it depends on what exactly you have. Not all chips have the AMT enabled, for instance.

This is a useful document for understanding what exactly you're dealing with and what to do about it:

https://www.blackhat.com/docs/us-17/thursday/us-17-Evdokimov...
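
One quick, coarse check - a sketch only, since it sees nothing beyond the documented service ports - is to scan your machines from another box for the ports AMT listens on:

  # From a different machine on the LAN; open ports in the 16992-16995 range
  # usually mean AMT is provisioned and reachable. The target IP is an example.
  nmap -p 623,664,16992-16995 192.168.1.50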


The BMC is listening on that IP, not the ME.


https://www.supermicro.com/products/nfo/IPMI.cfm IPMI / BMC != ME. Intel’s is basically the version of this that you can’t disable, that works through the same PHY (most BMCs have their own), that you’re not allowed to use. https://en.m.wikipedia.org/wiki/Intel_Management_Engine


I know that the BMC isn't the same as the ME, but in his case that's the BMC getting an IP and default web login for admin/admin. It's not the ME.

BMC doesn't always use a dedicated physical port, and it's commonly bridged in sideband to the other NICs on a server.
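
If you do find a BMC sitting on the network with factory defaults, here is a minimal audit sketch with ipmitool, assuming the host-side IPMI drivers are loaded; channel and user IDs vary by vendor, and "user 2 is ADMIN" is a common Supermicro convention rather than a given:

  ipmitool lan print 1                                    # the BMC's network config on channel 1
  ipmitool user list 1                                    # look for default ADMIN/admin accounts
  ipmitool user set password 2 'LongUniquePassphrase'     # user ID 2 and the passphrase are examples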


> Anything that has DMA enabled (e.g. Firewire or Thunderbolt) offers an external device direct access to the system RAM that is very difficult to defend against

IOMMU effectively solves the "DMA is completely broken" problem, as far as I'm aware.

Evil Maid attacks are mostly worrisome because even UEFI cannot protect you against some bootloader attacks (what if you disable UEFI or reflash the firmware and then have a bootloader that just looks like a UEFI boot). There are some usages of TPMs that seem quite promising (they revolve around doing a reverse-TOTP-style verification of your laptop to ensure that the TPM has certified the entire boot chain).

It's quite a hard problem, made significantly harder by the fact that every fucking hardware vendor seems to want to make our machines even less secure.


The problem is hard mostly because the entire architecture of the personal computer made absolutely no provision for security. Everything is patches upon patches to add superficial security. Fundamentally, a computer is dumb, it will perform whatever task it is told to do, and all our security measures revolve around stopping a malicious actor from telling the computer to do something 'bad'. Eventually, someone gets around the bouncer or in through an open window and here we are.


Oh so....every port on my laptop? Fuck Apple


It's a pity that law was passed that forced people to buy Apple products.


My point here was not about it coming from USB JTAG, but about it targeting the ME AND having full debugger access, meaning it isn't limited to reading, nor to RAM/volatile memory.

Through this attack, they could compromise the ME long-term, which means the long-accepted "nuke it from orbit" solution to a security breach (unplug everything, format everything, start from scratch) still wouldn't be enough; that entire chip is done for. And 'using a hack to clean up the hack' is still in the realm of cleaning up rather than starting from scratch; it's not a solution for the same reason that cleaning up your compromised Linux box is not one and you need to start from scratch.


I remember following a tutorial along the lines of:

https://www.howtogeek.com/56538/how-to-remotely-control-your...

A couple of years back, and being absolutely horrified at the remote management available on my second-hand Lenovo T420s - including management over WLAN.

Sure, the features are gated by price/CPU "brand" - but I think it's safe to assume a) this is complex software and will have bugs with security implications, and b) once it's well enough understood, it seems likely it can be "upgraded" (similar to how today you can e.g. replace the BIOS with coreboot).

The conclusion is that we need new platforms - perhaps power5 will help.


The physical access required for an evil maid attack is very different from the "physical access" required to give you a malicious USB device. In that sense this is a lot more scary. As are aforementioned Thunderbolt and Firewire attacks; without an IOMMU, those are a security nightmare too.


An important aspect of an evil maid attack is that it requires at least two instances of physical access, once before and once after use by an authorized user.

If the attack can be pulled off with only one-time access, it's worse than an evil maid attack.


People have been trying to break ME for years, some have been paying attention to it for a long time. There are just more people in the game now


It's far from the first time that a highly privileged "security" component turns out to actually reduce security, because it is a large and profitable attack surface.

I can't help but think of all those exploits that target anti-virus software.


"I know, let's examine some suspicious code in a highly-privileged process that the user explicitly trusts to keep them safe."

When you think about it, "it seemed like a good idea at the time" can explain most tragedies in human history.


Well, a few hours ago we had a thread on HN about eradicating a whole species of insects. And in the comments we had intelligent, educated people who thought "it seems like a good idea".

So if mass disruption of the very systems that support your life can have supporters in a community composed of smart and actively debating people, "it seemed like a good idea at the time" probably happens every week at government agencies.


Lots of things are done because there's a broad consensus opinion that it's a good idea. There is nothing intrinsically bad about that, and needless or baseless skepticism is often counterproductive and paralyzing, leading to inaction even in the face of widespread consensus.

What leads to failures, and I suspect happened at Intel, was that they mistook a very localized consensus for a broader one. There's a word for this, it's called "groupthink". A group of people can talk themselves into doing something very stupid (or evil) while still thinking they're doing the right thing, given enough time and motivation.

There was no widespread consensus, outside of Intel, that the IME was a good idea. If they had solicited opinions from outside their organization, they would doubtless have gotten horrified reactions. But they didn't, or if they did they must have dismissed those concerns, because they went through with the bad idea anyway.

The apparent secrecy with which they developed the IME is also a cause for alarm; groups of people who operate in isolation are particularly prone to groupthink, and so even if their motivations are good ones, the fact that they are working without continuous feedback from anyone on the outside raises the chances of a perverse outcome.


People usually won’t do something if they think it’s a bad idea. Tragedies begin with “it seemed like a good idea at the time” because everything does.


Except it never seemed like a good idea to anyone with a clue of IT security.


From what I understand, the justification wasn't about security, but rather about remote administration. Which is even worse, because that is ACTUALLY a backdoor, just one that is supposed to only be used by the legitimate owner of the machine.


It enables DRM technologies, too.


The vulnerability exists whether someone reveals it to the public or not. These people found it even with Intel keeping the workings of the system under wraps as much as possible. You can imagine the kind of access available to people who did get to see the source code.

At least now that everyone can see the problem people can make informed decisions.


I would think those people are under constant threat of being kidnapped for their information


Given the complexity of the thing, they need a LOT of high-profile people to create such a thing. It's very unlikely that none of them have been either:

- bribed

- threatened

- felt guilty about it and decided to make amends

My money is that exploits have been on the black market for a while now. We just have an official public demo now.

Rule of thumb: when something that catastrophic is made public, the worst already happened and you are late to the party.


Many people will now start to dig in. The war has started, and I hope somebody finds a way to totally remove or replace (with a stub) the Intel ME before some critical vulnerability is discovered in the Intel ME's network stack.

In white hats we trust :)


Here's hoping. Intel did a great job hiding the thing and making it all but impossible to remove (at present, if you nuke the firmware, the CPU will totally fail to initialise. Thanks Intel!). That said, we're talking about an embedded device with very low-level code, and any 'disabling' code is probably going to be distributed in binary form. Stands to reason that somewhere along the line, someone is going to turn that around for their own benefit.

In tin-foil hats, we trust, more like!


I just get tired of being ridiculed and then 10 years later vindicated.

In Faraday cages we trust.


2013 was the greatest 'WE TOLD YOU SO' in history for the tin-foil-hat brigade...


Why 2013 specifically?



Yep, this exactly.


Imagine how Stallman feels.


There's a whole subreddit dedicated to this https://www.reddit.com/r/StallmanWasRight/


You'd think after being right again and again people would start to trust you. Or at least distrust entities that have a track record of behaving badly.

And yet...


me_cleaner already exists[1], and it takes advantage of several flaws in Intel ME's signing to remove large sections of the code thus neutering it. Some code still exists, but Intel ME cannot actually fully initialise on "cleaned" systems. Older machines used to have a bug where if you filled the first half of the Intel ME firmware with zeros the machine would boot but ME wouldn't start at all.

But yes, I hope that with this it'll be possible to completely remove the remaining few hundred kB of Intel ME code remaining.

[1]: https://github.com/corna/me_cleaner
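
For reference, the usual external-reflash workflow looks roughly like this - a sketch assuming an external SPI programmer (a CH341A with a test clip is common) and the flags documented in the me_cleaner README; verify everything against your own board before writing anything back:

  flashrom -p ch341a_spi -r dump.bin                  # read the full SPI flash image
  python me_cleaner.py -S -O cleaned.bin dump.bin     # strip ME modules and set the HAP/AltMeDisable bit
  flashrom -p ch341a_spi -w cleaned.bin               # write the modified image back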


Is this enough to block this attack? That is, is a "cleaned" system vulnerable to a USB device?


> That is, is a "cleaned" system vulnerable to a USB device?

of course it is, because the USB DCI attack is one level below the Intel ME. Even if it is deactivated via HAP (which basically just puts the ME code into an infinite loop or a CPU halt state), both of those states can be reversed by JTAG.


I don't know, because there is very little detail about what this attack is and how it works. It looks like they managed to thwart whatever protections exist in the USB DCI (Direct Connect Interface)[1], which is a debugging system for Intel chips.

If they have full debugger access to what's running in Intel ME, then removing the code from the firmware probably doesn't make a difference (assuming they can run untrusted code in that context). If they cannot write their own code, so that an attack requires ROP gadgets, then removing the code might make it harder (or impossible) to do, but I doubt it.

[1]: http://www2.lauterbach.com/pdf/directory.pdf#M8.newlink.DIR6...


DMA/Firewire over USB makes pretty much every system vulnerable to USB attacks (ME aside).


DMA-based attacks are blocked by the IOMMU, which is present in all modern machines (and has been for a few years). Linux orders initialisation so that the IOMMU is brought up before DMA is enabled, so even a device plugged in during early boot will not be able to exploit DMA.
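
If you want to sanity-check whether DMA remapping is actually active on a given box (defaults differ between distros, and many setups still need intel_iommu=on or amd_iommu=on on the kernel command line), one quick and non-authoritative indicator is whether any IOMMU groups show up in sysfs. A minimal sketch, assuming Linux:

  # Report whether the kernel has populated any IOMMU groups.
  import os

  path = "/sys/kernel/iommu_groups"
  groups = os.listdir(path) if os.path.isdir(path) else []
  if groups:
      print(f"IOMMU appears active: {len(groups)} IOMMU groups present")
  else:
      print("No IOMMU groups found; DMA remapping does not appear to be enabled")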


Does it? To use the IOMMU for VFIO, I had to explicitly enable it via a kernel parameter.


Until they fix it. Or use something else.

Huge corporations backed by government agencies with lots of time, money and skilled people vs. a few people working for free because they believe they should. Not a fair fight.


Is it still true that USB devices can be used to execute code on a target's system without user interaction?

I thought this behavior disappeared ~10 years ago.


They shouldn't be able to. Firewire devices can, because they used DMA without memory protection. I don't think Thunderbolt has the same flaw.

However, there are bugs in USB stacks, especially now that you can do so many alternate protocols over USB-C.

https://www.jefftk.com/p/malicious-usb-sticks

There's nothing fundamental in the USB spec that lets devices execute code on the host.


I think you're being overly paranoid. If the attacker has physical access to the machine, chances are you're compromised anyway, even before this vulnerability.


I don't think that's the right attitude. There's a difference between being able to open a machine to install malicious hardware or steal HDDs, and just plugging in a generic USB stick to pwn it.

I know some of you might argue that even generic USB sticks can do damage and, whilst I agree, this attack is still a degree worse than most of those.

Thus far the most damage an unknown USB stick could do was to type commands as a fake keyboard (visible to the user) or exploit a driver vulnerability to silently cause mischief.

Correct me if I'm wrong, but those cases could still be trivially stopped by a security policy set by administrators disallowing unknown USB devices. (Of course, how many places would use this is another matter, but it's still very important for places where this does matter.)
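
On Linux, for instance, one crude version of such a policy is to de-authorize newly attached USB devices by default through sysfs; a minimal sketch (needs root; tools like USBGuard do this properly with whitelists) follows. Note that, per the next paragraph, none of this helps against an attack operating below the OS.

  # Tell each USB root hub not to authorize newly attached devices until an
  # administrator explicitly opts in. Purely an OS-level policy.
  import glob

  for path in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
      with open(path, "w") as f:
          f.write("0")
      print("new devices de-authorized by default on", path)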

This attack, on the other hand, would seem to be able to completely bypass your operating system's restrictions on USB ports.


From a technical perspective there is a difference now that this is public, but from a security stance, physical access is physical access.

Why? Security knows there are always bugs in software, and assumes they exist. Thanks to @h0t_max, the rest of us know this particular bug exists, but this bug has been around for a while - who's to say evil hax0rs didn't find this bug years ago and have been exploiting it since?

Or put another way - you say an unknown USB stick could exploit a driver vulnerability to silently cause mischief, and then claim that this is easily stopped if an administrator sets a security policy to disallow unknown USB devices. (I assume you mean a Windows GPO-enforced policy or similar, and not a written social policy.) Who's to say the code that enforces the policy doesn't have bugs that are exploitable? What if the driver for a known USB stick has exploits?

Physical access is physical access, and while there are mitigations for the evil maid attack (like an encrypted drive, and shutting down, not just suspending, when the machine is out of sight), there simply is no way around the fact that physical access is game over.


> while there are mitigations for the evil maid attack (like an encrypted drive, and shutting down, not just suspending, when the machine is out of sight),

That mitigation is useless against Evil Maid. There are much more sophisticated mitigations (using a TPM to measure the boot and then doing something akin to TOTP in order to let the user actually verify the state of the machine) which could protect against Evil Maid almost completely (assuming you don't have something like Intel ME that cannot be verified by the TPM).

"Once you have physical access it's game over" is a very common response to these discussions, and I find it incredibly defeatist. Of course physical access means that the "clock is ticking" until your data is compromised, but sufficient protections can dampen the impact or increase the difficulty.

For example: IOMMU protects against DMA-based attacks, something that was impossible to protect against several years ago. This doesn't mean that someone cannot launch other attacks, but it does mean that the trivial "just plug anything into a USB port and you have DMA" attack is no longer possible.


Evil Maid refers to the broad spectrum of attacks a hypothetical maid could carry out with physical access, ranging from a drug addict simply looking to resell the laptop at a pawn shop to a CIA agent. Not being defeatist means differentiating between their objectives and capabilities in order to make a sensible decision.

Encrypted drive + shutdown is a defense against one specific Evil Maid attack: the cold boot attack. It is not a very expensive attack to run; for the cost of a can of compressed air and a USB drive, anyone sophisticated can pull it off. https://en.wikipedia.org/wiki/Cold_boot_attack

Sorry to sound defeatist, but if you had been relying on IOMMU to save you, the trivial "plug anything into a USB port and you have CPU JTAG access" attack has always been possible. (Never mind that IOMMU implementations aren't guaranteed to be bug free.)

In the face of that, what do you do?

With this knowledge, all I really can do is stay up to date and patch-patch-patch. Have a travel Chromebook for leaving in hotel rooms, but ultimately I just have to know that it's not enough, especially against a CIA-grade Evil Maid, or an Evil Maid that's able to factor 4096-bit RSA keys. (That last one's not theoretical, either. It was revealed a few weeks ago that TPMs in Chromebooks and other hardware were generating weak keys, leading to cloud-factorable 4096-bit RSA keys.)

A less-sophisticated Evil Maid can still physically steal my laptop for pawning, and even if they can't get my data, I've still had my laptop stolen. Not being defeatist, I back up my data, although that has a totally different set of security concerns over the Internet.


The TPM is actually implemented as an Intel ME applet on a lot of PCs... >.<


Right, but there is a TPM pin-out standard -- so theoretically you could swap out the TPM of any device (which has a physical TPM obviously) with any other manufacturer's TPM and it would still "just work". While it might be implemented in Intel ME, there are a lot of laptops that have physical TPM chips.


Some manufacturers just don't pay extra for an LPC or SPI-based TPM, and use the fTPM running as an ME applet instead (my Kaby Lake laptop does this).


> From a technical perspective there is a difference now that this is public, but from a security stance, physical access is physical access.

Access to a USB port is not physical access. USB is a network interface that is commonly used to connect host computers to small portable embedded systems, often other people's tiny NAS units, also known as USB flash drives.


Yes, it's just like the example in the first season of House of Cards, where the journalist is somehow conned into plugging a USB stick into a server. That plot was really stretching to find a way to kill off a good character, but it still shows how a "guest" could try to discreetly manipulate a system.


So you're saying things are so bad anyway that this one vulnerability probably doesn't make any difference?

This interpretation of "you're being overly paranoid" is new to me ;)


Come on, you can already catch AIDS, why worry about cancer?


If the attacker has physical access to the machine, chances are you're compromised anyway, even before this vulnerability

Can you not see the difference between an attacker opening the case and stealing your HDD, vs. inserting a USB key for a few seconds, then walking away and exploiting it later at their leisure?


I wonder how long it will take until an attack over the network is found.



As I said above, I don't think I made my point clear enough: my concern was not about it making it easier to be compromised, but about it making cleanup pretty much impossible at the hardware level.

A software do-over is a very well accepted solution (don't bother cleaning the rootkit, just format and reinstall), but a hardware do-over (change the CPU) is going to be a hard pill to swallow.


This was a problem even before Intel ME. Modern server motherboards (and several workstations) have a second Linux installation on your motherboard (known as a "baseboard management controller" or BMC) that cannot be removed. There have been many exploits found in the software running on BMCs, and if you want to "clean up" an infected server then you have to throw out the hardware if you want to be 100% certain.
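
For a sense of how exposed these are: a BMC is typically reachable over the network via IPMI (UDP port 623), and querying one looks roughly like the sketch below. The host and credentials are placeholders, not anything real.

  # Ask a BMC for its chassis status over IPMI using ipmitool.
  import subprocess

  subprocess.run([
      "ipmitool", "-I", "lanplus",
      "-H", "10.0.0.50",        # placeholder BMC address
      "-U", "admin",            # placeholder credentials
      "-P", "changeme",
      "chassis", "status",      # power state etc., answered by the BMC itself
  ], check=True)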


The BMC situation is changing. There's an effort to support the ARM SoCs upstream, and there are a number of companies working on open-source BMC stacks.

https://github.com/openbmc


For now. You need a first step before the next. I'm waiting for them to own ME remotely now. It's unlikely they won't succeed.


Companies like Intel, which are complicit in helping the CIA or any intelligence agency (government, rogue or otherwise) infiltrate and exploit our systems, need to be held accountable by the market.

Intel ME and the (assumed [0]) partnership with the CIA to design and build this system should be a devastating blow to the integrity of their business long-term. Will you, as lead engineer or sysadmin for your mission-critical business, now continue to choose Intel products to help build your infrastructure?

Unfortunately it seems that our modern market has not yet evolved enough to punish companies involved in such reckless behavior. I suspect the reason is primarily the ease with which governments can mass-tax and create fiat currency. Perhaps there is some alternative decentralized currency system that would limit government's ability to tax, print and award juicy big-brother contracts to these companies.

Anyway, for now at best, and perhaps somewhat encouraging, is the subsequent brain drain of engineers and hackers alike who want nothing to do with faceless corporations like Intel, Google, Facebook, IBM, et al. who routinely deceive/exploit and work against the best interests of their own customers.

[0] https://twitter.com/9th_prestige/status/928740294090285057


> Intel ME and the (assumed [0]) partnership with CIA to design and build this system

I worked at Intel on ME and the things that came before it until around 2013. I can tell you two things --

1. No, Intel ME wasn't born out of a desire to spy on people, nor was it -- to the best of my knowledge, but I honestly believe I would know -- created at the request of the US government (or others). It was an honest attempt at providing functionality that we believed was useful for sysadmins. If it was something done for the CIA, I believe it would probably have been kept secret instead of marketed.

2. It was initially going to be much "worse". Early pilots with actual customers -- such as a large British bank -- were going to run a lot more stuff (think a full JVM) and have a lot more direct access to userland. Security concerns scrapped those ideas pretty early on though.

In retrospect, I personally believe the whole thing was a bad idea and everybody is free to crap on Intel for it. But the thing was never intended as a backdoor or anything like that.


Right. ME does make sense as a feature for sysadmins. Except... Well, can you shed light on the following:

1. Why did your team deem it necessary to deny the end-user the capability to disable this feature?

2. Why did your team decide to enable ME on ALL consumer-grade chips? You could have enabled it only on, say, Xeon, as a value-add, exactly like you do for ECC support. You could have made more money this way. But... you didn't.

Without legitimate, sensical answers to the above questions, there is no reason for anyone to believe your team did anything other than design a backdoor for the Feds. Sorry.


Having been a sysadmin once upon a time (2006-2008), I find these answers straightforward. Servers used to have discrete management cards, which were paid add-ons. Competition in the early 2000s drove these cards to be integrated into the motherboard in order to better compete at the low end of the market. I've had servers I was only able to fix remotely thanks to the out-of-band management interface (more than once). The pain they fix is real.

The same techniques for managing server farms are useful for managing hundreds/thousands of corporate desktops. Being able to power up a desktop ("lights out" management) and re-image it at 3:00AM is very useful, for example. You could also install 3rd-party security products on the ME to provide higher-level threat detection that's hard for a rootkit to hide from. So once the work of getting an integrated management engine production-ready was complete, it made perfect sense to use it in corporate desktops. It's expensive to produce chip variants, so doubtless further cost pressure on Intel led to them putting the ME in the core shared across all products. Plus, now IT admins can let the VP of Sales get the laptop she wants, knowing they can leverage their System Center/OpenDesk/etc. console to manage it via ME.

So no, it isn't a Fed backdoor. Those of us who worked in IT 10 years ago remember how the market drove Intel to add the ME. That is doubtless why many are silently conflicted. They don't want to take a big step back. They are likely expecting/hoping Intel will "fix" the problem.


The problem isn't the existence of the ME as such. Servers have a BMC which implements similar remote management functionality. People could order servers without BMCs, since they're discrete chips, but they don't.

Even Raptor's high-security Talos II has a BMC; the issue isn't having a BMC, the issue is that it's not owner controlled and it's not auditable.

What's wrong with the ME is that

a) it only accepts Intel-signed code; I can't replace the ME firmware with an implementation (e.g. of remote management functionality) that I trust. I also can't repair vulnerabilities in it without the cooperation of both Intel and the vendor (which is often not forthcoming).

Consider the Authorization header bug in the ME's webserver and multiply it by how many machines you claim use this remote management functionality. That's horrifying.

b) it has DMA access to main memory, which is insane.

Look at the fact that every server nowadays has a BMC, in addition to the ME. On a client device the ME would be used to implement similar functionality, so the BMC is actually a wasteful duplication - but server vendors have to use a BMC because they can't program the ME to implement the remote management functionality they need, because only Intel can program the ME. This is stupid.


> It’s expensive to produce chip variants, so doubtless that further cost pressures on Intel lead to them putting the ME their core shared across all products.

Would it be possible in future CPU designs to put a jumper in, e.g., the ME power path? Closed by default (and possibly forced closed in enterprise-targeted devices), but the option exists to disable the ME without requiring an additional CPU variant.


From a hardware perspective, it’s an easy problem to solve. This is a wetware problem, however.

Back when MEs were discrete, you would inevitably have some machines with them and some without. Someone would order a bunch of machines without them to "save money", or they bought a model that just didn't have an ME add-on offered by the OEM.

That meant that occasionally you had to actually have the machine in your presence to service it. You end up designing two processes/procedures based on whether you are remote or not. Lack of MEs actually increased labor costs by reducing the number of machines a tech could manage (on average).

Having a CPU fuse essentially winds the clock back to the discrete ME days. Someone will place an order for SKU ENCH-81-U instead of EMCH-81-U and you end up with 500 machines with the ME fuse blown. Inevitably there will be a big enough restocking fee that someone in accounting will say "just use them."

(The same applies to things like having/not having a TPM module, etc.)


I'm not OP, but to respond to question 1, allowing users to disable the feature would also allow attackers to disable the feature. If you're relying on ME to provide remote access so that you can clean and repair infected machines, then it's game over for you if the attacker can disable ME.

It would have made much more sense to require you to enable it before first use, and ship it as disabled from the factory. Enablement should work like blowing an eFuse where it's never off once it's turned on, but if you never turn it on it doesn't exist. Then I don't have to worry about the feature unless I know exactly what it is and how to use it.


"I'm not OP, but to respond to question 1, allowing users to disable the feature would also allow attackers to disable the feature."

Older products had jumpers, physical switches, or software mechanisms for securely updating firmware. The first two are immune to most types of remote attack if done in hardware. Intel already uses signed updates for microcode, and people aren't compromising those remotely left and right. Intel supporting a mechanism like those that already existed in the market for disabling the backdoor would not give widespread remote access to systems. If anything, it would block it, by leaving less privileged, 0-day-ridden software running.
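
For readers who haven't dealt with signed updates before: the idea is simply that the device refuses to apply an image unless a signature over it verifies against a vendor public key baked into the hardware. A toy sketch of the verification step, using the Python 'cryptography' package and made-up file names; this is illustrative only and is not how Intel's microcode loader actually works.

  # Verify an RSA signature over a firmware image before "applying" it.
  # A real device keeps the public key in ROM/fuses, not in a file on disk.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding

  with open("vendor_pub.pem", "rb") as f:
      vendor_key = serialization.load_pem_public_key(f.read())
  firmware = open("update.bin", "rb").read()
  signature = open("update.sig", "rb").read()

  try:
      vendor_key.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
      print("signature OK: safe to apply the update")
  except InvalidSignature:
      print("bad signature: refusing to apply the update")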

I'll also note that the non-Intel market, from OpenPOWER to embedded, has options ranging from open firmware (incl. Open Firmware itself) to physical mechanisms to 3rd-party software. Intel is ignoring those on purpose, for reasons it isn't disclosing to users and that probably don't benefit those users.


> Why did your team decide to enable ME on ALL consumer grade chips?

Can you please provide a reference? I've been trying forever to enable ME for remote management on my consumer-grade i7 with an Intel motherboard, and I can't seem to manage it.


The core of Intel ME has been enabled on all chips since 2008, but features like remote management aren't. Intel ME is used for things like DRM (see PAVP) or hardware bring-up and power management.


The ME is already enabled. Maybe you're referring to AMT [0,1] (which might not be included)?

[0]: https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

[1]: https://www.intel.com/content/www/us/en/architecture-and-tec...


I looked into something similar recently. The following is based on plenty of research, but Intel's information can be a bit ambiguous and incomplete at times, so I can't promise I have every detail right.

FOR OTHERS: Note that you might be able to disable ME's remote access simply by ordering a computer with "No VPro".

1. ME is a platform with many applications that run on it; AMT (Active Management Technology) is one of them.

2. AMT has many components; remote management is one of them.

3. AMT comes in multiple 'editions' (my word, not Intel's) with different features. The Small Business Technology (SBT) edition does not provide remote access by design, the idea (AFAICT) being that small businesses don't want to set up and manage management servers, and that remote access would therefore be insecure.

4. If in MEBx[0], you see "Small Business Technology", then there's no remote management - unless there's another remote management function in ME that is independent of AMT. Also, the first reference below provides the official method of identifying SBT implementations (via a flag in some table). I discovered it on a system ordered with a "No VPro"[1] network card (I'm still not sure why that's a spec of the NIC and not the processor).

Here are a couple of useful references:

* SBT: https://software.intel.com/en-us/documentation/amt-reference...

* MEBx on i7 processors (the title also specifies a chipset; I'm not sure how much that matters): http://download.intel.com/support/motherboards/desktop/sb/in...

.............

[0] MEBx is Management Engine BIOS Extension: the text-mode, pre-OS console UI for configuring ME

[1] VPro is not a product or technology. It's merely branding for, AFAICT, an ambiguously defined group of products that IT professionals might be interested in. It includes AMT (which is also part of ME and often marketed independently), TXT (Trusted Execution Technology), and more.
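
If you just want to check whether a particular machine exposes any of this, two quick and non-authoritative probes are whether the OS sees the MEI/HECI device node and whether AMT's embedded web server answers on its well-known ports (16992 for HTTP, 16993 for HTTPS). A rough sketch, assuming Linux and a placeholder address; a negative result does not prove the ME is absent, only that these interfaces aren't visible.

  # Two cheap checks for ME/AMT visibility.
  import os
  import socket

  def amt_port_open(host="192.168.1.10", port=16992, timeout=2):
      # AMT's embedded web server listens here when AMT is provisioned.
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  def mei_device_present():
      # The MEI/HECI driver exposes /dev/mei0 when the OS can talk to the ME.
      return os.path.exists("/dev/mei0")

  print("AMT web interface reachable:", amt_port_open())
  print("MEI device node present:", mei_device_present())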


  to the best of my knowledge but I honestly believe I would know

Honestly, if a three letter agency was working with a tech company to produce a back door, the last people I would expect to know would be most of the engineers involved in the implementation.


> Honestly, if a three letter agency was working with a tech company to produce a back door, the last people I would expect to know would be most of the engineers involved in the implementation.

Who would the first people be then?


If I were a 3 letter agency with a large budget, I would insert someone as a project manager at the target company. If I couldn't do that, I would work with their C level folks.


Preferably no one in the implementing company. Work using customer pressure, say an important bank. And later you swoop in and get exclusive access during a nice and cozy dinner with one of the Cs. It's more like judo than brute force arm twisting.


Kudos for speaking up about it, understandably with a throwaway account, which unfortunately doesn't help prove that what you say is in any way truthful. But you probably still work for them and enjoy a nice salary, so I can't blame you at all there. I do just wish more people would be willing to put their careers on the line to say the right thing. This is one of the underlying problems: when smart people go along with bad things, very bad things can and will happen. If, on the contrary, smart people speak out about bad things, then those bad things are less likely to unfold on the large scale we see so often in SV.

To your points, though: it is a great perspective and helps to illustrate a sliver of possible innocence on Intel's part, but it's a little weak given that the operation would have been compartmentalized and the political objectives/partnerships thereof obviously not part of the system's technical development.


> But you probably still work for them and enjoy a nice salary. So can't blame you at all there.

Why not? Is a salary a good ethical justification for mistreating other people?


Well, you're right that a salary is not justification for mistreating other people, but I can at least sympathize with OP, who has maybe rationalized going along with something shady because he himself was mistreated/deceived and perhaps wants to believe ME is not a tool for exploitation. For many, 'ignorance is bliss' is a way of life that works for them, so it is what it is.


OP's explanation is that this was a shitty decision, made for decent reasons. So... no? But that's not a relevant question?


Man, anybody with a remote idea of how IT works would have said it was a very bad idea. I can't believe in good faith here. Nobody smart enough to design that system is dumb enough not to understand the consequences. So it was knowingly decided to create this monster and ship it to the entire world.


> Nobody smart enough to design that system is dumb enough no not understand the consequences.

Do you remember the plaintext password leaks from Yahoo? In the real world nothing has to be true/good/secure. All that matters is that users are made to feel it is, no matter what the reality is.

As long as the focus is on earning more money/power/control, this is always going to happen.


Unfortunately, in the real world, many times those of us who do have a remote idea suggest that things are "very bad ideas" but nonetheless get ignored by those who actually make the decisions.


Why doesn't Intel offer their chips without an ME, as an option? The mandatory nature makes it malicious.


Saves money to have only one assembly line of chips. Hell, the i5 chips they make now are just i7s with some of the features disabled. So if they make an i7 and there's an error in some part of the chip that's specific to the i7 features, they can disable the i7 parts and sell it as a working i5. At least this is what I recall from an article a year ago on here about that.


This is known as the “Silicon Lottery”


What percentage of people would really want that? The vast majority of consumers don't know/care. Or it's sold as a feature.

Most businesses probably WANT the feature. See other comments in this discussion about lights-out management.

I'm guessing it wouldn't be economically worth it.


If you’re not using it, then it’s a big unpatched vulnerability. I’ve never worked anywhere that uses it, although I’m sure a few places do.


There are parts of the ME that you need (like the BUP bring-up module used during boot).


Why would they do something as ridiculous as telling you its true purpose?


They wouldn't. As I said, this is to the best of my knowledge.

However, I believe I would know because it's not like one day the CEO came to us with a folder filled with requirements to be implemented. This is something that started very small ("find a way to force reboot a PC remotely if it's non-responsive") and evolved from there over months/years. I endured way too many meetings where design decisions were made. Unless there were secret CIA agents disguised as my colleagues, I really believe it was designed by Intel engineers all the way through.

I have no issues with people criticizing the product for its failures. I agree with them. But every time I see someone claiming this was a CIA thing, it actually hits me personally.

Then again, I'll never be able to convince anyone of anything. I just felt like saying something this time.

I guess I'm having a bad morning :)


Having worked for Intel (in the open source org), I trust you. I've seen firsthand how a cool, small, simple feature can blossom into something Dr. Frankenstein would be proud of.

Also, I think people here severely underestimate the red tape and huge effort needed to implement something mildly complex at Intel scale. Developing ME under wraps with full CIA-like functionality would be staggeringly difficult. I've seen the effort needed to get the BIOS to work on the prototype boards without crashing or destroying the HW; getting ME to work reliably on all boards would be one order of magnitude harder; making it spy CIA-style, add two more orders of magnitude. I think people don't really understand how difficult it is to get something that close to the metal to work reliably; something able to poke inside the memory of a running OS, forget about it.

Also, I think the readers of HN severely overestimate the effort CIA needs to spy on the internet users - why even try to bug the firmware when people actively share their privacy via apps that they themselves install???


> I've seen first hand how a cool, small, simple feature is blossoming into something dr. Frankenstein would be proud of.

Complete aside, but the whole story of Frankenstein is about how Dr. Frankenstein is repulsed by his actions the moment that he brings the monster to life. So he most certainly wasn't "proud" of his actions, he was horrified by them. But I agree that this is likely how some of the engineers who worked on Intel ME would feel too.

> why even try to bug the firmware when people actively share their privacy via apps that they themselves install???

We know (thanks to Snowden and WikiLeaks) that the NSA and CIA have programs like this, so it's actually more incredible that you don't believe that the CIA or NSA would invest resources in adding backdoors to things like Intel ME. I don't buy that they designed it, but given that we know they intentionally sabotage internet standards it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing, so they can exploit them.


> So he most certainly wasn't "proud" of his actions, he was horrified by them.

In the end, yes. But the novel starts with him being so proud of the golem that he takes it home with disastrous results. Hmmm, maybe the comparison to ME isn't that far-fetched.

> I don't buy that they designed it

Yep, this is what I'm saying - it's unlikely that they ever told Intel "put this in there".

> it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing

Absolutely, yes. They would be vastly incompetent not to have them, in fact. What I don't agree about with HN crowd is the threat profile of such an exploit.

I have trouble believing that they use them on a mass scale. There are so many people looking at the ME that using any exploit on a massive scale would disclose it almost immediately and allow the 'enemy' to develop protections. Given the extraordinary capabilities of such an exploit, and the very valuable status that gives it, they probably need to protect it and will use it only when absolutely necessary; so the vast, vast majority of HN users would never be subjected to such an exploit.

On the other hand, if you are interesting enough for the NSA to deploy such an exploit against your devices, you probably have vastly more significant problems, like trying to stay outside the visual range of a Predator drone. If any Three Letter Agency deploys such an exploit against your PC, you can be absolutely sure that they have already bugged your phones, and not with a Stingray device but by tapping directly into the data feed at the phone exchange. You probably have to incinerate your trash because the garbage men are spooks; this is the kind of threat I assume you're facing if a TLA is trying to bug your ME.


> In the end, yes. But the novel starts with him being so proud of the golem that he takes it home with disastrous results.

We must've read very different novels. In Chapter 5[1] (when he finally recounts how he brought the golem to life, after talking about his life and his studies up to that point) it's clear that he instantly regretted it.

> I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.

And he didn't take it home with him. He leaves his laboratory and heads back home. The golem lives in the forest for a long time, and finds a family living in a cottage. While hiding from them, he learns to speak, and tries to talk to them. They shun him, and he is filled with anger at his creator for creating him and leaving him alone. So he finds Frankenstein's home and then kills his family.

Maybe some adaptations of the book have different stories on this topic (I've only ever read the original), but I would argue that a depiction which shows Frankenstein regretting his decision much later (and the golem's murder of his family being something other than revenge against his creator for abandoning him) is missing the point of Shelley's story.

[1]: https://www.gutenberg.org/files/84/84-h/84-h.htm#chap05


The concern isn't so much that the CIA will use it, but that someone else will find it and use it before the CIA does--a problem that is avoided by having the CIA disclose the vulnerability instead of keeping it for a rainy day.


How about this scenario: there is now a common unified interface for all computers. If a vulnerability is discovered, then all systems are vulnerable and must be patched. How are those patches delivered? Their protection may lie with a single signing key. How well controlled is that signing key?


I believe you.

But looking at the International Obfuscated C Code Contest (http://www.ioccc.org/) entries, and knowing how much I have to force my eyes not to glaze over whenever a colleague sends me a 700-line pull request: if one of your colleagues waited until the deadline to send a massive pull request for their part of the project, can you say that the deadline would be pushed back until every single line had been meticulously analyzed by hand, to assert that nothing nefarious could possibly happen with their code?

Just one of your coworkers would need to believe in a greater purpose, for king and country, and have grown up in a large family with a brother or cousin who's part of the intelligence community.

It sounds far-fetched, but so does the Bay of Pigs.


> a folder filled with requirements

That's not how the intelligence agencies operate.

> evolved from there over months/years

THIS is how they influence standards and design choices. We know that in 2013 the NSA budgeted at least $250M for programs such as "BULLRUN", which intended to "Insert vulnerabilities into commercial encryption networks, IT systems, and endpoint communication devices..."[1].

For an example of how this works, see John Gilmore's description[2] of how the NSA influenced IPSEC. They don't use a folder of requirements; instead they gain influence over enough people to complain about "efficiency" or other distractions and occasionally add a confusing or complicated requirement that just happens to weaken security.

PHK gave an outstanding talk[3] that everyone should see about the broader subject of how the common model most people have about how the NSA works is obsolete.

[1] http://www.nytimes.com/interactive/2013/09/05/us/documents-r...

[2] https://www.mail-archive.com/cryptography@metzdowd.com/msg12...

[3] https://archive.fosdem.org/2014/schedule/event/nsa_operation...


> Unless there were secret CIA agents disguised as my colleagues

That's a thing actually.

> I'll never be able to convince anyone of anything.

I believe you. Conspiracy theories are fun but ultimately I know that secrets are hard to keep secret.


Do you know technical details - like how many processes are even running under the Minix OS?

Also which internal or external groups lead the code development of those processes?

Is the code accessible to any employee/engineer with a technical relationship to IME?


OK, however: sometimes to hide a thing, all you've got to do is hide it in plain sight and give it a slightly different "color scheme".


> > Intel ME and the (assumed [0]) partnership with CIA to design and build this system

Then why do you need security clearance to work on Intel ME?


Think about it logically for a second. Regardless of the decisions that led up to the inclusion of Intel ME in Intel CPUs (i.e. regardless of whether the CIA was involved or not), compromising Intel ME is still a security risk for Intel customers, so of course they're going to limit access to a select few that they trust, and that the government can trust.

I'm fairly certain similar restrictions will also apply to those who are granted access to knowledge about CPU microcode, which is an equally large security risk that nobody with even a basic understanding of how CPUs work is blaming on the CIA.


Security through obscurity doesn't work.


Do you have a citation for that? That sounds interesting


The "citation" is a Twitter post [0] that included a screenshot of an anonymous post to 4chan by a supposed Intel employee who claims to have worked on the Management Engine team for the last three years. It was linked upthread.

[0]: https://twitter.com/9th_prestige/status/928740294090285057


Thank you. That is worrisome.


Why not put a physical turn-off switch on the motherboard to disable it?


regarding 1): hiding in plain sight is sometimes a valid strategy. So is heavy compartmentalization.


wow, thought the exact same thing haha


It should only have been sold on a special "business class" series of CPUs. Intel already loves having dozens of variants, as evidenced by recent market offerings, and it's not like they don't already have dedicated business-class CPUs for workstations. Simply sell ME as an "add-on" tier only, and that limits the potential damage.

Incidentally, did you hear about Silent Bob is Silent? What are your thoughts on that vuln?


> If it was something done for the CIA, I believe it would probably have been kept secret instead of marketed.

If that were the case, we would have become suspicious of this much earlier than we otherwise would (because there would be dedicated hardware).

So some people-friendly features have to be bundled along with the anti-features. This may not be what actually happened, but it's one of the possibilities.


> were going to run a lot more stuff -- think a full JVM

Is a JVM really a lot more stuff than Minix OS?


"The market" is only going to "punish" you if..

- The masses actually care

- There is an alternative

Neither is the case here. Most people couldn't care less about things like ME, and AMD and Intel are an oligopoly: if you want a modern x86-64 CPU you only have those two choices, and both do this. That is the problem here, not fiat currency.


I'd say that there's a weird dependence between your two points; people often seem to care because there is an alternative.

Examples of this might be Fair Trade coffee, or energy saving light bulbs. Prior to their marketing, I doubt that vague ethical considerations were on the 'top 10' list of consumer wants from a new product, if they registered at all.

But when people are presented with a choice, if you can, why not get the better stuff?

Another analogy might be something like the TPM chips on iPhones. I very much doubt that focus groups or surveys at Apple found TPMs in the list of requested new features. However, things like TPMs get written up, and add to the things that journalists can describe around the vague theme of relative security and relative privacy; important concepts to consumers. Once this is internalized, when making a comparison between phones, a motivated consumer might consider the absence of a TPM a problem.

I doubt that Intel would start marketing _No Backdoor™_ chips, but I could imagine a consumer-facing hardware vendor coming up with some kind of comparison-based branding for avoiding the ME. There's a reasonable chance that Apple may continue to integrate vertically and get away from Intel over the next few years. And I was extremely surprised that Purism (a company basically founded on resentment towards the ME) could crowd-fund millions of dollars in the way it has.


Agreed. See my pie in the sky reply to majewsky in sibling comment - but yeah, no easy way for the average man to 'punish' big bad Intel it would seem.

Perhaps another technique to punish them is a class-action lawsuit. With all the companies potentially affected by this, and what now looks like evidence of intent forthcoming, there may be a solid basis for a case, but I'm no lawyer.


> Companies like Intel, who are complicit in helping CIA or any intel agency (government, rogue or otherwise) infiltrate and exploit our systems - need to be held accountable by the market.

At the same time as "buy American"? You're aware that any American chipmaker will be gag-ordered to help the CIA?


Fair point, but let's not paint such a bleak picture. Gag orders are an unfair (unconstitutional?) weapon of tyrannical regimes and should be condemned as such. Aside from taking political action to remove that tool from big brother's arsenal, we as hacker/entrepreneurs can build systems and strategize on how to mitigate and avoid gag-order scenarios altogether.

Perhaps this is pie in the sky, but a future where open hardware is as ubiquitous/accessible/easy to use as open-source software would make it easier to change chips, or gut your laptop and rebuild it with hardware that you can trust.


The landscape is of course complex... but I think that companies exposing their clients to such risk will only learn to protect their clients' rights to privacy and self-determination once organized groups of clients fight back in court against this kind of practice. This is a question of human rights, not just a technical feature. Companies need to take legal responsibility for their decisions in any case where the security, freedom and free will of clients are at risk.


Seriously? A 4chan post?

While the ME is worrying for many reasons, there's absolutely zero evidence that the Intel ME contains a backdoor.

Backdoors don't stay hidden forever.


That's a bad argument. Firstly, it's my understanding that there have already been root-access 0-days discovered in the ME (and patched since being exposed). And the USB JTAG backdoor is the whole point of this post.

Secondly, a security hole and a backdoor are interchangeable these days. So we'll never be able to prove which new 0-days are deliberate, and as far as impact goes, it kind of doesn't matter whether they're deliberate.


We're talking about deliberate government backdoors, and it's my opinion that those are highly unlikely.

The ME is a really bad idea because it introduces massive, unnecessary attack surface and vulnerabilities are inevitable, but no conspiracy.


How can you prove a vulnerability isn't deliberate?

We've seen deliberate security vulnerabilities before (DUAL_EC).


You can't, but let me point out that DUAL_EC was a "nobody but us" backdoor that required their private key to use.

(and yes, it backfired)

If they're introducing regular vulnerabilities, they're also making themselves vulnerable, given that the US government is one of the biggest Intel customers.


I remember back in the good old days of cryptography export restrictions, when the NSA had a much simpler "nobody but us" approach: you encrypted data with an xx-bit private key, half of which was shared with the NSA. Should they need to break content, the other half of the key could be brute-forced at costs that were economically feasible to the NSA (for targeted use, not blanket surveillance), but the full-length key would be unbreakable (in theory) by anyone without prior knowledge of that other half.


4chan has been popular for leaks/reverse engineering because of its anonymity and the fact that it's seen (whether or not this is true, and I would wager that it isn't) as a "hacker haven". For example, a guy on 4chan reverse engineered Google's new captcha system almost as soon as it came out, leading Google to eventually hire him in exchange for his deleting the GitHub repo he was using.


4chan is also well known for fabricating evidence, and everything in the post (minus the alleged backdoors) was already publicly known.


Some diamonds, lots of rubble. Some land mines :)


Some would argue the entire design of ME is evidence that it IS a backdoor.


AMT (server-grade ME) is definitely a door of some sort, but as an advertised feature, I don't know that Intel is hiding it in the back.

https://www.intel.com/content/www/us/en/architecture-and-tec...


> AMT (server grade ME)

AMT is an application that runs on ME, and it's on very many (most? all?) Intel-based desktop/laptops.
