Hacker News
Ask HN: Can we trust our CPUs?
87 points by freeduck on July 11, 2013 | 56 comments
There is a lot of buzz around darknets and encrypted mesh nets. But can we trust our hardware? A modern computer contains a lot of different CPUs or CPU-like circuits. Is there any way to determine whether any of these chips are "phoning home"?

Actually it's not the CPU you need to be worried about.

Earlier this year at 44Café in London I did a talk in which I dropped about 16 bugs in SuperMicro's IPMI BMC implementation (through the medium of a drinking game), some of which were picked up by Farmer and Moore's recent research into IPMI, some not[1]. The Baseboard Management Controller (BMC) is a completely separate computer, often running unmaintained Linux firmware, that has full South Bridge and i2c access to your computer's memory. Basically it has Direct Memory Access (DMA) to the host, but the host doesn't appear to have the same access going the other way (although I haven't investigated this yet).

The board I looked at ran an ARM chipset and a custom Linux distro built by an OEM called ATEN[2] and customised by SuperMicro. It's not that the system appears to be phoning home; it's more that there are a lot of bugs and insecure defaults in the implementation, and compromising this allows you to compromise the underlying server.

For desktop and laptop systems you don't usually have IPMI, so no BMC. Instead you have Intel's iAMT, which is very similar in some respects. There's some really fantastic research done in this space by Patrick Stewin and Iurii Bystrov[3], who have implemented a hardware keylogger. I've been in contact with them; they've updated their work since publishing the paper and intend to present the results at the 44CON[4] security conference in London this September.

Again, it's not a case of these chips phoning home per se, but of a poorly documented, poorly publicised attack surface with real-world implications for espionage and malware.

Disclaimer: I'm one of the co-founders and co-organisers of 44Con.

[1] - http://www.wired.com/threatlevel/2013/07/ipmi/

[2] - http://www.aten.com/IPMI.htm

[3] - http://stewin.org/papers/dimvap15-stewin.pdf

[4] - http://www.44con.com/

There are also similar implications for the baseband firmware (the part that deals with the signaling towers etc) in cellphones. Not a single phone on the market has an open source baseband OS.

Isn't it common practice to keep IPMI out of reach of the Internet? When I worked at an ISP, all management interfaces were connected to a separate network and the only means of accessing it remotely was through a VPN...

Especially in a large company, it's not that hard for a determined attacker to get something plugged into a network jack. If the management network goes to employee desks, then you can plug whatever you want into it.

Unless vPro is authenticating with 802.1X and you're actually using different passwords for every management interface, a professional could probably find his way onto that subnet.

From what I've seen, most newer IPMI gear, including the dedicated-port variants, includes a standardized i2c interface between the platform controller/EC side (the main server) and the BMC. While it has in most cases similar authentication requirements to typical IPMI-over-LAN, once you've gotten past that you can pretty much run any IPMI commands, including getting raw access to its private i2c bus, which I would assume attaches to its bootstrap flash. Once you're that far in, bridging between the two networks would just entail writing some (non-trivial) software.

It's sound common sense to keep this stuff off the Internet, but looking through the Internet Census I found hundreds of thousands of candidate matches for SuperMicro BMC instances. SuperMicro seems to be popular in hosting circles, which might explain why it shows up so much.

If you don't connect the dedicated port, these BMCs tend to default to sharing an interface with your primary LAN connection. They also default to DHCP, so if you're unaware of the need to use/secure them, they will be exposed to the Internet.

I think concern about other points of compromise that would render encryption less useful is very, very valid. I'm not sure about the CPU itself per se; perhaps it could be a combination of components, not to mention the firmware, software, etc.

With all of the emphasis on endpoint encryption as a solution in particular, I have been raising this concern. The NSA has already carved out an exception that allows them to keep encrypted data forever as they attempt to crack it. So, that reveals their determination to defeat protective measures. Why wouldn't they attempt to build in other back doors at the hardware/firmware/software level?

It just seems to me that too much emphasis on technical solutions to government intrusions is not the right way to go. As a back-stop, technical solutions like encryption are fine. But why should we have to play cat-and-mouse with our government to protect our privacy? This posture lets them off the hook and essentially says, "if they can get your information, it's fair game". We shouldn't have to look over our shoulders this way.

Instead, their activity should be outlawed and there should be legal protection, such that whistleblowers like Snowden are considered heroes instead of criminals.

I agree that the law should be the first defense. But the NSA aren't the only criminals we have to worry about; there are run-of-the-mill crooks too. Like the disgruntled network engineer at your job, the guy baiting his neighbours with open wifi, the dodgy stalker dude who happens to work for an ISP. Laws won't protect you from people who are willing to break them.

> "if they can get your information, it's fair game"

It's more an acknowledgement that, if they can get your information, they will. It's human nature. They will push the boundaries of any laws, and overstep them, if it is technologically possible.

Finally someone said it.

It's also a game that we would lose against our own government(s), since we are paying for all of that; in effect, we are working against ourselves from the start.

This is a sociological problem that can't be solved by technological means. If people are fine with their government fucking them over, no amount of tinfoil-hattery will suffice to change anything.

Stockholm Syndrome is the norm, not the exception. I'm open to suggestions on methods to fundamentally alter human nature.

You have to build a culture where this is unacceptable through laws that are enforced with sufficient vigor. And as we've seen, you (the public) have to keep constant vigilance for oversteps and "correct" those oversteps through a process of holding government and private individuals accountable.

There's a microcontroller with the ARCompact architecture inside Intel chipsets which has access to all devices and to the RAM, has its own network stack, and runs a custom real-time OS (ThreadX). All the technologies actively advertised by Intel, such as Active Management, Anti-Theft, Identity Protection, Rapid Start, Smart Connect, and Protected Audio Video Path, are powered by this controller.

More info: https://ruxconbreakpoint.com/assets/Uploads/bpx/Breakpoint%2...

So basically every desktop motherboard or notebook has a chip which runs unknown software and has full access to your RAM and network interface. There's something to worry about.

Any system of significant complexity, be it hardware or software, cannot be trusted unless it is 100% open source from the hardware level to the software level, all systems used to manufacture it are open source as well, and the whole process has open oversight.

Even if you tape out a CPU and ship it to the fab, they could still add stuff before it is packaged.

In conclusion, no you can't trust anything we use today. Even Stallman's open-everything laptop is open to compromise.

Pen, paper, box of dice, OTP or accept these facts.

A pen could be compromised with a micro camera as well, if you want to be really paranoid.

I can do a complete security audit of my pens in less than a minute. I can't do the same with my computer hardware.

That is why the NSA has their own factory in which they produce their own chips and computers: a clean and (electronically) isolated place.

I believe any country worth its salt, such as China, Israel, or Russia, has such a capability: to produce their own chips and stack to work with. They also have the capability to compromise/backdoor foreign/target hardware, as well as methods to detect attempted tampering in, for example, their own military hardware.

It would be too damn funny if a country's communications and/or military hardware suddenly started acting "weird"/against it in case of a war.

>NSA has their own factory in which they produce their own chips and computers

Citation or evidence?

While they do have their own bespoke fab, the description given there is wildly more impressive than I've ever heard it described by people not giving speeches.

For obvious reasons they are always going to need their own fab to some extent, because some applications are just too low-volume or would leak way too much project-specific information. Wherever the truth lies, I'm sure nothing coming out of there is general-purpose computing meant for typical NSA-run systems.

It's not too hard to figure out who they're using for mainstream chips and mid-size-run custom fabs. Look for well-established US-based companies that use US designers and have good analog and small-CMOS-process digital fabs located in the US, especially New York, Texas, and Oregon.


That's from 1998, when building fabs was cheap. I wonder what manufacturing process their latest factory uses.

Bamford mentions it briefly in one of his books, either The Puzzle Palace or Body of Secrets. They've had chip fab capabilities for some time before the 1990's.

As of a few years ago the federal government has started a "trusted foundry" project to sign on and create new domestic fabs. I don't know where it's at, but the intention has been stated.


Since most chips come from China, how hard would it be for the Chinese government to embed back doors in the chips' firmware/logic?

Not hard at all. In fact, the FBI or some other letter agency found that their switches and routers had hardware backdoors in them, back in 2009 I believe; it was kind of a big deal back then.

Since then the US has stepped up its cyber-warfare capabilities by forming a cyber-warfare department within the Pentagon and such, probably also increasing the sampling rate and testing methods for devices procured from foreign suppliers.

It's funny to me that they try to get back-doors baked in to all sorts of hardware and software, but somehow appear to believe that nobody else has done the same.

I'm not sure which "they" you mean, but I assure you nobody is under the impression that only some people do it. Core routers and core cellular gear are quickly becoming an industry in which every nation state that can is looking to jumpstart its own homegrown industrial suppliers.

The number of global intelligence agencies that believe Cisco (or Huawei, or Alcatel, etc.) core routers are free of side-channel attacks or dodgy opaque ASICs can be safely assumed to be zero.

Short answer is you can trust building block type components like CPUs if they're designed by a company that is in the camp of the same nation state/alliance that you align with as well. Very similar to the thought process you would use when deciding if you can trust that guy over there with a gun.

Theoretically the answer is no if you're talking about gear (say highly integrated Socs) designed and fabbed in a country that has demonstrated a trust issue or two with the folks that issue your passport.

Practically though this is one of the last things you should be spending time worrying about assuming you're not currently engaged in global politics or things that have a blast radius.

You can pretty much hide a semitruck's worth of nastiness inside any modern chip these days. And while it wouldn't be impossible to find, finding it requires a well-financed effort.

But the real answer is that you didn't ask quite the right question. Computers (and phones, etc.) are so inundated with security holes, between the endless streams of bugs, opaque supply chains, exploitable design errors, and a pervasive belief that better security = fewer sales, that there's simply no need to go after the CPU; it's far cheaper to attack elsewhere, and it provides credible deniability to all involved.

While I have no doubt there are at times intentional flaws introduced into big-name chip designs, any use of such things would be limited to extremely unusual circumstances, as the blowback if discovered would be pretty damn apocalyptic if you're talking about, say, Intel/IBM/Oracle.

Anybody that's going to get at your data is either going to convince you to give it to them, or spend an hour or two and beat your software stack.

Even when the NSA testifies before Congress to convince them to block telecom mergers unless they get a clause barring ZTE/Huawei gear, it's primarily the software stack that they're worried about. Even listening devices need point releases from time to time.

I'm not sure about CPUs. They don't have direct connections to the outside world. If you want to distrust any hardware, it makes sense to focus on communication peripherals such as network cards, wifi/gsm modems etc.

For example, the baseband processors in phones contain complicated firmware of 16 MB or more, with many hidden diagnostic modes at various subsystem levels. Baseband processors have been known to be exploited remotely, and as they have a direct connection to the main CPU, this gives full control over the device, including GPS, camera, etc.

There have also been bugs in wired networking hardware in which specially crafted packets resulted in low-level crashes. It was not exploitable in the cases I remember, but I'm sure someone persistent enough and with the right skills may be able to find some.

None of these examples actually "phone home" in the classic sense, but my point is that these peripherals have proprietary firmware that is hardly under public scrutiny, and anything can be hidden in them.

Also, if connected via PCI, networking hardware has direct memory access. AFAIK most GSM modems are USB devices, though.

Most GSM modems are part of SoCs. Indeed for USB dongles for your PC or laptop it's different, but that's not the "baseband processor" I was talking about. It would be somewhat harder to exploit the parent system through USB (at least if the driver can be trusted), on the other hand it could temporarily pretend to be a keyboard and storage device but that'd be very OS dependent.

You might be interested in https://blogs.oracle.com/ksplice/entry/hosting_backdoors_in_... - they show that devices connected to the PCI bus can potentially modify your kernel at boot time, without changing its on-disk signature.

I'm glad you raise the question freeduck. I was actually going to post something like it once the more important discussion about opposing surveillance by social and political means has got going properly (technical solutions won't fix the deeper problem of authoritarian tendencies in society). A few years ago I read about remote control by hardware in this article: http://www.tgdaily.com/hardware-opinion/39455-big-brother-po...

Be sure to check the video linked at the end: http://www.youtube.com/watch?v=wlj7u3tOQ9s

I don't know much about this myself, but I'll be very interested to see the HN community's take on to what extent we can trust our hardware.

This is a copy-paste of my comment from an older thread [0]:

http://www.xakep.ru/post/58104/ (use Google Translate)


The author found an undeclared software module (a backdoor?) working as a hypervisor in the System Management Block chip on the South Bridge, working with Intel CPUs with VT virtualization technology.

[0] - https://news.ycombinator.com/item?id=4462782

"Is there any way...?"

Generally, if you mean phoning home over the internet, yes! If you mean over cellular networks, no.

To get things started, there should first be some "buzz" around kernelspace packet filters. Because that is what you need to start using.

Computers running packet filter software are sometimes called "firewalls". But that terminology does not promote much understanding among users who are not also career network administrators.

If you are concerned with what your device is communicating over the public internet, then you can monitor these communications first to confirm your suspicions. And then, if necessary, you can exercise some control over it.

How? Run packet filter software in your OS's kernel. Numerous open source, free OS's allow you to do this. And what does it cost? Nothing! With commercial, proprietary OS's that are sold for money (which are often just modified versions of open source free OS's) it may not be so easy. In fact, they may make it impossible to do. Go figure.

With a packet filter, you can view and, if desired, block packets entering and leaving your machine, according to your rules. Assuming you can get this set up easily (and indeed you can), why would anyone not want to do this? You can even use an old computer repurposed just to do packet filtering. Have your new devices use it as a gateway.

For the avoidance of doubt, popular "firewall" software like ZoneAlarm or whatever are not what I'm talking about. Those are userspace software.
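To make the rule idea concrete, here is a toy sketch in Python of the kind of ordered rule matching a packet filter performs. All addresses and rules are made up, and this toy uses first-match-wins with a default deny; real filters differ in semantics (e.g. pf defaults to last-match unless a rule says "quick", while netfilter chains are first-match) and match on far more fields.

```python
# Toy packet-filter rule matcher (illustrative only; real kernel
# filters like pf/netfilter inspect many more fields and states).
from dataclasses import dataclass

@dataclass
class Packet:
    dst_ip: str
    dst_port: int

# Rules are checked in order; first match wins. "*" is a wildcard.
RULES = [
    ("allow", "93.184.216.34", 443),   # hypothetical trusted host
    ("block", "*", 25),                # no outbound SMTP
    ("allow", "*", 80),
    ("allow", "*", 443),
]

def verdict(pkt: Packet) -> str:
    for action, ip, port in RULES:
        if ip in ("*", pkt.dst_ip) and port in ("*", pkt.dst_port):
            return action
    return "block"  # default-deny: unmatched traffic never leaves

print(verdict(Packet("10.0.0.5", 6667)))  # -> block
```

The key property for the phoning-home question is the default-deny last line: anything your rules don't explicitly expect gets logged and dropped, which is exactly how you'd spot unexpected outbound traffic.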

"Can we trust hardware?"

In general, I'd say the more bundled it is, the less trustworthy it is. If you cannot even open the enclosure, let alone run your own OS (hello Apple), that's not going to help users who want to "trust, but verify". Building your own computer (think something like a Raspberry Pi) gives you freedom and more peace of mind.

Wireshark or a similar network sniffer. Any phone-home is going to require using the usual channels to get out of your location to the general internet. Things like TEMPEST require external sniffers; an autonomous agent will have to use what is available, and your modem isn't going to be transferring custom protocols... and if it is, then the ISP's equipment probably wouldn't. And if all three units are, well, things are worse than we could have imagined, but you'd be talking about a conspiracy that would require the silence of an incredible number of people.

This is definitely a pure conspiracy theory, but there have to be hundreds of readily usable side-channels for transmitting information out of a compromised box that wouldn't show up in Wireshark unless you knew exactly what to look for. Think biases in timings or not-so-random bits in client-generated TLS values.

Don't know why you were downvoted; it's a valid concern. It won't be "phoning home", though -- you can only phone the so-called "global adversary" this way (because you can't send packets to arbitrary addresses).

Also, messing with TLS probably won't work at all. Messing with e.g. MTU sizes, IP sequence numbers and perhaps TCP options, on the other hand, could probably work.
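To make the covert-channel idea concrete, here is a toy simulation of one such metadata channel, using inter-packet timing rather than header fields. Timestamps are simulated, the gap values are arbitrary, and a real channel would additionally have to survive network jitter (e.g. with redundancy or coding).

```python
# Toy timing covert channel: bits ride on the gaps between otherwise
# innocent packets, not on packet contents. Simulated timestamps only.
SHORT, LONG = 0.01, 0.05  # seconds: gap meaning bit 0 vs bit 1

def encode(bits):
    """Return send timestamps for a sequence of cover packets."""
    t, times = 0.0, [0.0]
    for b in bits:
        t += LONG if b else SHORT
        times.append(t)
    return times

def decode(times, threshold=0.03):
    """Recover bits by thresholding the observed inter-packet gaps."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return [1 if g > threshold else 0 for g in gaps]

msg = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(msg)) == msg
```

Nothing in the packet payloads changes, which is why a content-oriented sniffer like Wireshark shows nothing unusual unless you specifically analyse timing distributions.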

Scariest subthread in months. Steganography that doesn't depend on packet contents, but on metadata that seems ephemeral, insignificant, and/or arbitrary. If all of the traffic on a few large pipes has to pass through you, I don't see how this wouldn't be both straightforward to implement and undetectable.

If this hasn't been done, it will be.

This is a good point, but remember also that those packets have to go somewhere specific - the information may not be encoded in the bits, but in the timing instead. This being said, the information still has to arrive at its destination.

We're talking about an adversary that potentially taps and stores all traffic at major endpoints worldwide. The packets have to go further than a few hops, but not necessarily to a specific destination.

Think taping a playing card to the spokes of a deaf person's bike. You won't automagically hear it when they're riding it in the basement, but if they ride through the streets, you'll know.

There's nothing quite like a good conspiracy theory to start the morning. I read your comment and thought, "Gosh, encoding phone-home spyware that way would be straightforward to implement on top of an automatic-updates feature."

I won't say I am a bit more paranoid; rather, I am amused by the thought that I could be, but for the understanding that the technical details are so raw that there is no avoiding such exploits once anyone is granted my trust.

I don't think you could fit enough transistors into a CPU die to do any form of monitoring - you would need an incredibly complex software package to do it (which you could store in a ROM on the die). The problem with that is you would need to translate a set of machine code instructions into an intent in real time: we can't even do this offline (the best we have is IDA, and that needs crazy amounts of human intervention). So as far as the physical chip goes, it's irrationally paranoid to worry about it.

However, Realtek, as a good example, loves to home-grow its own [shitty] protocols and hence always requires drivers - so if you want to realistically question whether or not your hardware could monitor you, look at its inseparable twin: drivers. They run in the kernel and have access to pretty much everything else on your system: no amount of UAC, sudo, or whatever else will keep your data safe from them - and most users won't think twice when installing them.

Can someone please explain if this is true?

"Computers with particular Intel® Core™ vPro™ processors enjoy the benefit of a VNC-compatible Server embedded directly onto the chip, enabling permanent remote access and control."


Because if it is true, then your CPU is a potential backdoor, given that they have a master key or master password.

And I really don't understand how this can work when the computer is turned off. Any ideas?

When you turn off the computer through the OS, the motherboard still stays powered. Later it can wake itself up via a BIOS alarm, Wake-on-LAN, or this VNC thing.

"turned off" and "unplugged" may be two different things. even older systems have 'wake on lan' capability.

You can always use a second device with different hardware to monitor your own network traffic. If you don't fully trust that hardware use yet another device with different hardware behind it :P

It's certainly interesting to think about, at least: http://theinvisiblethings.blogspot.ru/2009/06/more-thoughts-...

Given the huge downside of a CPU exploit leaking or being detected, I'd expect them to be used very sparingly if at all. This XKCD comes to mind: https://xkcd.com/538/

I think you cannot trust anything that is not behind an air gap and powered by a source not connected to the general grid.

That's the absolute truth. And given the vast array of high-frequency oscillators, high-end DACs, and software-configurable pin reassignment, you really need to include a pretty effective RF shield (say ~ -40 dB over ~10 kHz to 1.x GHz) or go for something closer to an air moat than an air gap.

By the same token the only 100% trustworthy human is a dead one. But we trust folks constantly and we're overwhelmingly better off for it.

The answer is clearly, and obviously: no you can't trust CPUs, memory, compilers, disassemblers, microcode or anything else.

The better question is: what is being done to address this? What can be done (are you going to trust an "open source" CPU producer who swears they have no back doors?)?

Intel actually sells, as a feature, the capability of most post-1st-gen Intel processors to phone home in a way that is transparent to the end user.


I've always wondered about those dirt-cheap USB devices sold on eBay. Seems like an easy target for a malicious device masquerading as a USB hub.

The only solution is open-core CPU-based hardware.

