Earlier this year at 44Café in London I gave a talk in which I dropped about 16 bugs in SuperMicro's IPMI BMC implementation (through the medium of a drinking game), some of which were picked up by Farmer and Moore's recent research into IPMI, some not. The Baseboard Management Controller (BMC) is a completely separate computer, often running unmaintained Linux firmware, with full south-bridge and I2C access to your computer's memory. Basically it has Direct Memory Access (DMA) to your computer, but your computer doesn't appear to have the same access going the other way (although I haven't investigated this yet).
The board I looked at ran an ARM chipset and a custom Linux distro built by an OEM called ATEN and customised by SuperMicro. It's not that the system appears to be phoning home, it's more that there are a lot of bugs and defaults in the implementation, and compromising this allows you to compromise the underlying server.
For desktop and laptop systems you don't usually have IPMI, so no BMC. Instead you have Intel's iAMT, which is very similar in some respects. There's some really fantastic research in this space by Patrick Stewin and Iurii Bystrov, who have implemented a hardware keylogger. I've been in contact with them; they've updated their work since publishing the paper and intend to present the results at the 44CON security conference in London this September.
Again, it's not a case of these chips phoning home per se, but of a poorly documented and poorly publicised attack surface with real-world implications for espionage and malware.
Disclaimer: I'm one of the co-founders and co-organisers of 44Con.
 - http://www.wired.com/threatlevel/2013/07/ipmi/
 - http://www.aten.com/IPMI.htm
 - http://stewin.org/papers/dimvap15-stewin.pdf
 - http://www.44con.com/
Unless vPro is authenticating with 802.1X and you're actually using different passwords for every management interface, a professional could probably find his way onto that subnet.
With all of the emphasis on endpoint encryption as a solution in particular, I have been raising this concern. The NSA has already carved out an exception that allows them to keep encrypted data forever as they attempt to crack it. So, that reveals their determination to defeat protective measures. Why wouldn't they attempt to build in other back doors at the hardware/firmware/software level?
It just seems to me that too much emphasis on technical solutions to government intrusions is not the right way to go. As a back-stop, technical solutions like encryption are fine. But why should we have to play cat-and-mouse with our government to protect our privacy? This posture lets them off the hook and essentially says, "if they can get your information, it's fair game". We shouldn't have to look over our shoulders this way.
Instead, their activity should be outlawed and there should be legal protection, such that whistleblowers like Snowden are considered heroes instead of criminals.
It's more an acknowledgement that, if they can get your information, they will. It's human nature. They will push the boundaries of any laws, and overstep them, if it is technologically possible.
It's also a game that we would lose against our own government(s), since we are paying for all of it; in effect we are working against ourselves from the start.
This is a sociological problem that can't be solved by technological means. If people are fine with their government fucking them over, no amount of tinfoil-hattery will suffice to change anything.
More info: https://ruxconbreakpoint.com/assets/Uploads/bpx/Breakpoint%2...
So basically every desktop motherboard or notebook has a chip which runs unknown software and has full access to your RAM and network interface. There's something to worry about.
Even if you tape out a CPU and ship it to the fab, they could still add stuff before it is packaged.
In conclusion, no you can't trust anything we use today. Even Stallman's open-everything laptop is open to compromise.
Pen, paper, box of dice, OTP or accept these facts.
I believe any country worth its salt, such as China, Israel, or Russia, has the capability to produce its own chips and stack to work with, the capability to compromise or backdoor foreign/target hardware, and methods to detect attempted tampering with its own hardware, for example military hardware.
It would be too damn funny if a country's communications and/or military hardware suddenly started acting "weird", or against it, in the event of a war.
Citation or evidence?
For obvious reasons they are always going to need their own fab to some extent, because some applications are just too low-volume or would leak far too much project-specific information. Wherever the truth lies, I'm sure nothing coming out of there is general-purpose computing meant for typical NSA-run systems.
It's not too hard to figure out who they're using for mainstream chips and mid-size-run custom fabs. Look for well-established, strong US-based companies that use US designers and have good analog and small-process CMOS digital fabs located in the US, especially in New York, Texas, and Oregon.
That's from 1998, when building fabs was cheap. I wonder what manufacturing process their latest factory uses.
Since then the US has stepped up its cyber-warfare capabilities, forming a cyber-warfare department within the Pentagon and so on, and has probably also increased the sampling rate and testing of devices procured from foreign suppliers.
The number of global intelligence agencies that believe Cisco (or Huawei or Alcatel etc.) core routers are free of side-channel attacks or dodgy opaque ASICs can safely be assumed to be zero.
Theoretically the answer is no, if you're talking about gear (say, highly integrated SoCs) designed and fabbed in a country that has demonstrated a trust issue or two with the folks who issue your passport.
Practically though this is one of the last things you should be spending time worrying about assuming you're not currently engaged in global politics or things that have a blast radius.
You can pretty much hide a semi-truck's worth of nastiness inside any modern chip these days. And while it wouldn't be impossible to find, finding it requires a well-financed effort.
But the real answer is that you didn't quite ask the right question. Computers (and phones, etc.) are so inundated with security holes, between the endless stream of bugs, opaque supply chains, exploitable design errors, and a pervasive belief that better security = fewer sales, that there's simply no need to go after the CPU; attacking elsewhere is far cheaper and provides credible deniability to all involved.
While I have no doubt there are at times intentional flaws introduced into big-name chip designs, any use of such things would be limited to extremely unusual circumstances, as the blowback if discovered would be pretty damn apocalyptic if you're talking about, say, Intel/IBM/Oracle.
Anybody who's going to get at your data is either going to convince you to give it to them, or spend an hour or two and beat your software stack.
Even when the NSA testifies in congress to convince them to block telecom mergers unless they get a clause barring zte/huawei gear it's primarily the software stack that they're worried about. Even listening devices need point releases from time to time.
For example, the baseband processors in phones contain complicated firmware of 16 MB or more, with many hidden diagnostic modes at various subsystem levels. Baseband processors have been exploited remotely, and since they have a direct connection to the main CPU, exploiting them gives full control over the device, including GPS, camera, etc.
There have also been bugs in wired networking hardware in which specially crafted packets caused low-level crashes. They weren't exploitable in the cases I remember, but I'm sure someone persistent enough, with the right skills, could find one that is.
None of these examples actually "phone home" in the classic sense, but my point is that these peripherals have proprietary firmware that is hardly under public scrutiny, and anything can be hidden in them.
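As a purely illustrative sketch (not tied to any specific bug), "specially crafted" often just means internally inconsistent: here's a Python-built IPv4 header whose claimed total length disagrees with what's actually on the wire, the kind of mismatch a sloppy firmware parser might trust blindly:

```python
import struct

def ipv4_header(total_length, proto=6, src=(10, 0, 0, 1), dst=(10, 0, 0, 2)):
    """Pack a minimal 20-byte IPv4 header; total_length is whatever we
    claim, not necessarily what is actually on the wire."""
    version_ihl = (4 << 4) | 5          # IPv4, IHL 5 -> 20-byte header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,   # version/IHL, TOS, claimed length
        0x1234, 0,                      # identification, flags/fragment
        64, proto, 0,                   # TTL, protocol, checksum (left 0)
        bytes(src), bytes(dst),
    )

# A frame whose IP header claims 1500 bytes but actually carries 40:
frame = ipv4_header(total_length=1500) + b"\x00" * 20
claimed = struct.unpack("!H", frame[2:4])[0]
print(claimed, len(frame))   # 1500 vs 40 -- the inconsistency parsers choke on
```

A parser that allocates or copies based on the claimed length rather than the received length is exactly the sort of code that falls over on a frame like this.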
Be sure to check the video linked at the end: http://www.youtube.com/watch?v=wlj7u3tOQ9s
I don't know much about this myself, but I'll be very interested to see the HN community's take on to what extent we can trust our hardware.
http://www.xakep.ru/post/58104/ (use Google Translate)
The author found an undeclared software module (a backdoor?) working as a hypervisor in the system management block on the south bridge, on systems with Intel CPUs supporting VT virtualization technology.
 - https://news.ycombinator.com/item?id=4462782
Generally, if you mean phoning home over the internet, yes! If you mean over cellular networks, no.
To get things started, there should first be some "buzz" around kernelspace packet filters. Because that is what you need to start using.
Computers running packet filter software are sometimes called "firewalls". But that terminology does not promote much understanding among users who are not also career network administrators.
If you are concerned with what your device is communicating over the public internet, then you can monitor these communications first to confirm your suspicions. And then, if necessary, you can exercise some control over it.
How? Run packet filter software in your OS's kernel. Numerous open source, free OS's allow you to do this. And what does it cost? Nothing! With commercial, proprietary OS's that are sold for money (which are often just modified versions of open source free OS's) it may not be so easy. In fact, they may make it impossible to do. Go figure.
With a packet filter, you can view and, if desired, block packets entering and leaving your machine, according to your rules. Assuming you can get this set up easily (and indeed you can), why would anyone not want to do this? You can even use an old computer repurposed just to do packet filtering. Have your new devices use it as a gateway.
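To make the "according to your rules" part concrete, here's a toy, first-match-wins rule evaluator in Python (the rule format is hypothetical; real kernel filters like netfilter or pf are far richer, but the ordered-match logic is the same idea):

```python
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str                  # "pass" or "block"
    host: Optional[str] = None   # None matches any remote host
    port: Optional[int] = None   # None matches any remote port

def verdict(rules, host, port, default="pass"):
    """Evaluate an outbound packet against ordered rules; first match wins."""
    for rule in rules:
        if rule.host in (None, host) and rule.port in (None, port):
            return rule.action
    return default

rules = [
    Rule("pass", port=443),             # allow HTTPS to anywhere
    Rule("block", host="203.0.113.9"),  # drop a suspicious endpoint
    Rule("block"),                      # deny all other outbound traffic
]

print(verdict(rules, "203.0.113.9", 443))  # "pass" -- rule order matters!
print(verdict(rules, "203.0.113.9", 80))   # "block"
print(verdict(rules, "example.org", 80))   # "block"
```

Note how the suspicious host still gets out on port 443 because the allow rule sits above the block rule: ordering mistakes like this are the classic firewall configuration bug.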
For the avoidance of doubt, popular "firewall" software like ZoneAlarm or whatever are not what I'm talking about. Those are userspace software.
"Can we trust hardware?"
In general, I'd say the more bundled it is, the less trustworthy it is. If you cannot even open the enclosure, let alone run your own OS (hello Apple), that's not going to help users who want to "trust, but verify". Building your own computer (think something like a Raspberry Pi) gives you freedom and more peace of mind.
Also, messing with TLS probably won't work at all. Messing with e.g. MTU sizes, IP sequence numbers and perhaps TCP options, on the other hand, could probably work.
If this hasn't been done, it will be.
Think taping a playing card to the spokes of a deaf person's bike. You won't automagically hear it when they're riding it in the basement, but if they ride through the streets, you'll know.
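A sketch of that "playing card in the spokes" idea, with made-up thresholds: many stacks emit the IP ID field as a roughly incrementing counter, so a run of erratic values hints that the field is carrying something else:

```python
def looks_tampered(ip_ids, max_step=16):
    """Heuristic: flag an IP ID sequence whose deltas are mostly erratic
    (zero or large jumps) rather than counter-like. Thresholds are
    illustrative, not tuned against any real OS."""
    steps = [(b - a) % 65536 for a, b in zip(ip_ids, ip_ids[1:])]
    erratic = sum(1 for s in steps if s == 0 or s > max_step)
    return erratic > len(steps) // 2

normal = [100, 101, 102, 103, 105, 106]          # counter-like sequence
covert = [0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x21]    # "Hello!" stuffed into IDs
print(looks_tampered(normal), looks_tampered(covert))  # False True
```

The point isn't that this particular heuristic is robust; it's that exfiltration through header fields has to deviate from what the OS would normally emit, and those deviations are observable from outside the compromised box.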
I won't say I'm any more paranoid; rather, I'm amused by the thought that I could be, given that the technical reality is such that there's no avoiding these exploits once anyone is granted my trust.
However, Realtek, as a good example, loves to home-grow its own [shitty] protocols and hence always requires drivers. So if you want to realistically question whether your hardware could monitor you, look at its inseparable twin: drivers. They run in the kernel and have access to pretty much everything else on your system; no amount of UAC, sudo, or anything else will keep your data safe from them, and most users won't think twice when installing them.
"Computers with particular Intel® Core™ vPro™ processors enjoy the benefit of a VNC-compatible Server embedded directly onto the chip, enabling permanent remote access and control."
Because if it is true, then your CPU is a potential backdoor, given that they have a master key or master password.
And I really don't understand how this can work when the computer is turned off, any ideas??
Given the huge downside of a CPU exploit leaking or being detected, I'd expect them to be used very sparingly if at all. This XKCD comes to mind: https://xkcd.com/538/
By the same token the only 100% trustworthy human is a dead one. But we trust folks constantly and we're overwhelmingly better off for it.
The better question is: what is being done to address this? What can be done (are you going to trust an "open source" CPU producer who swears they have no back doors?)?